paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 distinct values)
meta_review: string (length 29–10k)
label: string (3 distinct values)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
iclr_2018_S19dR9x0b
Alternating Multi-bit Quantization for Recurrent Neural Networks
Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For server applications with large-scale concurrent requests, the inference latency is also critical because computing resources are costly. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {-1,+1}. We formulate the quantization as an optimization problem. Under the key observation that once the quantization coefficients are fixed the binary codes can be derived efficiently by a binary search tree, alternating minimization is then applied. We test the quantization on two well-known RNNs, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), on language modeling. Compared with the full-precision counterpart, 2-bit quantization achieves ~16x memory saving and ~6x real inference acceleration on CPUs, with only a reasonable loss in accuracy. With 3-bit quantization, we achieve almost no loss in accuracy or even surpass the original model, with ~10.5x memory saving and ~3x real inference acceleration. Both results beat existing quantization works by large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.
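The alternating scheme the abstract describes (fix the coefficients, recover the optimal binary codes by a binary search over the candidate reconstruction levels, then refit the coefficients) can be sketched in a few lines. The NumPy sketch below is illustrative only, not the authors' implementation; the greedy initialization follows the residual-binarization idea credited to Guo et al. in the reviews further down.

```python
import numpy as np
from itertools import product

def alternating_quantize(w, k=2, iters=5):
    """Illustrative sketch: approximate a 1-D weight vector w as sum_i alpha_i * b_i
    with b_i in {-1,+1}^n, alternating between optimal codes and optimal alphas."""
    # Greedy initialization: repeatedly binarize the residual (as in prior work).
    alphas, cols, r = [], [], w.astype(float).copy()
    for _ in range(k):
        a = np.abs(r).mean()
        b = np.where(r >= 0, 1.0, -1.0)
        alphas.append(a); cols.append(b)
        r = r - a * b
    alphas, B = np.array(alphas), np.stack(cols, axis=1)        # shapes (k,), (n, k)

    signs = np.array(list(product([-1.0, 1.0], repeat=k)))      # all 2^k code rows
    for _ in range(iters):
        # Fix alphas: each entry takes the nearest of the 2^k reconstruction levels,
        # found by a binary search over the sorted levels (the BST observation).
        levels = signs @ alphas
        order = np.argsort(levels)
        idx = np.clip(np.searchsorted(levels[order], w), 1, len(levels) - 1)
        left_closer = np.abs(w - levels[order][idx - 1]) <= np.abs(w - levels[order][idx])
        B = signs[order[np.where(left_closer, idx - 1, idx)]]
        # Fix codes: refit the coefficients by least squares.
        alphas, *_ = np.linalg.lstsq(B, w, rcond=None)
    return alphas, B
```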
accepted-poster-papers
The reviewers unanimously agree that this paper is worth publication at ICLR. Please address the feedback of the reviewers and discuss exactly how the potential speed-up rates are computed in the appendix. I expect the speed-up rates to be different for different devices.
train
[ "HyOWIZjeM", "r1pQDxEZf", "HJ-WByrVG", "BJz5LyclM", "rJr11K2mz", "Bkbq0_2Qf", "rkgh3_n7f", "ryQ5eXRzM", "rJFXe7Czf", "rJUq0zAGz", "SJh8yXRMG", "SJ6WAf0MM", "S1PH3zAMM", "HyeXldDZG", "rkaECW7bz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "I have read the comments and clarifications from the authors. They have added extra experiments, and clarified the speed-ups concern raised by others. I keep my original rating of the paper.\n\n---------------\nORIGINAL REVIEW:\n\nThis paper introduces a multi-bit quantization method for recurrent neural networks, which is built on alternating the minimization formulated by Guo et al. 2017 by first fixing the \\alpha values and then finding the optimal binary codes b_i with a BST, to then estimate \\alpha with the refined approximation by Guo et al. 2017, iteratively. The observation that the optimal binary code can be computed with a BST is simple and elegant.\n\nThe paper is easy to follow and the topic of reducing memory and speeding up computations for RNN and DNN is interesting and relevant to the community.\n\nThe overall contribution on model quantization is based on existing methods, which makes the novelty of the paper suffer a bit. Said that, applying it to RNN is a convincing and a strong motivation. Also, in the paper it is shown how the matrix multiplications of the quantized model can be speeded up using 64 bits operation in CPU. This is, not only saves memory storage and usage, but also on runtime calculation using CPU, which is an important characteristic when there are limited computational resources.\n\nResults on language models show that the models with quantized weights with 3 bits obtain the same or even slightly better performance on the tested datasets with impressive speed-ups and memory savings.\n\nFor completeness, it would be interesting, and I would strongly encourage to add a discussion or even an experiment using feedforward DNN with a simple dataset as MNIST, as most of previous work discussed in the paper report experiments on DNN that are feedforward. Would the speed-ups and memory savings obtained for RNN hold also for feedforward networks?\n\n\n\n\n", "Revision:\n\nThe authors have addressed my concerns around the achievable speedup. I am increasing my score to 7.\n\nOriginal Review:\n\nThe paper proposes a technique for quantizing neural network weight matrices by representing columns of weight matrices as linear combinations of binary (+1/-1) vectors. Given a weight vector, the paper proposes an alternating optimization procedure to estimate the set of k binary vectors and coefficients that best represent (in terms of MSE) the original vector. This yields a k-bit quantization. First, the coefficients/binary weights are initialized using a greedy procedure proposed in prior work. Then, the binary weights are updated using a clever binary search procedure, followed by updates to the coefficients. Experiments are conducted in an RNN context for some language modeling tasks.\n\nThe paper is relatively easy to read, and the technique is clearly explained. The technique is as far as I can tell novel, and does seem to represent an improvement over existing approaches for similar multi-bit quantization strategies.\n\nI have a few questions/concerns. First, I am quite skeptical of many of the speedup calculations: These are rather delicate to do properly, and depend on the specific instructions available, SIMD widths, the number of ALUs present in a core, etc. All of these can easily shift numbers around by a factor of 2-8x. Without an implementation in hand, comparing against a well-optimized reference GEMM for full floating point, it's not clear how much faster this approach really would be in practice. 
Also, the online quantization of activations doesn't seem to be factored into the speedup calculations, and no benchmarks are provided demonstrating how fast the quantization is (unless I'm missing something). This is concerning since the claimed speedups aren't possible without the online quantization of actiations.\n\nIt would have been nice to have more discussion of/comparison with other approaches capable of 2-4 bit quantization, such as some of the recent work on ternary quantization, product quantization approaches, or at least scalar (per-dimension) k-means (non-uniform quantization).\n\nFinally, the experiments are reasonable, but the choice of RNN setting isn't clear to me. It would have been easier to compare to prior work if the experiments also included some standard image classification tasks (e.g., CIFAR10).\n\nOverall though, I think the paper does just enough to warrant acceptance.", "Thanks for the additional experiments!", "\nSummary of the paper\n-------------------------------\n\nThe authors propose a new way to perform multi-bit quantization based on greedy approximation and binary search tree for RNNs. They first show how this method, applied to the parameters only, performs on pre-trained networks and show great performances compared to other existing techniques on PTB. Then they present results with the method applied to both parameters and activations during training on 3 NLP datasets, showing again great performances compared to existing technique.\n\nClarity, Significance and Correctness\n--------------------------------------------------\n\nClarity: The paper is clearly written.\n\nSignificance: I'm not familiar with the quantization literature, so I'll let more knowledgeable reviewers evaluate this point.\n\nCorrectness: The paper is technically correct.\n\nQuestions\n--------------\n\n1. It would be nice to have those memory and speed gains for training as well. Is it possible to use those quantization methods to train networks from scratch, i.e. without using a pre-train model?\n\nPros\n------\n\n1. The paper defines clear goals and contributions.\n2. Existing methods (and their differences) are clearly and concisely presented.\n3. The proposed method is well explained.\n4. The experimental setup shows clear results compared to the non-quantized baselines and other quantization techniques.\n\nCons\n-------\n\n1. It would be nice to have another experiment not based on text (speech recognition / synthesis, audio, biological signals, ...) to see how it generalizes to other kind of data (although I can't see why it wouldn't).\n\nTypos\n--------\n\n1. abstract: \"gate recurrent unit\" -> \"gated recurrent unit\"\n2. equation (6): remove parenthesis in c_(t-1)\n3. section 4, paragraph 1: \"For the weight matrices, instead of on the whole, we quantize them row by row.\" -> \"We don't apply quantization on the full matrices but rather row by row.\"\n4. section 4, paragraph 2: Which W matrix is it? W_h? 
(2x)\n\nNote\n-------\n\nSince I'm not familiar with the quantization literature, I'm flexible with my evaluation based on what other reviewers with more expertise have to say.", "We add an experiment on CIFAR10, see the comment \"Experiments on CIFAR10 and Sequential MNIST\".", "We add an experiment on sequential MNIST classification task, see the comment \"Experiments on CIFAR10 and Sequential MNIST\".", "Q1:“Including some standard image classification tasks (e.g., CIFAR10)” by Reviewer1\n\nReply: We conduct experiments on CIFAR-10 and follow the same setting as [1]. That is, we use 45000 images for training, another 5000 for validation, and the remaining 10000 for testing. The images are preprocessed with global contrast normalization and ZCA whitening. We also use the VGG-like architecture:\n\n(2×128C3)−MP2−(2×256C3)−MP2−(2×512C3)−MP2−(2×1024FC)−10SVM,\n\nwhere C3 is a 3×3 convolution layer, and MP2 is a 2×2 max-pooling layer. Batch Normalization, with a mini-batch size of 50, and ADAM are used. The maximum number of epochs is 200. The learning rate starts at 0.02 and decays by a factor of 0.5 after every 30 epochs. The testing error rates for 2-bit weight and 1-bit activation are as follows:\n\nAlternating (our method): 11.70%\nRefined (our implementation): 12.08%\nXNOR-Net (1-bit weight & 1-bit activation, reported in [1]) 12.62%\nFull Precision (reported in [1]) 11.90% \nwhere our alternating quantization method achieves the lowest test error rate. \n \nQ2: “Including another experiment not based on text (speech recognition / synthesis, audio, biological, signals, ...) to see how it generalizes to other kind of data” by Reviewer 3\n\nReply: As a simple illustration, we conduct experiments on the sequential MNIST (images of size 28×28) classification task [2]. In every time, we sequentially use one row of the image as the input (of size 28×1), which results in a total of 28 time steps. We use 1 hidden layer’s LSTM of size 128 and the same optimization hyper-parameters as the Language Models in our paper. The testing error rates for 1-bit input, 2-bit weight, and 2-bit activation are as follows:\n\nFull Precision (our implementation) 1.10%\nAlternating (our method) 1.19%\nRefined (our implementation) 1.39%\nwhere our alternating quantized method still achieves plausible performance in this task. \n \nWe will add all the above experiments in the revised version.\n\n[1] Hou, Lu, et al. Loss-aware Binarization of Deep Networks, ICLR 2017.\n[2] Cooijmans, Tim, et al. Recurrent Batch Normalization, ICLR 2017.", "We will correct it in the revised version.", "Please refer to the reply to common issues.", "As we are quantizing the weight and activation to reduce the most costly matrix multiplication to binary operation, it is of no difference for RNNs and feedforward networks when concerning the speed-ups and memory savings. \n\nPlease refer to the replies to common issues for the experiments on MNIST.\n", "The memory costs during training can mainly be divided into two parts: the weights and the activations for backpropagation. For the weights, as a full precision should be maintained (See Eq. (7)), they cannot be reduced. For the activations, as it is enough to maintain a quantized version for backpropagation, we can have memory gains in this part. The time costs during training can also be divided into two parts: the forward and backward pass. 
During the forward pass, as the most costly full precision multiplications are transformed into the much faster binarized multiplications, we can have speed-ups in this part. During the backward, as we need to compute a full precision gradient, no speed-ups can be achieved.\n\nWe conduct experiments of training from scratch in the PTB dataset and observe that it would result in 1~2 PPW worse than using a pre-trained model. But when combining with the continuation technique, that is, setting the initial number of bit to be large, then gradually decreasing it during training, it will result in almost no loss or even slightly better on accuracy. In fact, using a pre-trained model can also be regarded as such continuation technique, but coarser and simpler. \n\nIn section 4, paragraph 2, W do means W_h.\n\nWe will address other small typos in the revised version. We are also conducting experiments on non-text data and will report the results if time permits.\n", "Please refer to the replies to common issues on speedup.\n\nTernary quantization [1] is an extension to the binary quantization with one more feasible state, 0. It does quantization by tackling $\\min_{\\alpha, t}\\|w – \\alpha* t \\|_2^2$ with $t$ restricted to {-1,0,+1}. However, currently there is no efficient algorithm to solve this problem. Instead, Li et al. [1] suggested to empirically set the entries $w_i$ with absolute scales less than $0.7/n \\|w\\|_1$ to 0 (n is the number of entries) and binarize the left entries with a closed-form solution as discussed in our paper. In fact, ternary quantization is a special case of the 2-bit quantization in our paper, i.e., $\\min_{\\alpha_1,\\alpha_2, b_1,b_2}\\|w – \\alpha_1* b_1 - \\alpha_2 * b_2 \\|_2^2$ with an additional constraint that $\\alpha_1 = \\alpha_2$. Thus our alternating multi-bit quantization method can easily extend to solve it.\n\nIn parallel to our binarized quantization, vector quantization is applied to compress the weights for feedforward neural networks [2][3]. Different from ours where all weights are directly constraint to {-1, +1}, vector quantization learns a small codebook by applying k-means clustering to the weights or conducting product quantization. The weights are then reconstructed by indexing the codebook. It has been shown that by such a technique, the number of parameters can be reduced by an order of magnitude with limited accuracy loss [2]. It is possible that our mutli-bit quantized binary weight can be further compressed by using the product quantization. However, this is out-of-the scope of this paper and we leave it for future work.\n\nWe will incorporate the above discussions in the revised version. As for the experiment on image classification tasks, we have done on MNIST (see the replies to common issues). We will also report the results on CIFAR10 if time permits.\n\n[1] Li, Fengfu et al. Ternary weight networks, arXiv:1605.04711.\n[2] Gong, Yunchao et al. Compressing Deep Convolutional Networks using Vector Quantization, arXiv:1412.6115\n[3] Han, Song et al. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding, ICLR 15.\n", "We thank all the reviewers for being positive towards our paper. 
Below are some clarifications for issues concerned in common:\n\nQ1: “Acceleration for binary multiplication on CPU, both in theory and real implementation”\n\nReply: As the binary multiplication operates in 1 bit, whereas the full precision multiplication operates in 32 bit, despite the feasible implementations, the acceleration should be 32x in theory (Not 64x as claimed in XNOR-NET[1], the acceleration claimed in our paper was calculated based on this wrong factor. We will correct it in the revised version). In addition to binary operation, in real implementations, the acceleration can be largely affected by the size of the matrix, where much memory reduce can result in better utilizing in the limited cache (it is much faster than CPU main memory). \n\nIn this work, we implement the binary multiplication kernel in CPUs ourselves. The binary multiplication is divided into two steps: Entry-wise XNOR operation (corresponding to entry-wise product in the full precision multiplication) and bit count operation for accumulation (corresponding to compute the sum of all multiplied entries in the full precision multiplication). We test it on Intel Xeon E5-2682 v4 @ 2.50GHz CPU. For the XNOR operation, we use the Single instruction, multiple data (SIMD) _mm256_xor_ps, which can execute 256 bit simultaneously. For the bit count operation, we use the function _popcnt64 (Note that this step can further be accelerated by the up-coming instruction _mm512_popcnt_epi64, which can execute 512 bits simultaneously. Similarly, the XNOR operation can also be further accelerated by the up-coming _mm512_xor_ps instruction to execute 512 bits simultaneously). We compare with the much optimized Intel Math Kernel Library (MKL) on full precision matrix vector multiplication and execute all codes in the single-thread mode. We conduct two scales of experiments: a matrix of size $4096 \\times 1024$ multiplying a vector of size $1024 \\times 1$ and a matrix of size $42000 \\times 1024$ multiplying a vector of size $1024 \\times 1$, which respectively correspond to the hidden state product $W_h h_{t-1}$ and the softmax layer $W_s h_t$ for Text8 dataset during inference with batch size =1 (See Eq. (6) in the paper).\n\nFor a matrix of size $4096 \\times 1024$ multiplying a vector of size $1024 \\times 1$, we have\n\nFull precision: 1.95ms\n2-bit: 0.35ms (including 0.07ms for on-line quantizing the vector, taking 20%)\n3-bit: 0.72ms (including 0.11ms for on-line quantizing the vector, taking 15%)\n\nin which our 2-bit quantization has 5.6x acceleration and our 3-bit quantization has 2.7x acceleration.\n\nFor a matrix of size $42000 \\times 1024$ multiplying a vector of size $1024 \\times 1$, we have\n\nFull precision: 19.10ms\n2-bit: 3.17ms (including 0.07ms for on-line quantizing the vector, taking 2%)\n3-bit: 6.46ms (including 0.11ms for on-line quantizing the vector, taking 1.7%)\n\nin which our 2-bit quantization has 6x acceleration and our 3-bit quantization has 3x acceleration.\n\nNote that this is only a simple test on CPU. Our alternating quantization method can also be extended to GPU, ASIC, and FPGA.\n\nFinally, we deem that it may be too demanding to compare the speedups by pushing the limit of implementation. The exact number of speedups may vary across different computing devices and also depends on how much the compliers can be optimized. 
Our research is anyway valuable by showing the theoretical potential and inspiring future exploration.\n \nQ2: “Illustration of experiments using feedforward neural networks”:\n\nReply: We conduct a classification task on MNIST and compare with existing work [2]. Besides the weights and activations, the input images are also quantized. The method proposed in [2] is intrinsically a greedy multi-bit quantization method. For fair comparison, we follow the same setting. We use the MLP consisting of 3 hidden layers of 4,096 units and an L2-SVM output layer. No convolution, preprocessing, data augmentation or pre-training is used. We also use ADAM with an exponentially decaying learning rate and Batch Normalization with a batch size 100. The testing error rates for 2 bit input, 2 bit weight, and 1 bit activation are as follows:\n\nFull Precision (our implementation): 0.97%\nAlternating (our method): 1.13%\nRefined (our implementation): 1.22% \nGreedy (reported in [2]): 1.25% \n\nAmong all the compared multi-bit quantization methods, our alternating one achieves the lowest the testing error.\n\nWe will add all the above discussions and experiments in the revised version.\n\n[1] Rastegari, Mohammad, et al. Xnor-net: Imagenet classification using binary convolutional neural networks, ECCV 2016.\n[2] Li, Zefan, et al. Performance Guaranteed Network Acceleration via High-Order Residual Quantization, ICCV 2017.\n", "Interesting work :) Just wanted to briefly note that the official name for the Wikipedia language modeling dataset is WikiText-2 rather than Wikidata.\n\n(I'd have submitted this just to the authors but there is no option for that within the comment posting)", "> In the current generation of CPUs, we can perform 64 binary operations in one clock of CPU...\n\n> With 2-bit weights and activations, we achieve only a reasonably accuracy loss compared with full precision one, with ∼16× reduction in memory and potential ∼13.5× acceleration on CPUs.\n\nThis is incorrect. To do an full binary inner-product MAC (i.e. xnor, popcount, accumulate) on an Intel CPU with AVX2 (until AVX512's popcount is available), the peak is approximately 256 GOP/cycle or so via xnor (issues on 3 ports, 1/3 of a cycle) and a fast popcnt method (e.g. Harley-Seal for larger reductions, lookup for smaller reductions in https://github.com/WojciechMula/sse-popcount/blob/master/results/skylake/skylake-i7-6700-gcc5.3.0-avx2.rst, which work out at a little bit over 2 cycles/vector in the best case). An equivalent Intel CPU can execute 2 fp32 AVX2 FMAs per cycle, which is a throughput of 32 FLOPs/cycle. Ignoring AVX2 thermal throttling, 1bit/1bit inner products vs fp32/fp32 inner products are then sped up by a factor of about 10x or so (and lower for Skylake's AVX-512 FMAs). \n\nThus, when you have to do 4 binary inner products (for 2b/2b inner products), your ideal theoretical speedup is 10x / 4 = 2.5x or so. This is a much less attractive number than the 13.5x quoted.\n\nThis is a key issue with several of these binary/ultra-low-precision convolution papers (arguably stemming from XNOR-Net), in that they overestimate the performance of binary operations on CPUs (either Intel or ARM), and underestimate int8/fp32 arithmetic throughput." ]
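The author response above describes the binary multiplication kernel as an entry-wise XNOR followed by a popcount accumulation. A toy Python rendering of that inner product over packed 64-bit words (assuming the vector length is a multiple of 64; real kernels use the SIMD intrinsics mentioned above) might look like:

```python
def binary_dot(x_words, w_words, n):
    """Inner product of two {-1,+1} vectors of length n, packed as 1/0 bits into
    64-bit words: XNOR counts sign matches, so the dot product equals
    2*popcount(xnor) - n. Assumes n is a multiple of 64 (no padding bits)."""
    matches = 0
    for xw, ww in zip(x_words, w_words):
        matches += bin(~(xw ^ ww) & 0xFFFFFFFFFFFFFFFF).count("1")
    return 2 * matches - n
```

A k-bit weight times k'-bit activation product is then a weighted sum of k·k' such binary dots, scaled by the corresponding coefficient pairs, which is why the theoretical speedup shrinks as the bit widths grow (as the public comment above also argues from instruction throughput).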
[ 8, 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S19dR9x0b", "iclr_2018_S19dR9x0b", "Bkbq0_2Qf", "iclr_2018_S19dR9x0b", "SJ6WAf0MM", "SJh8yXRMG", "iclr_2018_S19dR9x0b", "HyeXldDZG", "rkaECW7bz", "HyOWIZjeM", "BJz5LyclM", "r1pQDxEZf", "iclr_2018_S19dR9x0b", "iclr_2018_S19dR9x0b", "iclr_2018_S19dR9x0b" ]
iclr_2018_HJNMYceCW
Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback
We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode. We introduce a novel algorithm, RESIDUAL LOSS PREDICTION (RESLOPE), that solves such problems by automatically learning an internal representation of a denser reward function. RESLOPE operates as a reduction to contextual bandits, using its learned loss representation to solve the credit assignment problem, and a contextual bandit oracle to trade-off exploration and exploitation. RESLOPE enjoys a no-regret reduction-style theoretical guarantee and outperforms state of the art reinforcement learning algorithms in both MDP environments and bandit structured prediction settings.
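As a rough illustration of the reduction the abstract describes, a single-deviation episode can be sketched as follows. This is a schematic only, with hypothetical interfaces (cb.act/cb.update for the contextual-bandit oracle, predict_residual for the learned per-step loss representation, featurize and greedy_policy for the policy); the paper's Algorithm 1 and the reviewer discussion below cover the details this glosses over.

```python
import random

def reslope_episode(env, H, cb, predict_residual, featurize, greedy_policy):
    """Schematic single-deviation episode: explore with the bandit oracle at one
    uniformly random step, roll in/out with the learned policy elsewhere, and
    charge the deviation step the episodic loss minus the predicted residual
    costs of the other steps. Hypothetical interfaces; not the authors' code."""
    h_dev = random.randrange(H)
    state, episodic_loss, other_residuals = env.reset(), 0.0, 0.0
    for h in range(H):
        x = featurize(state)
        if h == h_dev:
            a, p = cb.act(x)                       # exploration/exploitation step
            dev = (x, a, p)
        else:
            a = greedy_policy(x)
            other_residuals += predict_residual(x, a)
        state, loss, _ = env.step(a)
        episodic_loss += loss                      # only this end-of-episode sum is observed
    x, a, p = dev
    cb.update(x, a, p, episodic_loss - other_residuals)
```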
accepted-poster-papers
The reviewers agree that the problem of learning credit assignment from terminal rewards is interesting, and that the presented approach is promising. There are some concerns regarding the rigor and correctness of the theoretical results, and I ask the authors to improve those aspects of the paper. I also ask the authors to make the result figures easier to read. The chosen colors are not ideal and the use of a log-scale x-axis is not standard. Finally, including DAgger in the same plot is confusing given that DAgger uses more information.
train
[ "r14M3-KxM", "S1BCCpjrM", "H159Cpirz", "ryRaK9sHM", "HJ3M8diHM", "ByCeFUNgz", "r1ajkbceM", "rJeBU_a7M", "HkZ05YT7z", "Bk6bEOT7f", "HkjWXd67G", "Hy1aVOp7z", "Bkcg5uT7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "After reading the other reviews and the authors' responses, I am satisfied that this paper is above the accept threshold. I think there are many areas of further discussion that the authors can flesh out (as mentioned below and in other reviews), but overall the contribution seems solid. I also appreciate the reviewers' efforts to run more experiments and flesh out the discussion in the revised version of the submission.\n\nFinal concluding thoughts:\n-- Perhaps pi^ref was somehow better for the structured prediction problems than RL problems?\n-- Can one show regret bound for multi-deviation if one doesn't have to learn x (i.e., we were given a good x a priori)?\n\n\n\n---------------------------------------------\nORIGINAL REVIEW\n\nFirst off, I think this paper is potentially above the accept threshold. The ideas presented are interesting and the results are potentially interesting as well. However, I have some reservations, a significant portion of which stem from not understanding aspects of the proposed approach and theoretical results, as outlined below.\n\n\n\nThe algorithm design and theoretical results in the appendix could be made substantially more rigorous. Specifically:\n\n-- basic notations such as regret (in Theorem 1), the total reward (J), Q-value (Q), and value function (V) are not defined. While these concepts are fairly standard, it would be highly beneficial to define them formally. \n\n-- I'm not convinced that the \"terms in the parentheses\" (Eq. 7) are \"exactly the contextual bandit cost\". I would like to see a more rigorous derivation of the connection. For instance, one could imagine that the policy disadvantage should be the difference between the residual costs of the bandit algorithm and the reference policy, rather than just the residual cost of the bandit algorithm. \n\n-- I'm a little unclear in the proof of Theorem 1 where Q(s,pi_n) from Eq 7 fits into Eq 8.\n\n-- The residual cost used for CB.update depends on estimated costs at other time steps h!=h_dev. Presumably, these estimated costs will change as learning progresses. How does one reconcile that? I imagine that it could somehow work out using a bandit algorithm with adversarial guarantees, but I can also imagine it not working out. I would like to see a rigorous treatment of this issue.\n\n-- It would be nice to see an end-to-end result that instantiates Theorem 1 (and/or Theorem 2) with a contextual bandit algorithm to see a fully instantiated guarantee. \n\n\n\nWith regards to the algorithm design itself, I have some confusions:\n\n-- How does one create x in practice? I believe this is described in Appendix H, but it's not obvious. \n\n-- What happens if we don't have a good way to generate x and it must be learned as well? I'd imagine one would need larger RNNs in that case. \n\n-- If x is actually learned on-the-fly, how does that impact the theoretical results?\n\n-- I find it curious that there's no notion of future reward learning in the learning algorithm. For instance, in Q learning, one directly models the the long-term (discounted) rewards during learning. In fact, the theoretical analysis talks about advantage functions as well. It would be nice to comment on this aspect at an intuitive level.\n\n\n\nWith regards to the experiments:\n\n-- I find it very curious that the results are so negative for using only 1-dev compared to multi-dev (Figure 9 in Appendix). 
Given that much of the paper is devoted to 1-dev, it's a bit disappointing that this issue is not analyzed in more detail, and furthermore the results are mostly hidden in the appendix.\n\n-- It's not clear if a reference policy was used in the experiments and what value of beta was used.\n\n-- Can the authors speculate about the difference in performance between the RL and bandit structured prediction settings? My personal conjecture is that the bandit structured prediction settings are more easily decomposable additively, which leads to a greater advantage of the proposed approach, but I would like to hear the authors' thoughts.\n\n\n\nFinally, the overall presentation of this paper could be substantially improved. In addition to the above uncertainties, some more points are described below. I don't view these points as \"deal breakers\" for determining accept/reject.\n\n-- This paper uses too many examples, from part-of-speech tagging to credit assignment in determining paths. I recommend sticking to one running example, which substantially reduces context switching for the reader. In every such example, there are extraneous details are not relevant to making the point, and the reader needs to spend considerable effort figuring that out for each example used. \n\n-- Inconsistent language. For instance, x is sometimes referred to as the \"input example\", \"context\" and \"features\".\n\n-- At the end of page 4, \"Internally, ReslopePolicy takes a standard learning to search step.\" Two issues: 1) ReslopePolicy is not defined or referred to anywhere else. 2) is the remainder of that paragraph a description of a \"standard learning to search step\"?\n\n-- As mentioned before, Regret is not defined in Theorem 1 & 2.\n\n-- The discussion of the high-level algorithmic concepts is a bit diffuse or lacking. For instance, one key idea in the algorithmic development is that it's sufficient to make a uniformly random deviation. Is this idea from the learning to search literature? If so, it would be nice to highlight this in Section 2.2.\n\n", "This comment acknowledges the author response. My official review has been edited.", "Hmm, I'm not sure the solution is that simple.\n\nCB.cost is predicting the advantage cost of pi^mix or pi^learn, both of which are evolving functions because the policy is learning over time. Hence, I don't see a realizability assumption as reasonable except for characterizing the CB.cost of the final pi you learn. \n\nAs for the regret analysis, there is an issue with two interacting online learning reductions, one for learning CB.Cost and one for learning Pi (i.e., CB.Act). The regret analysis of CB.Act will depend on the convergence of CB.Cost and vice versa. \n\nThis issue arises in other settings:\n-- Learnability of (approximate) Nash equilibria in two-player zero-sum games by using two no-regret online learning algorithms for the two players. 
The convergence analysis of each player depends on the convergence of the other player.\n[1] http://www.cs.cmu.edu/~avrim/ML07/lect1028-1102.pdf\n\n\n-- Convergence analysis in online learning of GANs (where both the generator and discriminator are trained via online learning): \n[2] https://arxiv.org/abs/1706.03269\n\n-- Convergence analysis of sparring-style reductions of the dueling bandits problem: \n[3] https://arxiv.org/abs/1502.06362\n[4] https://arxiv.org/abs/1705.00253\n\nIn [1] and [3], one resorts to using online learning algorithms with adversarial guarantees (which includes settings where the \"environment\" is influenced by another online learning algorithm). \n\nIn [2] and [4], the authors are able to more carefully analyze the structure of the interaction, and do not have to resort to the adversarial setting (online learning algorithms with adversarial guarantees can be very inefficient in practice).\n\nIn lieu of more carefully analyzing the interaction between these two online learning procedures, I suspect you'll have to resort to online learning algorithms (for CB.Cost and CB.Act) that have guarantees in the adversarial setting. I think that will probably work out, but this whole discussion needs to be much more thoroughly fleshed out in the paper.", "Thank you for asking about this; indeed, you're right, there's a missing term. In going from Eq 7 to Eq 8 as it stands right now, we're assuming that we have access to exact quantities, which is not actually the case in practice. In order to account for this, we need to add an additional term \\epsilon_{CS} that captures the regret of the cost-sensitive learner. This will then be an additive term in Eq 8. Under a realizability assumption this will go to zero over time, so the impact on Theorem 1 is that an additional realizability assumption is required, or (probably better) to explicitly pull \\epsilon_{CS} out as an additional approximation error term in the final bound. We really appreciate you catching this!\n", "I'm still confused on this point. The terms in the (...) in Eq.7 uses the \"true\" Q values. Whereas the RESLOPE algorithm uses the CB.cost function (Line 17 & Line 20 in Alg 1). As far as I can tell, CB.cost is an **estimate** of the (dis)advantage. Thus, I don't understand how \"is exactly the expected value of the target of the contextual bandit cost\", as stated in the proof of Theorem 1. Are you saying that, CB.cost used in Line 20 of Alg 1 is an unbiased estimate?\n\nEverything else about the paper seems OK to me, modulo polishing.", "The authors propose a new episodic reinforcement learning algorithm based on contextual bandit oracles.\nThe key specificity of this algorithm is its ability to deal with the credit assignment problem by learning automatically a progressive \"reward shaping\" (the residual losses) from a feedback that is only provided at the end of the epochs.\n\nThe paper is dense but well written. \n\nThe theoretical grounding is a bit thin or hard to follow.\nThe authors provide a few regret theoretical results (that I did not check deeply) obtained by reduction to \"value-aware\" contextual bandits.\n\nThe experimental section is solid. The method is evaluated on several RL environments against state of the art RL algorithms. 
It is also evaluated on bandit structured prediction tasks.\nAn interesting synthetic experiment (Figure 4) is also proposed to study the ability of the algorithm to work on both decomposable and non-decomposable structured prediction tasks.\n\n\nQuestion 1: The credit assignment approach you propose seems way more sophisticated than eligibility traces in TD learning. But sometimes old and simple methods are not that bad. Could you develop a bit on the relation between RESLOPE and eligibility traces ?\n\nQuestion 2: RESLOPE is built upon contextual bandits which require a stationary environment. Does RESLOPE inherit from this assumption?\n\n\nTypos:\npage 1 \n\"scalar loss that output.\" -> \"scalar loss.\"\n\", effectively a representation\" -> \". By effective we mean effective in term of credit assignment.\"\npage 5\n\"and MTR\" -> \"and DR\"\npage 6\n\"in simultaneously.\" -> ???\n\".In greedy\" -> \". In greedy\"\n", "The authors present a new RL algorithm for sparse reward tasks. The work is fairly novel in its approach, combining a learned reward estimator with a contextual bandit algorithm for exploration/exploitation. The paper was mostly clear in its exposition, however some additional information of the motivation for why the said reduction is better than simpler alternatives would help. \n\nPros\n1. The results on bandit structured prediction problems are pretty good\n2. The idea of a learnt credit assignment function, and using that to separate credit assignment from the exploration/exploitation tradeoff is good. \n\nCons: \n1. The method seems fairly more complicated than PPO / A2C, yet those methods seem to perform equally well on the RL problems (Figure 2.). It also seems to be designed only for discrete action spaces.\n2. Reslope Boltzmann performs much worse than Reslope Bootstrap, thus having a bag of policies helps. However, in the comparison in Figures 2 and 3, the policy gradient methods dont have the advantage of using a bag of policies. A fairer comparison would be to compare with methods that use ensembles of Q-functions. (like this https://arxiv.org/abs/1706.01502 by Chen et al.). The Q learning methods in general would also have better sample efficiency than the policy gradient methods.\n3. The method claims to learn an internal representation of a denser reward function for the sparse reward problem, however the experimental analysis of this is pretty limited (Section 5.3). It would be useful to do a more thorough investigation of whether it learnt a good credit assignment function in the games. One way to do this would be to check the qualitative aspects of the function in a well understood game, like Blackjack.\n\nSuggestions:\n1. What is the advantage of the method over a simple RL method that predicts a reward at every step (such that the dense rewards add up to match the sparse reward for the episode), and uses this predicted dense reward to perform RL? This, and also a bigger discussion on prior bandit learning methods like LOLS will help under the context for why we’re performing the reduction stated in the paper. \n\nSignificance: While the method is novel and interesting, the experimental analysis and the explanations in the paper leave it unclear as to whether its significant compared to prior work.\n\nRevision: I thank the authors for addressing some of my concerns. 
The comparison with relative gain of bootstrap wrt ensemble of policies still needs more thorough experimentation, but the approach is novel and as the authors point out, does improve continually with better Contextual Bandit algorithms. I update my review to 6. ", "The authors appreciate the reviewer’s suggestions for improving the overall exposure of the paper. In order to make it easier for reviewers’ to track the changes we kept the structure largely consistent with the original submission, but we’ll take all of these comments into account in the final version.\n\n@ AnonReviewer3\nThanks for the clarification suggestions on the analysis; we can add explicit definitions of J, Q and V in the background material. The terms in the parentheses are the CB costs because these are exactly the residuals computed and shown as costs to the CB algorithm by construction (essentially the analysis says exactly what these costs should be). We will try to find a way to make this clearer. The issue of non-stationarity is discussed below in greater detail.\n\nList of changes in this version:\n\n1) Extended the discussion sections (section 6) to include some of the open problems and comments highlighted by the reviewers;\n2) Added Appendix K. This appendix includes experiments performed for the analysis of the loss representation for the grid world environment;\n3) Fixed all the typos highlighted by the reviewers;\n4) Updated Appendix H to include the set of values used for tuning the roll-out probability beta.", "[1] Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume ́, III, and John Langford. Learning to search better than your teacher. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pp. 2058–2066. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045337. \n[2] Amr Sharaf and Hal Daume ́, III. Structured prediction via learning to search under bandit feedback. In Proceedings of the 2nd Workshop on Structured Prediction for Natural Language Processing, pp. 17–26, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W17-4304.\n[3] Miroslav Dud ́ık, Dumitru Erhan, John Langford, and Lihong Li. Doubly robust policy evaluation and optimization. Statist. Sci., 29(4):485–511, 11 2014. doi: 10.1214/14-STS500. URL https: //doi.org/10.1214/14-STS500.\n\n", "Authors’ Response to highlighted cons:\n\nRESLOPE is more complicated than PPO / A2C\n\nWe suppose this depends on how \"complicated\" is measured. Given a known, fixed contextual bandit algorithm, RESLOPE becomes quite straightforward to implement, certainly more simple (in lines of code) than PPO/A2C would be if you did not have access to, for instance, an autodiff toolkit. Given good lower-level abstractions (autodiff, CB, etc.), both are quote straightforward to implement. Furthermore, RESLOPE comes with significant advantage over PPO/A2C: RESLOPE continually improves as better contextual bandit algorithms become available, a property lacked by PPO/A2C. RESLOPE also fares well empirically, and comes with some nice theoretical guarantees (which, for instance, A2C lacks).\n\t\n\nRESLOPE and Continuous Action Spaces\n\nIt’s true that RESLOPE is designed for discrete action spaces. Extension to continuous action spaces remains an open problem. 
We have updated the discussion section (section 6) to include this extension as a future work.\n\nComparison with Ensemble Learning Methods\n\nThis is a great question, thank you for raising it! Indeed, the original submission did not adequately separate gains from more complex representation (bag of policies) and alternative estimation methods (bootstrap).\n\nTo address this, we have done an addition set of experiments to answer the following question empirically: what is the relative gain of bootstrap exploration with respect to using an ensemble of policies.\n\nEnsemble exploration trains a bag of multiple policies simultaneously. Each policy in the bag generates a Boltzmann probability distribution over actions. These probability distributions are aggregated by averaging. An action is sampled from the aggregated distribution. This has the property of identical network representation, but not using the Bootstrap estimation method.\n\nThe result is that in the MTR setting, the Ensemble method is worse than Bootstrap by factors of 3.52, 0.757 and 0.815 respectively on the first three RL tests and, surprisingly, better by a factor of 6.39 on the last. We plan to complete these experiments with more rigor, and extend to the SP setting, in a final version.\n\nEvaluating the learned loss representation for a well-understood RL Environment\n\nWe added additional experiments for evaluating the learned loss representation in the grid world reinforcement \nlearning environment (Appendix K). This experiment is akin to the experimental setting in section 5.3, however\nit’s performed on the grid world RL environment, where the quantitative aspects of the loss function is well understood. Results are very similar to the structured prediction setting (section 5.3). Performance is better when the loss is additive vs non-additive.\n\nAuthors’ Response to proposed suggestions:\n\nRESLOPE & Reward Prediction at Every Step\n\nWe’re not aware of a different way for learning the reward in every time step without computing the residual loss as we do in RESLOPE. After estimating the residual losses, RESLOPE reduces the problem to a contextual bandit oracle. This is crucial for accounting for the exploration probability and is necessary for obtaining an unbiased and convergent estimates for the loss. It’s not clear how standard RL can account for the exploration probability when the estimated rewards is used instead of the true reward values, and thus, we didn’t consider this approach in our experiments. (But we're open to suggestions!)\n\nRESLOPE vs LOLS\n\nBoth RESLOPE and the bandit version of LOLS (Chang et al., 2015) aim to learn from sparse reward signals by building on the bandit learning to search frameworks. As highlighted in the discussion section (Section 6), they differ significantly in both theory and practice:\n The “bandit” version of LOLS was analyzed theoretically but not empirically in the original paper; Sharaf & Daumé (2017) found that it failed to learn empirically;\nRESLOPE learns a representation for the episodic loss as a decomposition over time-steps, while LOLS learns directly from the episodic loss signal, this is prone to high variance and doesn’t work in practice (Sharaf & Daumé 2017);\nRESLOPE separates the problem of credit assignment from the exploration problem via a reduction to a contextual bandit oracle. This enables the usage of better variance reduction techniques (e.g. Doubly Robust cost estimation & Multi-task Regression) as well as different exploration algorithms (e.g. 
bootstrap exploration). LOLS can only use Inverse Propensity Scoring and greedy exploration.", "Question 1: RESLOPE and Eligibility Traces\n\nBoth RESLOPE and eligibility trace algorithms tackles the problem of credit assignment when learning by interaction with the environment. In eligibility trace algorithms, e.g. TD(λ), a state is eligible for credit assignment if it was recently visited, with the eligibility declining over time [1]. In our episodic setting, our notion of eligibility decay is \"the end of the episode\": any reward from this episode is eligible, and reward from other episodes is not. The \"degree\" of eligibility is most similar to the probability of the exploration event which created the observation (the deviation). This is particularly important for getting unbiased & convergent estimates.\n\nQuestion 2: RESLOPE and Non-stationary Environments\n\nThank you for raising this point: we were remiss to not include this in the initial draft and have now added a bit of discussion in the last section. The issue pointed out here is that because the policy is changing, the reward decomposition is changing, so the costs that the CB algorithm sees are also changing. While many CB algorithms operate effectively under shifting distributions of x (e.g. most online CB algorithms), many cannot work with the \"label distribution\" shifts. There has been some work on CB in an adversarial environment, but to our knowledge none of these algorithms is efficient. It seems likely that the RESLOPE setting is probably not as bad as full adversarial, and perhaps something could be done in the middle, but this is still an open question.\n\n[1] Satinder P. Singh and Richard S. Sutton, Reinforcement learning with replacing eligibility traces, pp. 123–158, Springer US, Boston, MA, 1996.\n", "How does RESLOPE create x?\n\nRESLOPE learns a representation for the input x on the fly using a neural network architecture as described in Appendix H. We start off with a simple feature representation in all the problems and the model learns a better representation using a neural network architecture. We’d appreciate any comments regarding the clarity of this section and we’ll incorporate any suggestions in the final version.\n\nAs a recap: For English POS tagging and dependency parsing we use 300 dimensional word embeddings, 300 dimensional 1 layer LSTM, and 2 layer 300 dimensional RNN policy; for the Chinese POS tagging: we use 300 dimensional word embeddings, 50 dimensional two layer LSTM, one layer 50 dimensional RNN policy. For reinforcement learning, we chose a two layer RNN policy with 20 dimensional vectors. We start off with a simple initial state representation and learn a better representation using the policy network. The initial state representation is task dependant. For instance, in cartpole, the state is represented by a four dimensional vector: [position of cart, velocity of cart, angle of pole, rotation rate of pole].\n\nWhat happens if we don't have a good way to generate x and it must be learned as well?\n\nThis is the case in all our experiments. We start-off with simple features and learn a better representation on the fly using a neural network architecture. For structured prediction tasks, the simple features are just the word indices in the dictionary, we learn word embedding for these words and keep track of the state using an RNN architecture (as described above). 
For RL tasks we start off by simple features of the current state and feed these features to an RNN network to compute the final input x. \n\nIf x is learned on the fly, how does that impact the theoretical results?\n\nIn the single deviation case, one can think of the \"x\" used at the deviation point as the result of applying a deep (unrolled) neural network to the base features (eg word indices). The contextual bandit problem, then, is to learn that neural network well. This basically reduces the question to: are there good CB algorithms for learning neural networks. But the analysis for RESLOPE holds.\n\nIn the multi-deviation case, things are much more complicated. In fact, this is one of the things that blocked us from a good analysis in the multi-deviation setting. The problem is that if you deviate at steps 2 and 5, what might be good for improving the reward prediction at step 2 could be bad for step 5 or vice versa, because these two decisions are tied through the network structure as well as the action sequence. (This issue also arises in other learning to search algorithms, like CPI and Searn, which effectively use a sufficiently small learning rate the ensure that there's only one deviation per episode.)\n\t\n\nModeling Notions of future Reward\n\nRather than modeling the Q-function, RESLOPE aims at modeling the advantage function instead, which could be easier to learn in several cases. Learning either the Q-function or the advantage function is sufficient for extracting a greedy policy. Lemma 1 shows that the difference in total loss between two policies can always be computed exactly as a sum of per-time-step advantages of one over the other. We chose to learn the advantages rather than Q-functions as it might be easier to learn and more local. For example, in POS tagging, learning advantages corresponds to learning whether or not the policy made a prediction mistake at a single word which is much easier to learn than the Q-function which requires keeping track of the number of mistakes made from the beginning of the sequence.\n\nReference Policy Used & Value for Beta\n\nFor the structured prediction experiments, the reference policy is a pre-trained model on supervised data (Appendix G). The roll-out probability β is a hyper-parameter that we tune along all the other hyperparameters as described in Appendix H. We pick the best value for β from the set: {0.0, 0.5, 1.0}. \n\nFor the reinforcement learning experiments, we don’t assume access to a reference policy and the roll-out probability β is always set to zero.\n\nNote, though, that in the multi-deviation algorithm, there is not a separate notion of a \"rollout\" policy, like there is in the single-deviation setting.\n\t\t\t\nDifference in performance between the RL and Structured Prediction\n \nThis is a good question that unfortunately we don't have a good answer to; we are particularly confused by the poor performance of RESLOPE on cartpole, which is the only place where its behavior is really subpar to even simple approaches like reinforce with baseline (reinforce without a baseline fails quite poorly here, much worse than RESLOPE). This could partially be because RESLOPE came out of a line of work focusing on structured prediction and so the algorithmic style simply is a better fit there, but that's not at all a convincing answer. 
More work is needed here.\n\n", "It’s true that the empirical results for the one-step deviation setting is are worse (particularly in terms of the number of samples needed to learn) than doing multiple deviations. While we don’t have a theoretical analysis for the multi-deviation case, empirically, we found this to be crucial empirically. Although the generated samples for the same episode are not independent, this is made-up for by the huge increase in the number of available samples for training. This is a case where there is a gap between what we can prove theoretically and what works best in practice. We can restructure the outline of the paper to promote the display of the 1-step deviation results on earlier exposure. \n" ]
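The response above invokes the paper's Lemma 1, which matches the standard performance-difference identity; in the usual notation, with $J$ the expected episodic loss, $Q^{\pi'}$ and $V^{\pi'}$ the action-value and value functions of $\pi'$, and the expectation over trajectories rolled out by $\pi$ (the paper's sign conventions may differ):

$$
J(\pi) - J(\pi') \;=\; \mathbb{E}_{\tau \sim \pi}\Big[\sum_{h=1}^{H} A^{\pi'}(s_h, a_h)\Big],
\qquad A^{\pi'}(s, a) = Q^{\pi'}(s, a) - V^{\pi'}(s).
$$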
[ 7, -1, -1, -1, -1, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, 2, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJNMYceCW", "Hy1aVOp7z", "ryRaK9sHM", "HJ3M8diHM", "rJeBU_a7M", "iclr_2018_HJNMYceCW", "iclr_2018_HJNMYceCW", "iclr_2018_HJNMYceCW", "Bk6bEOT7f", "r1ajkbceM", "ByCeFUNgz", "r14M3-KxM", "r14M3-KxM" ]
iclr_2018_SyOK1Sg0W
Adaptive Quantization of Neural Networks
Despite the state-of-the-art accuracy of Deep Neural Networks (DNN) in various classification problems, their deployment onto resource-constrained edge computing devices remains challenging due to their large size and complexity. Several recent studies have reported remarkable results in reducing this complexity through quantization of DNN models. However, these studies usually do not consider the changes in the loss function when performing quantization, nor do they take into account the differing importance of DNN model parameters to accuracy. We address these issues in this paper by proposing a new method, called adaptive quantization, which simplifies a trained DNN model by finding a unique, optimal precision for each network parameter such that the increase in loss is minimized. The optimization problem at the core of this method iteratively uses the loss function gradient to determine an error margin for each parameter and assigns it a precision accordingly. Since this problem uses linear functions, it is computationally cheap and, as we will show, has a closed-form approximate solution. Experiments on the MNIST, CIFAR, and SVHN datasets showed that the proposed method can achieve near or better than state-of-the-art reduction in model size with similar error rates. Furthermore, it can achieve compressions close to floating-point model compression methods without loss of accuracy.
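The abstract's recipe (use the loss gradient to set a per-parameter error margin, then give each parameter just enough bits to stay within it, pruning in the extreme) can be illustrated with a first-order sketch. This is not the paper's closed-form solution; the simple budget-splitting rule below is an assumption made purely for illustration.

```python
import numpy as np

def illustrative_bit_assignment(grads, loss_budget, max_bits=32):
    """First-order illustration: spread a loss budget so that sum_i |g_i| * tau_i
    stays at the budget, then require the quantization step 2**-b_i <= tau_i.
    Parameters with large margins get few (or zero) bits, i.e. are pruned."""
    g = np.abs(np.asarray(grads, dtype=float)) + 1e-12
    tau = loss_budget / (g * g.size)                 # per-parameter error margin
    tau = np.clip(tau, 2.0 ** -max_bits, 1.0)
    bits = np.ceil(np.log2(1.0 / tau)).astype(int)   # 0 bits == pruned
    return bits
```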
accepted-poster-papers
Given the changes to the paper, the reviewers agree that the paper meets the bar for publication at ICLR. There are some concerns regarding the practical impact on CPUs and GPUs. I ask the authors to clearly discuss the impact on different hardware. One can argue that if adaptive quantization techniques are helpful, then there is a chance that future hardware will support them. All of the experiments are conducted on toy datasets; please consider including some experiments on ImageNet as well.
train
[ "rkIpwEslz", "Hkcb6tG-M", "HkrUKW5eM", "B1ck33hXM", "BkFMZah7f", "SJxPAhhXG", "BJSHpnnmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I have read the responses to the concerns raised by all reviewers. I find the clarifications and modifications satisfying, therefore I keep my rating of the paper to above acceptance threshold.\n\n-----------------\nORIGINAL REVIEW:\n\nThe paper proposes a method for quantizing neural networks that allows weights to be quantized with different precision depending on their importance, taking into account the loss. If the weights are very relevant, it assigns more bits to them, and in the other extreme it does pruning of the weights.\n\nThis paper addresses a very relevant topic, because in limited resources there is a constrain in memory and computational power, which can be tackled by quantizing the weights of the network. The idea presented is an interesting extension to weight pruning with a close form approximate solution for computing the adaptive quantization of the weights.\n\nThe results presented in the experimental section are promising. The quantization is quite cheap to compute and the results are similar to other state-of-the-art quantization methods. \nFrom the tables and figures, it is difficult to grasp the decrease in accuracy when using the quantized model, compared to the full precision model, and also the relative memory compression. It would be nice to have this reference in the plots of figure 3. Also, it is difficult to see the benefits in terms of memory/accuracy compromise since not all competing quantization techniques are compared for all the datasets.\nAnother observation is that it seems from figure 2 that a lot of the weights are quantized with around 10 bits, and it is not clear how the compromise accuracy/memory can be turned to less memory, if possible. It would be interesting to know an analogy, for instance, saying that this adaptive compression in memory would be equivalent to quantizing all weights with n bits.\n\nOTHER COMMENTS:\n\n-missing references in several points of the paper. For instance, in the second paragraph of the introduction, 1st paragraph of section 2.\n\n- few typos:\n*psi -> \\psi in section 2.3\n*simply -> simplify in proof of lemma 2.2\n*Delta -> \\Delta in last paragraph of section 2.2\n*l2 -> L_2 or l_2 in section 3.1 last paragraph.", "Revised Review:\n\nThe authors have addressed most of my concerns with the revised manuscript. I now think the paper does just enough to warrant acceptance, although I remain a bit concerned that since the benefits are only achievable with customized hardware, the relevance/applicability of the work is somewhat limited.\n\nOriginal Review:\n\nThe paper proposes a technique for quantizing the weights of a neural network, with bit-depth/precision varying on a per-parameter basis. The main idea is to minimize the number of bits used in the quantization while constraining the loss to remain below a specified upper bound. This is achieved by formulating an upper bound on the number of bits used via a set of \"tolerances\"; this upper bound is then minimized while estimating any increase in loss using a first order Taylor approximation.\n\nI have a number of questions and concerns about the proposed approach. First, at a high level, there are many details that aren't clear from the text. Quantization has some bookkeeping associated with it: In a per-parameter quantization setup it will be necessary to store not just the quantized parameter, but also the number of bits used in the quantization (takes e.g. 
4-5 extra bits), and there will be some metadata necessary to encode how the quantized value should be converted back to floating point (e.g., for 8-bit quantization of a layer of weights, usually the min and max are stored). From Algorithm 1 it appears the quantization assumes parameters in the range [0, 1]. Don't negative values require another bit? What happens to values larger than 1? How are even bit depths and associated asymmetries w.r.t. 0 handled (e.g., three bits can represent -1, 0, and 1, but 4 must choose to either not represent 0 or drop e.g. -1)? None of these details are clearly discussed in the paper, and it's not at all clear that the estimates of compression are correct if these bookkeeping matters aren't taken into account properly.\n\nAdditionally the paper implies that this style of quantization has benefits for compute in addition to memory savings. This is highly dubious, since the method will require converting all parameters to a standard bit-depth on the fly (probably back to floating point, since some parameters may have been quantized with bit depth up to 32). Alternatively custom GEMM/conv routines would be required which are impossible to make efficient for weights with varying bit depths. So there are likely not runtime compute or memory savings from such an approach.\n\nI have a few other specific questions: Are the gradients used to compute \\mu computed on the whole dataset or minibatches? How would this scale to larger datasets? I am confused by the equality in Equation 8: What happens for values shared by many different quantization bit depths (e.g., representing 0 presumably requires 1 bit, but may be associated with a much finer tolerance)? Should \"minimization in equation 4\" refer to equation 3?\n\nIn the end, while do like the general idea of utilizing the gradient to identify how sensitive the model might be to quantization of various parameters, there are significant clarity issues in the paper, I am a bit uneasy about some of the compression results claimed without clearer description of the bookkeeping, and I don't believe an approach of this kind has any significant practical relevance for saving runtime memory or compute resources. ", "The authors present an interesting idea to reduce the size of neural networks via adaptive compression, allowing the network to use high precision where it is crucial and low precision in other parts. The problem and the proposed solution is well motivated. However, there are some elements of the manuscript that are hard to follow and need further clarification/information. These need to definitely be addressed before this paper can be accepted.\n\nSpecific comments/questions:\n- Page 1: Towards the bottom, in the 3rd to last line, reference is missing.\n- Page 1: It is a little hard to follow the motivation against existing methods.\n- Page 2: DenseNets and DeepCompression need citations\n- Lemma 2.1 seems interesting - is this original work? This needs to be clarified.\n- Lemma 2.2: Reference to Equation 17 (which has not been presented in the manuscript at this point) seems a little confusing and I am unable to following the reasoning and the subsequent proof which again refers to Equation 17.\n- Alg 2: Should it be $\\Delta$ or $\\Delta_{k+1}$? Because in one if branch, we use $\\Delta$, in the other, we use the subscripted one.\n- Derivation in section 2.3 has some typographical errors.\n- What is $d$ in Equation 20 (with cases)? 
Without this information, it is unclear how the singular points are handled.\n- Page 6, first paragraph of Section 3: The evaluation is a little confusing - when is the compression being applied during the training process, and how is the training continued post-compression? What does each compression 'pass' constitute of?\n- Figure 1b: what is the 'iteration' on the horizontal axis, is it the number of iterations of Alg3 or Alg2? Hoping it is Alg3 but needs to be clarified in the text.\n- Section 3: What about compression results for CIFAR and SVNH? ", "Thank you for your insightful comments. We have modified the manuscript based on the questions from the reviewer. The changes and additions to the paper have been highlighted in blue. Below we discuss each of the questions in more detail one-by-one.\n\nQuestion: I have a number of questions and concerns about the proposed approach. First, at a high level, there are many details that aren't clear from the text. Quantization has some bookkeeping associated with it: In a per-parameter quantization setup it will be necessary to store not just the quantized parameter, but also the number of bits used in the quantization (takes e.g. 4-5 extra bits), and there will be some metadata necessary to encode how the quantized value should be converted back to floating point (e.g., for 8-bit quantization of a layer of weights, usually the min and max are stored). From Algorithm 1 it appears the quantization assumes parameters in the range [0, 1]. Don't negative values require another bit? What happens to values larger than 1? How are even bit depths and associated asymmetries w.r.t. 0 handled (e.g., three bits can represent -1, 0, and 1, but 4 must choose to either not represent 0 or drop e.g. -1)? None of these details are clearly discussed in the paper, and it's not at all clear that the estimates of compression are correct if these bookkeeping matters aren't taken into account properly. \n\nAnswer: We agree with the reviewer that it is important to evaluate the potential overhead of bookkeeping. However, we should also have in mind that bookkeeping has an intricate relationship with the target hardware, which may lead to radically different results on different hardware platforms (ranging from 0 to ~60%). For example, our experiments show that on specialized hardware, such as the one designed by Albericio et al (2017) for processing variable bit width CNN, we can fully offset all bookkeeping overheads of storing quantization depths, while CPU/GPU may require up to 60% additional storage. We will study this complex relationship separately, in our future work, and in the context of hardware implementation. In this paper, we limit the scope to algorithm analysis, independent of underlying hardware architectures. We note that in this analysis, we have evaluated the metadata as well as the additional sign bits. The metadata overhead is negligible (about 4 bytes per layer) due to the balanced quantization of algorithm 1 which divides the range [0,1] into equally sized partitions and assigns a single bit to each parameter. As we discuss in the answer to the next question, this scheme eliminates the need to convert parameters back to floating-point, and computations can be performed directly on the quantized values. For example, the 5-bit signed value 01011, for example, represents 2^(-1)+2^(-3)+2^(-4)=0.6875 (the initial 0 bit represents a positive value), which can be easily multiplied with other values using fixed-point shifts and additions. 
If it is necessary to have parameters in a larger range, say [-S, S], a scale value like S (4 bytes of metadata) could be allocated for each layer, that is applied to the output of that layer. We have clarified these points in the updated version of the paper, in section 2 and section 3. \n\nAlbericio, Jorge, et al. \"Bit-pragmatic deep neural network computing.\" Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture. ACM, 2017. \n\n \n\nQuestion: Additionally the paper implies that this style of quantization has benefits for compute in addition to memory savings. This is highly dubious, since the method will require converting all parameters to a standard bit-depth on the fly (probably back to floating point, since some parameters may have been quantized with bit depth up to 32). Alternatively custom GEMM/conv routines would be required which are impossible to make efficient for weights with varying bit depths. So there are likely not runtime compute or memory savings from such an approach.  \n\nAnswer: We agree that on CPU/GPU interpreting variable-bit width parameters may incur computational costs. However, our quantization scheme significantly reduces the necessary computation on our target platforms, that is, specialized hardware like Alberricio et al (2017) or configurable hardware like FPGAs. These platforms can directly process the variable-bit width, fixed-point parameters without the need to convert them into floating point, and can implement custom computation units to efficiently perform matrix multiplication/convolutions by taking advantage of the small quantization depths of the parameters in the quantized model. We note that in our experiments, parameters are often quantized with far fewer bits than 32, with little to no accuracy loss. Thus, our approach can significantly accelerate performance on this class of hardware by minimizing the required computations. We have clarified this in section 2. \n", "We appreciate your insightful comments. In the updated version of the paper, we have fixed the missing references and typos, and clarified the evaluation methodology as well as the other points mentioned by the reviewer. The changes have been highlighted in blue. Here, we address specific questions by the reviewer one-by-one.\n\n1. Page 1: Towards the bottom, in the 3rd to last line, reference is missing. \n\n- Added references Hubara (2016a) and Han (2015). \n\n\n2. Page 1: It is a little hard to follow the motivation against existing methods. \n\n- Modified the discussion in the introduction (highlighted blue).\n\n\n3. Page 2: DenseNets and DeepCompression need citations\n\n- Added references Huang (2017) and Han (2015) in section 1. \n\n\n4. Lemma 2.1 seems interesting - is this original work? This needs to be clarified.  \n\n- Lemma 2.1 is an original contribution of the paper. We added clarification in Section 2.  \n\n\n5. Lemma 2.2: Reference to Equation 17 (which has not been presented in the manuscript at this point) seems a little confusing and I am unable to following the reasoning and the subsequent proof which again refers to Equation 17.  \n\n- We revised the cross references in the proof of lemma 2.2. The constraint refers to the definitions in equations 11 and 12. \n\n\n6. Alg 2: Should it be $\\Delta$ or $\\Delta_{k+1}$? Because in one if branch, we use $\\Delta$, in the other, we use the subscripted one.  \n\n- Added the subscript in algorithm 2. \n\n\n7. Derivation in section 2.3 has some typographical errors.  
\n\n- Fixed the typographical errors. \n\n\n8. What is $d$ in Equation 20 (with cases)? Without this information, it is unclear how the singular points are handled.  \n\n- $d$ in equation 20 refers to the difference between the loss bound $\\overline{l}$ and the loss in the current iteration of the algorithm $l(W_k)$: $d = \\overline{l}-l(W_k)$. We have modified equation 20 accordingly. \n\n\n9. Page 6, first paragraph of Section 3: The evaluation is a little confusing  \n\n- Revised the first paragraph of section 3 to clarify the process of evaluation (highlighted blue). \n\n\n10. when is the compression being applied during the training process, and how is the training continued post-compression? What does each compression 'pass' constitute of?  \n\n- We added additional explanation in section 3 regarding when compression is performed and what a pass of compression constitutes. Specifically, adaptive quantization is applied to a model after the training is complete. The retraining steps after the compression are performed in full-precision, floating-point domain. Also, each pass of compression refers to a complete execution of algorithm 3. \n\n\n11. Figure 1b: what is the 'iteration' on the horizontal axis, is it the number of iterations of Alg3 or Alg2? Hoping it is Alg3 but needs to be clarified in the text.  \n\n- We clarified the definition of iteration in figure 1. Each iteration, refers to one iteration of the loop in algorithm 2. \n\n\n12. Section 3: What about compression results for CIFAR and SVNH?  \n\n- We have added the compression results for the optimal trade-off for all three datasets in the revised version (Figure 3). We have further added comparison with BinaryConnect for all datasets and shown that the original conclusions hold. That is, the proposed algorithm almost always outperforms state-of-the-art of quantization (BinaryConnect and BNN) and consistently produces competitive results.\n\n", "Thank you for your valuable comments. We have modified the paper accordingly and highlighted the changes in blue. We have also resolved the missing references and typos. Below, we discuss the points mentioned by the reviewer in detail.\n\nQuestion: From the tables and figures, it is difficult to grasp the decrease in accuracy when using the quantized model, compared to the full precision model, and also the relative memory compression. It would be nice to have this reference in the plots of figure 3.   \n\nAnswer: Thanks for pointing this out. In the revised version, we highlight the optimal trade-off between accuracy and model size for each model in Figure 3. We further report the accuracy and the reduction in the model size for these optimal models. We observe compression ratios of these optimal models equal to 64x, 35x, and 13x (corresponding to 98.4%, 97%, and 92% reductions in model size) for MNIST, CIFAR-10, and SVHN, with 0.12%, -0.02%, and 0.7% decrease in accuracy, respectively. We modified section 4 to clarify these results. \n\n \n\nQuestion: Also, it is difficult to see the benefits in terms of memory/accuracy compromise since not all competing quantization techniques are compared for all the datasets.  \n\nAnswer: In Figure 3, we have added comparisons with the BinaryConnect technique for all three datasets. This technique can often improve the accuracy of BNN with the same model size. Yet, these comparisons confirm our original results. 
That is, the proposed method almost always outperforms state-of-the-art of quantization (BinaryConnect and BNN) and consistently produces competitive results. We have modified section 4 with the discussion of these results. \n\n\n \nQuestion: Another observation is that it seems from figure 2 that a lot of the weights are quantized with around 10 bits, and it is not clear how the compromise accuracy/memory can be turned to less memory, if possible. It would be interesting to know an analogy, for instance, saying that this adaptive compression in memory would be equivalent to quantizing all weights with n bits.  \n\nAnswer: Figures 2 (a, b, c), for clarity, only show non-pruned parameters, which comprise a small portion of the original parameters of the model. Taking these parameters into account, adaptive quantization compresses MNIST, CIFAR-10, and SVHN models to equivalent of 0.03, 0.27, and 1.3 bits per parameter, respectively (results are for the optimal trade-off points highlighted in Figure 3). These are all significantly smaller or comparable to state-of-the-art of quantization, that is, BNN and BinaryConnect (1 bit per parameter). We have modified section 4 with these clarifications and updated Figure 2 (a, b, c) to include the pruned parameters for comparison with non-pruned parameters. \n\n \n\nQuestion: missing references in several points of the paper. For instance, in the second paragraph of the introduction, 1st paragraph of section 2. \n\nAnswer: Thanks. We have included the references in the revised version of the paper. ", "\nQuestion: I have a few other specific questions: Are the gradients used to compute \\mu computed on the whole dataset or minibatches? How would this scale to larger datasets?  \n\nAnswer: Gradients are calculated on minibatches. As we have specified in section 3, we use the same batch size for training and quantization to keep the computation time short. Our experiments show that this decision does not have a negative effect on the accuracy of the quantized model. Thus, as long as we choose representative batch sizes, as we do for training, the algorithm scales to larger datasets with no need for modifications. We have modified Section 3 for clarification.  \n\n \n\nQuestion: I am confused by the equality in Equation 8: What happens for values shared by many different quantization bit depths (e.g., representing 0 presumably requires 1 bit, but may be associated with a much finer tolerance)?  \n\nAnswer: This equation explores the worst case for quantization error and shows that in this case the quantization depth is bounded by negative logarithm of the tolerance. In general, we can expect the quantization depth to be smaller than this value. That is because Algorithm 1 minimizes the bit width of a parameter with respect to its tolerance. If multiple bit widths satisfy this requirement, the smallest is always chosen. For example, a parameter with the signed value equal to 0.25 can be represented by both 001 and 0010. Algorithm 1 however, will always return the former. We have modified Section 2 for clarification.  \n\n \n\nQuestion: Should \"minimization in equation 4\" refer to equation 3?  \n\nAnswer: Yes. Thank you for pointing this out. We have corrected the typo.  " ]
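For readers following the fixed-point discussion in the authors' responses above (the example in which the 5-bit signed code 01011 decodes to 2^(-1)+2^(-3)+2^(-4)=0.6875), here is a minimal sketch of that decoding convention: sign bit first, remaining bits as fractional bits. The function name is mine and the snippet only illustrates the stated convention; it is not code from the paper.

```python
def decode_sign_magnitude_fraction(bits: str) -> float:
    """Decode a sign-magnitude fixed-point code in (-1, 1): bits[0] is the
    sign (0 = positive), and bits[1:] are fractional bits with weights
    2**-1, 2**-2, ...  Matches the example quoted above: '01011' -> 0.6875."""
    sign = -1.0 if bits[0] == "1" else 1.0
    magnitude = sum(int(b) * 2.0 ** -(i + 1) for i, b in enumerate(bits[1:]))
    return sign * magnitude

assert decode_sign_magnitude_fraction("01011") == 0.6875
```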
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_SyOK1Sg0W", "iclr_2018_SyOK1Sg0W", "iclr_2018_SyOK1Sg0W", "Hkcb6tG-M", "HkrUKW5eM", "rkIpwEslz", "B1ck33hXM" ]
iclr_2018_BkUp6GZRW
Boosting the Actor with Dual Critic
This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic or Dual-AC. It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function, which is named the dual critic. Compared to its actor-critic relatives, Dual-AC has the desirable property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way of learning the critic that is directly related to the objective function of the actor. We then provide a concrete algorithm that can effectively solve the minimax optimization problem, using multi-step bootstrapping, path regularization, and a stochastic dual ascent algorithm. We demonstrate that the proposed algorithm achieves state-of-the-art performance across several benchmarks.
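As background for the saddle-point view sketched in this abstract and debated in the reviews below, the standard one-step linear-programming form of the Bellman optimality equation and its Lagrangian can be written as follows. This is textbook material given for orientation only; the paper's actual objective is the multi-step, path-regularized variant, and the discounted setting with initial distribution \mu is my assumption here.

```latex
% Standard one-step LP form of the Bellman optimality equation (background only):
\min_{V} \; (1-\gamma)\, \mathbb{E}_{s \sim \mu}\!\left[ V(s) \right]
\quad \text{s.t.} \quad
V(s) \;\ge\; r(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\left[ V(s') \right]
\quad \forall (s,a).

% Dualizing the constraints with multipliers (occupancy-like variables) \rho(s,a) \ge 0
% gives the two-player game between the dual critic V and the actor side \rho:
\min_{V} \; \max_{\rho \ge 0} \;
(1-\gamma)\, \mathbb{E}_{s \sim \mu}\!\left[ V(s) \right]
\;+\; \sum_{s,a} \rho(s,a) \Big( r(s,a) + \gamma\, \mathbb{E}_{s'}\!\left[ V(s') \right] - V(s) \Big).
```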
accepted-poster-papers
All of the reviewers agree that the paper clearly presents promising ideas in developing a novel actor-critic algorithm. The experiments do not show a significant gain over the baselines, but they support the presented ideas. I appreciated the ablation study on Dual-AC. Detailed comments: My understanding is that the x-axis in Figures 1 & 2 shows the number of iterations, each of which contains batch_size*1000 environment steps. It is more standard to show those plots in terms of the number of environment steps. Further, the optimal batch_size for different algorithms may be different, so using the same batch_size for all of the algorithms is not fair.
train
[ "HkJ6DWtgf", "Bysjjx5lG", "Hyu5lW5xf", "B1Px5vamf", "Byg6DR5QM", "Byg5DCqQM", "S1iWD09Qz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper studies a new architecture DualAC. The author give strong and convincing justifications based on the Lagrangian dual of the Bellman equation (although not new, introducing this as the justification for the architecture design is plausible).\n\nThere are several drawbacks of the current format of the paper:\n1. The algorithm is vague. Alg 1 line 5: 'closed form': there is no closed form in Eq(14). It is just an MC approximation.\nline 6: Decay O(1/t^\\beta). This is indeed vague albeit easy to understand. The algorithm requires that every step is crystal clear.\n\n2. Also, there are several format error which may be due to compiling, e.g., line 2 of Abstract,'Dual-AC ' (an extra space). There are many format errors like this throughout the paper. The author is suggested to do a careful format check.\n\n3. The author is suggested to explain more about the necessity of introducing path regularization and SDA. The current justification is reasonable but too brief.\n\n4. The experimental part is ok to me, but not very impressive.\n\nOverall, this seems to be a nice paper to me.", "The paper is well written, and the authors do an admirable job of motivating their primary contributions throughout the early portions of the paper. Each extension to the Dual Actor-Critic is well motivated and clear in context. Perhaps the presentation of these extensions could be improved by providing a less formal explanation of what each does in practice; multi-step updates, regularized against MC returns, stochastic mirror descent. \n\nThe practical implementation section losses some of this clear organization, and could certainly be clarified each part tied into Algorithm 1, and this was itself made less high-level. But these are minor gripes overall.\n\nTurning to the experimental section, I think the authors did a good job of evaluating their approach with the ablation study and comparisons with PPO and TRPO. There were a few things that jumped out to me that I was surprised by. The difference in performance for Dual-AC between Figure 1 and Figure 2b is significant, but the only difference seems to be a reduce batch size, is this right? This suggests a fairly significant sensitivity to this hyperparameter if so.\n\nReproducibility in continuous control is particularly problematic. Nonetheless, in recent work PPO and TRPO performance on the same set of tasks seem to be substantively different than what the authors get in their experiments. I'm thinking in particular of:\n\nProximal Policy Optimization Algorithms (Schulman et. al., 2017)\nMulti-Batch Experience Replay for Fast Convergence of Continuous Action Control (Han and Sung, 2017)\n\nIn both these cases the results for PPO and TRPO vary pretty significantly from what we see here, and an important one to look at is the InvertedDoublePendulum-v1 task, which I would think PPO would get closer to 8000, and TRPO not get off the ground. Part of this could be the notion of an \"iteration\", which was not clear to me how this corresponded to actual time steps. 
Most likely, to my mind, is that the parameterization used (discussed in the appendix) is improving TRPO and hurting PPO.\n\nWith these in mind I view the comparison results with a bit of uncertainty about the exact amount of gain being achieved, which may beg the question if the algorithmic contributions are buying much for their added complexity?\n\nPros:\nWell written, thorough treatment of the approaches\nImprovements on top of Dual-AC with ablation study show improvement\n\nCons:\nEmpirical gains might not be very large\n", "This paper proposes a method, Dual-AC, for optimizing the actor(policy) and critic(value function) simultaneously which takes the form of a zero-sum game resulting in a principled method for using the critic to optimize the actor. In order to achieve that, they take the linear programming approach of solving the bellman optimality equations, outline the deficiencies of this approach, and propose solutions to mitigate those problems. The discussion on the deficiencies of the naive LP approach is mostly well done. Their main contribution is extending the single step LP formulation to a multi-step dual form that reduces the bias and makes the connection between policy and value function optimization much clearer without loosing convexity by applying a regularization. They perform an empirical study in the Inverted Double Pendulum domain to conclude that their extended algorithm outperforms the naive linear programming approach without the improvements. Lastly, there are empirical experiments done to conclude the superior performance of Dual-AC in contrast to other actor-critic algorithms. \n\nOverall, this paper could be a significant algorithmic contribution, with the caveat for some clarifications on the theory and experiments. Given these clarifications in an author response, I would be willing to increase the score. \n\nFor the theory, there are a few steps that need clarification and further clarification on novelty. For novelty, it is unclear if Theorem 2 and Theorem 3 are both being stated as novel results. It looks like Theorem 2 has already been shown in \"Randomized Linear Programming Solves the Discounted Markov Decision Problem in Nearly-Linear Running Time”. There is a statement that “Chen & Wang (2016); Wang (2017) apply stochastic first-order algorithms (Nemirovski et al., 2009) for the one-step Lagrangian of the LP problem in reinforcement learning setting. However, as we discussed in Section 3, their algorithm is restricted to tabular parametrization”. Is you Theorem 2 somehow an extension? Is Theorem 3 completely new?\n\nThis is particularly called into question due to the lack of assumptions about the function class for value functions. It seems like the value function is required to be able to represent the true value function, which can be almost as restrictive as requiring tabular parameterizations (which can represent the true value function). This assumption seems to be used right at the bottom of Page 17, where U^{pi*} = V^*. Further, eta_v must be chosen to ensure that it does not affect (constrain) the optimal solution, which implies it might need to be very small. More about conditions on eta_v would be illuminating. \n\nThere is also one step in the theorem that I cannot verify. On Page 18, how is the squared removed for difference between U and Upi? The transition from the second line of the proof to the third line is not clear. 
It would also be good to more clearly state on page 14 how you get the first inequality, for || V^* ||_{2,mu}^2. \n\n\nFor the experiments, the following should be addressed.\n\n1. It would have been better to also show the performance graphs with and without the improvements for multiple domains.\n\n2. The central contribution is extending the single step LP to a multi-step formulation. It would be beneficial to empirically demonstrate how increasing k (the multi-step parameter) affects the performance gains.\n\n3. Increasing k also comes at a computational cost. I would like to see some discussions on this and how long dual-AC takes to converge in comparison to the other algorithms tested (PPO and TRPO).\n\n4. The authors concluded the presence of local convexity based on hessian inspection due to the use of path regularization. It was also mentioned that increasing the regularization parameter size increases the convergence rate. Empirically, how does changing the regularization parameter affect the performance in terms of reward maximization? In the experimental section of the appendix, it is mentioned that multiple regularization settings were tried but their performance is not mentioned. Also, for the regularization parameters that were tried, based on hessian inspection, did they all result in local convexity? A bit more discussion on these choices would be helpful. \n\nMinor comments:\n1. Page 2: In equation 5, there should not be a 'ds' in the dual variable constraint", "Thanks for the constructive reviews and comments!\n\nWe have submitted our updated manuscript with a few revisions and more experiments for clarity accordingly, including:\n\n1, Discussion about the parametrization effect w.r.t. the path regularization. \n\n2, More explanation on the benefits of the proposed several important extensions.\n\n3, More details for proofs in Appendix.\n\n4, More ablation experiments with different k = {1, 10, 50} on two more MuJoCo tasks, i.e., Swimmer-v1 and Hopper-v1, and more comparison with TRPO and PPO on Walker-v1. \n", "Thanks for the constructive suggestions. \n\nWe modified the stepsize decay form more concretely (line 6 of Alg 1). It is adjusted based on the theoretical requirement for convergence [2, 3]\n\nWe fixed the extra space after `Dual-AC'. \n\nWe added more discussion of the benefits and the necessity of the path-regularization and stochastic dual ascent in the updated version in the 2nd paragraph and 3rd paragraph in page 5, respectively. For better illustrating the necessity of path-regularization and stochastic dual ascent, we also added more empirical experiments in the ablation study part in Figure 1. \n\nFor the experiment parts, we picked the **best** implementation of the state-of-the-art TRPO and PPO as our baselines based on the recent comprehensive comparison [1]. With the best implementations of TRPO and PPO, these two algorithms consistently achieve the best performance in most of the MuJoCo tasks, beating other alternatives, e.g., DDPG and ACKTR, with significant margins in [1]. Despite such strong baselines, our Dual-AC algorithm still shows substantial gain in 5 out of 6 domains (Fig 2), with a tie in the Swimmer-v1 task. In InvertedDoublePendulum-v2, Dual-AC achieves almost 3x reward of TRPO and 4x of PPO.\n\n[1], Deep reinforcement learning that matters. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger. AAAI 2018.\n[2], Robust stochastic approximation approach to stochastic programming. 
A Nemirovski, A Juditsky, G Lan, A Shapiro. SIAM Journal on optimization 19 (4), 1574-1609.\n[3], Stochastic first-and zeroth-order methods for nonconvex stochastic programming. S Ghadimi, G Lan. SIAM Journal on Optimization 23 (4), 2341-2368.\n", "Thanks for the constructive comments.\n\nAs suggested by the reviewer, we provided further details to explain the benefits of several important extensions: path regularization (2nd paragraph on page 5), stochastic dual ascent (1st paragraph of section 4.3 on page 5), practical updates for policy (the paragraphs surrounding Eqns 16 & 17 on page 7) and critic (2nd last paragraph on page 6). \n\nThe gaps between Figs 1 and 2 are indeed mainly due to the batch size used in the algorithm. As expected, the batch size affects the variance of the gradients, thereby affecting the convergence of the algorithm. Such an effect is not unique to our algorithm and has been observed in the literature; see for example similar results for the TRPO baseline in a recent empirical study [1].\n\nFor the comparison between the TRPO and PPO, the recent empirical study [1] shows that different implementations will affect their performance a lot. Based on the evaluation results in [1], we compared our algorithm with the **best** implementation of TRPO, i.e., the original implementation by Schulman, 2015. From Table 1 and Figure 26 in [1], we can see that the best implementation of TRPO may achieve comparable or even better results comparing to PPO on several tasks. On the other hand, we used the same parametrization for all the algorithms, which may be preferable to TRPO. We follow [2] using the “iteration” in the experiments to illustrate the policy behaviors along with the number of updates in the algorithm, rather than the number of data collected for a better understanding of the algorithms in terms of each update. \n\nRe gains of our algorithm: Since the major contribution of our paper is a new algorithm, rather than an alternative parametrization, we conduct the comparison with the baseline using the same parametrizations for fairness. We did not introduce any extra complexity in terms of parameterization. In terms of updates in algorithm, although the update rule for value function needs an extra sample reweighting, the update rule for policy is much simpler than TRPO, which requires extra adjustments for policy and related parameters. Therefore, the gains are **not** achieved by added complexity.\n\n[1], Deep reinforcement learning that matters. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger, AAAI 2018.\n[2], Towards generalization and simplicity in continuous control. Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, Sham Kakade, NIPS 2017.", "We appreciate the constructive comments on both theoretical and empirical aspects by the reviewer. \n\nWe first emphasize our contributions. The major contributions of this paper are (1) the **first** establishment of the competition between actor and critic in a **multi-step** setting; and (2) a novel algorithm that is make effective thanks to several critical components we introduce, including path-regularization and stochastic dual ascent, to deal with potential numerical issues that arise when one directly solves the zero-sum game. \n\nTheory Clarification: \n1, Novelty of Theorems: Theorem 2 (one-step dual form) is indeed an extension of existing results to continuous state and action MDP. 
Theorem 3 (multi-step dual form) is one of our major contributions and is a novel result. The claim in Theorem 3 may appear natural, but its proof is a highly nontrivial generalization of the one-step case, since the convex-concave structure breaks down in the multi-step setting. We have made this clearer in the revision. \n\n2, Assumptions on value function class and choice of regularization parameter: We tried to separate the justification of path-regularization (theory) from the parametrization of value function (practice). \n\ti), Theoretically, regarding Theorem 4 and its proof, we consider the entire value function space, i.e., the nonparametric limit, without taking into account of parametrization. Hence, as long as the regularization parameter (i.e., eta) is selected appropriately, this doesn’t affect the optimality. Note that an implicit condition of eta is provided on Page 18; however, finding an explicit condition for the regularization parameter seems to be rather difficult and is beyond the scope of this work.\n\tii), Practically, we always parametrize the value function (which affects the valid range of eta) and tune the regularization parameter to achieve the best performance. \n\n3, Minor gaps in proofs: Yes, there should be a square in the proof on page 18. This does not jeopardize the rest of the proof as we only need boundedness of this term. We have fixed the issue in our revision. For the first inequality on page 14 about ||V^*||_{2, .mu}^2, it comes from the inequality E[(X+Y)^2] <= 2 * (E[X^2] + E[Y^2]), a generalization of (a+b)^2<= 2(a^2 +b^2). We have added more details to the proof. Thanks for pointing out these issues. \n\n\nExperiments Clarification: \n1, Performance comparisons with and without the improvements: In the ablation experiment part, we compared the proposed dual-AC algorithm with/without the path-regularization, and with/without multi-step on several MuJoCo tasks, including InvertedDoublePendulum, Swimmer, and Hopper. The results suggest that using path-regularization and multi-step significantly improves the performances. Detailed experimental results can be found in Figure 1.\n\n2, Effects of the length of multi-steps: We conducted additional experiments to investigate the effects of multi-step lengths. Specifically, we compared the performance with different k = {1, 10, 50}, and tested on three tasks. Better performances are observed with increasing k, which indicates that reducing the bias is indeed critical. Detailed experimental results can be found in Figure 1. \n\n3, Computation overheads of using multi-step: In terms of the computational cost, assuming the length of the trajectories are m, with a simple moving sum algorithm, we calculate all the k length partial reward sums for a trajectory in O(m) with a O(1) amortized cost to calculate each sum of rewards. This method was used in our experiments for our algorithm as well as all competitors. Comparing to TRPO and PPO, the cost for summation is the same for all algorithms and the update costs are constant for each individual algorithm regardless of the choice of k.\n\n4, Local convexity in practice: In general, using a positive eta_V coefficient with path-regularization always enhances the local convexity. For example, if V is parametrized in a linear form, as long as eta_V is not zero, local convexity will hold. A larger eta_V will result in faster convergence, at the cost of extra bias. 
It is not easy to theoretically/empirically inspect the exact local convexity condition when a complicated parametrization of V is used. In practice, we suggest to simply tune the regularization parameter, and that’s what we have done in the experiments. \n" ]
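To make concrete the cost argument in the authors' response above (all k-step partial reward sums of a length-m trajectory in O(m) time, O(1) amortized per sum), here is a minimal sketch of the moving-sum idea. The undiscounted case is assumed for simplicity, a discounted version needs a slightly different recurrence, and the function name is mine.

```python
def k_step_reward_sums(rewards, k):
    """All sums rewards[t] + ... + rewards[t + k - 1] via a sliding window:
    O(m) total work for a length-m trajectory, O(1) amortized per sum.
    Undiscounted case assumed; this is an illustrative sketch only."""
    if k > len(rewards):
        return []
    window = sum(rewards[:k])
    sums = [window]
    for t in range(1, len(rewards) - k + 1):
        window += rewards[t + k - 1] - rewards[t - 1]
        sums.append(window)
    return sums

# Example: k_step_reward_sums([1.0, 2.0, 3.0, 4.0], 2) == [3.0, 5.0, 7.0]
```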
[ 7, 6, 5, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BkUp6GZRW", "iclr_2018_BkUp6GZRW", "iclr_2018_BkUp6GZRW", "iclr_2018_BkUp6GZRW", "HkJ6DWtgf", "Bysjjx5lG", "Hyu5lW5xf" ]
iclr_2018_BJk59JZ0b
Guide Actor-Critic for Continuous Control
Actor-critic methods solve reinforcement learning problems by updating a parameterized policy, known as the actor, in a direction that increases an estimate of the expected return, known as the critic. However, existing actor-critic methods only use the values or gradients of the critic to update the policy parameters. In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC). GAC first learns a guide actor that locally maximizes the critic and then updates the policy parameters toward the guide actor by supervised learning. Our main theoretical contributions are twofold. First, we show that GAC updates the guide actor by performing second-order optimization in the action space, where the curvature matrix is based on the Hessians of the critic. Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored. Through experiments, we show that our method is a promising reinforcement learning method for continuous control.
accepted-poster-papers
The reviewers agree that the formulation is novel and interesting, but they raised concerns regarding the motivation and the complexity of the approach. I find the authors' response mostly satisfying, and I ask them to improve the paper by incorporating the comments. Detailed comments: The maximum-entropy objective used in Eq. (13) reminds me of the maximum-entropy RL objective in previous work, including [Ziebart, 2010], [Azar, 2012], [Nachum, 2017], and [Haarnoja, 2017].
train
[ "rJwNjgqef", "Hk6bJG9gf", "H1broNZ-G", "r1_35T2Xz", "B1pXz9jbz", "Hy4vbqjWf", "S1qXxqiWz", "B11ykcs-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "\n\nThe authors devise and explore use of the hessian of the\n(approximate/learned) value function (the critic) to update the policy\n(actor) in the actor-critic approach to RL. They connect their\ntechnique, 'guide actor-critic' or GAC, to existing actor-critic\nmethods (authors claim only two published work use 1st order\ninformation on the critic). They show that the 2nd order information\ncan be useful (in several of the 9 tasks, their GAC techniques were\nbest or competitive, and in only one, performed poorly compared to best).\n\nThe paper has a technical focus.\n\npros:\n\n- Strict generalization of an existing (up to 1st order) actor-critic approaches.\n\n- Compared to many existing techniques, on 9 tasks\n\ncons:\n\n- no mention of time costs, except that for more samples, S > 1, for\n taylor approximation, it can be very expensive.\n\n- one would expect more information to strictly improve performance,\n but the results are a bit mixed (perhaps due to convergence to local\n optima and both actor and critic being learned at same time, \n or the Gaussian assumptions, etc.).\n\n- relevance: the work presents a new approach to actor-critique strategy for\n reinforcement learning, remotely related to 'representation\n learning' (unless value and policies are deemed a form of\n representation).\n\n\nOther comments/questions:\n\n- Why does the performance start high on Ant (1000), then goes to 0\n(all approaches)?\n\n- How were the tasks selected? Are they all the continuous control\n tasks available in open ai?\n\n\n \n\n\n\n\n", "The paper introduces a modified actor-critic algorithm where a “guide actor” uses approximate second order methods to aid computation. The experimental results are similar to previously proposed methods. \n\nThe paper is fairly well-written, provides proofs of detailed properties of the algorithm, and has decent experimental results. However, the method is not properly motivated. As far as I can tell, the paper never answers the questions: Why do we need a guide actor? What problem does the guide actor solve? \n\nThe paper argues that the guide actor allows to introduce second order methods, but (1) there are other ways of doing so and (2) it’s not clear why we should want to use second-order methods in reinforcement learning in the first place. Using second order methods is not an end in itself. The experimental results show the authors have found a way to use second order methods without making performance *worse*. Given the high variability of deep RL, they have not convincingly shown it performs better.\n\nThe paper does not discuss the computational cost of the method. How does it compare to other methods? My worry is that the method is more complicated and slower than existing methods, without significantly improved performance.\n\nI recommend the authors take the time to make a much stronger conceptual and empirical case for their algorithm. \n", "The paper presents a clever trick for updating the actor in an actor-critic setting: computing a guide actor that diverges from the actor to improve critic value, then updating the actor parameters towards the guide actor. 
This can be done since, when the parametrized actor is Gaussian and the critic value can be well-approximated as quadratic in the action, the guide actor can be optimized in closed form.\n\nThe paper is mostly clear and well-presented, except for two issues: 1) there is virtually nothing novel presented in the first half of the paper (before Section 3.3); and 2) the actual learning step is only presented on page 6, making it hard to understand the motivation behind the guide actor until very late through the paper.\n\nThe presented method itself seems to be an important contribution, even if the results are not overwhelmingly positive. It'd be interesting to see a more elaborate analysis of why it works well in some domains but not in others. More trials are also needed to alleviate any suspicion of lucky trials.\n\nThere are some other issues with the presentation of the method, but these don't affect the merit of the method:\n\n1. Returns are defined from an initial distribution that is stationary for the policy. While this makes sense in well-mixing domains, the experiment domains are not well-mixing for most policies during training, for example a fallen humanoid will not get up on its own, and must be reset.\n\n2. The definition of beta(a|s) as a mixture of past actors is inconsistent with the sampling method, which seems to be a mixture of past trajectories.\n\n3. In the first paragraph of Section 3.3: \"[...] the quality of a guide actor mostly depends on the accuracy of Taylor's approximation.\" What else does it depend on? Then: \"[...] the action a_0 should be in a local vicinity of a.\"; and \"[...] the action a_0 should be similar to actions sampled from pi_theta(a|s).\" What do you mean \"should\"? In order for the Taylor approximation to be good?\n\n4. The line before (19) is confusing, since (19) is exact and not an approximation. For the approximation (20), it isn't clear if this is a good approximation. Why/when is the 2nd term in (19) small?\n\n5. The parametrization nu of \\hat{Q} is never specified in Section 3.6. This is important in order to evaluate the complexities involved in computing its Hessian.\n", "Dear reviewers,\n\nWe have revised the paper according to the comments and suggestions. The following changes are made to the paper:\n1. We include a new paragraph in Section 1 and a new subsection in Section 2 to explain our motivation more clearly.\n2. We include a discussion about the computation time at the end of Section 5.\n3. We improve the explanation about Taylor's approximation in the first paragraph of Section 3.3 and about the Hessian approximation in Section 3.4.\n4. We include a discussion about the parameterization of the Q-function at the end of Section 3.6.\n5. The number of experimental trials is increased from 5 to 10. The number of training time steps is also increased from 700,000 to 1,000,000.\n6. Grammatical errors have been corrected. \n", "Thank you for your constructive review. We address the reviewer's questions and comments below.\n\n3.1: There is virtually nothing novel presented in the first half of the paper (before Section 3.3). 2) the actual learning step is only presented on page 6, making it hard to understand the motivation behind the guide actor until very late through the paper.\n- We expect the new subsection (please see our \"Author response\" comment again) will improve the clarity of the paper. 
Thank you again for the comment.\n\n3.2: More trials are also needed to alleviate any suspicion of lucky trials.\n- We will increase the experiment trials to 10 to make the results more convincing.\n\n3.3: Returns are defined from an initial distribution that is stationary for the policy. While this makes sense in well-mixing domains, the experiment domains are not well-mixing for most policies during training. The definition of beta(a|s) as a mixture of past actors is inconsistent with the sampling method.\n- Thank you for pointing them out. We will correct them.\n\n3.4: In the first paragraph of Section 3.3: \"[...] the quality of a guide actor mostly depends on the accuracy of Taylor's approximation.\" What else does it depend on? Then: \"[...] the action a_0 should be in a local vicinity of a.\"; and \"[...] the action a_0 should be similar to actions sampled from pi_theta(a|s).\" What do you mean \"should\"? In order for the Taylor approximation to be good?\n- Beside the accuracy of Taylor’s approximation, the guide actor is determined by the step-size parameters eta and omega which depend on the accuracy of sample averages for the dual function hat{g}. The sample size can be large since we can use off-policy samples. The sample size of 256 provided a good trade-off between performance and computation time in our experiments. Regarding the latter two sentences about “a”, it is correct that we require a_0 to be close to “a” in order to obtain a good Taylor’s approximation. We will make these sentences clearer in the revise version.\n\n3.5: Why/when is the 2nd term in (19) small?\n- The second term is inverse proportion to exp(Q(s,a)) and is small for high values of Q(s,a). It also vanishes when we compute its expectation over a softmax policy pi(a|s) = exp(Q(s,a))/Z with a normalizer Z. However, this is not the case in our setting since the guide actor does not converge to such a softmax policy unless eta -> 0 and omega -> 1. We will consider alternative Hessian approximations such as BFGS updates in future work. \n\n3.6: The parametrization nu of \\hat{Q} is never specified in Section 3.6. This is important in order to evaluate the complexities involved in computing its Hessian.\n- For a Gauss-Newton approximation, the computation cost is determined by that of gradient computation and an outer-product operation. The cost of outer-product is low. The cost of gradient computation depends on the parameterization of hat{Q}. The cost is inexpensive for simple models such as a linear model: hat{Q} = nu’*phi(s,a). For neural network models, the gradients are computed by automatic-differentiation and its cost depends on network architecture. We will include this discussion in Section 3.6.\n", "Thank you for your constructive review. We address the reviewer's questions and comments below.\n\n2.1: No mention of time costs, except that for more samples, S > 1, for taylor approximation, it can be very expensive.\n- We will include a table reporting the training time. Please see also our response to 1.4.\n\n2.2: One would expect more information to strictly improve performance, but the results are a bit mixed.\n- The second-order information may not provide much benefit in simple tasks such as Inverted-pendulum, Inverted-double-pendulum and Reacher. However, in locomotion tasks except Hopper, there are significant differences in performances between the second-order method (GAC) and the first-order method (DPG). 
To make the results more convincing, we will increase the number of experiment trials to 10. \n\n2.3: Why does the performance start high on Ant (1000), then goes to 0 (all approaches)?\n- The reward function in the Ant task is r(s,a) = forward_movement + survive_reward - control_cost - contact_cost. The survive reward is 1 if the agent does not fall over and 0 otherwise. All methods are initialized so that initial actions are close to 0, and this makes the agent moves only slightly and survives for the entire episodes (1000 steps). With this initial configuration, the performance sharply drops since random actions from exploration can yield high control costs while making the agent falls over. This behavior does not appear in other locomotion tasks since actions close to 0 will make the agent falls over and obtains low rewards at the beginning. \n\n2.4: How were the tasks selected? Are they all the continuous control tasks available in open ai?\n- Classical continuous control tasks such as Pendulum and Mountain-Car are also available from OpenAI gym. The tasks we selected are considered more challenging than these classical tasks and are commonly used as benchmark tasks in recent literature. \n", "Thank you for your constructive review. We address the reviewer's questions and comments below.\n\n1.1: The method is not properly motivated. It’s not clear why we should want to use second-order methods in reinforcement learning in the first place. \n- Second-order methods leverage curvature information for optimization and this often leads to faster learning when compared to first-order methods. In the actor-critic framework, a similar idea was pursued by natural actor-critic and TRPO where curvature information is given by the Fisher information matrix. More recently, an approximate Newton’s method in the policy search framework was also proposed by Furmston et al. (2016). These methods have established that second-order methods can significantly outperforms first-order methods in RL. \n\n1.2: There are other ways of doing second-order methods. Why do we need a guide actor? What problem does the guide actor solve? \n- It is true that second-order methods can be applied without the guide actor. However, they are infeasible in deep RL since the size of the Hessian matrix or the Fisher information matrix of the RL objective in Eq.(3) depends on the number of the policy parameter. Existing methods avoid this issue by a diagonal approximation or a factor approximation. Clearly, these approximations lead to a loss of useful curvature information. By reformulating the problem and optimizing the guide actor in the action space, we obtain a closed-form second-order update that utilizes a full Hessian matrix of the critic. Moreover, our second-order update incorporates both the KL and entropy constraints, and we believe this is new in deep RL setting. \n\n1.3: Given the high variability of deep RL, they have not convincingly shown it performs better.\n- We will increase the number of experiment trials to 10 to make the results more convincing.\n\n1.4: How does the computation time compare to other methods? My worry is that the method is more complicated and slower than existing methods, without significantly improved performance.\n- We will include a table reporting the training time of each method. 
We agree that our method is computationally more expensive than other methods due to the inner optimization for finding eta and omega, but this cost can be reduced by letting eta and omega be external tuning parameters. Without the inner optimization, the computation cost of our method for S <= 1 is comparable to that of the first order method since we use Gauss-Newton approximation where the outer product operation is computationally cheap.\n", "We thank all the reviewers for the constructive reviews. \n\nTo explain our motivation more clearly, we will include a subsection titled “Second-order Methods for Policy Learning” under the Background section in the revise paper. Its purpose is 1) to discuss existing second-order methods on deep RL and their computational issue, and 2) to motivate the use of a guide-actor to avoid the issue. These are briefly explained in our responses to 1.1 and 1.2 to the review 1. The revised paper will be submitted as soon as possible. Below, we address each review in the comment.\n\n\n" ]
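The responses above mention that GAC uses a Gauss-Newton approximation of the critic's action-Hessian whose dominant cost is a single gradient outer product. The snippet below is only a generic illustration of that kind of outer-product curvature surrogate; the sign and scaling are my assumptions for illustration, and the exact expression used in the paper may differ.

```python
import numpy as np

def outer_product_curvature(grad_q: np.ndarray) -> np.ndarray:
    """Gauss-Newton-style curvature surrogate built from the action-gradient
    of the critic Q(s, a): a single outer product, hence cheap to form.
    The negative sign (yielding a negative semi-definite matrix, as expected
    near a local maximum of Q) is an assumption made for illustration."""
    return -np.outer(grad_q, grad_q)
```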
[ 6, 4, 7, -1, -1, -1, -1, -1 ]
[ 2, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJk59JZ0b", "iclr_2018_BJk59JZ0b", "iclr_2018_BJk59JZ0b", "B11ykcs-M", "H1broNZ-G", "rJwNjgqef", "Hk6bJG9gf", "iclr_2018_BJk59JZ0b" ]
iclr_2018_ByOnmlWC-
Policy Optimization by Genetic Distillation
Genetic algorithms have been widely used in many practical optimization problems. Inspired by natural selection, their operators, including mutation, crossover, and selection, provide effective heuristics for search and black-box optimization. However, they have not been shown to be useful for deep reinforcement learning, possibly due to the catastrophic consequences of parameter crossover in neural networks. Here, we present Genetic Policy Optimization (GPO), a new genetic algorithm for sample-efficient deep policy optimization. GPO uses imitation learning for policy crossover in the state space and applies policy gradient methods for mutation. Our experiments on MuJoCo tasks show that GPO, as a genetic algorithm, is able to provide superior performance over state-of-the-art policy gradient methods and achieves comparable or higher sample efficiency.
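The abstract above describes crossover in state space: a child policy is distilled, by imitation learning, from high-reward trajectories of two parent policies. The sketch below is a deliberately simplified illustration of that idea; collect_trajectories is a hypothetical helper assumed to return (states, actions, return) tuples, the linear least-squares child stands in for the neural-network student actually trained in the paper, and the top-fraction filter is my own simplification of the paper's selection of high-reward data.

```python
import numpy as np

def distillation_crossover(parent_a, parent_b, collect_trajectories, top_frac=0.5):
    """State-space 'crossover' by behavioral cloning (illustrative sketch only):
    pool trajectories from two parents, keep the highest-return fraction, and
    fit a child policy to the retained (state, action) pairs."""
    trajs = collect_trajectories(parent_a) + collect_trajectories(parent_b)
    trajs.sort(key=lambda t: t[2], reverse=True)            # rank by return
    kept = trajs[: max(1, int(top_frac * len(trajs)))]      # high-reward filter
    states = np.concatenate([s for s, _, _ in kept])        # (N, state_dim)
    actions = np.concatenate([a for _, a, _ in kept])       # (N, action_dim)
    # Behavioral cloning: least-squares fit of a linear child policy a ~= s @ W
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return lambda s: s @ W
```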
accepted-poster-papers
At least two of the reviewers found the proposed approach novel, interesting, and worthy of publication at ICLR. The reviewers raised concerns regarding the paper's terminology, which may lead to some misunderstanding. I agree that, upon a quick skim, a reader may think that the paper performs the crossover operation outlined at the bottom right of Figure 1. Please consider improving the figure and the caption to prevent such a misunderstanding. You could even slightly change the title to reflect the policy distillation operation rather than naive crossover. Finally, including some more complex baselines would benefit the paper. I am curious whether performing policy gradient on an ensemble of 8 policies, with periodic removal of the bottom half of the policies, would provide similar gains.
train
[ "By_t2wVlG", "ByvABDcxz", "BySF6I2xz", "BkPw48aXz", "BJ7VGn2Qz", "rkJBdVtmM", "H1SlvnZ7z", "HkV9S2-7z", "SksMHhbXf", "SJCy42ZQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This is a highly interesting paper that proposes a set of methods that combine ideas from imitation learning, evolutionary computation and reinforcement learning in a novel way. It combines the following ingredients:\na) a population-based setup for RL\nb) a pair-selection and crossover operator\nc) a policy-gradient based “mutation” operator\nd) filtering data by high-reward trajectories\ne) two-stage policy distillation\n\nIn its current shape it has a couple of major flaws (but those can be fixed during the revision/rebuttal period):\n\n(1) Related work. It is presented in a somewhat ahistoric fashion. In fact, ideas for evolutionary methods applied to RL tasks have been widely studied, and there is an entire research field called “neuroevolution” that specifically looks into which mutation and crossover operators work well for neural networks. I’m listing a small selection of relevant papers below, but I’d encourage the authors to read a bit more broadly, and relate their work to the myriad of related older methods. Ideally, a more reasonable form of parameter-crossover (see references) could be compared to -- the naive one is too much of a straw man in my opinion. To clarify: I think the proposed method is genuinely novel, but a bit of context would help the reader understand which aspects are and which aspects aren’t.\n\n(2) Ablations. The proposed method has multiple ingredients, and some of these could be beneficial in isolation: for example a population of size 1 with an interleaved distillation phase where only the high-reward trajectories are preserved could be a good algorithm on its own. Or conversely, GPO without high-reward filtering during crossover. Or a simpler genetic algorithm that just preserves the kills off the worst members of the population, and replaces them by (mutated) clones of better ones, etc. \n\n(3) Reproducibility. There are a lot of details missing; the setup is quite complex, but only partially described. Examples of missing details are: how are the high-reward trajectories filtered? What is the total computation time of the different variants and baselines? The x-axis on plots, does it include the data required for crossover/Dagger? What are do the shaded regions on plots indicate? The loss on \\pi_S should be made explicit. An open-source release would be ideal.\n\nMinor points:\n- naively, the selection algorithm might not scale well with the population size (exhaustively comparing all pairs), maybe discuss that?\n- the filtering of high-reward trajectories is what estimation of distribution algorithms [2] do as well, and they have a known failure mode of premature convergence because diversity/variance shrinks too fast. Did you investigate this?\n- for Figure 2a it would be clearer to normalize such that 1 is the best and 0 is the random policy, instead of 0 being score 0.\n- the language at the end of section 3 is very vague and noncommittal -- maybe just state what you did, and separately give future work suggestions?\n- there are multiple distinct metrics that could be used on the x-axis of plots, namely: wallclock time, sample complexity, number of updates. I suspect that the results will look different when plotted in different ways, and would enjoy some extra plots in the appendix. For example the ordering in Figure 6 would be inverted if plotting as a function of sample complexity?\n- the A2C results are much worse, presumably because batchsizes are different? So I’m not sure how to interpret them: should they have been run for longer? 
Maybe they could be relegated to the appendix?\n\nReferences:\n[1] Gomez, F. J., & Miikkulainen, R. (1999). Solving non-Markovian control tasks with neuroevolution.\n[2] Larranaga, P. (2002). A review on estimation of distribution algorithms.\n[3] Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. \n[4] Igel, C. (2003). Neuroevolution for reinforcement learning using evolution strategies.\n[5] Hausknecht, M., Lehman, J., Miikkulainen, R., & Stone, P. (2014). A neuroevolution approach to general atari game playing.\n[6] Gomez, F., Schmidhuber, J., & Miikkulainen, R. (2006). Efficient nonlinear control through neuroevolution.\n\n\nPros:\n- results\n- novelty of idea\n- crossover visualization, analysis\n- scalability\n\nCons:\n- missing background\n- missing ablations\n- missing details\n\n[after rebuttal: revised the score from 7 to 8]", "The authors present an algorithm for training ensembles of policy networks that regularly mixes different policies in the ensemble together by distilling a mixture of two policies into a single policy network, adding it to the ensemble and selecting the strongest networks to remain (under certain definitions of a \"strong\" network). The experiments compare favorably against PPO and A2C baselines on a variety of MuJoCo tasks, although I would appreciate a wall-time comparison as well, as training the \"crossover\" network is presumably time-consuming.\n\nIt seems that for much of the paper, the authors could dispense with the genetic terminology altogether - and I mean that as a compliment. There are few if any valuable ideas in the field of evolutionary computing and I am glad to see the authors use sensible gradient-based learning for GPO, even if it makes it depart from what many in the field would consider \"evolutionary\" computing. Another point on terminology that is important to emphasize - the method for training the crossover network by direct supervised learning from expert trajectories is technically not imitation learning but behavioral cloning. I would perhaps even call this a distillation network rather than a crossover network. In many robotics tasks behavioral cloning is known for overfitting to expert trajectories, but that may not be a problem in this setting as \"expert\" trajectories can be generated in unlimited quantities.", "This paper proposes a genetic algorithm inspired policy optimization method, which mimics the mutation and the crossover operators over policy networks.\n\nThe title and the motivation about the genetic algorithm are missing leading and improper. The genetic algorithm is a black-box optimization method, however, the proposed method has nothing to do with black-box optimization. \n\nThe mutation is a method to sample individual independence of the objective function, which is very different with the gradient step. Mimicking the mutation by a gradient step is very unreasonable. \n\nThe crossover operator is the policy mixing method employed in game context (e.g., Deep Reinforcement Learning from Self-Play in Imperfect-Information Games, https://arxiv.org/abs/1603.01121 ). It is straightforward if two policies are to be mixed. 
Although the mixing method is more reasonable than the genetic crossover operator, it is strange to compare with that operator in a method far away from the genetic algorithm.\n\nIt is highly suggested that the method is called as population-based method as a set of networks is maintained, instead of as \"genetic\" method.\n\nAnother drawback, perhaps resulted from the \"genetic algorithm\" motivation is that the proposed method has not been well explained. The only explanation is that this method mimics the genetic algorithm. However, this explanation reveals nothing about why the method could work well -- a random exploration could also waste a lot of samples with a very high probability.\n\nThe baseline methods result in rewards much lower than those in previous experimental papers. It is problemistic that if the baselines have bad parameters.\n1. Benchmarking Deep Reinforcement Learning for Continuous Control\n2. Deep Reinforcement Learning that Matters", "1- Concerning unanswered questions like “why crossover can work” : A large portion of the paper is devoted to motivating why state-space crossover --- using a binary selection policy and imitation learning --- is a more thorough approach to mixing parents policies compared to parameter-space crossover. Section 3.2 has the intuition; section 3.3.1 has the procedure and algorithm; section 4.2 has the experimental validation of the claims with performance numbers and t-SNE plots.\n\n2- Concerning “why population is good” and Figure 7 interpretation : With Figure 7 (and/or Figure 10), we do not claim that increasing population size improves sample-complexity. Instead, Figure 7 shows that adding policies to the ensemble (and consequently using more simulation timesteps) improves performance. We believe that the state-space for the MuJoCo environments doesn’t have sufficient richness and diversity to study the potential benefit on sample-complexity with a large population. In our experiments, although we start different policies in the population with different random seeds, it’s highly likely that they explore overlapping regions. Therefore, we neither expect nor mention consistent improvement in sample-complexity with population-size with MuJoCo. The situation might be different in harder RL tasks (e.g. robotics manipulation, grasping) with more state-variability, where population constituents can explore disparate slices of the state-space. Notwithstanding task diversity, we show that having multiple policies in the population and letting them interact through the operators in GPO is beneficial -- our “Joint“ baseline represents the algorithm with population of 1, and in Figure 4 we compare GPO (population size = 8) favorably to it for most of the environments. \n\n3- Concerning batch-size and gradient steps : The batch-size of Joint is 8x more than GPO/Single because we desire a baseline algorithm that trains a single policy, using the same number of timesteps as GPO/Single (which train 8 policies). As mentioned in Section 4.3, the number of gradient update steps is the same for Joint and Single, but each step in Joint uses 8x data, leading to improved performance. Empirically, we found larger batch-size to be better than using more gradient steps with smaller batch-size.\n", "I have read the rebuttal and the revised paper. However, some problems remain.\n\nThe major problem is that the revised paper is still motivated from evolutionary algorithm, instead of the rationality of the algorithm itself. 
Questions like \"why crossover can work\" and \"why population is good\" are left unanswered. Actually, these questions to the evoultionary algorithms are also unanswered. It is very sad about the loss of rationality. \n\nFigure 7 is misleading. The x-axis should be the number of samples, instead of iterations. We can compare the iteration at 200 of population 16 with iteration at 100 of population 32. My observation is that population 16 is more sample efficient than population 32, and also 8 is better than 16, 4 is better than 8. Population is not effective, but the number of iterations has the key impact. Figure 10 also shows that the population size is not a positive factor, meanwhile, the modification of the batch-size introduces other effect in another dimension.\n\nWhy the batch-size of Joint is 8 times more than GPO and Single? Does that means the number of gradient updates of Joint is the same as Single? If so, the Joint has a lower performance than it could be by using 8 times more updates.\n", "Thank you for thoroughly revising the paper, in particular, the new ablation studies are very insightful and substantially improved the paper -- and I'm surprised by how much of the contribution comes from data-sharing as compared to the other ingredients.\n\n(I raised my review score accordingly)", "1- Concerning “Missing background” : Thank you for pointing us to relevant literature on neuroevolution algorithms for reinforcement learning. Previously, we had only covered NEAT and evolutionary strategies (ES, CMA-ES), but we have expanded our background section to include HyperNEAT, CoSyNE, SANE and ESP, since all these neuroevolution algorithms have been successfully applied to RL problems. Please see section 2.3. We have also include a recent work by Lehman et. al on “safe mutation” for genetic algorithms. \n\n2- Concerning “Missing ablations” : We have added a section (4.4) on ablation studies for GPO. We consider the following 3 crucial components of GPO - 1) State-space crossover 2) Selection of candidates with highest fitness 3) Data-sharing for policy gradient RL during the mutation phase. To understand the behavior of these components, we compare performance when each of them is used in isolation for policy optimization. We further experiment with the scenario when one component is removed from GPO and the other two are used. This gives us a total of 6 algorithms which we plot along with GPO and our Single baseline in Figure 5 in the revision. It helps to define what we mean by a component “not being used”. For crossover, it means that we create the offspring by using the network parameters of the stronger parent; for selection, it means that we disregard the fitness of candidates and select the population for the next generation at random; for data-sharing, it means that policies in the ensemble don’t share samples from other similar policies for PPO (or A2C) during mutation.\n\n3- Concerning “Reproducibility” : We have added a section (6.3) on implementation details for the crossover step. To train the binary policy (Equation 1 in the revision), we reuse the trajectories from the parents’ previous mutation phase rather than generating new samples. We filter the trajectories based on trajectory reward (sum of the individual reward at each transition in the trajectory). For our experiments, we simply prune the worst 40% trajectories from the dataset. We did not find the final GPO performance to be very sensitive to the % threshold hyperparameter. 
We will release our code on Github very soon.\n\n4- Concerning other missing details : For computation time, we include a section (6.4.1) on wall-clock time. For all the environments, we compute the time for GPO and break it down into crossover, selection and mutation phases. We compare the time with our strongest baseline - “Joint”. The x-axis on all the plots of episode-reward vs. timesteps includes the data required for crossover/Dagger, and the shaded region indicates the standard-error of the performance with different random seeds. These details, along with the loss on \\pi_S (Equation 1), are in the revision. \n\n5- Concerning “Minor points”:\n\n[Scalable selection] Yes, for a population of n, comparing all nC2 pairs is prohibitively expensive when n is large. This was indeed the case when we ran with a population size of 32 for the scalability plot (Figure 7). Our solution was to prune the population to k by probabilistic sampling (probability = fitness), and then run selection over kC2. Looking for more sophisticated and scalable alternatives is interesting future work.\n\n[Lack of diversity] Yes, we did observe that maintaining a diverse population was challenging after 3-4 rounds of GPO (algorithm 1). We did some preliminary investigation with the Hopper environment, where we believed that some policies in the GPO ensemble were getting stuck in local minima, making the overall learning slow. We increased the randomness in the selection phase and found learning to proceed at a much more rapid pace. We need to explore this further.\n\n[Language at end of section 3] We have modified the section to include details on the fitness used by our experiments. Rather than dynamically adapting the weight of performance vs. KL fitness over the rounds of GPO, our current implementation puts all the weight on performance for all rounds, and relies on the randomness in the starting seed for different policies in the ensemble for diversity in the initial rounds.\n\n[Figure 6 with timesteps on x-axis] We have included this figure in Appendix 6.4.2. For the Walker environment, we observe that the sample-complexity for a population of 32 is quite competitive with our default GPO value of 8.\n\n[A2C results] A2C runs use the same batchsize as the PPO. We believe that the KL penalty in PPO prevents (possibly destructive) large updates to the policy distribution, and also the 10x more gradient steps in PPO allow for faster learning compared to A2C. A2C performance seems to be still improving when we end training for our experiments, and running them longer could see them match the PPO numbers. A2C results are moved to the Appendix in the revision.", "1- Concerning wall-time comparison: We have added a section in the Appendix (6.4.1) comparing wall-clock time for GPO and Joint. Both the algorithms are designed to use the multi-core parallelism on offer. We observe that GPO can be 1.5 to 2 times slower than Joint depending on the environment. Note that the timing numbers also depend on the number of iterations we run mutation (policy gradient) for before crossing over the policies, and we show the numbers for the default setting of these hyperparameters for all our experiments. For GPO, Mutate takes a good portion of the overall time due to communication overheads caused by data-sharing between policies. The crossover step takes moderate amount of time. 
We believe this is due to the following reasons - 1) for learning the binary policy (Equation 1), we reuse the trajectories from the parents’ previous mutation phase rather than generating new samples; 2) the losses in Equation 1. and 2. are not minimized to convergence since the optimization (first-order, Adam) is only run for certain number of epochs. We provide the exact details in a new section in the Appendix (6.3); (3) the crossover phase is parallelized, i.e. once the parents are decided by the selection step, each crossover is done in an independent parallel process.\n\n2- Concerning use of term behavioral cloning: We completely agree with the reviewer that it’s imperative that we use crisp terminology. To that end, in section 3.2, where we first mention using imitation learning for the crossover, we expand on the differences between flavors of imitation learning (a.k.a behavioral cloning and inverse RL), and explicitly say that all our references to imitation learning signify behavioral cloning.\n\n3- Concerning using “crossover by distillation”: We agree with the reviewer in that the high-level objective of the crossover step is to “distill” the knowledge from the parent policy networks into the offspring network. However, we believe that there are two main differences between the distillation network proposed in [1] and our procedure for crossover. Firstly, in [1] the soft targets for training the offspring network are computed using the arithmetic (or geometric) mean of the temperature-controlled outputs from parent networks. The argument is that different parent networks trained on similar data for similar amount of time represent different local minima point on the loss surface, and averaging leads to better generalization. In contrast, the parent policies in GPO have (possibly) visited disparate regions of the state-space and have (possibly) been trained on dissimilar data. Therefore, rather than averaging the output of the parents, we train another policy \\pi_S to output the weighting, and do a weighted average. Secondly, the distillation network in [1] was trained for speech and object recognition tasks which do not have a temporal nature. However, the supervised training of the offspring in GPO should account for the compounding errors in the performance of the trained policy in areas of state-space different from the training data. Therefore, we add DAgger training to our crossover step, making it further different from vanilla distillation. \n\n[1] Hinton et al., Distilling the Knowledge in a Neural Network", "1- Concerning title and missing motivation: The reviewer is correct in pointing out that genetic algorithms (GA) fall into the category of black-box optimization techniques. Their lack of exploiting the structure in the underlying tasks, e.g. the temporal nature in RL, explains their limited success in deep learning. Black-box techniques have been able to solve some RL problems, for example in [1] and most recently in [2], but with unsatisfactory sample-complexity. Our goal with GPO was to buy the philosophy of genetic operators - mutation, selection and crossover - from GA, and marry it with model-free policy-gradient RL algorithm to achieve good sample complexity. We believe that the connection to GA is helpful because it may be possible to apply the myriad of advanced enhancements for general GA (Section 3.1 in [2]) to our policy optimization algorithm as well. 
For example, techniques to obtain quality diversity in GA population could be helpful for efficient exploration in large state-spaces. At the same time, using policy gradients as a plug-and-play component in our genetically-inspired algorithm enables us to exploit advances in policy gradients; see, for instance, the difference in GPO performance with PPO compared to A2C.\n\t\nThere is prior work on opening up the GA/ES “black-box” to obtain improved performance and stability for RL. For example, in [3], the authors suggest replacing the random mutations with perturbations guided by the gradients of the neural network output. A related idea was presented in [6]. [7] modifies the fitness function used in selection to aid exploration. We have updated section 2.3 with more neuroevolution algorithms which have been adjusted to work in the RL setting.\n\n2- Concerning lack of explanation for why the method works: The fact that our algorithm is not “black-box” enables us to investigate the sources of improvement. Firstly, as we show in Section 4.2 through experimentation, the crossover operator is able to transfer positive behavior from the parent policies to the offspring policy. Secondly, we do mutation through tried-and-tested algorithms like PPO/A2C and take the empirical success that they have enjoyed. Thirdly, our selection operator maintains high performance policies in the population. We believe the overall GPO performance is a culmination of these components. We have added a section (4.4) on ablation studies for GPO. \n\n3- Concerning baseline performance not same as other papers: We use the MuJoCo environments included as part of rllab [4]. The environments provided in the open-source release vary in their parameters from what the authors used for the paper (https://github.com/rll/rllab/issues/157), and therefore it’s hard to replicate their exact numbers. Regarding the numbers in [5], please note that their evaluation is done with the Gym MuJoCo environments, which again differ from rllab MuJoCo in terms of parameters like coefficients for rewards, aliveness bonus etc. For completeness, we ran GPO on Gym MuJoCo environments and compared to Joint. We have added Appendix 6.2 for this. We also had a discussion with the authors of [5] on the variance between baselines numbers for different codebases (rllab, openAIbaselines etc.). See Figure 6. in [5] for reference where “Duan 2016” is the rllab framework we use. We believe that factors such as value function approximation (Adam vs. LBFGS), observation/reward normalization method etc. lead to appreciable variation in baseline performance across codebases. Importantly, all these factors remain constant between GPO and the baselines for our results. Our baselines are very close to the rllab baselines (Figure 29.) in [5]. \n\n\n[1] Evolution Strategies as a Scalable Alternative to Reinforcement Learning\n[2] Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative For Training Deep Neural Networks for Reinforcement Learning\n[3] Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients\n[4] Benchmarking Deep Reinforcement Learning for Continuous Control\n[5] Deep Reinforcement Learning that Matters\n[6] Parameter Space Noise for Exploration\n[7] Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents", "We would like to thank the anonymous reviewers for their comments and constructive feedback. 
We address each reviewer's comments individually and summarize the major changes made in the revision here:\n\n1. Expanded section 2.3 to include missing background and extra citations on application of neuroevolution to reinforcement learning.\n2. Added more details to Section 3.3.1 on crossover between policies, along with a schematic diagram for better elucidation.\n3. Added ablation studies (Section 4.4).\n4. Added implementation details for reproducibility (Appendix 6.3).\n5. Added wall-clock time comparison (Appendix 6.4.1).\n6. Added experiments with environments from OpenAI Gym in addition to rllab (Appendix 6.2) for comparison. Our baseline results are comparable to those in previous papers using rllab.\n\nAll additions are highlighted using red-colored text in the revision.\n" ]
[ 8, 6, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByOnmlWC-", "iclr_2018_ByOnmlWC-", "iclr_2018_ByOnmlWC-", "BJ7VGn2Qz", "SksMHhbXf", "H1SlvnZ7z", "By_t2wVlG", "ByvABDcxz", "BySF6I2xz", "iclr_2018_ByOnmlWC-" ]
iclr_2018_SkA-IE06W
When is a Convolutional Filter Easy to Learn?
We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input. We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches. To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions. Our theory also justifies the two-stage learning rate strategy in deep neural networks. While our focus is theoretical, we also present experiments that justify our theoretical findings.
accepted-poster-papers
Dear authors, The reviewers all appreciated your work and agree that this is a very good first step in an interesting direction.
train
[ "SJA4C8_gG", "Sk7b2-tlz", "By3jcVfMM", "HkqdxcuQM", "Skg9Q7X7M", "BkJpqU0fM", "BJ_w9URzz", "ryNVqLCff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper studies the problem of learning a single convolutional filter using SGD. The main result is: if the \"patches\" of the convolution are sufficiently aligned with each other, then SGD with a random initialization can recover the ground-truth parameter of a convolutional filter (single filter, ReLU, average pooling). The convergence rate, and how \"sufficiently aligned\" depend on some quantities related to the underlying data distribution. A major strength of the result is that it can work for general continuous distributions and does not really rely on the input distribution being Gaussian; the main weakness is that some of the distribution dependent quantities are not very intuitive, and the alignment requirement might be very high.\n\nDetailed comments:\n1. It would be good to clarify what the angle requirement means on page 2. It says the angle between Z_i, Z_j is at most \\rho, is this for any i,j? From the later part it seems that each Z_i should be \\rho close to the average, which would imply pairwise closeness (with some constant factor).\n2. The paper first proves result for a single neuron, which is a clean result. It would be interesting to see what are values of \\gamma(\\phi) and L(\\phi) for some distributions (e.g. Gaussian, uniform in hypercube, etc.) to give more intuitions. \n3. The convergence rate depends on \\gamma(\\phi_0), from the initialization, \\phi_0 is probably very close to \\pi/2 (the closeness depend on dimension), which is also likely to make \\gamma(\\phi_0) depend on dimension (this is especially true of Gaussian). \n4. More precisely, \\gamma(\\phi_0) needs to be at least 6L_{cross} for the result to work, and L_{cross} seems to be a problem dependent constant that is not related to the dimension of the data. Also \\gamma(\\phi_0) depends on \\gamma_{avg}(\\phi_0) and \\rho, when \\rho is reasonable (say a constant), \\gamma(\\phi_0) really needs to be a constant that is independent of dimension. On the other hand, in Theorem 3.4 we can see that the upperbound on \\alpha (the quality of initialization) depends on the dimension. \n5. Even assuming \\rho is a constant strictly smaller than \\pi/2 seems a bit strong. It is certainly plausible that nearby patches are highly correlated, but what is required here is that all patches are close to the average. Given an image it is probably not too hard to find an almost all white patch and an almost all dark patch so that they cannot both be within a good angle to the average. \n\nOverall I feel the result is interesting but hard to interpret correctly. The details of the theorem do not really support the high level claims very strongly. The paper would be much better if it goes over several example distributions and show explicitly what are the guarantees. The reviewer tried to do that for Gaussian and as I mentioned above (esp. 4) the result does not seem very impressive, maybe there are other distributions where this result works better?\n\nAfter reading the response, I feel the contribution for the single neuron case does not require too much assumptions and is itself a reasonable result. I am still not convinced by the convolution case (which is the main point of this paper), as even though it does not require Gaussian input (a major plus), it still seems very far from \"general distribution\". Overall this is a first step in an interesting direction, so even though it is currently a bit weak I think it is OK to be accepted. 
I hope the revised version will clearly discuss the limitations of the approach and potential future directions as the response did.", "(a) Significance\nThis is an interesting theoretical deep learning paper, where the authors try to provide the theoretical insights why SGD can learn the neural network well. The motivation is well-justified and clearly presented in the introduction and related work section. And the major contribution of this work is the generalization to the non-Gaussian case, which is more in line with the real world settings. Indeed, this is the first work analyzing the input distribution beyond Gaussian, which might be an important work towards understanding the empirical success of deep learning. \n\n(b) Originality\nThe division of the input space and the analytical formulation of the gradient are interesting, which are also essential for the convergence analysis. Also, the analysis framework relies on novel but reasonable distribution assumptions, and is different from the relevant literature, i.e., Li & Yuan 2017, Soltanolkotabi 2017, Zhong et al. 2017. I curious whether the angular smoothness assumptions can be applied to a more general network architecture, say two-layer neural network.\n\n(c) Clarity & Quality \nOverall, this is a well-written paper. The theoretical results are well-presented and followed by insightful explanations or remarks. And the experiments are demonstrated to justify the theoretical findings as well. The authors did a really good job in explaining the intuitions behind the imposed assumptions and justifying them based on the theoretical and experimental results. I think the quality of this work is above the acceptance bar of ICLR and it should be published in ICLR 2018.\n\nMinor comments: \n1. Figure 3 looks a little small. It is better to make them clearer.\n2. In the appendix, ZZ^{\\top} and the indicator function are missing in the first equation of page 13.\n", "This paper considers the convergence of (stochastic) gradient descent for learning a convolutional filter with ReLU activations. It doesn't assume the input is Gaussian as in most previous work and shows that starting from random initialization, the (stochastic) gradient descent can learn the underlying convolutional filter in polynomial time. It is also shown that the convergence rate depends on the smoothness of the input distribution and the closeness of the patches. \n\nThe main contribution and the most intriguing part is that the result doesn't require assuming the input is Gaussian. Also, the guarantee holds for random initialization. The analysis that achieves these results can potentially provide better techniques for analyzing more general deep learning optimizations. \n\nThe main drawback is that the assumptions are somewhat difficult to interpret, though significantly more general than those made in previous work. It will be great if more explanations/comments are provided for these assumptions. It will be even better if one can get a simplified set of assumptions. \n\nThe presentation is clear but can be improved. Especially, more remarks would help readers to understand the paper. \n\nminor:\n-- Thm 2.1: what are w_1 w_2 here? \n-- Assumption 3.1: the statement seems incomplete. I guess it should be \"max_... \\lambda_max(...) 
\\geq \\beta for some beta > 0\"?\n-- Just before Section 2.1: \" This is also consistent with empirical evidence in which more data are helpful for optimization.\" \nI don't see any evidence that more data help the optimization by filling in the holds in the distribution; they may help for other reasons. This statement here is not rigorous. \n", "Thanks for changing your score.\nWe will definitely add more discussion about our analysis and list more future directions.", "Thanks for the response. It clears up some of my concerns. On the other hand, I do still feel some of the assumptions are a bit strong (especially in the convolution case). It is true that previously there were even no analysis for the single neuron case though, so I'm adjusting my score to a bit higher.", "We thank the reviewer for raising these questions and insightful suggestions. \n\nTo the best of our knowledge, our result is the first recovery guarantee of gradient-based algorithm for learning a ReLU activated neural network for non-Gaussian input. We acknowledge that this is just the first step toward understanding why randomly initialized (stochastic) gradient descent can learn a convolutional neural network and it is by no means a full characterization of the necessary and sufficient conditions of input distribution that lead to success of SGD/GD. \n\nAt least for the one-neuron model, our proposed assumptions are general enough for explaining the success of randomly initialized gradient descent. Further, since we only deal with the single filter setting and do not take over-parametrization or other tricks into consideration that might help optimization, our proposed conditions may be further relaxed.\n\nFor the convolutional filter, our key assumptions are Assumption 3.1 and Assumption 3.2. We have listed some examples in Section 3.1.\n\n\nFor Detailed comments:\n1 & 5: We would like to emphasize that this Z_i, Z_j's angle < \\rho for some small \\rho is one sufficient condition to secure Assumption 3.2. We add this example to emphasize the closeness of patches implies the success of learning. There are definitely infinitely more distributions satisfy Assumption 3.1 and Assumption 3.2. Further note that the equation in Assumption 3.2 is in population sense. This means that a single peak value in the patch will not affect the bound too much because it will be averaged out. Therefore, the example raised by Reviewer 1 (all white and one dark example) should still satisfy our assumption. \n\n2. Thank for your suggestion. We have replaced figure 2(b) with the corresponding L(\\phi) and gamma(\\phi) for Gaussian distribution.\n\n3. We do agree \\phi_0 may be close to \\pi/2 (in fact we believe \\cos \\phi_0 is in the order of 1/\\sqrt{p} by random matrix theory) and L_\\cross may be a problem-dependent vector. However, we want to emphasize that typically, filter size is small, like 2x2 or 5x5 so even if it is dimension-dependent, requiring that \\gamma(\\phi) > 6L_\\cross is not a strong assumption.", "We thank the encouraging review. \n\nWe believe the angular smoothness assumptions with proper modifications can be applied to deeper models with ReLU activation since ReLU activation is very related to half-spaces.\n\nWe have made the figure larger and fixed the typos. Thanks for pointing out!", "We thank for your suggestions.\n\nWe have added some more explanations and remarks after our assumptions and theorems in our modified paper. 
We hope these can help readers understand our paper better.\n\n\nFor minor comments:\n-- Thm 2.1: what are w_1 w_2 here? \n\nThis is a typo, we have fixed the theorem. w_2 should be w_* and w_1 should be any w such that $\\theta(w,w_*) < \\pi$.\n\n\n-- Assumption 3.1: the statement seems incomplete. I guess it should be \"max_... \\lambda_max(...) \\geq \\beta for some beta > 0\"?\n\nThanks for pointing out. This is a typo and we have fixed the theorem.\n\n\n-- Just before Section 2.1: \" This is also consistent with empirical evidence in which more data are helpful for optimization.\" \nI don't see any evidence that more data help the optimization by filling in the holds in the distribution; they may help for other reasons. This statement here is not rigorous.\n\nWhat we mean is when minimizing the empirical loss, more data may lead to a bigger least eigenvalue of A_{w,w_*}. We agree this non-rigorous statement may lead to confusion and we have deleted it in our modified version." ]
[ 6, 9, 8, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkA-IE06W", "iclr_2018_SkA-IE06W", "iclr_2018_SkA-IE06W", "Skg9Q7X7M", "BkJpqU0fM", "SJA4C8_gG", "Sk7b2-tlz", "By3jcVfMM" ]
iclr_2018_BkrsAzWAb
Online Learning Rate Adaptation with Hypergradient Descent
We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.
accepted-poster-papers
All reviewers agreed that, despite the lack of novelty, the proposed method is sound and correctly linked to existing work. As the topic of automatically learning the stepsize is of great practical interest, I am glad to have this paper presented as a poster at ICLR.
train
[ "H1pbs28kG", "r1jLC23Jf", "BJ6v0V9ef", "Hy8WTMFmf", "S1WP2GFQz", "HJcZnfFXM", "H1PN17VXz", "B1wQhWzGM", "BydQzcHxf", "S1sZVWMlz", "rkaXMT-lz", "r1HU3l1kf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "author", "public", "public" ]
[ "SUMMARY:\n\nThe authors reinvent a 20 years old technique for adapting a global or component-wise learning rate for gradient descent. The technique can be derived as a gradient step for the learning rate hyperparameter, or it can be understood as a simple and efficient adaptation technique.\n\n\nGENERAL IMPRESSION:\n\nOne central problem of the paper is missing novelty. The authors are well aware of this. They still manage to provide added value.\nDespite its limited novelty, this is a very interesting and potentially impactful paper. I like in particular the detailed discussion of related work, which includes some frequently overlooked precursors of modern methods.\n\n\nCRITICISM:\n\nThe experimental evaluation is rather solid, but not perfect. It considers three different problems: logistic regression (a convex problem), and dense as well as convolutional networks. That's a solid spectrum. However, it is not clear why the method is tested only on a single data set: MNIST. Since it is entirely general, I would rather expect a test on a dozen different data sets. That would also tell us more about a possible sensitivity w.r.t. the hyperparameters \\alpha_0 and \\beta.\n\nThe extensions in section 5 don't seem to be very useful. In particular, I cannot get rid of the impression that section 5.1 exists for the sole purpose of introducing a convergence theorem. Analyzing the actual adaptive algorithm would be very interesting. In contrast, the present result is trivial and of no interest at all, since it requires knowing a good parameter setting, which defeats a large part of the value of the method.\n\n\nMINOR POINTS:\n\npage 4, bottom: use \\citep for Duchi et al. (2011).\n\nNone of the figures is legible on a grayscale printout of the paper. Please do not use color as the only cue to identify a curve.\n\nIn figure 2, top row, please display the learning rate on a log scale.\n\npage 8, line 7 in section 4.3: \"the the\" (unintended repetition)\n\nEnd of section 4: an increase from 0.001 to 0.001002 is hardly worth reporting - or am I missing something?\n", "The authors consider a method (which they trace back to 1998, but may have a longer history) of learning the learning rate of a first-order algorithm at the same time as the underlying model is being optimized, using a stochastic multiplicative update. The basic observation (for SGD) is that if \\theta_{t+1} = \\theta_t - \\alpha \\nabla f(\\theta_t), then \\partial/\\partial\\alpha f(\\theta_{t+1}) = -<\\nabla f(\\theta_t), \\nabla f(\\theta_{t+1})>, i.e. that the negative inner product of two successive stochastic gradients is equal in expectation to the derivative of the tth update w.r.t. the learning rate \\alpha.\n\nI have seen this before for SGD (the authors do not claim that the basic idea is novel), but I believe that the application to other algorithms (the authors explicitly consider Nesterov momentum and ADAM) are novel, as is the use of the multiplicative and normalized update of equation 8 (particularly the normalization).\n\nThe experiments are well-presented, and appear to convincingly show a benefit. 
Figure 3, which explores the robustness of the algorithms to the choice of \\alpha_0 and \\beta, is particularly nicely-done, and addresses the most natural criticism of this approach (that it replaces one hyperparameter with two).\n\nThe authors highlight theoretical convergence guarantees as an important future work item, and the lack of them here (aside from Theorem 5.1, which just shows asymptotic convergence if the learning rates become sufficiently small) is a weakness, but not, I think, a critical one. This appears to be a promising approach, and bringing it back to the attention of the machine learning community is valuable.", "\nThis paper revisits an interesting and important trick to automatically adapt the stepsize. They consider the stepsize as a parameter to be optimized and apply stochastic gradient update for the stepsize. Such simple trick alleviates the effort in tuning stepsize, and can be incorporated with popular stochastic first-order optimization algorithms, including SGD, SGD with Nestrov momentum, and Adam. Surprisingly, it works well in practice.\n\nAlthough the theoretical analysis is weak that theorem 1 does not reveal the main reason for the benefits of such trick, considering their performance, I vote for acceptance. But before that, there are several issues need to be addressed. \n\n1, the derivation of the update of \\alpha relies on the expectation formulation. I would like to see the investigation of the effect of the size of minibatch to reveal the variance of the gradient in the algorithm combined with such trick. \n\n2, The derivation of the multiplicative rule of HD relies on a reference I cannot find. Please include this part for self-containing. \n\n3, As the authors claimed, the Maclaurin et.al. 2015 is the most related work, however, they are not compared in the experiments. Moreover, the empirical comparisons are only conducted on MNIST. To be more convincing, it will be good to include such competitor and comparing on practical applications on CIFAR10/100 and ImageNet. \n\nMinors: \n\nIn the experiments results figures, after adding the new trick, the SGD algorithms become more stable, i.e., the variance diminishes. Could you please explain why such phenomenon happens?", "> One central problem of the paper is missing novelty. The authors are well aware of this. They still manage to provide added value. Despite its limited novelty, this is a very interesting and potentially impactful paper. I like in particular the detailed discussion of related work, which includes some frequently overlooked precursors of modern methods.\n\nThank you very much for your evaluation and encouraging words.\n\n> The experimental evaluation is rather solid, but not perfect. It considers three different problems: logistic regression (a convex problem), and dense as well as convolutional networks. That's a solid spectrum. However, it is not clear why the method is tested only on a single data set: MNIST. Since it is entirely general, I would rather expect a test on a dozen different data sets. That would also tell us more about a possible sensitivity w.r.t. the hyperparameters \\alpha_0 and \\beta.\n\nPlease note that we provide experimental evaluation on a non-MNIST data set, specifically CIFAR-10 (Section 4.3 on page 8 and Figure 2 on page 7).\n\n> The extensions in section 5 don't seem to be very useful. In particular, I cannot get rid of the impression that section 5.1 exists for the sole purpose of introducing a convergence theorem. 
Analyzing the actual adaptive algorithm would be very interesting. In contrast, the present result is trivial and of no interest at all, since it requires knowing a good parameter setting, which defeats a large part of the value of the method.\n\nWe agree with your assessment that the analysis in Section 5.1 is significantly restricted and this is a limitation of the current paper. There remains much to be done in this respect, and a theoretical convergence analysis is a highly desired future work. Please note that a convergence analysis of the technique in the multidimensional quadratic case is available in a separate work, which we will highlight prominently in the de-anonymized final revision of the paper.\n\n> MINOR POINTS\n\nThank you for pointing these out, we will fix them in the final revision.\n", "> I have seen this before for SGD (the authors do not claim that the basic idea is novel), but I believe that the application to other algorithms (the authors explicitly consider Nesterov momentum and ADAM) are novel, as is the use of the multiplicative and normalized update of equation 8 (particularly the normalization). \n\n> The experiments are well-presented, and appear to convincingly show a benefit. Figure 3, which explores the robustness of the algorithms to the choice of \\alpha_0 and \\beta, is particularly nicely-done, and addresses the most natural criticism of this approach (that it replaces one hyperparameter with two).\n\nThank you very much for your evaluation and your encouraging feedback.\n\nFigure 3 was produced with exactly the purpose that you described, and we are very glad that this was noticed and found useful.\n\n> The authors highlight theoretical convergence guarantees as an important future work item, and the lack of them here (aside from Theorem 5.1, which just shows asymptotic convergence if the learning rates become sufficiently small) is a weakness, but not, I think, a critical one. This appears to be a promising approach, and bringing it back to the attention of the machine learning community is valuable.\n\nWe agree that a theoretical convergence analysis is a highly desired future work and is a limitation of the current paper. We also agree with the assessment that the approach appears promising and therefore we would like to bring it to the attention of the larger community.\n", "Thank you for your encouraging evaluation and for the improvements suggested.\n\n> 1, the derivation of the update of \\alpha relies on the expectation formulation. I would like to see the investigation of the effect of the size of minibatch to reveal the variance of the gradient in the algorithm combined with such trick.\n\nWe do not have theoretical results about the effect of the minibatch size and gradient variance on the hypergradient descent (HD) algorithm. Considering that the reviewer was potentially referring to experimental evidence, we will make sure to include experimental results with varying minibatch sizes in an appendix in the final revision of this paper.\n\n> 2, The derivation of the multiplicative rule of HD relies on a reference I cannot find. Please include this part for self-containing.\n\nThank you for pointing this out. The mentioned reference for the multiplicative HD rule is now made accessible online, and can be located with a Google search of the title.\n\n> 3, As the authors claimed, the Maclaurin et.al. 2015 is the most related work, however, they are not compared in the experiments. Moreover, the empirical comparisons are only conducted on MNIST. 
To be more convincing, it will be good to include such competitor and comparing on practical applications on CIFAR10/100 and ImageNet.\n\nAs you point out, Maclaurin et al. (2015) is a highly related work, which introduces the term “hypergradient” and similarly performs gradient-based updates of hyperparameters through a reversible higher-order automatic differentiation setup. \n\nHowever, note that in the approach in Maclaurin et al. (2015) a regular optimization procedure is truncated to a fixed number N of “elementary” iterations (such as N = 100 in the paper), at the end of which the derivative of an objective is propagated all the way through this N inner optimization iterations (the “reversibility” trick introduced in the paper is for making this possible in practice), and the resulting hypergradient is used in an outer optimization of M “meta” iterations (such as M=50 in the paper). Our technique, in contrast, is an online adaptation of a hyperparameter (in particular, the learning rate) at each iteration of optimization, and does not perform derivative propagation through an inner optimization that consists of many iterations. The techniques are thus not directly comparable as competing alternatives. For instance, it is not straightforward to replicate our learning rate trajectory through the VGGNet/CIFAR-10 experiment of 78125 iterations (Figure 2 on page 7, rightmost column) in the reversible learning algorithm due to (1) uninformative gradients beyond a few hundred iterations (see Section 4 “Limitations” in Maclaurin et al. 2015) and (2) potentially prohibitive memory requirements. Having said this, we believe that it would be interesting to compare the behavior of our algorithm for the initial 100 iterations with the 100-iteration learning-rate schedules reported in Maclaurin et al. (2015) and we intend to add such an experiment in the appendix in the final revision of the paper.\n\n> Moreover, the empirical comparisons are only conducted on MNIST. \n\nPlease note that the paper does report non-MNIST empirical comparisons, specifically CIFAR-10 (Section 4.3 on page 8 and Figure 2 on page 7).\n\n> Minors: In the experiments results figures, after adding the new trick, the SGD algorithms become more stable, i.e., the variance diminishes. Could you please explain why such phenomenon happens?\n\nAs far as we can observe, the variance does not diminish, and the method behaves in a similar way to how regular SGD does with a good choice of the learning rate, as for example 10e-2 in the case of logistic regression. We would be interested in looking into this more carefully if you could point us to an experiment/figure where this behavior with SGD happens.\n\nThank you once more for all these constructive comments and suggested additions that allow us to improve the paper.\n", "Thank you very much for your time and for reporting your results. This sort of validation is extremely valuable for us and the community.\n\nFollowing the decision notification, we will make a repository public with the full code in Python (including the plotting codes that we used for producing the plots in the paper). We will also add information about the hardware setup that was used for running the presented experiments.", "This paper introduces an adaptive method to adjust the learning rate of machine learning algorithms, and aims to improve the convergence time and reduce manual tuning of learning rate. 
The idea is simple and straightforward: to automatically update the learning rate by performing gradient descent on the learning rate alongside the gradient descent procedure of the parameters of interest. This is achieve by introducing a new hyperparameter and specifying an initial learning rate. The idea is intuitive and the implementation is not hard.\nWe find that the way the experiments are set­up and described facilitates reproducibility. The data sets in the experiment are all publicly available, partitioning information of training and test data sets are clearly stated except the randomization control of training set for each experiment. Authors implemented and documented the Lua code of the proposed optimization algorithms for SGD, SGDN and Adam, and made those codes available within the torch.optim package on github. The python version of AdamHD can also be found publicly online. Since we do not have programming experience using Lua, we implemented the python version of SGDHD and SGDNHD by ourselves following the paper pseudocode, but we cannot guarantee that our implementation based on our understanding is exactly the same as the authors'. However, the code that authors used to generate the exact plots and graphs to illustrate their experiment results are not available. Thus we also implemented this part of code ourselves according to paper. Most parameters (including hyperparameters) used in experiments were given. We would suggest authors to include more hardware­specific information used to run their experiments in the paper, including time, memory, GPU and type of machine.\nIt is not hard to replicate the results shown in the original paper, with some effort to apply machine learning methods embedded in the Torch or PyTorch library on the given data set. Based on the results, it is great to see that most of the experiments in the study are reproducible. Specifically, the change of learning rate and training/validation loss in our replication generally follows a similar pattern to that in the paper. For example, the learning rate increases in the first few epochs in logistic regression and neural networks using SGDHD. Also, the learning rate and training/validation loss tends to oscillate starting at some point in the paper and our results shows the same pattern. However, there are also instances where the non­HD version of the optimizers perform better than the HD counterparts.\nOverall, the paper is well­written, provides a promising algorithm that works at least as well as existing gradient­descent­based optimization algorithms that use a fixed global learning rate. The authors claim that an important future work is to investigate the theoretical convergence guarantees of their algorithm, which is indeed very insightful. I am hoping that the authors can also justify the theoretical support behind the adaptation of the learning rate in the sense that to what they are trying to adapt the learning rate.", "Hi, both are very interesting potential applications! \n\nI think an application to non-stationary data, where the learning rate varies on the fly as new data comes in, would be very interesting indeed. We will keep this in mind. \n\nWe're also looking at adaptive filter theory.\n\nThank you very much for the pointers.", "You only need a logaritmic number of iterations to shift your current learning rate to another value, instead of a linear number of them. 
We have also seen in practice that with good hyperparameters for both implementations, the multiplicative rule adapts faster. There is also a theoretical reason that comes from the formal derivation of the rule that suggests that the multiplicative rule makes more sense than the additive one.", " One of the practical advantages of this multiplicative\nrule is that it is invariant up to rescaling and that the multiplicative adaptation is in general faster than\nthe additive adaptation. Why?", "Dear authors,\n\nThank you for this paper, I really enjoyed it! :)\n\nI have two small comments:\n\n - A related field which may provide additional insights in that of Adaptive filter theory [1]. A particularly relevant example would be the use of adaptive forgetting factors, where gradient information is used to tune a forgetting factor recursively.\n\n - A further interesting application for the proposed method could be in the context of non-stationary data. In such a setting, it may be desirable to allow the learning to rate to increase if necessary (as would be the case if, for example, the underlying data distribution changed). Potential scenarios where this could happen are streaming data applications (where model parameters are constantly updated to take into consideration new observations/drifts in the distribution) or transfer learning applications. \n\nBest wishes and good luck!\n\nReferences:\n1. Adaptive Filter Theory, Simon Haykin, Prentice Hall, 2008" ]
[ 6, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkrsAzWAb", "iclr_2018_BkrsAzWAb", "iclr_2018_BkrsAzWAb", "H1pbs28kG", "r1jLC23Jf", "BJ6v0V9ef", "B1wQhWzGM", "iclr_2018_BkrsAzWAb", "r1HU3l1kf", "rkaXMT-lz", "iclr_2018_BkrsAzWAb", "iclr_2018_BkrsAzWAb" ]
iclr_2018_HyWrIgW0W
Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such out-of-equilibrium behavior is a consequence of highly non-isotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims, proven in the appendix.
accepted-poster-papers
Dear authors, Based on the comments and your rebuttal, I am glad to accept your paper at ICLR.
val
[ "HkJG6iOlM", "B1PK_0tgf", "Bkic0BclM", "S1_hIt67f", "B11u4C3ZM", "HJYXSAh-G", "BJ-c702Zz", "r1yXa6QWM", "H1vZPmg-G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "The paper takes a closer look at the analysis of SGD as variational inference, first proposed by Duvenaud et al. 2016\nand Mandt et al. 2016. In particular, the authors point out that in general, SGD behaves quite differently from Langevin diffusion due to the multivariate nature of the Gaussian noise. As the authors show based on the Fokker-Planck equation of the underlying stochastic process, there exists a conservative current (a gradient of an underlying potential) and a non-conservative current (which might induce stationary persistent currents at long times). The non-conservative part leads to the fact that the dynamics of SGD\tmay show oscillations, and these oscillations may even prevent the algorithm from converging to the 'right' local optima. The theoretical analysis is carried-out very nicely, and the theory is supported by experiments on two-dimensional toy examples, and Fourier-spectra of the iterates of SGD.\n\nThis is a nice paper which I would like to see accepted. In particular I appreciate that the authors stress the importance\nof 'non-equilibrium physics' for understanding the SGD process. Also, the presentation is quite clear and the paper well written.\n\nThere are a few minor points which I would like to ask the authors to address:\n\n1. Why cite Kingma and Welling as a source for variational inference in\tsection 3.1? VI is a much older\tfield, and Kingma and Welling proposed a very special form of VI, namely amortized VI with inference networks. A better citation would be Jordan et\tal 1999.\n\n2. I'm not sure how much to trust the Fourier-spectra. In particular, perhaps the deviations from Brownian motion could also be due to the discrete\tnature of SGD (i.e. that the continuous-time formalism is only an approximation of a discrete process). Could you elaborate on this?\n\n3. Could you give the reader more details on how the uncertainty estimates on the Fourier transformations were obtained?\n\nThanks.", "The authors discuss the regularized objective function minimized by standard SGD in the context of neural nets, and provide a variational inference perspective using the Fokker-Planck equation. They note that the objective can be very different from the desired loss function if the SGD noise matrix is low rank, as evidenced in their experiments.\n\nOverall the paper is written quite well, and the authors do a good job of explaining their thesis. However I was unable to identify any real novelty in the theory: the Fokker-Planck equation has been widely used in analysis of stochastic noise in MCMC samplers in recent years, and this paper mostly rephrases those results. Also the fact that SGD theory only works for isotropic noise is well known, and that there is divergence from the true loss function in case of low rank noise is obvious. Thus I found most of section 3 to be a reformulation of known results, including Theorem 5 and its proof.\n\nSame goes for section 5; the symmetric- anti symmetric split is a common technique used in the stochastic MCMC literature over the last few years, and I did not find any new insight into those manipulations of the Fokker-Planck equation from this paper.\n\nThus I think that although this paper is written well, the theory is mostly recycled and the empirical results in Section 4 are known; thus it is below acceptance threshold due to lack of novelty.", "This paper develop theory to study the impact of stochastic gradient noise for SGD, especially for deep neural network models. 
It is shown that when the gradient noise is isotropic normal, SGD converges to a distribution tilted by the original objective function. However, when the gradient noise is non-isotropic normal, which is shown to be common in many models, especially deep neural network models, the behavior of SGD is intriguing: it will not converge to the distribution tilted by the original objective function and, more interestingly, will sometimes converge to limit cycles around some critical points of the original objective function. The paper also provides some hints on why SGD can achieve better generalization than gradient descent.\n\nI think the finding of this paper is interesting, and the technical details are correct. I still have the following comments.\n\nFirst, Assumption 4 seems a bit too abstract. It is not easy to see what the assumption means. It would be better if an example were given that is verified to satisfy the assumption.\n\nAnother comment is related to the overall content of this paper. Though the paper points out that SGD will have out-of-equilibrium behavior when the gradient noise is non-isotropic normal, it remains to show how far away this stationary distribution is from the original distribution defined by the objective function.", "We thank the reviewers and the commentators for their feedback; individual clarifications are enclosed below. We have updated the paper to incorporate these inputs; the summary of changes is as follows.\n\n1. The diffusion matrix D(x) depends on the current iterate of SGD. This implies that the stochastic process must be interpreted in the Ito sense. The Fokker-Planck equation (FP) in Lemma 2 was written in the Fick’s Law form earlier with a term div (D grad rho). This is now changed to the Ito form with the term div (div (D rho)).\n\n2. The above change results in an extra term beta^{-1} div(D) in the definition of the conservative force in (8). All results in our paper remain unchanged upon consistently adding this term.\n\n3. Corollary 8 shows that if the diffusion matrix is identity, Theorem 5 recovers the Jordan-Kinderleher-Otto (JKO) functional in optimal transportation. Trajectories of the Fokker-Planck equation perform steepest descent in the Wasserstein metric on (11) in this case.\n \n4. We have rewritten Theorem 20 to make it more precise.\n\n5. Upon the suggestion of Reviewer 3, we have added older references for variational inference; see Remark 10.\n\n6. We point the readers to Example 17 in Assumption 4 for an illustration.", "Thank you for your comments. Please read below for our clarifications.\n\n>>Why cite Kingma and Welling for variational inference, e.g., cite Jordan ‘99, VI is a much older field\n\nGood point: we will also include older citations.\n\n>>Not sure how much to trust Fourier spectra, deviations from Brownian motion can also be due to discretization\n\nNote that the plot in Fig. 3a is the discrete Fourier transform of (x_{k+1} - x_k)_k. The trajectory is of length 10^5 epochs and sampled at each epoch; we are thus sampling at a very high frequency, well above the Nyquist rate. Low frequency modes in the continuous-time dynamics will not be affected by such a discretization, high frequency modes might, see the right part of Fig. 3a.\n\nThe FFT, which is expected to be flat for Brownian motion, is distinctly non-flat in our experiments. This result is also predicted by other experiments in Sec. 4.1 and Fig. 3b, and our theoretical results. 
So the Fourier spectra are just one more confirmation of the claim.\n\n>>Give more details on how the uncertainty estimates on the Fourier transformations were obtained\n\nThis is described in the caption of Fig. 3. The FFT is computed, independently, for the one-dimensional trajectory of each weight. The standard deviation across all the weights is depicted as the “error band”. The eigenmodes of the weight vector are also the eigenmodes of the trajectory of each weight; it is indeed surprising that different weights have very similar amplitude.", "Thank you for your comments. Please see our clarifications below.\n\n>>Unable to identify any novelty in the theory, reformulation of known results, empirical results are known\n\nWe are glad to help: \n1. While it is widely *believed* that SGD acts as an “implicit regularizer”, to the best of our knowledge we are first to *prove* that it performs variational inference: SGD minimizes an average potential along with an entropic regularization term.\n2. While someone may have noticed that mini-batch noise in deep networks is highly non-isotropic, nobody had connected this to convergence properties of SGD for deep nets.\n3. The fact that anisotropy in deep networks causes the potential Phi to be different than the function upon which SGD evaluates its gradients was *not known*, nor proven, before.\n4. The fact that the most likely trajectories of SGD for deep nets are limit cycles was *not known*, nor proven.\nWe have scouted the literature diligently, but of course it is possible that we may have missed work where any of the above empirical and theoretical results may have been described. We will gladly examine specific references if provided.\n\n>>Fokker-Planck equation has been widely used before\n\nWe surely do not claim to be the first to use the Fokker-Planck equation; it is a standard tool in the analysis of stochastic processes.\n\n>>Fact that SGD theory only works for isotropic noise is well-known, that there is divergence from the true loss is obvious\n\nThe issue is not that there is “divergence from the true loss”, but precisely of what *nature* it is. To the best of our knowledge, we are the first to point out -- and prove -- that SGD for deep nets has limit cycles as its most likely trajectories. This is surely not obvious: in fact, most of the literature focuses on which *critical points* SGD converges to. We show that, with anisotropic noise, it converges to none. Quite non-obvious, frankly.\n\n>>Common technique in stochastic MCMC, did not find any new insight into manipulations\n\nMCMC theory constructs grad f and D given a log-likelihood Phi that one would like to draw samples from. This paper is about the reverse direction: given a grad f and a D, what is the Phi? This is a novel question and pertinent to understanding the efficacy of SGD for deep networks; it is not under the purview of the MCMC literature. We *decompose* grad f into symmetric and anti-symmetric terms and develop assumptions and theory that enables us to do so.\n\nTo emphasize, MCMC methods start with a given Phi, whereas we find the Phi. The two are completely opposite directions, even if some formulae might look familiar from the MCMC literature.", "Thank you for your comments. Please see our clarifications below.\n\n>>Assumption 4 seems a bit too abstract, can you give an example\n\nExample 13 illustrates the effects of the assumption; we will point the readers to it in Sec. 3. 
Another example is in three dimensions, where the assumption is akin to Helmholtz decomposition of a vector field into divergence-free and curl-free components. We allow the force j(x) to be non-trivial, j(x) neq 0 corresponds to broken detailed balance while j(x) = 0 corresponds to detailed balance. This assumption is motivated by the second-law of thermodynamics as discussed in Appendix B.\n\n>>How far away is the stationary distribution from the original one\n\nThe relation between the two is the offset described in Thm. 17. This difference scales linearly with learning rate/batch-size; which can be large in practice because deep networks are trained with small batch-sizes and/or large learning rates. The divergence of the matrix Q is also explicitly computable, see (A13) and Remarks 19-20. Doing so is however computationally challenging for large networks, and a subject of our future investigation.", "Thank you for your comments. Our responses are enclosed below.\n\n1. “assumption 4 is not invariant w.r.t. a change in coordinates of the parameter space”, “in physics there is usually some kind of symmetry group”\n\nWe do not know of general results that indicate symmetries in SGD dynamics which would suggest the “right” metric space to perform our analysis. Indeed, our results indicate that such an analysis would be promising because this metric is expected to depend upon the architecture.\n\n1.1 “don't think the argument given backing up assumption 4 is very convincing”, “my opinion is that it is wrong (but don't think this disqualifies it from being assumed)”\n\nAssumption 4 is motivated by an argument that interprets the Fokker-Planck equation as a physical system in contact with the environment through energy exchange of the diffusion term. This assumption is sufficient to ensure that the second law of thermodynamics holds for such a system and is standard in the analysis of irreversible processes, see [1, 2, 3]. The second law may be violated when considering a few molecules of a gas, or analogously, a few trajectories of SGD, but our results always deal with the entire steady-state distribution.\n\n2. \"[Φ] is only a function of the architecture and the dataset is wildly misleading\", “They depend on both of Assumptions 4 and 16”, entirely reasonable to think that in the wild Φ(x) would depend on the learning rate”\n\nOne only needs assumption 4 to ensure that Phi does not depend on beta. The proof follows from (A4). Define Phi(x) = -beta^{-1} log rho^ss_beta(x) and J^ss_beta from Appendix D accordingly, we have used the subscript to emphasize the dependence on beta. (A4) implies that J^ss_1 is orthogonal to grad rho^ss_1. Decompose -grad f(x) again as\n -grad f = J_1^ss/rho_1^ss - D grad (log rho^ss_1)\n = (rho_1^ss)^{-beta} ((rho_1^ss)^{beta-1} J_1^ss) - beta^{-1} D grad (log rho_1^ss)^beta.\nNow note that div((rho_1^ss)^{beta-1} J_1^ss) = 0 by assumption 4, this lets us identify J_beta^ss = J_1^ss (rho_1^ss)^{beta-1} and rho_beta^ss = (rho_1^ss)^{-beta}. The later gives the result that Phi does not depend on beta.\n\nTo conclude, under assumption 4, Phi does not depend on the learning rate or the batch-size, it is only a function of the architecture and the dataset. Also see #3 below, for isotropic noise, Phi(x) = f(x) without any assumptions.\n\n3. 
“The very first equation of the introduction is a tautology if Phi is defined as in equation (6)” / “feels like a sleight of hand that could hide the assumptions” / “The first part feels familiar”\n\nIndeed, the minimizer of (11) is of the form (6). However, the key point of Thm. 5 is instead that the Fokker-Planck equation reaches this minimizer *monotonically*. This is far from a sleight of hand, and if gradient noise is isotropic, in complete rigor, (11) with Phi = f becomes the celebrated Jordan-Kinderleher-Otto (JKO) functional [4]; we have steepest descent in the Wasserstein metric in this case, in addition to monotonic decrease. The JKO functional is one of the major results of the theory of optimal transportation in the 20th century, see Sec. 4.3 in [5]. The implicit definition of Phi in (6) is only used for Thm. 5. We give a completely explicit formula for Phi, in terms of f(x) and D(x), in Thm. 17 and (A13).\n\n4. “needs both the assumptions 4 and 16 to be prominent. To my mind neither of the assumptions are strictly correct, but that doesn't disqualify them from being made or stop the resulting models being taken seriously.“\n\nWe will make these assumptions prominent in the introduction. In our opinion, assumption 4 is mild and the low frequency modes of the FFT in Fig. 3a already validate it. Assumption 16 is less mild, but it is widely used by physicists and biologists (we provide references in the paper) to study real systems where it has been seen to hold.\n\n5. “does SGD undergo Brownian motion near a minimum”, “Is the evidence consistent with Brownian motion in a degenerate minimum with more complicated topology?”\n\nIrrespective of the topology, for isotropic noise, at low enough temperature, SGD performs Brownian motion near a minimum up to the first order. This can be seen from (3).\n\n[1] Prigogine, I. (1955). Thermodynamics of irreversible processes, volume 404. Thomas.\n[2] Qian, H. (2014). The zeroth law of thermodynamics and volume-preserving conservative system in equilibrium with stochastic damping. Physics Letters A, 378(7):609–616.\n[3] Frank, T. D. (2005). Nonlinear Fokker-Planck equations: fundamentals and applications. Springer Science & Business Media.\n[4] Jordan, R., Kinderlehrer, D., and Otto, F. (1997). Free energy and the Fokker-Planck equation. Physica D: Nonlinear Phenomena, 107(2-4):265–271.\n[5] Santambrogio, F. (2017). Euclidean, metric, and Wasserstein gradient flows: an overview. Bulletin of Mathematical Sciences, 7(1):87–154.", "I really like this paper and have learnt a lot from reading it. I think the basic ideas behind it are very important indeed and I don't know of anywhere else they are written down. However I think it has some major issues.\n\nMost importantly I think the statements at the very start the introduction \"[Φ] is only a function of the architecture and the dataset\" and at the start of Section 3, \"The potential Φ(x) depends only on the full-gradient and the diffusion matrix, and will be made explicit in Section 5.\" are *wildly* misleading. They depend on both of Assumptions 4 and 16, which even at the start of Section 3 have not been made yet. I think's it's entirely reasonable to think that \"in the wild\" Φ(x) would depend on the learning rate, and the burden of proof to convince a reader otherwise should be very high.\n\nI am also confused by the use of the term \"full-gradient\". In Lemma 14 formula for Φ involves U, but U depends on the Hessian of f. 
So more than the gradient of f at x.\n\nThe very first equation of the introduction is a tautology if Phi is defined as in equation (6) and only has value if there is a given formula, which is never actually given in the text and only alluded to (I don't count Lemma 14 as that applies to a quadratic form only). There is nothing logically wrong about doing this and I personally find it quite entertaining, but it does feel like a sleight of hand that could hide the assumptions from an inattentive reader.\n\nThe second part of Theorem 5 is just an entropy maximisation theorem which is in every standard textbook (e.g., it's a corollary of Thm 12.1.1 from Elements of Information Theory, 2nd Ed., by Cover and Thomas). The first part feels familiar but I couldn't point you to a specific reference.\n\nConcerning Assumption 4... This assumption is not invariant w.r.t. a change in coordinates of the parameter space. So it is reliant on the Euclidean metric, but why not any other metric, perhaps the Fisher metric? In physics there is usually some kind of symmetry group on the underlying space pushing us to a metric, but there isn't one here, so I don't think the argument given to back up this assumption is very convincing. In fact my opinion is that this assumption is wrong (though I don't think this disqualifies it from being assumed, it's interesting enough to see what happens given the assumption).\n\nFinally in Section 4, the experimental section about Brownian motion, I don't think the null hypothesis that SGD undergoes Brownian motion at a local minimum (which I assume is approximated by a quadratic form) is very strong. Is the evidence consistent with Brownian motion in a degenerate minimum with more complicated topology?\n\nSo in summary I really like this paper, but it needs both the assumptions 4 and 16 to be prominent. To my mind neither of the assumptions is strictly correct, but that doesn't disqualify them from being made or stop the resulting models being taken seriously." ]
[ 8, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyWrIgW0W", "iclr_2018_HyWrIgW0W", "iclr_2018_HyWrIgW0W", "iclr_2018_HyWrIgW0W", "HkJG6iOlM", "B1PK_0tgf", "Bkic0BclM", "H1vZPmg-G", "iclr_2018_HyWrIgW0W" ]
iclr_2018_ByrZyglCb
Robustness of Classifiers to Universal Perturbations: A Geometric Perspective
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.
accepted-poster-papers
The idea of universal perturbations is definitely interesting and well carried out in this paper.
train
[ "H17poxceM", "SygwaSixG", "ByJeL6EWz", "S154AURbG", "ryFLaI0ZM", "Sy82jUCWz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper is written well and clear. The core contribution of the paper is the illustration that: under the assumption of flat, or curved decision boundaries with positive curvature small universal adversarial perturbations exist. \n\nPros: the intuition and geometry is rather clearly presented. \n\nCons: \nReferences to \"CaffeNet\" and \"LeNet\" (even though the latter is well-known) are missing. In the experimental section used to validate the main hypothesis that the deep networks have positive curvature decision boundaries, there is no description of how these networks were trained. \n\nIt is not clear why the authors have decided to use out-dated 5-layer \"LeNet\" and NiN (Network in network) architectures instead of more recent and much better performing architectures (and less complex than NiN architectures). It would be nice to see how the behavior and boundaries look in these cases. \n\nThe conclusion is speculative:\n\"Our analysis hence shows that to construct classifiers that are robust to universal perturbations, it\nis key to suppress this subspace of shared positive directions, which can possibly be done through\nregularization of the objective function. This will be the subject of future works.\" \n\nIt is clear that regularization should play a significant role in shaping the decision boundaries. Unfortunately, the paper does not provide details at the basic level, which algorithms, architectures, hyper-parameters or regularization terms are used. All these factors should play a very significant role in the experimental validation of their hypothesis.\n\nNotes: I did not check the proofs of the theorems in detail. \n", "This paper discusses universal perturbations - perturbations that can mislead a trained classifier if added to most of input data points. The main results are two fold: if the decision boundary are flat (such as linear classifiers), then the classifiers tend to be vulnerable to universal perturbations when the decision boundaries are correlated. If the decision boundary are curved, then vulnerability to universal perturbations is directly resulted from existence of shared direction along with the decision boundary positively curved. The authors also conducted experiments to show that deep nets produces decision boundary that satisfies the curved model.\n\nThe main issue I am having is what are the applicable insight from the analysis:\n\n1. Why is universal perturbation an important topic (as opposed to adversarial perturbation).\n2. Does the result implies that we should make the decision boundary more flat, or curved but on different directions? And how to achieve that? It might be my mis-understanding but from my reading a prescriptive procedure for universal perturbation seems not attained from the results presented.", "The paper develops models which attempt to explain the existence of universal perturbations which fool neural networks — i.e., the existence of a single perturbation which causes a network to misclassify most inputs. The paper develops two models for the decision boundary:\n\n(a) A locally flat model in which the decision boundary is modeled with a hyperplane and the normals two the hyperplanes are assumed to lie near a low-dimensional linear subspace.\n\n(b) A locally positively curved model, in which there is a positively curved outer bound for the collection of points which are assigned a given label. 
\n\nThe paper works out a probabilistic analysis arguing that when either of these conditions obtains, there exists a fooling perturbation which affects most of the data. \n\nThe theoretical analysis in the paper is straightforward, in some sense following from the definition. The contribution of the paper is to posit these two conditions which can predict the existence of universal fooling perturbations, argue experimentally that they occur in (some) neural networks of practical interest. \n\nOne challenge in assessing the experimental claims is that practical neural networks are nonsmooth; the quadratic model developed from the hessian is only valid very locally. This can be seen in some of the illustrative examples in Figure 5: there *is* a coarse-scale positive curvature, but this would not necessarily come through in a quadratic model fit using the hessian. The best experimental evidence for the authors’ perspective seems to be the fact that random perturbations from S_c misclassify more points than random perturbations constructed with the previous method. \n\nI find the topic of universal perturbations interesting, because it potentially tells us something structural (class-independent) about the decision boundaries constructed by artificial neural networks. To my knowledge, the explanation of universal perturbations in terms of positive curvature is novel. The paper would be much stronger if it provided an explanation of *why* there exists this common subspace of universal fooling perturbations, or even what it means geometrically that positive curvature obtains at every data point. \n\nVisually, these perturbations seem to have strong, oriented local high-frequency content — perhaps they cause very large responses in specific filters in the lower layers of a network, and conventional architectures are not robust to this? \n\nIt would also be nice to see some visual representations of images perturbed with the new perturbations, to confirm that they remain visually similar to the original images. \n", "We thank the reviewer for the comments that helped improve the manuscript. Please see clarifications below.\n\n1. We have updated the manuscript with references for the networks, as well as a description of how these networks were trained as requested by the reviewer.\n\n2. To address the Reviewer concern, we have conducted new experiments on more architectures, in particular ResNet-18, VGG-16 for CIFAR-10 and ResNet-152 for ImageNet; all confirm and validate our results. Specifically,\n\n* Fig. 5 and 6 were updated with decision boundaries of ResNet-18 for CIFAR-10 and ResNet-152 for ImageNet.\n* We have conducted the same experiment as in Fig. 7 (b) for VGG-16 and ResNet-18 architectures. Please see Appendix C for the figures.\n* As also requested by Reviewer 1, we have shown visual examples on ImageNet of the universal perturbations computed using the curvature-based proposed approach. Please see Fig. 8.\n\nThe new experiments confirm that our conclusions hold equally well on modern architectures; in particular, these new results confirm that the existence of universal perturbations is due to the existence of shared positively curved directions in the decision boundary of deep networks.\n\n3. While our conclusion is indeed speculative, we believe that our analysis (in particular the fact that universal perturbations are random vectors in subspace S_c) can be leveraged to improve the robustness to universal perturbations. 
Other authors have actually already used our analysis to counter universal perturbations in a very recent paper [Anonymous, 2017]*. The authors specifically eliminate universal perturbations through random sampling from this subspace, and training a \"denoising\" module to effectively project on the orthogonal of this subspace. This is indeed a very simple way of using the proposed analysis, and we believe that such analysis will lead to more ways to counter universal perturbations.\n\n*: We anonymized this paper, as it is citing a technical report of ours and might violate the double blind policy.\n", "We thank the reviewer for the comments. Please see clarifications below.\n\n1. Universal perturbations are static images that can be used by adversaries to fool a classifier (no need to run an optimization procedure to fool each new image); classifiers hence need to be robust to this excessively simple perturbation model. Adversarial perturbations are image-specific and do not generalize well across different images.\nThe existence of universal perturbations is also informative for the geometry of the classification boundaries, which is one step towards better understanding the fundamental properties of deep networks.\n\n2. The goal of the paper is not (yet) to improve the design of classifiers, but to gain insight through their analysis. It is beyond the scope of a single paper to prescribe procedures to improve robustness by modifying the curvature of classification regions.\nNevertheless, we should mention that our analysis (in particular, the fact that universal perturbations are random vectors in subspace S_c) has already been used by others to provide a constructive procedure to combat universal perturbations [Anonymous, 2017]*. The authors specifically eliminate universal perturbations through random sampling from this subspace, and training a \"denoising\" module to effectively project on the orthogonal of this subspace.\nThis is indeed a very simple way of using the proposed analysis, and we believe that such analysis will inspire more ways to counter universal perturbations.\n\n*: We anonymized this paper, as it is citing a technical report of ours and might violate the double blind policy.\n", "We thank the reviewer for the comments. Please see clarifications below.\n\n- As requested, we have added visual representations of images perturbed with the new perturbation (see Fig. 8).\n\n- We agree curvature is indeed only informative of the local structure of the decision boundary, but a first step to understand it. In the experiments, we have looked at coarse scale second-order information through a finite difference of gradients. This is indeed inevitable as state-of-the art networks using ReLU have theoretically vanishing Hessian almost everywhere.\n\n- Visual appearance of universal perturbations: That is an interesting question. Our focus in this paper was more oriented towards explaining the existence of universal perturbations through an investigation of the geometry of the decision boundary. Interpreting the visual appearance of universal perturbations requires to draw a link between the weights in the lower layers with the curvature of the decision boundary. This would definitely be a fascinating connection, that we would like to work on in the future." ]
[ 6, 5, 7, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_ByrZyglCb", "iclr_2018_ByrZyglCb", "iclr_2018_ByrZyglCb", "H17poxceM", "SygwaSixG", "ByJeL6EWz" ]
iclr_2018_B1hYRMbCW
On the regularization of Wasserstein GANs
Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions, which can be modeled by the neural network, is weight clipping. Augmenting the loss by a regularization term that penalizes the deviation of the gradient norm of the critic (as a function of the network's input) from one, was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.
accepted-poster-papers
This paper proposes an interesting analysis of the limitations of WGANs as well as a solution to these limitations. I am not too convinced by the experimental part as, as some of the reviewers have mentioned, it relies on hyperparameters which can be hard to tune. The more theoretical part, even if it could be written with more care as pointed out by reviewer 2, is nonetheless interesting and could stir discussion. I think it would be a good addition to ICLR as a poster.
train
[ "ry2WdrtgM", "r1aAjU_xG", "ry1j9wpgz", "BJIh7OpmM", "Hy7NMda7f", "BJJMfOp7M", "r1z7Z_p7z", "SyheZOTmz", "r1MZlOaXM", "HkEdTwa7z", "r1iQxlMGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper proposes a novel regularization scheme for Wasserstein GAN based on a relaxation of the constraints on the Lipschitz constant of 1. The proposed regularization penalize the critic function only when its gradient has a norm larger than one using some kind of squared hinge loss. The reasons for this choice are discussed and linked to theoretical properties of OT. Numerical experiments suggests that the proposed regularization leads to better posed optimization problem and even a slight advantage in terms of inception score on the CIFAR-10 dataset.\n\nThe paper is interesting and well written, the proposed regularization makes sens since it is basically a relaxation of the constraints and the numerical experiments also suggest it's a good idea. Still as discussed below the justification do not address a lots of interesting developments and implications of the method and should better discuss the relation with regularized optimal transport.\n\nDiscussion:\n\n+ The paper spends a lot of time justifying the proposed method by discussing the limits of the \"Improved training of Wasserstein GAN\" from Gulrajani et al. (2017). The two limits (sampling from marginals instead of optimal coupling and differentiability of the critic) are interesting and indeed suggest that one can do better but the examples and observations are well known in OT and do not require proof in appendix. The reviewer believes that this space could be better spend discussing the theoretical implication of the proposed regularization (see next).\n\n+ The proposed approach is a relaxation of the constraints on the dual variable for the OT problem. As a matter of fact we can clearly recognize a squared hinge loss is the proposed loss. This approach (relaxing a strong constraint) has been used for years when learning support vector machines and ranking and a small discussion or at least reference to those venerable methods would position the paper on a bigger picture.\n\n+ The paper is rather vague on the reason to go from Eq. (6) to Eq. (7). (gradient approximation between samples to gradient on samples). Does it lead to better stability to choose one or the other? \n How is it implemented in practice? recent NN toolbox can easily compute the exact gradient and use it for the penalization but this is not clearly discussed even in appendix. Numerical experiments comparing the two implementation or at least a discussion is necessary.\n\n+ The proposed approach has a very strong relations to the recently proposed regularized OT (see [1] for a long list of regularizations) and more precisely to the euclidean regularization. I understand that GANS (and Wasserstein GAN) is a relatively young community and that references list can be short but their is a large number of papers discussing regularized optimal transport and how the resulting problems are easier to solve. A discussion of the links is necessary and will clearly bring more theoretical ground to the method. Note that a square euclidean regularization leads to a regularization term in the dual of the form max(0,f(x)+f(y)-|x-y|)^2 that is very similar to the proposed regularization. In other words the authors propose to do regularized OT (possibly with a new regularization term) and should discuss that.\n\n+ The numerical experiments are encouraging but a bit short. The 2D example seem to work very well and the convergence curves are far better with the proposed regularization. 
But the real data CIFAR experiments are much less detailed with only a final inception score (very similar to the competing method) and no images even in the appendix. The authors should also define (maybe in the appendix) the conditional and unconditional inception scores and why they are important (and why only some of them are computed in Table 1).\n\n+ This is more of a suggestion. The comparison of the dual critic to the true Wasserstein distance is very interesting. It would be nice to see the behavior for different values of lambda.\n\n\n[1] Dessein, A., Papadakis, N., & Rouas, J. L. (2016). Regularized Optimal Transport and the Rot Mover's Distance. arXiv preprint arXiv:1610.06447.\n\n\nReview update after reply:\n\nThe authors have responded to most of my concerns and I think the paper is much stronger now and discusses the relation with regularized OT. I change the rating to Accept. \n", "This paper is proposing a new formulation for regularization of Wasserstein Generative Adversarial models (WGAN). The original min/max formulation of the WGAN aims at minimizing, over all measures, the maximal deviation of expectations over 1-Lipschitz functions from the one provided by the empirical measure. This problem is often regularized by adding a \"gradient penalty\", i.e. a penalty of the form \"\\lambda E_{z~\\tau}(||\\grad f (z)||-1)^2\" where \\tau is the distribution of (tx+(1-t)y), where x is drawn according to the empirical measure and y is drawn according to the target measure. In this work the authors consider substituting the previous penalty by \"\\lambda E_{z~\\tau}(max(||\\grad f (z)||-1, 0))^2\".\n\nOverall the paper is too vague on the mathematical part, and the experiments provided are not particularly convincing in assessing the benefit of the new penalty.\nThe authors have tried to use mathematical formulations to motivate their choice, but they lack rigorous definitions/developments to make their point convincing.\nThey should also present their model and their mathematical motivation early: in what sense is their new penalty \"preferable\"?\n\n\n\nPresentation issues:\n- in printed black and white versions most figures are meaningless.\n- red and green should be avoided on the same plots, as colorblind people will not perceive any difference...\n- format for images should be vectorial (eps or pdf), not jpg or png...\n- legends/sizes are not readable (especially in the printed version).\n\nReferences issues:\n- harmonize citations: if you add first names for some authors, add them for all of them: why write Harold W. Kuhn and C. Villani, for instance?\n- cramer->Cramer\n- wasserstein->Wasserstein (2x)\n- gans-> GANs\n- Salimans et al. is provided twice, and the second is wrong anyway.\n\n\n\nSpecific comments:\n\npage 1:\n- \"different more recent contributions\" -> more recent contributions\n- avoid double brackets \"))\"\n\npage 2:\n- Please rewrite the first sentence below Definition 1 in a meaningful way.\n- Section 3: if \\mu is an empirical distribution, it is customary to write it \\mu_n or \\hat \\mu_n (in a way that emphasizes the number of observations available).\n- d is used as a discriminator and then as a distance. This is confusing...\n\npage 3:\n- \"f that plays the role of an appraiser (or critic)...\": this paragraph could be extended and possibly elements of the appendix could be added here.\n- Section 4: the way clipping is presented is totally unclear and vague. 
This should be improved.\n- Eq (5): as written the distribution of \\tilde{x}=tx+(1-t)y is meaningless: What is x and y in this context? please can you describe the distributions in a more precise way?\n- Proof of Proposition 5 (cf. page 13): this is a sketch of proof to me. Please state precise results using mathematical formulation.\n- \"Observation 1\": real and generated data points are not introduced at this stage... data points are not even introduced neither!\n\npage 5:\n- the examples are hard to understand. It would be helpful to add the value of \\pi^* and f^* for both models, and explaining in details how they fit the authors model.\n- in Figure 2 the left example is useless to me. It could be removed to focus more extensively on the continuous case (right example).\n- the the -> the\n\npage 6:\n- deterministic coupling could be discussed/motivated when introduced. Observation 3 states some property of non non-deterministic coupling but the concept itself seems somehow to appear out of the blue.\n\npage 10:\n- Figure 6: this example should be more carefully described in terms of distribution, f*, etc.\n\npage 14:\n- Proposition 1: the proof could be shorten by simply stating in the proposition that f and g are distribution...\n\npage 15:\n- \"we wish to compute\"-> we aim at showing?\n- f_1 is not defined sot the paragraph \"the latter equation...\" showing that almost surely x \\leq y is unclear to me, so is the result then.\nIt could be also interesting to (geometrically) interpret the coupling proposed. The would help understanding the proof, and possibly reuse the same idea in different context.\n\npage 16:\n- proof of Proposition 2 : key idea here is using the positive and negative part of (f-g). This could simplify the proof.", "The article deals with regularization/penalization in the fitting of GANs, when based on a L_1 Wasserstein metric. Basics on mass transportation are briefly recalled in section 2, while section 3 formulate the GANs approach in the Wasserstein context. Taking into account the Lipschitz constraint and (non-) differentiability of optimal critic functions f are discussed in section 4 and Section 5 proposes a way to penalize candidate functions f that do not satisfy the Lipschitz condition using a tuning parameter lambda, ruling a trade-off between marginal fitting and gradient control. The approach is illustrated by numerical experiments. Such results are hardly convincing, since the tuning of the parameter lambda plays a crucial role in the performance of the method. More importantly, The heuristic proposed in the paper is interesting and promising in some respects but there is a real lack of theoretical guarantees motivating the penalty form chosen, such a theoretical development could allow to understand what may rule the choice of an ideal value for lambda in particular.", "Thank you very much for your interest in our work. We are happy to hear that you could reproduce and confirm our results. We wish you all the best for the ICLR 2018 Reproducibility Challenge!", ">> “+ The proposed approach has a very strong relations to the recently proposed regularized OT (see [1] for a long list of regularizations) and more precisely to the euclidean regularization. I understand that GANS (and Wasserstein GAN) is a relatively young community and that references list can be short but their is a large number of papers discussing regularized optimal transport and how the resulting problems are easier to solve. 
A discussion of the links is necessary and will clearly bring more theoretical ground to the method. Note that a square euclidean regularization leads to a regularization term in the dual of the form max(0,f(x)+f(y)-|x-y|)^2 that is very similar to the proposed regularization. In other words the authors propose to do regularized OT (possibly with a new regularization term) and should discuss that.”\n\nWe are very thankful for pointing us to the link to regularized OT. The similarity to regularized OT with Euclidean regularization is highly interesting, and we discuss it now in Sections 2 and 5. \n\nBut, at least from our understanding, the equivalence to regularized OT is not exactly given, since a regularization term in the primal does not seem to allow for the maximization in the dual over one function only. We believe the same problem to appear for any Bregman divergence and therefore doubt that any new regularization term gives the exact equivalence to any of the previously considered approaches in regularized OT that we are aware of.\nWe performed experiments with the variant of our regularization term, that is most similar to the Euclidean regularized OT (see last paragraph in the experimental section), but it showed only good performance on toy dataset, but poor performance on larger datasets such as CIFAR-10.\n\n>>+ The numerical experiments are encouraging but a bit short. The 2D example seem to work very well and the convergence curves are far better with the proposed regularization. But the real data CIFAR experiments are much less detailed with only a final inception score (very similar to the competing method) and no images even in appendix. \n\nWe run additional experiments on CIFAR-10 for 3 more values of lambda (0.1,5,100), all supporting the conclusion that WGAN-LP performs slightly better is much less dependent on the right choice of hyperparameter lambda than the WGAN-GP (see Table 1 in the revised version). We also added a plot (Fig 6) displaying the regularization term of WGAN-GP separated into contributions based on a gradient norm exceeding one and based on a gradient norm smaller one, which also supports the higher sensitivity of WGAN-GP to the right choice of hyperparameter, and additionally suggests that WGAN-GP in fact behaves similar to WGAN-LP when the hyperparameter lambda is chosen small enough to make it perform well. \n\n>> “The authors should also define (maybe in appendix) the conditional and unconditional inception scores and why they are important (and why only some of them are computed in Table 1)”\n\nWe added such a description into Appendix D.6.\n\n>>”This is more of a suggestion. The comparison of the dual critic to the true Wasserstein distance is very interesting. It would be nice to see the behavior for different values of lambda.”\n\nDue to limitations in our access to computational resources, we were not yet able to conduct these experiments, but agree that this would be very interesting and plan to report such results in the camera ready version", "We thank the reviewer for his highly valuable comments and thoughtful suggestions! 
Based on them, we applied the following main changes in the revised version of our paper:\nWe added a paragraph giving a short introduction to regularized OT in Section 2 and a paragraph about the connection to our proposed regularization in Section 5 (special thanks for pointing us in this direction!!!).\nWe extended the CIFAR experiments, by running more experiments with different values of the regularization parameter (all show that WGAN-LP produces equivalent or better results and is less sensitive to the value of the regularization parameter) and presenting a deeper investigation of the loss contributions of the regularization term. Interestingly we find, that the penalty of WGAN-GP is behaving similar to the one of WGAN-LP in settings with low regularization parameter. We have added theoretical considerations explaining this behaviour in Section 5.\n\nIn the following we will reply directly to specific comments:\n\n>> “The paper spends a lot of time justifying the proposed method by discussing the limits of the \"Improved training of Wasserstein GAN\" from Gulrajani et al. (2017). The two limits (sampling from marginals instead of optimal coupling and differentiability of the critic) are interesting and indeed suggest that one can do better but the examples and observations are well known in OT and do not require proof in appendix. The reviewer believes that this space could be better spend discussing the theoretical implication of the proposed regularization (see next).”\n\nWe haven’t been able to find references, where computations of the examples can be found in the literature. Approaching WGANs from a deep learning viewpoint, we are also convinced that researchers interested in GANs without the necessary background in OT will find a quick discussion of the examples at least very helpful but possibly even necessary. (See also opposing comments by Reviewer 2.) We have moved as much as we believe is adequate to the appendix.\n\n“The proposed approach is a relaxation of the constraints on the dual variable for the OT problem. As a matter of fact we can clearly recognize a squared hinge loss is the proposed loss. This approach (relaxing a strong constraint) has been used for years when learning support vector machines and ranking and a small discussion or at least reference to those venerable methods would position the paper on a bigger picture.”\n\nWe added a sentence referring to relaxation of hard constraints in the objective of SVMs.\n\n>>” The paper is rather vague on the reason to go from Eq. (6) to Eq. (7). (gradient approximation between samples to gradient on samples). Does it lead to better stability to choose one or the other? How is it implemented in practice? recent NN toolbox can easily compute the exact gradient and use it for the penalization but this is not clearly discussed even in appendix. Numerical experiments comparing the two implementation or at least a discussion is necessary.”\n \nThe main reason to go from Eq. (6) to Eq. (7) is that enforcing the constraint on the gradient norm implements a valid constraint into all directions from the given point, not just a condition on the difference between two points (and just in only one direction). This should help for better generalization to unseen samples. We performed experiments to verify this (the results are shown in Appendix D in the revised version of the paper): While regularization based on Eq. 
(6) worked well on toy data, it performed considerably weaker on CIFAR10, supporting the advantage of a regularization as given in Eq. (7).\n\nFor the computation we did indeed use standard implementations of the gradient in tensorflow (see https://www.tensorflow.org/api_docs/python/tf/gradients and http://pytorch.org/docs/master/autograd.html#torch.autograd.grad). \nLinks to our code will be provided in case of acceptance.\n\t\t\t\t\n.\n", "\n>> “page 10:\n- Figure 6: this example should be more carefully described in terms of distribution, f*, etc.”\n\nThe optimal coupling is described in the text. Drawing the corresponding connections of the coupling would make the image less clear. The continuous function is also sufficiently described in the text by defining the slope almost everywhere, recalling that any choice of y-intercept will produce an optimal critic function.\n\n>> “page 14:\n- Proposition 1: the proof could be shorten by simply stating in the proposition that f and g are distribution…”\n\n\nWe suspect that this refers to Proposition 2 instead. \nIn this case, we believe the proof is easier to phrase by starting with the density functions directly instead of starting with the distributions and then moving to the density functions (which we feel is necessary for our proof)\n\npage 15:\n\n\n>>“- f_1 is not defined sot the paragraph \"the latter equation...\" showing that almost surely x \\leq y is unclear to me, so is the result then.”\n\nThe latter equation was indeed unclear as written down. We have corrected it and removed f_1 from the notation.\n\n>>”It could be also interesting to (geometrically) interpret the coupling proposed. This would help understanding the proof, and possibly reuse the same idea in different context.”\n\nThe geometric intuition is given by the discrete example from Figure 2. This is also exactly the reason why we suggest to keep the discrete case in the paper. \n\nIn words that would be to move the left/right half of one distribution to the left/right half of the other distribution respectively. (We have added such a sentence to the beginning of the proof). We then use the freedom of non-uniqueness of the optimal coupling to simply find any coupling doing exactly that.\n\n>>” page 16:\n- proof of Proposition 2 : key idea here is using the positive and negative part of (f-g). This could simplify the proof.”\n\nWe do not quite understand this comment, as using the positive and negative part of f-g is exactly what we are doing. We have added the comment that the mathematical formulas describe exactly the positive and the negative part of (f-g).\n", ">> Section 3: if \\mu is an empirical distribution, it is customary to write it \\mu_n or \\hat \\mu_n (in a way that emphasizes the number of observations available).\n\nThe distributions do not necessarily need to be empirical here. Therefore, we decided to keep the distributions general with the according notation.\n\n>> page 3:\n- \"f that plays the role of an appraiser (or critic)...\": this paragraph could be extended and possibly elements of the appendix could be added here.\n\nIn a longer version, it would be nice to elaborate on this point in the main paragraph. Only due to the strong space constraints we were forced to move these considerations into the appendix, since (despite being helpful for understanding) they are not relevant for the rest of the paper. \n\n>> “- Section 4: the way clipping is presented is totally unclear and vague. 
This should be improved.”\n\nWe state that weight clipping is “to enforce the parameters of the network not to exceed a certain value c_max>0 in absolute value”. Translated into formulas this is: there is some c_max>0 such that |p|<c_max for all network parameters p, which is exactly the definition of weight clipping. \n\n>> “- Proof of Proposition 5 (cf. page 13): this is a sketch of proof to me. Please state precise results using mathematical formulation.”\n\nUnfortunately we are unsure what the reviewer is referring to. (There is no Prop 5 in the originally submitted version). We assume that Proposition 1 is meant.\nIn that case, we believe our version qualifies as more than a sketch of proof since it contains the complete set of arguments. If a reader feels more comfortable with mathematical notation, the reader may translate the written words into mathematical formulas to follow the arguments. In this case, we believe that formulas would even distract from the simplicity of arguments. In the end, our intention is to simplify the proof taken from the paper “Improving training of Wasserstein GANs” (https://arxiv.org/abs/1704.00028).\n\n\n>> “page 5:”\n- the examples are hard to understand. It would be helpful to add the value of \\pi^* and f^* for both models, and explaining in details how they fit the authors model.\n\nWe have added a labeled y-axis for the values of f*. The optimal coupling is indicated in red as before. In terms of (generalized) probability distributions, this would correspond to delta functions defined by the coupled points X and Os.\n\n>> “- in Figure 2 the left example is useless to me. It could be removed to focus more extensively on the continuous case (right example).”\n\nThe discrete case is in our opinion much easier to understand than the continuous one, but motivates the reasoning for an optimal critic in the continuous case. We therefore suggest to leave it in. (see also our comments on the suggestion of a geometrical interpretation of the coupling in the proof to Proposition 1)\n\n>> “page 6:\n- deterministic coupling could be discussed/motivated when introduced. Observation 3 states some property of non non-deterministic coupling but the concept itself seems somehow to appear out of the blue.”\n\nWe have added a short discussion on deterministic couplings.\n", "We thank the reviewer for the comments and suggestions and for checking the details of the arguments presented in the paper and thereby detecting room for substantial improvements.\n\nThis led to the following changes in the revised version of our paper:\nWe solved the issues in the reference section.\nWe improved the presentation according to your suggestions whenever possible (as in proofs), improved the formulations, and removed typos.\nWe improved the images. In particular, we would like to thank the reviewer for noticing the red/green issue that we missed to take care of in some plots. Our new images should thereby be better to read and understand.\n\nIn the following we will reply directly to specific comments:\n\n\n>> “Overall the paper is too vague on the mathematical part, and the experiments provided are not particularly convincing in assessing the benefit of the new penalty.\nThe authors have tried to use mathematical formulations to motivate their choice, but they lack rigorous definitions/developments to make their point convincing.”\n\nUnfortunately, the complaint about the lack of rigour is too broad for us to understand what exactly the reviewer is missing. 
We do believe, however, that the mathematical formulations are complete and concise, only due to the limited space available, we were forced to move most of the mathematical proofs into the appendix. We would be happy to improve by adding missing definitions or arguments that we are unaware of, if we get pointed to specific suggestions.\n\n>> “They should also present early their model and their mathematical motivation: in what sense is their new penalty \"preferable\"?”\n\nThe new penalty is preferable over the previous ones, since\nThe new penalty does not exclude approximations of optimal critic functions as the weight clipping approach does,\ndoes not enforce a constraint that cannot be justified,\nis therefore less dependent on the choice of the hyperparameter lambda,\nstill builds on the great advantages of WGANs, which are one of the best performing GANs currently out there (and even leads to slightly better and more stable performance in practice).\n\nWe made points 1,2 and 4 more clear in revised version of the paper and have added \nmore theoretical results in Section 5 and experimental results in Section 6 to verify point 3.\n", "We thank the reviewer for the valuable feedback. \n\nWe share the viewpoint that theoretical guarantees would be very much desirable and should further investigated, however we also think that rigorous convergence results, as for example in convex optimization, are hard to establish in a field of deep learning approaches, where there is still the lack of theoretical understanding in general. \nOn the other hand, we do believe that our research provides sufficient theoretical evidence for our method to be advantageous over existing approaches to WGANs. \n \nIn the following we will reply directly to specific comments:\n\n>> “The approach is illustrated by numerical experiments. Such results are hardly convincing, since the tuning of the parameter lambda plays a crucial role in the performance of the method.”\n\nIt is a weakness of many models that they do depend on tuning hyperparameters in a very sensitive way. This has also been demonstrated for various GANs in a recent paper (https://arxiv.org/abs/1711.10337). Our results, however, demonstrate that our version, WGAN-LP, is less sensitive to the tuning of lambda than WGAN-GP. That alone is a big advantage of our version to existing ones in our opinion. We tried to make this point more clear in the revised version and added theoretical considerations and more experimental results on CIFAR with different choices of the hyperparameter which consistently show a better performance of WGAN-LP and less sensitivity to the right choice of lambda.\n\n>> “More importantly, the heuristic proposed in the paper is interesting and promising in some respects but there is a real lack of theoretical guarantees motivating the penalty form chosen, such a theoretical development could allow to understand what may rule the choice of an ideal value for lambda in particular.”\n\nWe believe that our approach is theoretically justified in the sense that it does point out theoretical issues of former approaches that were not noticed and corrects them. 
In this way it improves on one of best-working GANs in a theoretically justified way.\nIn the revised version, we are now also discussing the link to regularized optimal transport theory (see new paragraphs in Sections 2 and 5).\nWe agree, that a theoretical analysis that could guide the choice of the right value of the hyperparameter would be highly desirable, but those guarantees are hard to derive. \nFrom a theoretical viewpoint (that we could not see reflected in experimental results however) we believe high hyperparameter choices would be ideal, since they “strengthen” the weak constraint. A choice of a high value for lambda would also be justified by the newly added connection to optimal transport theory (see Section 5). In addition, we added some theoretical observations on the dependence on lambda in Section 5.\n", "We decided to reproduce the experiment in this paper to participate in ICLR 2018 Reproducibility Challenge. Reports are currently submitted to arXiv and pending. Maybe it will be released after 2-3 days. The temporary submission\nidentifier is: submit/2105336. Below is a summary of the report. The reproduction code is in https://github.com/mikigom/WGAN-LP-tensorflow.\n\nThe ultimate goal of all experiments in this paper is that WGAN-GP is better than Equation WGAN-LP in aspect of training stability and convergence speed to optimize optimal transport problem in GAN framework.\n\nNote that subsection 'Sample quality on CIFAR-10' in the experiment section of this paper and Appendix D.5 'Optimizing the Wasserstein-2 distance' are out of our reproduction scope. At the beginning of the reproduction, we refer to the arXiv uploaded version of the paper. However, we noticed that the revised version of OpenReview recently added subsection 'Sample quality on CIFAR-10'. As a matter of time, the experiments in this subsection were excluded from the scope of this report. On the other hand, we are having trouble implementing a regularization loss term that minimizes the Wasserstein-2 distance in Tensorflow.\n\nThe reproduction code consists of four Python modules: data_generator.py, model.py, reg_losses.py, and trainer.py. data_generator.py is a module that provides a class that generates the sample data needed for learning. model.py is a module that implements 3-layer neural networks for a generator and a critic. reg_losses.py defines the sampling method and loss term for regularization. trainer.py includes a pipeline for model learning and visualization. To see our implementation in more detail, please check out the repository. \n\nWe first want to specify that the experiment follows the trends presented in the paper as a whole, but the overall learning speed is relatively slow. It is assumed that this is due to differences of hyperparameter in RMSProp, or in unrecognized elements. However, since this is not very inconsistent with the overall tendency of the experiment, we proceeded to reproduce the experiments without searching to solve it. It takes about 12 minutes to learn the 20k steps without EMD calculation, and it takes about 2 hours to learn the 2k steps when EMD calculation is included.\n\nPlease refer to the report to be published on arXiv for detailed results of the experiment.\n\nWe have confirmed what the target papers claimed: First, WGAN-LP has more stable learning and faster convergence property than WGAN-GP. Second, WGAN-LP is much more robust to regularization fraction λ than WGAN-GP. 
Finding and determining appropriate hyperparameters is an important but cumbersome part of machine learning research. Therefore, presenting a model that is robust to the selection of hyperparameters can be a contribution in its own right to other researchers and to the field itself. We can confirm that the target paper contributes to this aspect in a reproducible way.\n\n-- Update on December 20 2017\nOur report has now been fully uploaded to arXiv.\nurl : https://arxiv.org/abs/1712.05882" ]
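The exchange above contrasts the WGAN-LP penalty with weight clipping and with the WGAN-GP gradient penalty without writing the regularizers out. The sketch below is an illustrative reading of that contrast rather than the authors' code: the exact penalty forms (a one-sided penalty on gradient norms above 1 for LP versus a two-sided penalty pushing the norm to exactly 1 for GP), the choice of penalty points x_hat, and the critic f are assumptions of this illustration.

```python
# Hedged sketch: one-sided Lipschitz penalty (LP) vs. two-sided gradient penalty (GP)
# for a generic critic f; lam plays the role of the hyperparameter lambda discussed above.
import torch

def critic_grad_norms(f, x_hat):
    # gradient norm of the critic at the sampled penalty points
    x_hat = x_hat.detach().requires_grad_(True)
    grads = torch.autograd.grad(f(x_hat).sum(), x_hat, create_graph=True)[0]
    return grads.flatten(1).norm(2, dim=1)

def gp_penalty(f, x_hat, lam=10.0):
    # WGAN-GP style: drives the gradient norm towards exactly 1 at every sampled point
    return lam * ((critic_grad_norms(f, x_hat) - 1.0) ** 2).mean()

def lp_penalty(f, x_hat, lam=10.0):
    # One-sided variant: only gradient norms above 1 are penalized, so critics with
    # smaller gradients (still valid under the Lipschitz constraint) are not excluded
    return lam * (torch.clamp(critic_grad_norms(f, x_hat) - 1.0, min=0.0) ** 2).mean()
```

Whether lam is 10 or another value is exactly the sensitivity question debated above; the responses argue that the one-sided form is what reduces the dependence on that choice.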
[ 7, 2, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 2, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1hYRMbCW", "iclr_2018_B1hYRMbCW", "iclr_2018_B1hYRMbCW", "r1iQxlMGM", "BJJMfOp7M", "ry2WdrtgM", "SyheZOTmz", "r1MZlOaXM", "r1aAjU_xG", "ry1j9wpgz", "iclr_2018_B1hYRMbCW" ]
iclr_2018_Bk8ZcAxR-
Eigenoption Discovery through the Deep Successor Representation
Options in reinforcement learning allow agents to hierarchically decompose a task into subtasks, having the potential to speed up learning and planning. However, autonomously learning effective sets of options is still a major challenge in the field. In this paper we focus on the recently introduced idea of using representation learning methods to guide the option discovery process. Specifically, we look at eigenoptions, options obtained from representations that encode diffusive information flow in the environment. We extend the existing algorithms for eigenoption discovery to settings with stochastic transitions and in which handcrafted features are not available. We propose an algorithm that discovers eigenoptions while learning non-linear state representations from raw pixels. It exploits recent successes in the deep reinforcement learning literature and the equivalence between proto-value functions and the successor representation. We use traditional tabular domains to provide intuition about our approach and Atari 2600 games to demonstrate its potential.
accepted-poster-papers
This paper on automatic option discovery connects recent research on successor representations with eigenoptions. This is a solidly presented, conceptual paper with results in tabular and atari environments.
train
[ "BJlGRSOgf", "BysvRfjez", "HJagrMk-G", "SJUvQG1Gf", "Bk-5Mf1fG", "rysyGMyff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper extends the idea of eigenoptions, recently proposed by Machado et al. to domains with stochastic transitions and where state features are learned. An eigenoption is defined as an optimal policy for a reward function defined by an eigenvector of the matrix of successor representation (SR), which is an occupancy measure induced here by a uniform policy. In high-dimensional state space, the authors propose to approximate that matrix with a convolutional neural network (CNN). The approach is evaluated in a tabular domain (i.e., rooms) and Atari games.\n\nOverall the paper is well-written and quite clear. The proposed ideas for the extension seem natural (i.e., use of SR and CNN). The theorem stated in the paper seems to provide an interesting link between SR and the Laplacian. However, a few points are not clear to me:\n- Is the result new or not? If I understand correctly, Stachenfeld et al. discussed this result, but didn't prove it. Is that correct? So the provided proof is new?\n- Besides, how are D and W exactly defined? \n- Finally, as the matrix is not symmetric, do real eigenvalues always exist?\n\nThe execution of the proposed ideas in the experiments was a bit disappointing to me. The approximated eigenoption was simply computed as a one-step greedy policy. Besides, the eigenoptions seem to help for exploration (as a uniform policy was used) as indicated by plot 3(d), but could they help for other tasks (e.g., learn to play Atari games faster or better)? I think that would be a more useful measure for the learned eigenoptions.\n\nDuring learning SR and the features, what would be the impact if the gradient for SR estimation were also propagated?\n\nIn Figure 4, the trajectories generated by the different eigenoptions are barely visible.\n\nSome typos:\n- Section 2.1:\nin the definition of G_t, the expectation is taken over p as well\nI_w and T_w should be a subset of S\n\n- in (2), the hat is missing over \\Psi\nin the definition of v_\\pi(s), r only depends on s'? This seems inconsistent with the previous definition of \\psi\n\n- p. 6:\nin the definition of L_{SR}(s, s'), why \\psi takes \\phi(s) as argument?\n\n- in conclusion:\nthat that", "- This paper shows an equivalence between proto value functions and successor representations. It then derives the idea of eigen options from the successor representation as a mechanism for option discovery. The paper shows that even under a random policy, the eigen options can lead to purposeful options\n\n- I think this is an important conceptual paper. Automatic option discovery from raw sensors is perhaps one of the biggest open problems in RL research. This paper offers a new conceptual setup to look at the problem and consolidates different views (successor repr, proto values, eigen decomposition) in a principled manner. \n\n- I would be keen to see eigen options being used inside of the agent. Have authors performed any experiments ? \n\n- How robust are the eigen options for the Atari experiments? Basically how hand picked were the options? \n\n- Is it possible to compute eigenoptions online? This seems crucial for scaling up this approach", "Eigenoption Discovery Through the Deep Successor Representation\n\nThe paper is a follow up on previous work by Machado et al. (2017) showing how proto-value functions (PVFs) can be used to define options called “eigenoptions”. In essence, Machado et al. 
(2017) showed that, in the tabular case, if you interpret the difference between PVFs as pseudo-rewards you end up with useful options. They also showed how to extend this idea to the linear case: one replaces the Laplacian normally used to build PVFs with a matrix formed by sampling differences phi(s') - phi(s), where phi are features. The authors of the current submission extend the approach above in two ways: they show how to deal with stochastic dynamics and how to replace a linear model with a nonlinear one. Interestingly, the way they do so is through the successor representation (SR). Stachenfeld et al. (2014) have showed that PVFs can be obtained as a linear transformation of the eigenvectors of the matrix formed by stacking all SRs of an MDP. Thus, if we have the SR matrix we can replace the Laplacian mentioned above. This provides benefits already in the tabular case, since SRs naturally extend to domains with stochastic dynamics. On top of that, one can apply a trick similar to the one used in the linear case --that is, construct the matrix representing the diffusion model by simply stacking samples of the SRs. Thus, if we can learn the SRs, we can extend the proposed approach to the nonlinear case. The authors propose to do so by having a deep neural network similar to Kulkarni et al. (2016)'s Deep Successor Representation. The main difference is that, instead of using an auto-encoder, they learn features phi(s) such that the next state s' can be recovered from it (they argue that this way psi(s) will retain information about aspects of the environment the agent has control over).\n\nThis is a well-written paper with interesting (and potentially useful) insights. I only have a few comments regarding some aspects of the paper that could perhaps be improved, such as the way eigenoptions are evaluated.\n\nOne question left open by the paper is the strategy used to collect data in order to compute the diffusion model (and thus the options). In order to populate the matrix that will eventually give rise to the PVFs the agent must collect transitions. The way the authors propose to do it is to have the agent follow a random policy. So, in order to have options that lead to more direct, \"purposeful\" behaviour, the agent must first wander around in a random, purposeless, way, and hope that this will lead to a reasonable exploration of the state space. \n\nThis problem is not specific to the proposed approach, though: in fact, any method to build options will have to resolve the same issue. One related point that is perhaps more specific to this particular work is the strategy used to evaluate the options built: the diffusion time, or the expected number of steps between any two states of an MDP when following a random walk. First, although this metric makes intuitive sense, it is unclear to me how much it reflects control performance, which is what we ultimately care about. Perhaps more important, measuring performance using the same policy used to build the options (the random policy) seems somewhat unsatisfactory to me. To see why, suppose that the options were constructed based on data collected by a non-random policy that only visits a subspace of the state space. In this case it seems likely that the decrease in the diffusion time would not be as apparent as in the experiments of the paper. 
Conversely, if the diffusion time were measured under another policy, it also seems likely that options built with a random policy would not perform so well (assuming that the state space is reasonably large to make an exhaustive exploration infeasible). More generally, we want options built under a given policy to reduce the diffusion time of other policies (preferably ones that lead to good control performance).\n\nAnother point associated with the evaluation of the proposed approach is the method used to qualitatively assess options in the Atari experiments described in Section 4.2. In the last paragraph of page 7 the authors mention that eigenoptions are more effective in reducing the diffusion time than “random options” built based on randomly selected sub-goals. However, looking at Figure 4, the terminal states of the eigenoptions look a bit like randomly-selected sub-goals. This is especially true when we note that only a subset of the options are shown: given enough random options, it should be possible to select a subset of them that are reasonably spread across the state space as well. \n\nInterestingly, one aspect of the proposed approach that seems to indeed be an improvement over random options is made visible by a strategy used by the authors to circumvent computational constraints. As explained in the second paragraph of page 8, instead of learning policies to maximize the pseudo-rewards associated with eigenoptions the authors used a myopic policy that only looks one step ahead (which is the same as having a policy learned with a discount factor of zero). The fact that these myopic policies are able to navigate to specific locations and stay there suggests that the proposed approach gives rise to dense pseudo-rewards that are very informative. As a comparison, when we define a random sub-goal the resulting reward is a very sparse signal that would almost certainly not give rise to useful myopic policies. Therefore, one could argue that the proposed approach not only generate useful options, it also gives rise to dense pseudo-rewards that make it easier to build the policies associated with them.", "Thank you for your feedback and thorough review. We believe our paper is better now that we took your input into consideration. \n\nRegarding the execution of the proposed ideas in the experiments, in the new version of our submission we provide a different measure of the usefulness of eigenoptions. We now also show, in the tabular case, how they can be used to improve the agent’s control performance (Figure 4 in the main text and Figures 16-19 in the Appendix). In this new set of experiments the agent takes a random walk over eigenoptions while learning to maximize reward with primitive actions. Such an approach speeds up learning dramatically as a consequence of the agent being able to better explore the environment.\n\nThe responses to the other questions asked are itemized below:\n\n-- Is the result new or not? If I understand correctly, Stachenfeld et al. discussed this result, but didn't prove it. Is that correct? So the provided proof is new?\n\nYes, that is correct. Stachenfeld et al. (2014) discussed the result but did not provide a formal proof of it, nor the relationship between the eigenvalues of both approaches. 
Also, because the authors provided an informal discussion, they were not very precise in their claims, ignoring for example the fact that the equivalence we discuss is true only if the generated graph is regular (i.e., the size of the action set is the same across every state). To be precise, below is what Stanchenfeld et al. (2014) wrote: “Under a random walk policy, the transition matrix is given by $T = D^{-1}W$. If $\\phi$ is an eigenvector of the random walk’s graph Laplacian $I - T$, then $D^{1/2} \\phi$ is an eigenvector of the normalized graph Laplacian. The corresponding eigenvector for the discounted Laplacian, $I-\\gamma T$, is $\\gamma \\phi$. Since the matrix inverse preserves the eigenvectors, the normalized graph Laplacian has the same eigenvectors as the SR, $M = (I - \\gamma T)^{-1}$, scaled by $\\gamma D^{-1/2}$.” \n\n\n-- Besides, how are D and W exactly defined?\n\nD and W are defined for PVFs. We use the same definition given by Machado et al. (2017): W is the graph’s adjacency matrix and D is the diagonal matrix whose entries are the row sums of W. W(i, j) is defined to be 1 if there is an action that allows the agent to go from state i to state j.\n\n-- As the matrix is not symmetric, do real eigenvalues always exist?\n\nWe thank the reviewer for the question about the matrix symmetry and its eigenvalues because we made this discussion clearer in the paper now. The eigenvalues/eigenvector are not necessarily real for the eigendecomposition of an asymmetric matrix. However, the right eigenvectors obtained from the singular value decomposition of the matrix are always real (this is what we do in the ALE experiments). \n\n-- During learning SR and the features, what would be the impact if the gradient for SR estimation were also propagated?\n\nWe did not investigated this possibility. Because this is the first demonstration of eigenoptions discovery from raw pixels, we wanted to keep the learning process as simple as possible. Thus, we avoided the interaction between the loss function of the SR and the reconstruction error. This is something we plan to investigate in future work.\n\n-- In Figure 4, the trajectories generated by the different eigenoptions are barely visible.\n\nThis was in fact intentional. We did not want to use those results to focus on the trajectory that led the agent to the highlighted state, but on the final state itself. We did so by plotting the mass of visitation of each state. If the trajectories were visible, it would mean that the agent was navigating through the environment without a clear purpose. We wanted to show the exact opposite. That the agent was clearly spending the vast majority of the time in a specific location. We made this clearer in the updated version of our submission.\n\n-- in the definition of L_{SR}(s, s'), why \\psi takes \\phi(s) as argument?\n\nIn the definition of $L_{SR}(s, s’)$, $\\psi$ takes $\\phi(s)$ as argument because we are implicitly referring to Figure 2, in which we labeled the output of some layers as functions. We define $\\psi$ to be the SR module while $\\phi$ is the output of the representation learning module. We really appreciate this question, it made us realize the need to further clarify this in the paper, which we also did in the updated version of our submission.\n\nNaturally, we also fixed all the typos you listed (thank you for that).\n", "Thank you for your kind review. We have just updated our submission to include a new set of results where the eigenoptions are used inside of the agent. 
We show how eigenoptions can also be used to improve the agent’s control performance (Figure 4 in the main text and Figures 16-19 in the Appendix). We show that one can take a random walk over eigenoptions while learning to maximize reward with primitive actions. Such an approach speeds up learning dramatically, since the agent can better explore the environment.\n\nRegarding the robustness of the eigenoptions in the Atari experiments, we were able to select the options in a fairly straightforward way: by looking at those that generated a high density of visitation in a particular location on the screen. We did have multiple similar options we ended up not reporting for clarity. We also had options that would not move the agent anywhere (probably because of gamma being set to 0) and others in which the agent was happy regardless of the action taken (likely because the agent was trying to maximize features that were not under its control). We consider the results presented in this paper promising because we were able to replicate, using raw pixels, the results Machado et al. (2017) obtained when using the RAM state of the game (that encodes explicit information the agent cares about). However, we do think some extra work still needs to be done on option pruning. We do have some ideas, such as pruning options based on whether the agent can in fact maximize the return generated by the eigenpurpose, and pruning options that lead to the same distribution as other options. This is something we want to further investigate in future work to allow us to easily obtain a set of useful options.\n\nFinally, we are very excited about the direction of research you asked about (computing eigenoptions online). This is definitely something we are planning to investigate in future work. It should be possible to compute the eigenoptions online. There are incremental methods capable of estimating the singular value decomposition of a matrix, aside from other methods capable of discovering the top k eigenvectors of a matrix (notice that our method is much more stable than eigenoptions obtained from PVFs, which need to estimate the *bottom* k eigenvectors). Once we have the eigenvectors, we could actually learn all eigenoptions simultaneously through off-policy learning. Also, it is not far-fetched to imagine an algorithm that learns the intra-option policy and the policy over options simultaneously, bootstrapping from the option-critic architecture, for instance.\n\n", "Thank you for such a careful analysis of our paper. The main point you raised was about how we evaluated the eigenoptions. Initially we did not evaluate the options beyond the diffusion time because this metric seems to be related to the agent’s control performance (Machado et al., 2017). However, after reading the reviews, we do realize this is not something we should gloss over. Thus, we have just updated our submission to include a new set of results, in the tabular case, showing how eigenoptions can also be used to improve the agent’s control performance (Figure 4 in the main text and Figures 16-19 in the Appendix). We show that one can take a random walk over eigenoptions while learning to maximize reward with primitive actions. Such an approach speeds up learning dramatically, since the agent can better explore the environment.\n\nWe hope this new experiment also addresses, at least partially, the concern about looking at the diffusion time only over a uniform random walk. 
We focus on the diffusion time under a random walk because we are interested in the setting in which the agent cannot easily stumble upon a non-zero reward in the environment. In this case, most model-free agents just act randomly. We do agree that ideally we should be able to do better than the random wandering our agents do. However, this is very hard to do given that the agent has no information about the world; as the reviewer points out, all papers in the literature rely on this. Hopefully this paper is a step in this direction. Our evaluation does show that augmenting the agent’s action set with eigenoptions makes the exploration process much more efficient in this case, after a short period of acting without purpose.\n\nWe also really appreciate the reviewer’s interpretation of our results in the ALE. We added the discussion/contrast between the sparseness of rewards generated by random subgoals and the subgoals generated by the eigenoptions. We fully agree with that point. Finally, notice it is not straightforward to define a valid random subgoal state when using function approximation because states cannot be uniquely identified. If we define, for example, a specific pixel configuration to be the random sub-goal, it is not clear we can actually observe such a random configuration. Our algorithm naturally deals with this issue as well.\n" ]
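The responses above quote the claimed equivalence between eigenvectors of the successor representation and those of the graph Laplacian. As a purely illustrative check (the ring graph, the uniform random-walk policy, and gamma = 0.9 below are assumptions of this sketch, not the paper's setup), a few lines of NumPy reproduce the relationship on a tiny regular graph:

```python
# Illustrative sketch: on a regular graph, eigenvectors of the random-walk transition
# matrix T = D^{-1} W (and hence of the Laplacian I - T) are also eigenvectors of the
# successor representation (I - gamma*T)^{-1}.
import numpy as np

n = 8                                   # toy ring of 8 states (assumed, not from the paper)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = 1.0
D = np.diag(W.sum(axis=1))
T = np.linalg.solve(D, W)               # uniform random-walk transition matrix
gamma = 0.9
SR = np.linalg.inv(np.eye(n) - gamma * T)

lams, vecs = np.linalg.eigh(T)          # T is symmetric here because the graph is regular
for lam, v in zip(lams, vecs.T):
    # each eigenvector of T is an eigenvector of the SR with eigenvalue 1/(1 - gamma*lam)
    assert np.allclose(SR @ v, v / (1.0 - gamma * lam), atol=1e-6)
print("SR and graph Laplacian share eigenvectors on this toy graph")
```

As the response notes, the equivalence in this simple form holds for regular graphs; for non-regular graphs the D^{1/2} scaling from the quoted passage enters.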
[ 6, 9, 7, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1 ]
[ "iclr_2018_Bk8ZcAxR-", "iclr_2018_Bk8ZcAxR-", "iclr_2018_Bk8ZcAxR-", "BJlGRSOgf", "BysvRfjez", "HJagrMk-G" ]
iclr_2018_Bk9zbyZCZ
Neural Map: Structured Memory for Deep Reinforcement Learning
A critical component to enabling intelligent reasoning in partially observable environments is memory. Despite this importance, Deep Reinforcement Learning (DRL) agents have so far used relatively simple memory architectures, with the main methods to overcome partial observability being either a temporal convolution over the past k frames or an LSTM layer. More recent work (Oh et al., 2016) has gone beyond these architectures by using memory networks which can allow more sophisticated addressing schemes over the past k frames. But even these architectures are unsatisfactory because they are limited to remembering information from only the last k frames. In this paper, we develop a memory system with an adaptable write operator that is customized to the sorts of 3D environments that DRL agents typically interact with. This architecture, called the Neural Map, uses a spatially structured 2D memory image to learn to store arbitrary information about the environment over long time lags. We demonstrate empirically that the Neural Map surpasses previous DRL memories on a set of challenging 2D and 3D maze environments and show that it is capable of generalizing to environments that were not seen during training.
accepted-poster-papers
Biological memory systems are grounded in spatial representation and spatial memory, so neural methods for spatial memory are highly interesting. The proposed method is novel, well-designed and the empirical results are good on unseen environments, although the noise model may be too weak. Moreover, it would have been great to evaluate this method on real data rather than in simulation.
train
[ "ByJWAeFxz", "S1Ii7lcxz", "H1E1RgqxM", "rkoNnT9mG", "BkFXGRy7M", "SkXZVqqfG", "Bkr07cqzz", "H1cqz5cGz", "H1ThWc5Mf", "H1Mke9cMM", "H1LfT3wZf", "r151wyLbz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public" ]
[ "The paper introduces a new memory mechanism specifically tailored for agent navigation in 2D environments. The memory consists of a 2D array and includes trainable read/write mechanisms. The RL agent's policy is a function of the context read, read, and next step write vectors (which are functions of the observation). The effectiveness of the proposed architecture is evaluated via reinforcement learning (% of mazes solved). The evaluation included 1000 test mazes--which sets a good precedent for evaluation in this subfield. \n\nMy main concern is the lack of experiments to test whether the agent really learned to localize and plan routes using it's memory architecture. The downsampling experiment in Section 5.1 seems to indicate the contrary: downsampling the memory should lead to position aliasing which seems to indicate that the agent is not using its memory to store the map and its own location. I'm concerned whether the proposed agent is actually employing a navigation strategy, as seems to be suggested, or is simply a good agent architecture for this task (e.g. for optimization reasons). The short experiment in Appendix E seems to try and answer this question, but it's results are anecdotal at best. \n\nIf good RL performance on navigation tasks is the ultimate goal then one can imagine an agent that directly copies the raw map observation (world centric) into memory and use something like a value iteration network or shortest path planning to plan routes. My point is that there are classical algorithms to solve navigation even in partially observable 2D grid worlds, why bother with deep RL here? ", "This paper presents a fully differentiable neural architecture for mapping and path planning for navigation in previously unseen environments, assuming near perfect* relative localization provided by velocity. The model is more general than the cognitive maps (Gupta et al, 2017) and builds on the NTM/DNC or related architectures (Graves et al, 2014, 2016, Rae et al, 2017) thanks to the 2D spatial structure of the associative memory. Basically, it consists of a 2D-indexed grid of features (the map) M_t that can be summarized at each time point into read vector r_t, and used for extracting a context c_t for the current agent state s_t, compute (thanks to an LSTM/GRU) an updated write vector w_{t+1}^{x,y} at the current position and update the map using that write vector. The position {x,y} is a binned representation of discrete or continuous coordinates. The absolute coordinate map can be replaced by a relative ego-centric map that is shifted (just like in Gupta et al, 2017) as the agent moves.\n\nThe experiments are exhaustive and include remembering the goal location with or without cues (similarly to Mirowski et al, 2017, not cited) in simple mazes of size 4x4 up to 8x8 in the 3D Doom environment. The most important aspect is the capability to build a feature map of previously unseen environments.\n\nThis paper, showing excellent and important work, has already been published on arXiv 9 months ago and widely cited. It has been improved since, through different sets of experiments and apparently a clearer presentation, but the ideas are the same. I wonder how it is possible that the paper has not been accepted at ICML or NIPS (assuming that it was actually submitted there). What are the motivations of the reviewers who rejected the paper - are they trying to slow down competing research, or are they ignorant, and is the peer review system broken? 
I quite like the formulation of the NIPS ratings: \"if this paper does not get accepted, I am considering boycotting the conference\".\n\n* The noise model experiment in Appendix D is commendable, but the noise model is somewhat unrealistic (very small variance, zero mean Gaussian) and assumes only drift in x and y, not along the orientation. While this makes sense in grid world environments or rectilinear mazes, it does not correspond to realistic robotic navigation scenarios with wheel skid, missing measurements, etc... Perhaps showing examples of trajectories with drift added would help convince the reader (there is no space restriction in the appendix).", "# Summary\nThis paper presents a new external-memory-based neural network (Neural Map) for handling partial observability in reinforcement learning. The proposed memory architecture is spatially-structured so that the agent can read/write from/to specific positions in the memory. The results on several memory-related tasks in 2D and 3D environments show that the proposed method outperforms existing baselines such as LSTM and MQN/FRMQN. \n\n[Pros]\n- The overall direction toward more flexible/scalable memory is an important research direction in RL.\n- The proposed memory architecture is new. \n- The paper is well-written.\n\n[Cons]\n- The proposed memory architecture is new but a bit limited to 2D/3D navigation tasks.\n- Lack of analysis of the learned memory behavior.\n\n# Novelty and Significance\nThe proposed idea is novel in general. Though [Gupta et al.] proposed an ego-centric neural memory in the RL context, the proposed memory architecture is still new in that read/write operations are flexible enough for the agent to write any information to the memory, whereas [Gupta et al.] designed the memory specifically for predicting free space. On the other hand, the proposed method is also specific to navigation tasks in 2D or 3D environment, which is hard to apply to more general memory-related tasks in non-spatial environments. But, it is still interesting to see that the ego-centric neural memory works well on challenging tasks in a 3D environment.\n\n# Quality\nThe experiment does not show any analysis of the learned memory read/write behavior especially for ego-centric neural map and the 3D environment. It is hard to understand how the agent utilizes the external memory without such an analysis. \n\n# Clarity\nThe paper is overall clear and easy-to-follow except for the following. In the introduction section, the paper claims that \"the expert must set M to a value that is larger than the time horizon of the currently considered task\" when mentioning the limitation of the previous work. In some sense, however, Neural Map also requires an expert to specify the proper size of the memory based on prior knowledge about the task. ", "Based on the author's rebuttal I have revised the score to a 7. ", "Having read the other reviews and rebuttals, I am maintaining a rating of 9 (top 15%, strong accept).", "Thank you for highlighting this related paper. We will add it to the related work section in the final version.\n", "Thank you for bringing to our attention this related work. This paper was a concurrent submission to ICLR and we were unaware of it before. While it does show that DNCs can do navigation in partially observable environments, it seems that the DNC was trained using supervised learning which can help significantly in training stability. 
We do not know how it would compare to the Neural Map on more general memory tasks, compared to the partially observable navigation experiments reported in that paper. We will add this paper to the related work in the final version.\n", "Dear Reviewer 3,\n\nThank you for the strong support and for your comments and feedback. We understand that the noise model is to some extent simplistic compared to those found in robotics applications, but we argue that it does at least demonstrate that the Neural Map is robust to some degree of drift/aliasing in its position estimate. We have added a figure in Appendix D showing example trajectories from the noisy model.\n\nWe have also added an analysis of the memory in the appendix where we demonstrate that the context operator is mainly used to address the positions near the starting state, where the indicator color is in full view. We also demonstrate the improved ability of the Neural Map to explore the test mazes, with the egocentric Neural Map variant exploring on average 10% more than an LSTM baseline. \n", "Dear Reviewer 2,\n\nWe thank you for your valuable comments and feedback. \n\nWe have added an analysis of the memory in the appendix E where we demonstrate more episodic examples of the context-based retrieval on 3D tasks, including both egocentric and allocentric versions of the Neural Map. From these results, we can see that the Neural Map uses its context operator to mostly retrieve states around the starting position where the indicator is in full view. In addition, we further demonstrated that the indicator identity could be inferred with 100% accuracy from the memory map using just a logistic regression model. \nTo explore whether the Neural Map used its memory to accurately plan routes, we measured its ability to do backtracking. We demonstrated the improved ability of the Neural Map to explore the test mazes, with the egocentric Neural Map variant exploring on average 10% more than an LSTM baseline. \n\nAs for setting the memory size, we argue that in many cases it can be easier for an agent designer to specify spatial distances than the time horizon of a task. For example, you could have an agent operating within a household where the agent designer only has to set the spatial extent of the map to represent an area at least as large as the house. On the other hand, estimating how long in time it might take an agent to do a task such as object collection would require knowing things such as e.g. how fast the robot navigates the house, how long it takes to grasp the object, etc. \nAlthough the memory architecture is limited to 2D/3D environments, we argue that those environments encompass a large portion of real world applications of Deep RL. The Neural Map could potentially be generalized to a memory over graphs but we leave this extension to future work.\n", "Dear Reviewer 1,\n\nWe thank you for your valuable comments and feedback. With respect to the concern over the lack of experiments, we have run experiments on 4 different memory-based environments and on each environment shown that the Neural Map exceeds the performance of previous baseline models, including LSTMs and Memory Networks. We think this has sufficiently demonstrated that the Neural Map demonstrates a performance improvement on memory-based navigation tasks. \n\nWe have also added additional results in appendix E demonstrating more episodic examples of the context-based retrieval on 3D tasks, including both egocentric and allocentric versions of the Neural Map. 
From these results, we can see that the Neural Map uses its context operator to mostly retrieve states around the starting position where the indicator is in full view. In addition, we further demonstrated that the indicator identity could be inferred with 100% accuracy from the memory map using just a logistic regression model. To explore whether the Neural Map used its memory to accurately plan routes, we measured its ability to do backtracking. We showed that the egocentric variant of the Neural Map explores on average around 10% more of the test mazes compared to an LSTM baseline.\n\nWith respect to the downsampling experiment in Section 5.1, each wall in the environment takes one 'pixel' in the map, so the reduction to 8x8 is only aliasing on average 2 positions compared to the 15x15 map. We argue that this is not a significant enough reduction in spatial resolution to cause a large decrease in performance, and the Neural Map can still navigate at this slightly larger spatial scale. The fact that, comparatively, the 6x6 map decreases significantly in performance due to larger aliasing (aliasing up to 3x3 positions) provides evidence that the Neural Map does utilize spatial information to navigate, but is robust to some small noise. \n\nWith respect to the point about motivating the use of Deep RL, we believe the \"Repeating\" environment shows the added capability of using memory-based Deep RL over using only traditional navigation algorithms. In this Repeating environment the indicator always changes to red after the first goal entry, meaning an agent that just writes/maps observations from its current position would not be capable of remembering the original indicator color after the first goal entry (as on being reset to the initial position after the first goal entry, its observation would be overwritten with a potentially incorrect indicator color).\n\nSimilar to the repeating environment, we can envision many other applications of Deep RL within dynamic environments, where the environment is continuously changing. For example, an office environment where objects are constantly being moved and misplaced. In such an environment, a navigation system on a map of past observations might by itself not be sufficient, and a differentiable memory that writes its own features into memory could potentially learn things such as \"if object X is not at Y, it is likely to be at Z\" in an end-to-end manner without pre-specification by an expert.\n", "You might also want to consider taking a look at Memory Augmented Control Networks (https://arxiv.org/pdf/1709.05706).\nThis paper uses a DNC style memory along with the Value Iteration Networks. The paper demonstrates strong experimental results. Possible that VIN when combined with DNC overcomes limitations of differentiable memory described as motivation for your work ?", "You might be interested to take a look\n\nNeural SLAM: Learning to Explore with External Memory \n(https://arxiv.org/pdf/1706.09520.pdf)\n\nWe present an approach for agents to learn representations of a global map from sensor data, to aid their exploration in new environments. To achieve this, we embed procedures mimicking that of traditional simultaneous localization and mapping (SLAM) into the soft attention based addressing of external memory architectures, in which the external memory acts as an internal representation of the environment for the agent. This structure encourages the evolution of SLAMlike behaviors inside a completely differentiable deep neural network. 
We show that this approach can help reinforcement learning agents to successfully explore new environments where long-term memory is essential. We validate our approach in both challenging grid-world environments and preliminary Gazebo experiments. A video of our experiments can be found at: https://goo.gl/G2Vu5y." ]
[ 7, 9, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bk9zbyZCZ", "iclr_2018_Bk9zbyZCZ", "iclr_2018_Bk9zbyZCZ", "ByJWAeFxz", "H1cqz5cGz", "r151wyLbz", "H1LfT3wZf", "S1Ii7lcxz", "H1E1RgqxM", "ByJWAeFxz", "iclr_2018_Bk9zbyZCZ", "iclr_2018_Bk9zbyZCZ" ]
iclr_2018_ry6-G_66b
Active Neural Localization
Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize efficiently. The proposed model incorporates ideas of traditional filtering-based localization methods, by using a structured belief of the state with multiplicative interactions to propagate belief, and combines it with a policy model to minimize the number of steps required for localization. Active Neural Localizer is trained end-to-end with reinforcement learning. We use a variety of simulation environments for our experiments which include random 2D mazes, random mazes in the Doom game engine and a photo-realistic environment in the Unreal game engine. The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model's capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations. We also show that a model trained on random textures in the Doom environment generalizes well to a photo-realistic office space environment in the Unreal engine.
accepted-poster-papers
The paper proposes a neural net based method for active localization in a known map using a learnt perception model (convnet) and a learnt control policy combined with a set belief state representation. The method compares well to baselines and has good accuracy in 2d and 3d envs. All three reviewers are in favor of acceptance due to the novelty and competitive performance of the approach.
train
[ "rJ74wm5xM", "S1a6mx5xM", "BJovaI9gf", "S1Zvx0Jmf", "SJsY_I3MG", "r1tmOU3MM", "HJphLU2MG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper describes a neural network-based approach to active localization based upon RGB images. The framework employs Bayesian filtering to maintain an estimate of the agent's pose using a convolutional network model for the measurement (perception) function. A convolutional network models the policy that governs the action of the agent. The architecture is trained in an end-to-end manner via reinforcement learning. The architecture is evaluated in 2D and 3D simulated environments of varying complexity and compared favorably to traditional (structured) approaches to passive and active localization.\n\nAs the paper correctly points out, there is large body of work on map-based localization, but relatively little attention has been paid to decision theoretic formulations to localization, whereby the agent's actions are chosen in order to improve localization accuracy. More recent work instead focuses on the higher level objective of navigation, whereby any effort act in an effort to improve localization are secondary to the navigation objective. The idea of incorporating learned representations with a structured Bayesian filtering approach is interesting, but it's utility could be better motivated. What are the practical benefits to learning the measurement and policy model beyond (i) the temptation to apply neural networks to this problem and (ii) the ability to learn these in an end-to-end fashion? That's not to say that there aren't benefits, but rather that they aren't clearly demonstrated here. Further, the paper seems to assume (as noted below) that there is no measurement uncertainty and, with the exception of the 3D evaluations, no process noise.\n\nThe evaluation demonstrates that the proposed method yields estimates that are more accurate according to the proposed metric than the baseline methods, with a significant reduction in computational cost. However, the environments considered are rather small by today's standards and the baseline methods almost 20 years old. Further, the evaluation makes a number of simplifying assumptions, the largest being that the measurements are not subject to noise (the only noise that is present is in the motion for the 3D experiments). This assumption is clearly not valid in practice. Further, it is not clear from the evaluation whether the resulting distribution that is maintained is consistent (e.g., are the estimates over-/under-confident?). This has important implications if the system were to actually be used on a physical system. Further, while the computational requirements at test time are significantly lower than the baselines, the time required for training is likely very large. While this is less of an issue in simulation, it is important for physical deployments. Ideally, the paper would demonstrate performance when transferring a policy trained in simulation to a physical environment (e.g., using diversification, which has proven effective at simulation-to-real transfer).\n\nComments/Questions:\n\n* The nature of the observation space is not clear.\n\n* Recent related work has focused on learning neural policies for navigation, and any localization-specific actions are secondary to the objective of reaching the goal. 
It would be interesting to discuss how one would balance the advantages of choosing actions that improve localization with those in the context of a higher-level task (or at least including a cost on actions as with the baseline method of Fox et al.).\n\n* The evaluation that assigns different textures to each wall is unrealistic.\n\n* It is not clear why the space over which the belief is maintained flips as the robot turns and shifts as it moves.\n\n* The 3D evaluation states that a 360 deg view is available. What happens when the agent can only see in one (forward) direction?\n\n* AML includes a cost term in the objective. Did the author(s) experiment with setting this cost to zero?\n\n* The 3D environments rely upon a particular belief size (70 x 70) being suitable for all environments. What would happen if the test environment was larger than those encountered in training?\n\n* The comment that the PoseNet and VidLoc methods \"lack a straightforward method to utilize past map data to do localization in a new environment\" is unclear.\n\n* The environments that are considered are quite small compared to the domains currently considered for\n\n* Minor: It might be better to move Section 3 into Section 4 after introducing notation (to avoid redundancy).\n* The paper should be proofread for grammatical errors (e.g., \"bayesian\" --> \"Bayesian\", \"gaussian\" --> \"Gaussian\")\n\n\nUPDATES FOLLOWING AUTHORS' RESPONSE\n\n(Apologies if this is a duplicate. I added a comment in light of the authors' response, but don't see it and so I am updating my review for completeness.)\n\nI appreciate the authors' response to the initial reviews and thank them for addressing several of my comments.\n\nRE: Consistency\nMy concerns regarding consistency remain. For principled ways of evaluating the consistency of an estimator, see Bar-Shalom, \"Estimation with Applications to Tracking and Navigation\".\n\nRE: Measurement/Process Noise\nThe fact that the method assumes perfect measurements and, with the exception of the 3D experiments, no process noise is concerning, as neither assumption is valid for physical systems. Indeed, it is this noise in particular that makes localization (and its variants) challenging.\n\nRE: Motivation\nThe response didn't address my comments about the lack of motivation for the proposed method. Is it largely the temptation of applying an end-to-end neural method to a new problem? The paper should be updated to make the advantages over traditional approaches to active localization clear.", "I have evaluated this paper for NIPS 2017 and gave it an \"accept\" rating at the time, but the paper was ultimately not accepted. This resubmission has been massively improved and definitely deserves to be published at ICLR.\n\nThis paper formulates the problem of localisation on a known map using a belief network as an RL problem. The goal of the agent is to minimise the number of steps to localise itself (the agent needs to move around to accumulate evidence about its position), which corresponds to reducing the entropy of the joint distribution over a discretized grid over theta (4 orientations), x and y. The model is evaluated on a grid world, on textured 3D mazes with simplified motion (Doom environment) and on a photorealistic environment using the Unreal engine. Optimisation is done through A3C RL. 
Transfer from the crude simulated Doom environment to the photorealistic Unreal environment is achieved.\n\nThe belief network consists of an observation model, a motion prediction model that allows for translations along x or y and 90deg rotation, and an observation correction model that either perceives the depth in front of the agent (a bold and ambiguous choice) and matches it to the 2D map, or perceives the image in front of the agent. The map is part of the observation.\n\nThe algorithm outperforms Bayes filters for localisation in 2D and 3D and the idea of applying RL to minimise the entropy of position estimation is brilliant. Minor note: I am surprised that the cognitive map reference (Gupta et al, 2017) was dropped, as it seemed relevant.", "This is an interesting paper that builds a parameterized network to select actions for a robot in a simulated environment, with the objective of quickly reaching an internal belief state that is predictive of the true state. This is an interesting idea and it works much better than I would have expected. \n\nIn more careful examination it is clear that the authors have done a good job of designing a network that is partly pre-specified and partly free, in a way that makes the learning effective. In particular\n- the transition model is known and fixed (in the way it is used in the belief update process)\n- the belief state representation is known and fixed (in the way it is used to decide whether the agent should be rewarded)\n- the reward function is known and fixed (as above)\n- the mechanics of belief update\nBut we learn\n- the observation model\n- the control policy\n\nI'm not sure that global localization is still an open problem with known models. Or, at least, it's not one of our worst.\n\nEarly work by Cassandra, Kurien, et al used POMDP models and solvers for active localization with known transition and observation models. It was computationally slow but effective.\n\nSimilarly, although the online speed of your learned method is much better than for active Markov localization, the offline training cost is dramatically higher; it's important to remember to be clear on this point.\n\nIt is not obvious to me that it is sensible to take the cosine similarity between the feature representation of the observation and the feature representation of the state to get the entry in the likelihood map. It would be good to make it clear this is the right measure.\n\nHow is exploration done during the RL phase? These domains are still not huge.\n\nPlease explain in more detail what the memory images are doing.\n\nIn general, the experiments seem to be well designed and well carried out, with several interesting extensions.\n\nI have one more major concern: it is not the job of a localizer to arrive at a belief state with high probability mass on the true state---it is the job of a localizer to have an accurate approximation of the true posterior under the prior and observations. There are situations (in which, for example, the robot has gotten an unusual string of observations) in which it is correct for the robot to have more probability mass on a \"wrong\" state. Or, it seems that this model may earn rewards for learning to make its beliefs overconfident. 
It would be very interesting to see if you could find an objective that would actually cause the model to learn to compute the appropriate posterior.\n\nIn the end, I have trouble making a recommendation:\nCon: I'm not convinced that an end-to-end approach to this problem is the best one\nPro: It's actually a nice idea that seems to have worked out well\nCon: I remain concerned that the objective is not the right one\n\nMy rating would really be something like 6.5 if that were possible.\n\n\n\n\n", "Having seen the other reviews and rebuttals, I maintain my rating at 8 (top 50%, clear accept).", "We thank the reviewer for their valuable comments and feedback.\n\n> Minor note: I am surprised that the cognitive map reference (Gupta et al, 2017) was dropped, as it seemed relevant.\nWe agree that this reference is relevant, we have added the reference to the revision.\n", "We thank the reviewer for their valuable comments and feedback.\n\n> What are the practical benefits to learning the measurement and policy model?\nThe example at the end of the paper (see Figure 4) highlights the importance of deciding actions for fast and accurate localization. We agree that the benefits can be better motivated in the introduction and we are looking into restructuring the paper to have a motivating example in the introduction.\n\n> it is not clear from the evaluation whether the resulting distribution that is maintained is consistent:\nLooking at the output of the model manually, it seems that the estimates are consistent. It is very difficult to quantify the consistency of the resulting distribution because there is no straightforward way to calculate the ground-truth distribution/posterior in 3D environments.\n\n> while the computational requirements at test time are significantly lower than the baselines, the time required for training is likely very large:\nWe designed the Active Markov Localization (Slow) baseline keeping this in mind. The proposed model was trained for 24hrs for all experiments. AML (Slow) represents the Generalized AML algorithm using the values of hyperparameters which maximize the performance while keeping the runtime for 1000 episodes below 24hrs in each environment. This means the runtime of AML (Slow) is comparable to the training time of the proposed model. However, we agree that this point should be stated explicitly and we have made relevant changes in the paper.\n\n> The nature of the observation space is not clear.\nThe observation space in 2D environments is just the depth of the one column in front of the agent and in 3D environments, it is the 108x60 RGB image showing the first-person view of the agent.\n\n> It is not clear why the space over which the belief is maintained flips as the robot turns and shifts as it moves.\nThis happens due to the transition function as each channel represents a quantized orientation (North/East/West/South). The details of the transition function are provided in the appendix. For example, if the agent turns left, the probability of it facing north at any x-y coordinate becomes the probability of it facing west at the same-coordinate. This is why the belief flips when turning left. Similarly, the belief flips in the opposite direction when turning right and shifts when moving forward.\n\n> The 3D evaluation states that a 360 deg view is available. What happens when the agent can only see in one (forward) direction?\nThis seems to be a misunderstanding. 
The agent only sees in one forward direction, it needs to take actions to turn around to get the view in other directions. This misunderstanding might be due to the likelihood and belief presented in 4 directions. Note that each of these 4 channels represents the likelihood/belief of the agent’s orientation being that direction, not the likelihood/belief of the view in that direction.\n\n> AML includes a cost term in the objective. Did the author(s) experiment with setting this cost to zero?\nIn our environment, all actions have the same cost. This is equivalent to setting the cost to zero (i.e. it does not affect the optimal policy in the environment), but we found it helps the optimization of our model.\n\n> What would happen if the test environment was larger than those encountered in training?\nWe will need to discretize the test environment such that its belief is at most the size of the training environment, i.e. 70x70. The discretization of the training environments can be changed according to the desired level of accuracy in the test environment. For example, if we discretize 35m x 35m environment to a grid of 35x35, each cell would be a length of 1m. Due to this discretization, the model can make errors up to 0.5m even if it predicts the correct cell. The discretization can be increased to 70x70 to reduce errors to 0.25m.\n\n> The comment that the PoseNet and VidLoc methods \"lack a straightforward method to utilize past map data to do localization in a new environment\" is unclear.\nThe network weights in these models memorize the environment. The model has no way to ingest information about the map as input, thus the model trained in one map cannot be transferred to another map. These models need to be retrained on any new map.\n", "We thank the reviewer for their valuable comments and feedback.\n\nConcerns regarding the objective function:\nThis is a very interesting point and we thank the reviewer for this observation. We agree that one of the tasks of the localizer is to accurately approximate the true posterior under the prior and observations. But another task is to learn to take actions which lead it to arrive at a belief state with high probability mass on the true location. We provide rewards only for the correct prediction of the location and not for the correct prediction of the posterior because of two primary reasons:\n- Defining an appropriate reward function for the true posterior would require some way of estimating the true posterior, which is very difficult especially in 3D environments.\n- We want the model to be penalized if it fails to take actions in order to reach a state where it can predict its correct location, even if its estimation of the posterior under the prior and the observations is accurate.\n\nThe second point can potentially be mitigated by having an auxiliary loss on the belief, which back-propagates only through the perceptual model. This will only reward the perceptual model for predicting the true posterior, and the policy loss would still penalize the whole model for taking unfavorable actions. 
However, this will still require defining a reward or a loss function for the true posterior, which is difficult as there is generally no straightforward way of computing the ground-truth posterior in 3D environments with unknown models.\n\n> although the online speed of your learned method is much better than for active Markov localization, the offline training cost is dramatically higher:\nWe designed the Active Markov Localization (Slow) baseline keeping this in mind. The proposed model was trained for 24hrs for all experiments. AML (Slow) represents the Generalized AML algorithm using the values of hyperparameters which maximize the performance while keeping the runtime for 1000 episodes below 24hrs in each environment. This means that the runtime of AML (Slow) is comparable to the training time of the proposed model. However, we agree that this point should be stated explicitly and we have made relevant changes in the paper.\n\n> How is exploration done during the RL phase?.\nExploration is done implicitly using the stochastic policy in Asynchronous Advantage Actor-Critic method. As in the original A3C paper, to encourage exploration, we used an entropy loss scale of 0.01. \n\n> Please explain in more detail what the memory images are doing.\nMemory images are a part of the map information given to the agent in 3D Environments. They are used to calculate the likelihood given the current observation of the agent as follows: The perceptual model is used to get the feature representation of all the memory images and the current agent observation. The likelihood of each state in the set of memory images is calculated by taking the cosine similarity of the feature representation of the agent’s observation with the feature representation of the memory image. \n\n> It is not obvious to me that it is sensible to take the cosine similarity between the feature representation of the observation and the feature representation of the state to get the entry in the likelihood map:\nThe basic assumption here is that images containing the same “landmark” (unique texture or object) would have similar representations (e.g. high inner product value). Taking cosine similarity is similar to the standard attention operation commonly used in Deep Learning, which is exponentiated inner product. The cosine similarity, in contrast, is scaled to remain within a range of values, which can help training stability and prevent the likelihood model from becoming too sharp.\n" ]
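The responses above describe two concrete pieces of the Bayesian filter: a deterministic transition that flips or shifts the orientation-indexed belief, and a likelihood built from cosine similarities to memory-image features. The sketch below is an illustrative reconstruction only; the orientation ordering, the wrap-around shift, the assumption of one memory image per discrete state, and the clipping of negative similarities are choices made here, not details taken from the paper.

```python
# Illustrative sketch of the belief propagation and likelihood described above.
import numpy as np

N, E, S, W = 0, 1, 2, 3   # assumed ordering of the four orientation channels

def transition_update(belief, action):
    """belief: (4, H, W) tensor; deterministic propagation for the three actions.
    Row 0 is assumed to be the north edge and column 0 the west edge."""
    b = belief.copy()
    if action == "turn_left":          # mass facing North moves to facing West, etc.
        b[[W, S, E, N]] = belief[[N, W, S, E]]
    elif action == "turn_right":       # mass facing North moves to facing East, etc.
        b[[E, S, W, N]] = belief[[N, E, S, W]]
    elif action == "move_forward":     # shift mass one cell along the facing direction
        b[N] = np.roll(belief[N], -1, axis=0)   # wrap-around used here for brevity
        b[S] = np.roll(belief[S], 1, axis=0)
        b[E] = np.roll(belief[E], 1, axis=1)
        b[W] = np.roll(belief[W], -1, axis=1)
    return b

def cosine_likelihood(obs_feat, memory_feats):
    """Cosine similarity of the observation feature to each memory-image feature."""
    o = obs_feat / (np.linalg.norm(obs_feat) + 1e-8)
    m = memory_feats / (np.linalg.norm(memory_feats, axis=1, keepdims=True) + 1e-8)
    return np.clip(m @ o, 0.0, None)   # negative similarities clipped for this sketch

def filter_step(belief, action, obs_feat, memory_feats):
    predicted = transition_update(belief, action)
    posterior = predicted * cosine_likelihood(obs_feat, memory_feats).reshape(belief.shape)
    return posterior / (posterior.sum() + 1e-12)
```

In the model itself the features come from the learned perceptual network and the whole loop is trained with A3C, as described in the responses; this sketch only mirrors the structure of the update.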
[ 6, 8, 7, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_ry6-G_66b", "iclr_2018_ry6-G_66b", "iclr_2018_ry6-G_66b", "SJsY_I3MG", "S1a6mx5xM", "rJ74wm5xM", "BJovaI9gf" ]
iclr_2018_B1al7jg0b
Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, "Conceptor-Aided Backprop" (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.
accepted-poster-papers
This paper is a timely application of linear algebra to propose a method for reducing catastrophic interference by training a new task in a subspace of the parameter space using conceptors. The conceptors are deployed in the backprop, making this a valuable alternative to recent continual learning methods such as EWC. The paper is clearly written and the results give a clear validation of the method. The reviewers agree as to the merits of the paper.
train
[ "rkY82b5lM", "HkUpFYKeM", "BJar6i8Vz", "rkhi37_gz", "H1hoeBkMG", "HJ45oNJGf", "Byej5NkGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper leaves me guessing which part is a new contribution, and which one is already possible with conceptors as described in the Jaeger 2014 report. Figure (1) in the paper is identical to the one in the (short version of) the Jaeger report but is missing an explicit reference. Figure 2 is almost identical, again a reference to the original would be better.\nConceptors can be trained with a number of approaches (as described both in the 2014 Jaeger tech report and in the JMLR paper), including ridge regression. What I am missing here is a clear indication what is an original contribution of the paper, and what is already possible using the original approach. The fact that additional conceptors can be trained does not appear new for the approach described here. If the presented approach was an improvement over the original conceptors, the evaluation should compare the new and the original version.\n\nThe evaluation also leaves me a little confused in an additional dimension: the paper title and abstract suggested that the contribution is about overcoming catastrophic forgetting. The evaluation shows that the approach performs better classifying MNIST digits than another approach. This is nice but doesn't really tell me much about overcoming catastrophic forgetting. \n", "[Reviewed on January 12th]\n\nThis article applies the notion of “conceptors” -- a form of regulariser introduced by the same author a few years ago, exhibiting appealing boolean logic pseudo-operations -- to prevent forgetting in continual learning,more precisely in the training of neural networks on sequential tasks. It proposes itself as an improvement over the main recent development of the field, namely Elastic Weight Consolidation. After a brief and clear introduction to conceptors and their application to ridge regression, the authors explain how to inject conceptors into Stochastic Gradient Descent and finally, the real innovation of the paper, into Backpropagation. Follows a section of experiments on variants of MNIST commonly used for continual learning.\n\nContinual learning in neural networks is a hot topic, and this article contributes a very interesting idea. The notion of conceptors is appealing in this particular use for its interpretation in terms of regularizer and in terms of Boolean logic. The numeric examples, although quite toy, provide a clear illustration.\n\nA few things are still missing to back the strong claims of this paper:\n* Some considerations of the computational costs: the reliance on the full NxN correlation matrix R makes me fear it might be costly, as it is applied to every layer of the neural networks and hence is the largest number of units in a layer. This is of course much lighter than if it were the covariance matrix of all the weights, which would be daunting, but still deserves to be addressed, if only with wall time measures.\n* It could also be welcome to use a more grounded vocabulary, e.g. on p.2 “Figure 1 shows examples of conceptors computer from three clouds of sample state points coming from a hypothetical 3-neuron recurrent network that was drive with input signals from three difference sources” could be much more simply said as “Figure 1 shows the ellipses corresponding to three sets of R^3 points”. Being less grandiose would make the value of this article nicely on its own.\n* Some examples beyond the contrived MNIST toy examples would be welcome.
For example, the main method this article is compared to (EWC) had a very strong section on Reinforcement learning examples in the Atari framework, not only as an illustration but also as a motivation. I realise not everyone has the computational or engineering resources to try extensively on multiple benchmarks from classification to reinforcement learning. Nevertheless, without going to that extreme, it might be worth adding an extra demo on something bigger than MNIST. The authors transparently explain in their answer that they do not (yet!) belong to the deep learning community and hope finding some collaborations to pursue this further. If I may make a suggestion, I think their work would get much stronger impact by doing it the reverse way: first finding the collaboration, then adding this extra empirical results, which then leads to a bigger impact publication.\n\nThe later point would normally make me attribute a score of \"6: Marginally above acceptance threshold\" by current DL community standards, but because there is such a pressing need for methods to tackle this problem, and because this article can generate thinking along new lines about this, I give it a 7 : Good paper, accept.\n", "Dear Authors\n\nThank you for pointing to the statement on the submission webpage. I agree with your interpretation, and retract my objection: even though I find this utterly confusing (and would have wished that the PC and AC detail this in their request to reviewers), this is not for you to pay the price. \n\nApologies for the stress this may have caused you. I will revise my review.\nBest regards.", "This paper introduces a method for learning new tasks, without interfering previous tasks, using conceptors. This method originates from linear algebra, where a the network tries to algebraically infer the main subspace where previous tasks were learned, and make the network learn the new task in a new sub-space which is \"unused\" until the present task in hand.\n\nThe paper starts with describing the method and giving some context for the method and previous methods that deal with the same problem. In Section 2 the authors review conceptors. This method is algebraic method closely related to spanning sub spaces and SVD. The main advantage of using conceptors is their trait of boolean logics: i.e., their ability to be added and multiplied naturally. In section 3 the authors elaborate on reviewed ocnceptors method and show how to adapt this algorithm to SGD with back-propagation. The authors provide a version with batch SGD as well.\n\nIn Section 4, the authors show their method on permuted MNIST. They compare the method to EWC with the same architecture. They show that their method more efficiently suffers on permuted MNIST from less degradation. Also, they compared the method to EWC and IMM on disjoint MNIST and again got the best performance.\n\nIn general, unlike what the authors suggest, I do not believe this method is how biological agents perform their tasks in real life. Nevertheless, the authors show that their method indeed reduce the interference generated by a new task on the old learned tasks.\n\nI think that this work might interest the community since such methods might be part of the tools that practitioners have in order to cope with learning new tasks without destroying the previous ones.
What is missing is the following: I think that without any additional effort, a network can learn a new task in parallel to other task, or some other techniques may be used which are not bound to any algebraic methods. Therefore, my only concern is that in this comparison the work bounded to very specific group of methods, and the question of what is the best method for continual learning remained open. ", "It seems that the reviewer did not read our paper carefully, since it is clear that this paper is not about improving conceptors per se, but about applying conceptors to overcoming catastrophic interference in neural networks. The permuted and disjoint MNIST classification tasks used to evaluate our approach are commonly chosen in continual learning literature to demonstrate a method can overcome catastrophic forgetting (for details, see Lee et al., 2017; Kirkpatrick et al., 2017; Kemker et al., 2017 in the References). The basic idea behind these tests is to show that a neural network can still classify the first datasets without catastrophic forgetting after it is trained on other different tasks. Reviewer 3 (and only this reviewer) entirely misunderstood the objectives and contributions of our work.", "Thank you for your feedback! We absolutely share your disbelief that „this method is how biological agents perform their tasks in real life“. But we made no such claims - and after re-reading our paper we could not find a spot that could be interpreted as if we viewed our model as biologically relevant. (We want to add in parentheses that we are engaged in a collaboration with a neuroscience group, aiming at revealing dendritic spike dynamics as a possible carrier for biological conceptors; but in our paper we made no allusion to this line of work). \n\nAs for your final concern, as we understand it, you point out that biological neural networks are able to cope with a number of different learning tasks simultaneously or in a dovetailing fashion (but we are not sure whether we understand you correctly), and you deplore that we are only comparing to the „very specific group of methods\" and problem definitions that are currently considered in the machine learning (ML) community. Yes, in ML only a rather narrow version of continual learning is addressed which one could dub „strictly sequential learning“: first learn task A, then learn B, etc. Obviously animals and humans can do better and learn (very!) many tasks interleavingly. But strictly sequential learning is difficult enough in ML/ANN research and the catastrophic forgetting problem that it raises hasn’t been satisfactorily addressed until recently. Your suggestion points out a natural and relevant extension of ML research directions!", "Thank you for your feedback! As to your main concern, i.e. that we dodged the blind submission policies by a previous ArXiv publication, we wish to emphasize that in no way did we want to violate these rules. We were relying on the statement found on the submission webpage (http://www.iclr.cc/doku.php?id=iclr2018:conference_cfp): \"While ICLR is double blind, we will not forbid authors from posting their paper on arXiv or any other public forum\".
If we misunderstood this statement, we apologize and will of course retract our submission; but before we do so, we would want to get a word of guidance from the conference organizers how that statement should be properly interpreted.\n\nWe are very grateful that even while you felt, well, cheated by the previous ArXiv publication, you composed an insightful and constructive review. Regarding the computational cost, since a conceptor can be computed by ridge regression, the time complexity is O(nN^2+N^3) if the design matrix is dense, where n is the number of samples and N the number of features. In terms of wall time measures, the time taken to compute a conceptor from the entire MNIST training set (n=55000 images and N=784 pixels, corresponding to the input layer in our networks) is 0.42 seconds of standard notebook CPU time on average. Incremental online adaptation by gradient descent of conceptors is possible in principle too and would come at a cost of O(N^2) per update; we did not implement this. A detailed analysis of computational cost will be added in the revision. \n\nAs for your second suggestion (a larger-sized demo), we have to admit that due to lack of resources (time, infrastructure and manpower) we are currently unable to evaluate our method on tasks of the caliber that you suggest. If our method will be well received in the deep learning community (to which we do not really belong), we hope to find cooperation partners in the future to explore larger-than-MNIST tasks. \n\nFinally, we will go through the paper again to make the vocabulary and phrasing more grounded." ]
[ 7, 7, -1, 7, -1, -1, -1 ]
[ 5, 3, -1, 3, -1, -1, -1 ]
[ "iclr_2018_B1al7jg0b", "iclr_2018_B1al7jg0b", "Byej5NkGG", "iclr_2018_B1al7jg0b", "rkY82b5lM", "rkhi37_gz", "HkUpFYKeM" ]
iclr_2018_HyfHgI6aW
Memory Augmented Control Networks
Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory. But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan. To mitigate these challenges we propose the Memory Augmented Control Network (MACN). The network splits planning into a hierarchical process. At a lower level, it learns to plan in a locally observed space. At a higher level, it uses a collection of policies computed on locally observed spaces to learn an optimal plan in the global environment it is operating in. The performance of the network is evaluated on path planning tasks in environments in the presence of simple and complex obstacles and in addition, is tested for its ability to generalize to new environments not seen in the training set.
accepted-poster-papers
The authors have proposed an architecture that incorporates a VIN with a DNC to combine low level planning with high level memory-based optimization, resulting in a single policy for navigation and other similar problems that is trained end-to-end with sparse rewards. The reviews are mixed, but the authors did allay the concerns of the most negative reviewer by adding a comparison to traditional motion planning (A*) algorithms.
train
[ "H1QljSQxz", "HJBOB_oxf", "r1IWuK2lf", "ByRUdkx7f", "r1pfOkgmf", "Hk8IcbTbz", "BJ7deMpZf", "BJGZMyhWf", "rkpxjJ-WM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author" ]
[ "Summary:\n\nA method is proposed for robot navigation in partially observable scenarios. E.g. 2D navigation in a grid world from start to goal but the robot can only sense obstacles in a certain radius around it. A learning-based method is proposed here which takes the currently discovered partial map as input to convolutional layers and then passes through K-iterations of a VIN module to a final controller. The controller takes as input both the convolutional features, the VIN module and has access to a differential memory module. A linear layer takes inputs from both the controller and memory and predicts the next step of the robot. This architecture is termed as MACN.\n\nIn experiments on 2D randomly generated grid worlds, general graphs and a simulated ground robot with a lidar, it is shown that memory is important for navigating partially observable environments and that the VIN module is important to the architecture since a CNN replacement doesn't perform as well. Also larger start-goal distances can be better handled by increasing the memory available.\n\nComments:\n\n- My main concern is that there are no non-learning based obvious baselines like A*, D*, D*-Lite and related motion planners which have been used for this exact task very successfully and run on real-world robots like the Mars rover. In comparison to the size of problems that can be handled by such planners the experiments shown here are much smaller and crucially the network can output actions which collide with obstacles while the search-based planners by definition will always produce feasible paths and require no training data. I would like to see in the experimental tables, comparison to path lengths produced by MACN vs. those produced by D*-Lite or Multi-Heuristic A*. While it is true that motion-planning will keep the entire discovered map in memory for the problem sizes shown here (2D maps: 16x16, 32x32, 64x64 bitmaps, general graphs: 9, 16, 25, 36 nodes) that is on the order of a few kB memory only. For the 3D simulated robot which is actually still treated as a 2D task due to the line lidar scanner MxN bitmap is not specified but even a few Mb is easily handled by modern day embedded systems. I can see that perhaps when map sizes exceed say tens of Gbs then perhaps MACN's memory will be smaller to obtain similar performance since it may learn better map compression to better utilize the smaller budget available to it. But experiments at that scale have not been shown currently.\n\n- Figure 1: There is no sensor (lidar or camera or kinect or radar) which can produce the kind of sensor observations shown in 1(b) since they can't look beyond occlusions. So such observations are pretty unrealistic.\n\n- \"The parts of the map that lie within the range of the laser scanner are converted to obstacle-free ...\": How are occluded regions marked?", "The paper addresses the important problem of planning in partially observable environments with sparse rewards, and the empirical verification over several domains is convincing. My main concern is that the structure of these domains is very similar - essentially, a graph where only neighboring vertices are directly observable, and because of this, the proposed architecture might not be applicable to planning in general POMDPs (or, in their continuous counterparts, state-space models). The authors claim that what is remembered by the planner does not take the form of a map, but isn't the map estimate \hat{m} introduced at the end of Section 2.1 precisely such a map?
From Section 2.4, it appears that these map estimates are essential in computing the low-level policies from which the final, high-level policy is computed. If the ability to maintain and use such local maps is essential for this method, its applicability is likely restricted to this specific geometric structure of domains and their observability. \n\nSome additional comments:\n\nP. 2, Section 2.1: does H(s) contain 0s for non-observable and 1s for observable states? If yes, please state it.\n\nP. 3: the concatenation of state and observation histories is missing from the definition of the transition function.\n\nP. 3, Eq. 1: overloaded notation - if T is the transition function for the large MDP on histories, it should not be used for the transition function between states. Maybe the authors meant to use f() for that transition?\n\nP. 3, Eq. 3: the sum is over i, but it is not clear what i indexes.\n\nP.3, end of Section 2.1: when computing the map estimate \hat{m}, shouldn't the operator be min, that is, a state is assumed to be open (0), unless one or more observations show that it is blocked (-1)?\n\nP.5: the description of the reward function is inconsistent - is it 0 at the goal state, or >0?\n\nP. 11, above Fig. 9: typo, \"we observe that the in the robot world\"\n \n ", "The paper presents a method for navigating in an unknown and partially observed environment is presented. The proposed approach splits planning into two levels: 1) local planning based on the observed space and 2) a global planner which receives the local plan, observation features, and access to an addressable memory to decide on which action to select and what to write into memory. \n\nThe contribution of this work is the use of value iteration networks (VINs) for local planning on a locally observed map that is fed into a learned global controller that references history and a differential neural computer (DNC), local policy, and observation features select an action and update the memory. The core concept of learned local planner providing additional cues for a global, memory-based planner is a clever idea and the thorough analysis clearly demonstrates the benefit of the approach.\n\nThe proposed method is tested against three problems: a gridworld, a graph search, and a robot environment. In each case the proposed method is more performant than the baseline methods. The ablation study of using LSTM instead of the DNC and the direct comparison of CNN + LSTM support the authors’ hypothesis about the benefits of the two components of their method. While the author’s compare to DRL methods with limited horizon (length 4), there is no comparison to memory-based RL techniques. Furthermore, a comparison of related memory-based visual navigation techniques on domains for which they are applicable should be considered as such an analysis would illuminate the relative performance over the overlapping portions problem domains For example, analysis of the metric map approaches on the grid world or of MACN on their tested environments.\n\nPrior work in visual navigation in partially observed and unknown environments have used addressable memory (e.g., Oh et al.) and used VINs (e.g., Gupta et al.) to plan as noted. In discussing these methods, the authors state that these works are not comparable as they operate strictly on discretized 2d spaces.
However, it appears to the reviewer that several of these methods can be adapted to higher dimensions and be applicable at least a subclass (for the euclidean/metric map approaches) or the full class of the problems (for Oh et al.), which appears to be capable to solve non-euclidean tasks like the graph search problem. If this assessment is correct, the authors should differentiate between these approaches more thoroughly and consider empirical comparisons. The authors should further consider contrasting their approach with “Neural SLAM” by Zhang et al.\n\nA limitation of the presented method is requirement that the observation “reveals the labeling of nearby states.” This assumption holds in each of the examples presented: the neighborhood map in the gridworld and graph examples and the lidar sensor in the robot navigation example. It would be informative for the authors to highlight this limitation and/or identify how to adapt the proposed method under weaker assumptions such as a sensor that doesn’t provide direct metric or connectivity information such as a RGB camera. \n\nMany details of the paper are missing and should be included to clarify the approach and ensure reproducible results. The reviewer suggests providing both more details in the main section of the paper and providing the precise architecture including hyperparameters in the supplementary materials section. \n", "Thank you for bringing to our attention this work. This is definitely very interesting and we have included a pointer \nto your work in our related work section. ", "Dear Reviewer,\n\nWe would first off like to thank you for your strong support and feedback on our paper. Your detailed reviews will definitely help us in improving our paper. \n\nWe would like to answer some of the points raised by you in our response here:\n\nWe set up the CNN+Memory architecture to emulate the FRMQN from Oh et. al's work as closely as possible. The DNC actually improves upon the read, write and context architecture described in the paper. Further, in our experiments, we found that when training the CNN+Memory architecture with supervised learning, the network performed worse than our MACN. We hypothesized that if supervised learning was unable to learn a reasonable policy, then any reinforcement learning paradigm with sparse rewards would definitely do worse. \n\n\nWe would like to thank you for bringing to our attention the work of Zhang et. al - \"Neural SLAM\". To the best of our understanding, this paper focuses on using a SLAM formulation in a deep reinforcement learning paradigm which helps in exploration. Exploration is one topic that we have not explored in this work since we assume that there is always a path to the goal. In future work, we intend to extend our network to be trained with reinforcement learning instead of supervised learning. In such a setting, a Neural SLAM style architecture might help with exploration when the environment presents sparse rewards. \n\nWe have added a note to Section 5 regarding the need for perfect labeling of nearby states. We agree additional work is required to model sensors such as an RGB camera where such direct labeling might not be possible. The focus of this paper is to investigate the feasibility of a hierarchical learning scheme for planning in partially observable environments and hence we assume perfect sensors.
In future work, using real-world sensors that do not always give a perfect labeling of nearby states will be one of our goals.\n\nWe have included all details one would need to reproduce our work in Section 2.4 under the computation graph. Further, experiment specific details are included in the appendix. It might be possible to present these in a more reader friendly format such as a table in the camera-ready version of the paper if required? Additionally, we intend to make our code publicly available. \n\n", "Dear Reviewer, \n\nWe would like to thank you for your detailed review. We would like to answer some of the points raised by you in our response here :\n\nWe have tried to address the reviewers concerns about comparing with motion planning baselines by adding another subsection. We agree it is useful to compare to A* and have added comparisons. We request the reviewer to look at Section 3.5 in the latest version. \n\nThat being said, we would like to point out several things here :\n\n1) There is a key difference between our approach and using a planning algorithm such as A*. A* is a model-based approach where you need to explicitly know beforehand knowledge about the cost, transition probabilities of the agent and explicitly construct a map. The motivation behind using a model-free approach such as ours is that these transition probabilities and cost function (in our case reward map -> low reward near obstacles and high reward in open space) is learned by the agent. \n\nThis is, in fact, the biggest motivation for using end to end learning approaches! One does not need to explicitly know the model beforehand and can use the neural network to approximate it. Our proposed model learns the transition probabilities the cost map and the learns how to plan on the local map. \n\n2) Another advantage of using a model-free approach such as ours as opposed to A* is that our model learns a compact representation of the environment. This can be seen in Experiment 2 (grid world with tunnels) and Section B in the Appendix. In the case of the tunnel environment, we can make the tunnels arbitrarily long (say 500 units in length). A* would have to expand all nodes going into the tunnel and would need to remember the entire map. \nHowever with our approach, we can use the same memory size for both 20 length tunnels as well as 500 since it records only the events where we enter and exit the tunnel as well as the end of the tunnel. Further, these events were not hand engineered. Instead, the network learned what events were important to understand the topology of a tunnel. '\n\nWe absolutely agree with the reviewer that our sensor models are simplistic and we assume perfect models. In this work, we are focused on learning how to navigate a partially observable environment when an architecture consisting of a differentiable planner and memory are used. In future work, we would focus on extending our work to model sensor effects such as noise, occlusions. \n\nWe hope this answers some of your concerns about our paper and you reconsider our paper more favorably. ", "Dear Reviewer,\n\nWe have updated our paper to address the typos you had pointed out in our paper.
\n\nAdditionally, would like to answer one of the questions you had raised \n\n\"P.3, end of Section 2.1: when computing the map estimate \\hat{m}, shouldn't the operator be min, that is, a state is assumed to be open (0), unless one or more observations show that it is blocked (-1)?\"\n\nIf one were to consider a 1x1 in which we have 2 observations over time. if both obs are zero, the sum is zero and the max is zero thus indicating that the state is open. \n\nif both obs are -1, sum is -2. The max is -1 indicating state is blocked. \nIf one of the observations is -1 and the other 0, the sum is -1 and the max is still -1 telling us that the state is blocked. \n\nAll other typos have been addressed in the paper. \nWe hope this answers your concerns about our paper and you re consider our paper favorably. ", "You might be interested to take a look\n\nNeural SLAM: Learning to Explore with External Memory \n(https://arxiv.org/pdf/1706.09520.pdf)\n\nWe present an approach for agents to learn representations of a global map from sensor data, to aid their exploration in new environments. To achieve this, we embed procedures mimicking that of traditional simultaneous localization and mapping (SLAM) into the soft attention based addressing of external memory architectures, in which the external memory acts as an internal representation of the environment for the agent. This structure encourages the evolution of SLAMlike behaviors inside a completely differentiable deep neural network. We show that this approach can help reinforcement learning agents to successfully explore new environments where long-term memory is essential. We validate our approach in both challenging grid-world environments and preliminary Gazebo experiments. A video of our experiments can be found at: https://goo.gl/G2Vu5y.", "Dear Reviewer,\n\nThank you for your detailed review. We would like to answer some of the points raised by you in our response here :\n\n1) We agree that on a first pass it might look like the structure of the domains looks very similar. However, while writing this paper our focus was on the environments/domains that one might encounter in robotics and real world applications. We choose to demonstrate the feasibility of our architecture in such 2D/2.5D worlds and look to answer problems faced by other learning architectures in such worlds. Further, we added the graph experiment in section 3.3 to break up some of the structural similarity between the domains. The graph experiment differs from the other domains in that the state space is no longer 2 dimensional. Further, the number of states observed by the agent and the number of valid actions varies as the agent visits each node. This is because the action space now depends on the number of vertices connected to the current node that the agent is in. Additionally, the action space in these graphs is also no longer a choice between up/down/left/right. The agent has to learn to pick the correct next node to visit and there are N-1 choices (where N is the number of nodes). \n\n\n2) We completely agree with the reviewer that remembering a map might hamper the ability of the work to be extended to other domains which might not have explicit geometric structure. This is in fact one of the limitations of \"Cognitive Mapping and Planning for Visual Navigation\" by Gupta et. al where learning an explicit top down map of the 3d environment might not be possible in some domains.
Instead, in our work, we maintain a belief estimate over the environment which is represented in the external memory as a set of activations. We would like to draw the reviewer's attention to Fig 13 in the appendix. In this figure, we show the map estimate stored in the memory for the tunnel task. As one can see, the information stored in the memory does not correspond to the geometric structure of the environment. Instead, our proposed architecture learns to output different activations when the critical parts of the environment are observed by the agent. In the tunnel task, the memory exhibits one kind of activations when the agent observes the end of the tunnel, and when it turns out of the tunnel. For all other events, the memory shows no change in its activations. Additionally, when looking at the read/write weights when entering and exiting the tunnel, we see that the write weights are activated till the agent sees the end of the tunnel and the read weights are activated when the agent turns around. Thus, planning by remembering important events encountered in the environment allows us to use the proposed planner to domains where geometric structure might not exist. This is also something we wished to demonstrate by planning on graphs where there is no such explicit structure. \n\nWe thank the reviewer for pointing out the typos and other potentially confusing statements and will address them in the updated version. \n\n\nWe hope, this answers some of your concerns about our paper. \n\n" ]
[ 4, 6, 9, -1, -1, -1, -1, -1, -1 ]
[ 5, 2, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyfHgI6aW", "iclr_2018_HyfHgI6aW", "iclr_2018_HyfHgI6aW", "BJGZMyhWf", "r1IWuK2lf", "H1QljSQxz", "rkpxjJ-WM", "iclr_2018_HyfHgI6aW", "HJBOB_oxf" ]
iclr_2018_B13njo1R-
Progressive Reinforcement Learning with Distillation for Multi-Skilled Motion Control
Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against three alternative baselines.
accepted-poster-papers
The authors propose an architecture that uses a curriculum and multi-task distillation to gain higher performance without forgetting. The paper is largely a smart composition of known methods, and it requires keeping data from all tasks to do the distillation, so it is not truly a scalable continual learning approach. There were a lot of concerns about clarity in the manuscript, but many of these have been assuaged by an update to the paper. This is a borderline paper, but the author's rebuttal and update probably tip it towards acceptance.
test
[ "H1RMUzgBM", "ByMViwPef", "SJmZPJd4z", "H1BAcZ9eG", "ByFImz_ZG", "BkSxcvaQM", "HkY9Ln-7G", "B1pGI2Zmf", "ryByL2WmG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public" ]
[ "We updated figure 1 in the paper to show a diagram for the other added baselines in this work (fine-tuning).\nThe figures in the paper have been updated, showing the results over 5 runs of each method. The findings are very similar.\nTable 1 has also been updated with data for the MultiTasker. This table now better illustrates the effects distillation on \"forgetting\".", "This paper aims to learn a single policy that can perform a variety of tasks that were experienced sequentially. The approach is to learn a policy for task 1, then for each task k+1: copy distilled policy that can perform tasks 1-k, finetune to task k+1, and distill again with the additional task. The results show that this PLAID algorithm outperforms a network trained on all tasks simultaneously. \n\nQuestions:\n- When distilling the policies, do you start from a randomly initialized policy, or do you start from the expert policy network?\n- What data do you use for the distillation? Section 4.1 states\"We use a method similar to the DAGGER algorithm\", but what is your method. If you generate trajectories form the student network, and label them with the expert actions, does that mean all previous expert policies need to be kept in memory?\n- I do not understand the purpose of \"input injection\" nor where it is used in the paper. \n\nStrengths:\n- The method is simple but novel. The results support the method's utility.\n- The testbed is nice; the tasks seem significantly different from each other. It seems that no reward shaping is used.\n- Figure 3 is helpful for understanding the advantage of PLAID vs MultiTasker.\n\nWeaknesses:\n- Figure 2: the plots are too small.\n- Distilling may hurt performance ( Figure 2.d)\n- The method lacks details (see Questions above)\n- No comparisons with prior work are provided. The paper cites many previous approaches to this but does not compare against any of them. \n- A second testbed (such as navigation or manipulation) would bring the paper up a notch. \n\nIn conclusion, the paper's approach to multitask learning is a clever combination of prior work. The method is clear but not precisely described. The results are promising. I think that this is a good approach to the problem that could be used in real-world scenarios. With some filling out, this could be a great paper.", "I find the additions to the paper satisfactory and will increase my score accordingly.", "This paper describes PLAID, a method for sequential learning and consolidation of behaviours via policy distillation; the proposed method is evaluated in the context of bipedal motor control across several terrain types, which follow a natural curriculum.\n\nPros:\n- PLAID masters several distinct tasks in sequence, building up “skills” by learning “related” tasks of increasing difficulty.\n- Although the main focus of this paper is on continual learning of “related” tasks, the authors acknowledge this limitation and convincingly argue for the chosen task domain.\n\nCons:\n- PLAID seems designed to work with task curricula, or sequences of deeply related tasks; for this regime, classical transfer learning approaches are known to work well (e.g finetunning), and it is not clear whether the method is applicable beyond this well understood case.\n- Are the experiments single runs?
Due to the high amount of variance in single RL experiments it is recommended to perform several re-runs and argue about mean behaviour.\n\nClarifications:\n- What is the zero-shot performance of policies learned on the first few tasks, when tested directly on subsequent tasks?\n- How were the network architecture and network size chosen, especially for the multitasker? Would policies generalize to later tasks better with larger, or smaller networks?\n- Was any kind of regularization used, how does it influence task performance vs. transfer?\n- I find figure 1 (c) somewhat confusing. Is performance maintained only on the last 2 tasks, or all previously seen tasks? That’s what the figure suggests at first glance, but that’s a different goal compared to the learning strategies described in figures 1 (a) and (b).\n", "Hi, \n\nThis was a nice read. I think overall it is a good idea. But I find the paper lacking a lot of details and to some extend confusing. \nHere are a few comments that I have:\n\nFigure 2 is very confusing for me. Please first of all make the figures much larger. ICLR does not have a strict page limit, and the figures you have are hard to impossible to read. So you train in (a) on the steps task until 350k steps? Is (b), (d),(c) in a sequence or is testing moving from plain to different things? The plot does not explicitly account for the distillation phase. Or at least not in an intuitive way. But if the goal is transfer, then actually PLAID is slower than the MultiTasker because it has an additional cost to pay (in frames and times) for the distillation phase right? Or is this counted. \n\nGoing then to Figure 3, I almost fill that the MultiTasker might be used to simulate two separate baselines. Indeed, because the retention of tasks is done by distilling all of them jointly, one baseline is to keep finetuning a model through the 5 stages, and then at the end after collecting the 5 policies you can do a single consolidation step that compresses all. So it will be quite important to know if the frequent integration steps of PLAID are helpful (do knowing 1,2 and 3 helps you learn 4 better? Or knowing 3 is enough). \n\nWhere exactly is input injection used? Is it experiments from figure 3. What input is injecting? What do you do when you go back to the task that doesn't have the input, feed 0? What happens if 0 has semantics ? \n\nPlease say in the main text that details in terms of architecture and so on are given in the appendix. And do try to copy a bit more of them in the main text where reasonable. \n\nWhat is the role of PLAID? Is it to learn a continual learning solution? So if I have 100 tasks, do I need to do 100-way distillation at the end to consolidate all skills? Will this be feasible? Wouldn't the fact of having data from all the 100 tasks at the end contradict the traditional formulation of continual learning? \n \nOr is it to obtain a multitask solution while maximizing transfer (where you always have access to all tasks, but you chose to sequentilize them to improve transfer)? And even then maximize transfer with respect to what? Frames required from the environment? If that are you reusing the frames you used during training to distill? Can we afford to keep all of those frames around? If not we have to count the distillation frames as well. Also more baselines are needed. A simple baseline is just finetunning as going from one task to another, and just at the end distill all the policies found through out the way.
Or at least have a good argument of why this is suboptimal compared to PLAID. \n\nI think the idea of the paper is interesting and I'm willing to increase (and indeed decrease) my score. But I want to make sure the authors put a bit more effort into cleaning up the paper, making it more clear and easy to read. Providing at least one more baseline (if not more considering the other things cited by them). \n\n", "We have uploaded a revised version of the paper. Changes are currently highlighted in green. \nHere is a summary of the changes:\n(1) Clarified our goal for PLAID as a continual learning method, while also evaluating its effectiveness as a multi-task solution method, and comparing to multi-task benchmarks.\n(2) Updated Figure 2 making it more readable and understandable.\n(3) Added text pertaining to noted related work by reviewers.\n(4) Clarified how the distillation method is used in PLAiD with DAGGER\n     (a) At most 2 experts are used over a set of tasks\n     (b) Start by selecting actions from the expert policies and anneal the probability down, leading to more actions being selected \n     from the new student policy\n(5) Clearer description of why and how feature injection is used.\n     (a) Included a diagram showing how and where the new network parameters are added into the network.\n     (b) This injection is performed when the policy is learning how to differentiate between the flat and incline tasks.\n     (c) Additional figure visualizing the state features.\n(6) TL_Only (fine-tuning) comparison -- see Section 8.4\n     (a) Ran an additional baseline (average over 5 runs) where TL is done sequentially between tasks WITHOUT distillation. \n     (b) Found that TL_Only can also learn new tasks quickly, however the TL_Only method suffers from increased forgetting of \n     previously learned tasks. The final distillation to merge all the expert policies together proves to be much more challenging \n     than the use of simpler, progressive distillations. \n     (c) This method can be considered a version of PLAiD where tasks are learned in groups and after some number of tasks a \n     collection of policies/skills are distilled together.\n     (d) Still to be done: adding the TL_Only baseline to Figure 1.\n(7) Multiple runs\n     We have completed 5 runs for TL_Only and PLAID, with no surprises. The remaining two baselines are still in progress. \n", "We believe that there is much to be explored for progressive learning and distillation of continuous action tasks, as exemplified by our control problems.\n\nRe: during distillation, do we start from random policy or expert policy?\nThe networks were initialized from the most recently trained policy, i.e., the one trained on the new task.\n\nRe: data used for distillation; do all previous expert policies need to be kept in memory?\nWe have added the following paragraph to the paper Appendix to address this.\n For each of the distillation steps we initialize the policy from the most recently trained policy. This policy has seen all of the tasks thus far but may have overfit the most recent tasks. We use a version of the DAGGER algorithm for the distillation process (Ross et al., 2010). We anneal from selecting actions from the expert policies to selecting actions from the student policy. The probability of selecting an action from the expert is annealed to near zero after 10,000 training updates. We still add exploration noise to the policies when generating actions to take in the simulation. This is also annealed along with the probability of selecting from the expert policy.
The actions used for training always come from the expert policy. Although some actions are applied in the simulation from the student, during a training update those actions will be replaced with ones from the proper expert. The expert used to generate actions for tasks 0 − i is πi and the expert used to generate action for task i + 1 is πi+1. We keep around at most 2 policies at any time.\n\nRe: purpose of input injection\nWe have added further details and explanations to the paper. Specifically, the “flat” walking is not provided with information about the upcoming terrain, while other policies (e.g., incline, steps, slopes, gaps) are provided with extra inputs (a linear height map) of the upcoming terrain. Input injection allows for the “flat” walking policy to be used as the starting point for policies that have these additional inputs.\n\n\n\nRe: comparisons with prior work\nWe compare the PLAID method to three other baselines, two of which are in the paper (MultiTasker, Parallel-Learn-then-Distill), and an additional baseline that we will soon have completed in response to the feedback (Successive Transfers then Distill). \nIn what follows below, we comment further on other specific previous work.\nProgressive nets: While this is only tested on discrete actions, the idea itself is orthogonal to this issue. However, the existing baselines using DeepRL are not very applicable for different reasons. For the progressive net, you will get a set of experts in one net but the net itself does not know which expert to choose when it is given a task, i.e., which head of the network to choose.\nAttend Adapt and Transfer: This method explains how to combine K experts in learning task T_i but it is not obvious how it would extend to learning T_i+1.\nLifelong Learning in Minecraft: This relies on options with clear end definitions in sequential tasks. It is not directly obvious how this could be applied to our domain with continuously varying terrain and continuous actions.\nDistral: This trains multiple policies in parallel, rather than one policy continually, and so it is largely captured by one of our baselines. Also, we note that the method is specific to discrete actions in its KL regularization term.\n", "Our work is (to the best of our knowledge) among the first to study, with a detailed evaluation, multi-task and continual learning on problems with continuous action spaces using deep learning. \n\n\nRe: for sequences of related tasks, transfer learning, e.g., fine-tuning, is “known to work well”\n⇒ We are computing an additional “Successive Transfers then Distill” benchmark to show that this is not the case here. This learns the tasks, in sequence, followed by a final distillation step, and thus without the progressive intermediate distillation steps used in PLAID. Our current results for this benchmark show a significant benefit for PLAID. For comments regarding other potential baselines, please also see our replies to the other reviewers.\n\nRe: single runs\nYes, currently these are single runs. We are re-running all simulations 5 times in order to provide more sound comparisons regarding the relative merit of the methods. We will post these in the coming days. There are no surprises in the results to date.\n\n\nRe: zero-shot performance\nAlthough not directly discussed in the paper, the zero-shot performance can be seen by looking at the first iterations of the training graphs.
In most cases there is some zero-shot performance, as we hope would be the case, given the transfer we are hoping to achieve between tasks. However, further training greatly improves the performance of each of the tasks.\n\nRe: network architecture, network size \nWe focused on designing the PLAiD method and minimal architecture tuning was performed. The network architecture is based on that used in the following paper: “Learning Locomotion Skills Using DeepRL: Does the choice of action space matter?”.\n\nRe: Figure 1(c) confusion\nPerformance is maintained across all previous tasks (and not just the two most recent). We will clarify this in the text.\n", " We believe that there is much to be explored for progressive learning and distillation of continuous action tasks, as exemplified by our control problems. \n\nRe: Figure 2\nWe will make these figures larger and enhance the explanations.\nAs suggested, they do represent a sequence, moving from (a) to (b) to (c) to (d).\nThe cost of distillation is accounted for by giving an equal number of simulation time steps to both PLAiD and the MultiTasker. For example if we give the MultiTasker 300k iterations, for PLAID we may use 250k for transfer and 50k for distillation. We show this in Figure 2 by colouring the TL phase in green and the distillation phase in red.\n\nRe: Frequent distillation vs fine-tuning a model through all stages plus a final distillation step\n\nThis is an excellent idea for an additional baseline, and we are currently running this “Successive Transfers then Distill” baseline. We will notify the reviewers of the results, as well as update the paper. The results for this new baseline thus far look very similar to that of the MultiTasker; it has a more difficult time learning the additional tasks. We also note that we seek a method that would work when the agent is given new tasks it does not know are coming. For example, after learning to walk on “incline” terrain, the agent does not know about the next three (steps, slopes and gaps), but we want the agent to best prepared to learn it’s next skill, whatever it may be.\n\nRe: details on input injection\nWe have added further details and explanations to the paper. Specifically, the “flat” walking is not provided with information about the upcoming terrain, while other policies (e.g., incline, steps, slopes, gaps) are provided with extra inputs (a linear height map) of the upcoming terrain. Input injection allows for the “flat” walking policy to be used as the starting point for policies that have these additional inputs.\n\nRe: role of PLAID: continual learning vs multitask-solution with maximum transfer\nWe view PLAID as a continual learning method, in that we consider the problem of not knowing all tasks beforehand and want to learn any new task as easily/quickly as possible.\nHowever, it is also proves surprisingly effective as a multitask solution, given the three specific benchmarks that we compare against.\n" ]
[ -1, 7, -1, 7, 5, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_B13njo1R-", "iclr_2018_B13njo1R-", "ByMViwPef", "iclr_2018_B13njo1R-", "iclr_2018_B13njo1R-", "iclr_2018_B13njo1R-", "ByMViwPef", "H1BAcZ9eG", "ByFImz_ZG" ]
iclr_2018_B1hcZZ-AW
N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning
While bigger and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for learning reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger 'teacher' network as input and outputs a compressed 'student' network derived from the 'teacher' network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large 'teacher' model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -- a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than 10x for models such as ResNet-34 while maintaining similar performance to the input 'teacher' network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller 'teacher' networks can be used to rapidly speed up training on larger 'teacher' networks.
accepted-poster-papers
This is a meta-learning approach to model compression which trains 2 policies using RL to reduce the capacity (computational cost) of a trained network while maintaining performance, such that it can be effectively transferred to a smaller student network. The approach has similarities to recently proposed methods for architecture search, but is significantly different. The paper is well written and the experiments are clear and convincing. One of the reviews was unacceptable; I am not considering it (R1).
train
[ "S1dzeGtxz", "rJO3m40ef", "B1MNG0Z-f", "Sy0kfZPGz", "SyKQ--DGG", "SJlie-wMz", "BkE3lbvff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes to use reinforcement learning instead of pre-defined heuristics to determine the structure of the compressed model in the knowledge distillation process.\n\nThe draft is well-written, and the method is clearly explained. However, I have the following concerns for this draft:\n\n1. The technical contribution is not enough. First, the use of reinforcement learning is quite straightforward. Second, the proposed method seems not significantly different from the architecture search method in [1][2] – their major difference seems to be the use of “remove” instead of “add” when manipulating the parameters. It is unclear whether this difference is substantial, and whether the proposed method is better than the architecture search method.\n\n2. I also have concern with the time efficiency of the proposed method. Reinforcement learning involves multiple rounds of knowledge distillation, and each knowledge distillation is an independent training process that requires many rounds of forward and backward propagations. Therefore, the whole reinforcement learning process seems very time-consuming and difficult to be generalized to big models and large datasets (such as ImageNet). It would be necessary for the authors to make direct discussions on this issue, in order to convince others that their proposed method has practical value.\n\n[1] Zoph, Barret, and Quoc V. Le. \"Neural architecture search with reinforcement learning.\" ICLR (2017).\n[2] Baker, Bowen, et al. \"Designing Neural Network Architectures using Reinforcement Learning.\" ICLR (2017).\n", "Summary:\nThe manuscript introduces a principled way of network to network compression, which uses policy gradients for optimizing two policies which compress a strong teacher into a strong but smaller student model. The first policy, specialized on architecture selection, iteratively removes layers, starting with architecture of the teacher model. After the first policy is finished, the second policy reduces the size of each layer by iteratively outputting shrinkage ratios for hyperparameters such as kernel size or padding. This organization of the action space, together with a smart reward design achieves impressive compression results, given that this approach automates tedious architecture selection. The reward design favors low compression/high accuracy over high compression/low performance while the reward still monotonically increases with both compression and accuracy. As a bonus, the authors also demonstrate how to include hard constraints such as parameter count limitations into the reward model and show that policies trained on small teachers generalize to larger teacher models.\n\nReview:\nThe manuscript describes the proposed algorithm in great detail and the description is easy to follow. The experimental analysis of the approach is very convincing and confirms the author’s claims. \nUsing the teacher network as starting point for the architecture search is a good choice, as initialization strategies are a critical component in knowledge distillation. I am looking forward to seeing work on the research goals outlined in the Future Directions section.\n\nA few questions/comments:\n1) I understand that L_{1,2} in Algorithm 1 correspond to the number of layers in the network, but what do N_{1,2} correspond to? Are these multiple rollouts of the policies? If so, shouldn’t the parameter update theta_{{shrink,remove},i} be outside the loop over N and apply the average over rollouts according to Equation (2)?
I think I might have missed something here.\n2) Minor: some of the citations are a bit awkward, e.g. on page 7: “algorithm from Williams Williams (1992). I would use the \\citet command from natbib for such citations and \\citep for parenthesized citations, e.g. “... incorporate dark knowledge (Hinton et al., 2015)” or “The MNIST (LeCun et al., 1998) dataset...” \n3) In Section 4.6 (the transfer learning experiment), it would be interesting to compare the performance measures for different numbers of policy update iterations.\n4) Appendix: Section 8 states “Below are the results”, but the figure landed on the next page. I would either try to force the figures to be output at that position (not in or after Section 9) or write \"Figures X-Y show the results\". Also in Section 11, Figure 13 should be referenced with the \\ref command\n5) Just to get a rough idea of training time: Could you share how long some of the experiments took with the setup you described (using 4 TitanX GPUs)?\n6) Did you use data augmentation for both teacher and student models in the CIFAR10/100 and Caltech256 experiments?\n7) What is the threshold you used to decide if the size of the FC layer input yields a degenerate solution?\n\nOverall, this manuscript is a submission of exceptional quality and if minor details of the experimental setup are added to the manuscript, I would consider giving it the full score.", "On the positive side the paper is well written and the problem is interesting. \n\nOn the negative side there is very limited innovation in the techniques proposed, that are indeed small variations of existing methods. \n", "We thank you for the review. We would appreciate if the review contained a more concrete technical discussion of the work instead of unsupported negative statements. We hope that the reviewer appreciates that we have put in substantial work into this paper and is willing to continue this discussion in a more meaningful manner. \n\nThis review appears to repeat the criticism brought up by R2, which we strongly disagree with. We have explained our stance and provided supporting details in the response below. \n\nOur answer to this question is composed of three technical contributions and critical details of the paper: (1) Two-stage policy structure and design of search space (2) Generalization capabilities of learned compression agent (3) Multiobjective reward function and constraints. The criticism of our technical contribution was not supported by any discussion of these contributions -- we hope you can provide an evaluation based on these key details of our work, which we highlight below:\n\n(1) Two-stage policy structure and design of search space: We dedicated Section 3.2 to describing a novel two-stage learning procedure which is critical for learning a better model architecture for the task of model compression. The basic idea is that the architecture search performs a coarse-to-fine search strategy to evaluate large structural changes (i.e. number of layers) before fine tuning each component (i.e., filter size). This not only reduces the computation required to train models in the second stage (since models have been compressed on a macro level in the first stage), but also reduces the dimensionality of the action sequence in both stages, making credit assignment easier. Again, our experiments provide empirical evidence showing that this approach works well. To the best of our knowledge, this is the first work to describe such a strategy and demonstrate its effectiveness. 
We would like to hear comments that take these technical contributions into account.\n\n(2) Generalization capabilities of our compression networks: Section (4.6) outlines our method for learning generalized policies for a family of networks (e.g., ResNet, VGG). The basic idea is that since many deep learning practitioners typically use a common subset of successful network architectures, it is essential that we can learn policies that can generalize across specific families of teacher networks. We have provided a method for learning such policies and have shown empirically that a single policy can be used for entire family of teacher networks. To the best of our knowledge, this is the first work to show the possibility of such generalization to families of architectures. R1 has offered no comments on this result and experiments.\n\n(3) Multiobjective reward function and constraints: Our task is not simply to obtain high performance as in [1, 2], but to achieve compression while maintaining good performance, a competing objective. [1, 2] therefore cannot perform the task of automatic architecture search for compression. Our approach on the other hand, does perform automatic architecture search for compression. We achieve this by introducing model compression specific reward-balancing and constraint satisfaction approaches as detailed in Section 3.3. One could argue that using the approaches of [1, 2] with a modified reward of compression and accuracy could be compared to our approach. However, this approach could result in lengthy models which are small in number of parameters (e.g. too many consecutive ReLUs). If an additional constraint on model length is added to the reward, the optimization procedure becomes harder and it is unclear whether such an approach would have any advantages over our proposed method. Section (11) on reward design covers some of the challenges with designing a reward function for model compression. This important contribution over [1,2] was left unmentioned by R1; we hope they can offer their opinion on this point.\n\n[1] Zoph, Barret, and Quoc V. Le. \"Neural architecture search with reinforcement learning.\" ICLR (2017).\n[2] Baker, Bowen, et al. \"Designing Neural Network Architectures using Reinforcement Learning.\" ICLR (2017).", "We thank R3 for their thorough and detailed review of the paper. We have included our responses to the questions below and made the relevant changes to our paper where required.\n\n>>> 1) I understand that L_{1,2} in Algorithm 1 correspond to the number of layers in the network, but what do N_{1,2} correspond to? Are these multiple rollouts of the policies? If so, shouldn’t the parameter update theta_{{shrink,remove},i} be outside the loop over N and apply the average over rollouts according to Equation (2)? I think I might have missed something here.\n#1. The rollouts were omitted in order to simplify the presentation of the algorithm. N refers to the number of total iterations (or policy updates) for which the policy is trained.\n>>> 2) Minor: some of the citations are a bit awkward, e.g. on page 7: “algorithm from Williams Williams (1992). I would use the \\citet command from natbib for such citations and \\citep for parenthesized citations, e.g. “... incorporate dark knowledge (Hinton et al., 2015)” or “The MNIST (LeCun et al., 1998) dataset...” \n#2. 
Thank you for this suggestion, we have fixed the citations in the new revision.\n>>> 3) In Section 4.6 (the transfer learning experiment), it would be interesting to compare the performance measures for different numbers of policy update iterations.\n#3. Figure 10 in the appendix shows the plots over multiple policy update iterations when the pre-trained policies are used. \n>>> 4) Appendix: Section 8 states “Below are the results”, but the figure landed on the next page. I would either try to force the figures to be output at that position (not in or after Section 9) or write \"Figures X-Y show the results\". Also in Section 11, Figure 13 should be referenced with the \\ref command\n#4. We have fixed this in the new revision.\n>>> 5) Just to get a rough idea of training time: Could you share how long some of the experiments took with the setup you described (using 4 TitanX GPUs)?\n#5. We have added a new section to the appendix in the revision providing details on the runtime of the experiments. In general, the shortest experiment (VGG-13/MNIST) took about 4 hours, while the longest experiment (ResNet34/ImageNet32x32) took about 272 hours in total. The experiments actually only used a single TitanX GPU. We have updated the paper to reflect this.\n>>> 6) Did you use data augmentation for both teacher and student models in the CIFAR10/100 and Caltech256 experiments?\n#6. Yes we used standard data augmentation techniques. This is discussed in Section 10.2 of the paper.\n>>> 7) What is the threshold you used to decide if the size of the FC layer input yields a degenerate solution?\n#7. We say the network is degenerate if its size exceeds that of the teacher or if the size of the FC layer is greater than 50,000.", "\n>>> 2. I also have concern with the time efficiency of the proposed method. Reinforcement learning involves multiple rounds of knowledge distillation, and each knowledge distillation is an independent training process that requires many rounds of forward and backward propagations. Therefore, the whole reinforcement learning process seems very time-consuming and difficult to be generalized to big models and large datasets (such as ImageNet). It would be necessary for the authors to make direct discussions on this issue, in order to convince others that their proposed method has practical value.\n\nThe second point is a reasonable criticism, which we have ourselves mentioned and discussed in the appendix (Section 12). Efficiency is a criticism of current architecture search methods in general. Evaluating architectures to obtain a discriminative signal for learning is fundamentally expensive process and is currently an active research topic. Our paper does indeed address this by proposing several improvements over existing architecture search methods. \n\nWe define a bounded state space (teacher architecture) and a two-stage policy system in order to reduce the length of rollouts and make credit assignment more efficient (Section 3.2). We also demonstrate generalization experiments which could improve training on larger models (Section 4.6). We think that there are many interesting research directions that can be explored.\n\nTo address issues regarding efficiency of our approach more directly, we have run experiments on ImageNet32x32 [3], which we will include in the next revision. We hope that this larger scale experiment directly addresses the concern of running too many iterations of training to find a good compressed architecture. 
We will also include average runtimes for various networks and datasets in order to give the reader a sense of how long the experiments take. We hope that these updated results would be sufficient to convince the reviewer that the approach is not prohibitive in practice.\n\n[1] Zoph, Barret, and Quoc V. Le. \"Neural architecture search with reinforcement learning.\" ICLR (2017).\n[2] Baker, Bowen, et al. \"Designing Neural Network Architectures using Reinforcement Learning.\" ICLR (2017).\n[3] Chrabaszcz, Patryk et al. “A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets.” ArXiV (2017).", ">>>1. The technical contribution is not enough. First, the use of reinforcement learning is quite straightforward. Second, the proposed method seems not significantly different from the architecture search method in [1][2] – their major difference seems to be the use of “remove” instead of “add” when manipulating the parameters. It is unclear whether this difference is substantial, and whether the proposed method is better than the architecture search method.\n\nWe thank you for your comments and feedback. The second point is helpful to improving our work and we have taken steps to address the comment and improve the paper. The first point however, mischaracterizes our work by stating that “the major difference is ‘remove’ instead of ‘add’” and is not supported by any discussion of the critical details of our approach. It also indiscriminately trivializes the significance of the growing field of model compression. While at a high level, model compression does require removing parameters as opposed to adding parameters, the important research question is *how* parameters should be removed. Our answer to this question is composed of three technical contributions and critical details of the paper: (1) Two-stage policy structure and design of search space (2) Generalization capabilities of learned compression agent (3) Multiobjective reward function and constraints. The criticism of our technical contribution was not supported by any discussion of these contributions -- we hope you can provide an evaluation based on these key details of our work, which we highlight below:\n\n(1) Two-stage policy structure and design of search space: We dedicated Section 3.2 to describing a novel two-stage learning procedure which is critical for learning a better model architecture for the task of model compression. The basic idea is that the architecture search performs a coarse-to-fine search strategy to evaluate large structural changes (i.e. number of layers) before fine tuning each component (i.e., filter size). This not only reduces the computation required to train models in the second stage (since models have been compressed on a macro level in the first stage), but also reduces the dimensionality of the action sequence in both stages, making credit assignment easier. Again, our experiments provide empirical evidence showing that this approach works well. To the best of our knowledge, this is the first work to describe such a strategy and demonstrate its effectiveness. We would like to hear comments that take these technical contributions into account.\n\n(2) Generalization capabilities of our compression networks: Section (4.6) outlines our method for learning generalized policies for a family of networks (e.g., ResNet, VGG). 
The basic idea is that since many deep learning practitioners typically use a common subset of successful network architectures, it is essential that we can learn policies that can generalize across specific families of teacher networks. We have provided a method for learning such policies and have shown empirically that a single policy can be used for entire family of teacher networks. To the best of our knowledge, this is the first work to show the possibility of such generalization to families of architectures. Furthermore, this is a unique contribution of our method, and one that is not directly applicable to [1, 2] since they build architectures from scratch while we use the teacher model as the initialization. R2 has offered no comments on this result and experiments.\n\n(3) Multiobjective reward function and constraints: Our task is not simply to obtain high statistical performance as in [1, 2], but to achieve compression while maintaining good statistical performance, a competing objective. [1, 2] therefore cannot perform the task of automatic architecture search for compression. Our approach on the other hand, does perform automatic architecture search for compression. We achieve this by introducing model compression specific reward-balancing and constraint satisfaction approaches as detailed in Section 3.3. One could argue that using the approaches of [1, 2] with a modified reward of compression and accuracy could be compared to our approach. However, this approach could result in lengthy models which are small in number of parameters (e.g. too many consecutive ReLUs). If an additional constraint on model length is added to the reward, the optimization procedure becomes harder and it is unclear whether such an approach would have any advantages over our proposed method. Section (11) on reward design covers some of the challenges with designing a reward function for model compression. This important contribution over [1,2] was left unmentioned by R2; we hope they can offer their opinion on this point." ]
[ 5, 9, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B1hcZZ-AW", "iclr_2018_B1hcZZ-AW", "iclr_2018_B1hcZZ-AW", "B1MNG0Z-f", "rJO3m40ef", "S1dzeGtxz", "S1dzeGtxz" ]
iclr_2018_SJJQVZW0b
Hierarchical and Interpretable Skill Acquisition in Multi-task Reinforcement Learning
Learning policies for complex tasks that require multiple different skills is a major challenge in reinforcement learning (RL). It is also a requirement for its deployment in real-world scenarios. This paper proposes a novel framework for efficient multi-task reinforcement learning. Our framework trains agents to employ hierarchical policies that decide when to use a previously learned policy and when to learn a new skill. This enables agents to continually acquire new skills during different stages of training. Each learned task corresponds to a human language description. Because agents can only access previously learned skills through these descriptions, the agent can always provide a human-interpretable description of its choices. In order to help the agent learn the complex temporal dependencies necessary for the hierarchical policy, we provide it with a stochastic temporal grammar that modulates when to rely on previously learned skills and when to execute new skills. We validate our approach on Minecraft games designed to explicitly test the ability to reuse previously learned skills while simultaneously learning new skills.
accepted-poster-papers
This method has a lot of strong points, but the reviewers had concerns about baselines, comparisons, and hand-engineered aspects of the method. The authors gave a strong rebuttal and made substantial updates to the paper to address the concerns. I think that this has saved the submission and tipped the balance towards acceptance.
test
[ "Sye2eNDxM", "rJWf00wEf", "S1hvkpKxf", "HJQ8haFxz", "H1lZ5eo7z", "rJScuximM", "H1GX9limf", "BkURFxoQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper aims to learn hierarchical policies by using a recursive policy structure regulated by a stochastic temporal grammar. The experiments show that the method is better than a flat policy for learning a simple set of block-related skills in minecraft (find, get, put, stack) and generalizes better to a modification of the environment (size of room). The sequence of subtasks generated by the policy are interpretable.\n\nStrengths:\n- The grammar and policies are trained using a sparse reward upon task completion. \n- The method is well ablated; Figures 4 and 5 answered most questions I had while reading.\n- Theoretically, the method makes few assumptions about the environment and the relationships between tasks.\n- The interpretability of the final behaviors is a good result. \n\nWeaknesses:\n- The implementation gives the agent a -0.5 reward if it generates a currently unexecutable goal g’. Providing this reward requires knowing the full state of the world. If this hack is required, then this method would not be useful in a real world setting, defeating the purpose of the sparse reward mentioned above. I would really like to see how the method performs without this hack. \n- There are no comparisons to other multitask or hierarchical methods. Progressive Networks or Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning seem like natural comparisons.\n- A video to show what the environments and tasks look like during execution would be helpful.\n- The performances of the different ablations are rather close. Please a standard deviation over multiple training runs. Also, why does figure 4.b not include a flat policy?\n- The stages are ordered in a semantically meaningful order (find is the first stage), but the authors claim that the order is arbitrary. If this claim is going to be included in the paper, it needs to be proven (results shown for random orderings) because right now I do not believe it. \n\nQuality:\nThe method does provide hierarchical and interpretable policies for executing instructions, this is a meaningful direction to work on.\n\nClarity:\nAlthough the method is complicated, the paper was understandable.\n\nOriginality and significance:\nAlthough the method is interesting, I am worried that the environment has been too tailored for the method, and that it would fail in realistic scenarios. The results would be more significant if the tasks had an additional degree of complexity, e.g. “put blue block next to the green block” “get the blue block in room 2”. Then the sequences of subtasks would be a bit less linear (e.g., first need to find blue, then get, then find green, then put). At the moment the tasks are barely more than the actions provided in the environment.\n\nAnother impedance to the paper’s significance is the number of hacks to make the method work (ordering of stages, alternating policy optimization, first training each stage on only tasks of previous stage). Because the method is only evaluated on one simple environment, it unclear which hacks are for the method generally, and which hacks are for the method to work on the environment.", "I am happy with the updates the authors made to the paper. The video and additional experiments are valuable, and I will increase my score accordingly. 
\n\nHowever, the paper would strongly benefit from either a second test environment or a more complex grammar to showcase generality.\n\n(nitpick): “We do not include the standard deviations since there are no noticeable difference among them.” including standard deviations is not just to compare the magnitude of the standard deviations, but to see whether the differences in means of the methods are within the standard deviations.", "Summary:\nThis paper proposes an approach to learning hierarchical policies in a lifelong learning context. This is achieved by stacking policies - an explicit \"switch\" policy is then used to decide whether to execute a primitive action or call the policy of the layer below it. Additionally, each task is encoded in a human-readable template, which provides interpretability.\nReview:\nOverall, I found the paper to be generally well-written and the core idea to be interesting. My main concern is about the performance against existing methods (no empirical results are provided), and while it does provide interpretability, I am not sure that other approaches (e.g. Tessler et al. 2017) could not be slightly modified to do the same. I think the paper could also benefit from at least one more experiment in a different, harder domain.\n\nI have a few questions and comments about the paper:\n\nThe first paragraph claims \"This precludes transfer of previously learned simple skills to a new policy defined over a space with differing states or actions\". I do not see how this approach avoids suffering from the same problem? Additionally, approaches such as agent-space options [Konidaris and Barto. Building Portable Options: Skill Transfer in Reinforcement Learning, IJCAI 2007] get around at least the state part.\n\nI do not quite follow what is meant by \"a global policy is assumed to be executable by only using local policies over specific options\". It sounds like this is saying that the inter-option policy can pick only options, and not primitive actions, which is obviously untrue. Can you clarify this sentence?\n\nIn section 3.1, it may be best to mention that the policy accepts both a state and task and outputs an action. This is stated shortly afterwards, but it was confusing because section 3.1 says that there is a single policy for a set of tasks, and so obviously a normal state-action policy would not work here.\n\nAt the bottom of page 6, are there any drawbacks to the instruction policy being defined as two independent distributions? What if not all skills are applicable to all items?\n\nIn section 5, what does the \"without grammar\" agent entail? How is the sampling from the switch and instruction policies done in this case?\n\nWhile the results in Figures 4 and 5 show improvement over a flat policy, as well as the value of using the grammar, I am *very* surprised there is no comparison to existing methods. For example, Tessler's H-DRLN seems like one obvious comparison here, since it learns when to execute a primitive action and when to reuse a skill.\n\nThere were also some typos/small issues (I may have missed some):\n\npg 3: \"In addition, previous work usually useS...\"\npg 3. \"we encode a human instruction to LEARN A...\" (?)\npg 4. \"...with A stochastic temporal grammar...\"\npg 4. \"... described above through A/THE modified...\"\npg 6. \"...TOTALLING six colors...\"\nThere are some issues with the references (capital letters missing e.g. 
Minecraft)\n\nIt also would be preferable if the figures could appear after they are referenced in the text, since it is quite confusing otherwise. For example, Figure 2 contains V(s,g), but that is only defined much later on. Also, I struggled to make out the yellow box in Figure 2, and the positioning of Figure 3 on the side is not ideal either.", "This paper introduces an iterative method to build hierarchical policies. At every iteration, a new meta policy feeds in task id to the previous policy and mixes the results with an 'augmented' policy. The resulting policy is somewhat interpretable as the task id being sampled by the meta policy corresponds to one of the subgoals that are manually designed.\n\nOne of the limitations of the method is that appropriate subgoals and curriculum must be hand designed. Another one is that the model complexity grows linearly with the number of meta iterations. \n\nThe comparison to non-hierarchical models is not totally fair in my opinion. According to the experiment, the flat policy performs much worse than the hierarchical, but it is unclear how much of this is due to the extra capacity of the model of the unfolded hierarchical policy and how much of that is due to the hierarchy. In other words, it is unclear if hierarchy is actually useful, or just the task curriculum and model capacity staging.\n\nThe paper does not appear to be fully self-contained in terms of notation, in particular regarding the importance sampling I could not find the definitions of mu, and regarding the STG I could not find the definition of q and rho. \n\nThe experimental results are a bit confusing. In the learning curves that are shown, it is not clear exactly when the set of tasks is expanded, nor when the hierarchical policy iteration occurs. Also, some curves are lacking the flat baseline.", "Thank you for your comments and suggestions. \n\n1. “performance against existing methods”\nWe have added the comparison with Tessler et al. (2017), i.e., H-DRLN, as suggested (see Table 1 and Figure 4). H-DRLN indeed also has the concept of reusing previously learned skills but the advantages of our approach are very obvious:\n1) Each H-DRLN in Tessler et al. (2017) can only learn one task like “Stack blue,” whereas ours can learn a set of tasks, like {“Stack x”}, where x can be different colors. In fact, each curve of H-DRLN we show in Figure 4 is only for training one particular task (i.e., “Get white” and “Stack white” respectively), whereas all the other curves are for training the whole set of tasks.\n2) H-DRLN treats all the old tasks {DSN_i} and the actions on the same level and learns Q(s, a) and Q(s, DSN_i). This setting is clearly not scalable. In the original paper, the learning was done where there were only 4 old tasks. However, in our settings, we have as many as 18 old tasks. And clearly from the results, H-DRLN can not learn good policies for new tasks as the input space of the Q(s, DSN_i) function becomes unbearably large (it is similar to learning a policy based on a very large action space). \n3) Our policy is also learning the semantics of the tasks from the instructions, thus it has better generalization. As we show in the experiments on the {“Put x on top of y”} tasks, ours generalizes well to novel (x, y) combinations unseen in training whereas H-DRLN does not have this capability.\n\n2. 
“one more experiment in a different, harder domain”\nWe have tested more difficult tasks {“Put x on top of y”}, and have also evaluated our hierarchical policy in a zero-shot setting for unseen (x, y) combinations. See Table 1, Figure 4 and also Section 5.3 and Section 5.4. As shown in the results, flat policy and H-DRLN (Tessler et al., 2017) fail to learn good policies for {”Stack x”} tasks and the new {“Put x on top of y”} tasks. This manifests that the tasks are actually quite challenging for current RL methods.\n\n3. Questions about the paper:\n\n1) “I do not see how this approach avoids suffering from the same problem”\nPolicies on different levels are learned on different corpus (e.g., pi_0 does not know the word “get”) and on different action spaces (e.g., pi_0 can be trained on actions excluding “pick up” and “put down”). They also do not need to share a same state encoding module, so the forms of states can be different (e.g., we may use symbolic states for a global policy while its local policy uses raw pixels as states).\n\n2) “It sounds like this is saying that the inter-option policy can pick only options, and not primitive actions”\nWe are referring to some of the existing methods like Kulkarni et al. (2016), Andreas et al. (2017) where the set of necessary options for a global policy is predefined and a task is executed only based on these given options. Other methods like Tessler et al. (2017) do not have this limitation. We will clarify this.\n\n3) “are there any drawbacks to the instruction policy being defined as two independent distributions”\nIt is for simplicity. For more complex instructions, we may use GRUs or LSTMs to train the instruction generator, but the training will be slower.\n\n4) “How is the sampling from the switch and instruction policies done in this case?”\nAs we stated in the text, the sampling was done w.r.t. equation (1) and (2) when not using the STG.\n\nWe agree with the other editing comments and have fixed them in the updated version.", "Our updated submission has included extensive experiments as suggested by the reviewers. We have added a set of more challenging tasks with a zero-shot evaluation and also results of new baselines including an existing multi-task hierarchical policy method, H-DRLN in Tessler et al. (2017). The updated results can be seen in Section 5. A demo video (audio included) is available here: https://www.dropbox.com/s/j5nw2cljpoofo9j/hrl_demo.mov?dl=0", "Thank you for your reviews and insights.\n\n1. “Providing this reward requires knowing the full state of the world”\n\nLearning curves of training without the penalty have been added to Figure 5. We find that the penalty does not have a significantly effect on the learning efficiency as shown in Figure 5.\n\nReviewer may be confused by our phrasing. Sorry about that. Actually we want to stress that we do not provide the full state when using this penalty. Instead, a penalty only includes information about the agent’s physical capacity and the nature of the target task, since it is given i) when the agent attempts to execute tasks that exceeds its physical capacity, such as trying to put down a block when it is not carrying one or trying to pick-up another block where there is already one in its hands, ii) or when it attempts to execute tasks that are irrelevant to that tasks. \n\nAlso, we only use this during training under the assumption that a given task in the training is always executable (the necessary blocks are present in the environment). 
So when a penalty was given to a task of finding an object that does not exist in the environment, it was meant to save time in game playing and also gives a training signal of what old tasks are relevant for the agent.\n\nFinally, in testing, we do not use this penalty, and it does not affect the performance.\n\nTherefore, we do not think that this will prevent us from applying the approach to the real world for the aforementioned reasons.\n\n2. “There are no comparisons to other multitask or hierarchical methods”\nWe have also evaluated H-DRLN (Tessler et al., 2017). \n\n3. “A video … would be helpful”\nPlease refer to this link for the video (with audio): https://www.dropbox.com/s/j5nw2cljpoofo9j/hrl_demo.mov?dl=0\n\n4. “The performances of the different ablations are rather close. Please a standard deviation over multiple training runs.”\nWe do not include the standard deviations since there are no noticeable differences among them. The alternating and 2 value functions mainly help accelerate the learning in phase 1. As shown in Figure 4a, the acceleration is significant (full model is the first one that switches to phase 2). In phase 2 (see Figure 4b), the advantage of using the STG is very clear (the blue curve reaches a plateau around an average reward of 0.8) and the others do not show a large improvement over the full model. So none of them reduces the training variance, but they all help increase the training efficiency.\n\n5. “why does figure 4.b not include a flat policy”\nWe have added the flat policy.\n\n6. “the authors claim that the order is arbitrary”\nSorry for the confusion. We will clarify this. What we meant was that for an arbitrary order, we can still train the hierarchical policy, but a semantically meaningful order is indeed very important for the training efficiency. And we do provide this order as weak supervision.\n\n7. “unclear which hacks are for the method generally, and which hacks are for the method to work on the environment.”\nThey are not environment dependent.\ni) The order of the stages comes from semantic meanings of the tasks, so it depends on the tasks but does not depend on the environment. E.g., you may train a real robot on the same tasks in the same order in a real environment. In fact, the generated interpretable hierarchical plans can be directly used for the same tasks in different environments without additional training as long as there are equivalent primitive actions.\nii) Alternating is purely for the optimization of the neural nets. As the experimental results show, we can also train the hierarchical policies without it.\niii) The first phase in the 2-phase curriculum is for the global policy to learn what the goals of the previous tasks are, so that in the second phase, it knows when to repeat the same task and when to stop. This is also not tailored to any specific environment.", "Thank you for your reviews.\n\n1. “a new meta policy feeds in task id to the previous policy”\nActually, the global policy feeds an instruction in human language rather than a task ID to the previous policy. Compared to using task IDs, this i) improves the interpretability of the policy and ii) facilitates the generalization of the policy to novel scenarios/tasks thanks to the semantics of the instructions.\n\n2. “appropriate subgoals and curriculum must be hand designed”\nActually we let the global policy at each level explore the appropriate subgoals for the new tasks among all previously learned tasks. 
The learning efficiency of our approach does depend on the given curriculum as existing curriculum-based RL training approaches do. We regard this as a weak supervision from human knowledge. In fact, compared to some recent work (Kulkarni et al., 2016; Andreas et al., 2017) which specifically provide the necessary sub-goals and/or the order of the subgoals for a task, we do not think our setting is any less general.\n\n3. “The comparison to non-hierarchical models is not totally fair”\nFirst, the training of flat policy is also strictly following the curriculum we use for our model. It is always finetuned based on the policy trained for the previous task set. Second, the biggest benefit of our hierarchical policy comes from the more efficient exploration thanks to reusing old skills and learning the STG. As the updated Figure 4b shows, when training for more complex tasks, the flat policy can not yield any positive reward as achieving the goals requires a fairly long sequence of primitive actions and precise operations (e.g., picking up and putting down the correct blocks at the correct locations).\n\n4. “I could not find the definitions of mu, and regarding the STG I could not find the definition of q and rho”\nMu was introduced in the second paragraph of Section 4.1. q and rho were defined in the second paragraph of Section 3.3.\n\n5. “The experimental results are a bit confusing”\nSorry for the confusion. As we explained in Section 4, we adopt a 2-phase curriculum learning where the task set was expanded in the second phase. For curriculum-based training, the dip of a curve comes from the switch from phase 1 to phase 2, thus also indicates when the task set was expanded. For non-curriculum based training where there is no dip in the reward, the expansion starts from the first episode. Note that in Figure 4b, we only show the phase 2 of our curriculum, so the expansion starts from the first episode for all curves in this figure. \n\nWe have also added the flat baseline for the Figure 4b. The reason we didn’t include that in the previous version was that the flat policy failed to learn anything meaningful for the complex tasks." ]
[ 6, -1, 6, 6, -1, -1, -1, -1 ]
[ 4, -1, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_SJJQVZW0b", "Sye2eNDxM", "iclr_2018_SJJQVZW0b", "iclr_2018_SJJQVZW0b", "S1hvkpKxf", "iclr_2018_SJJQVZW0b", "Sye2eNDxM", "HJQ8haFxz" ]
iclr_2018_rJwelMbR-
Divide-and-Conquer Reinforcement Learning
Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/
accepted-poster-papers
This paper proposes a specific architecture for training an ensemble of separate policies on a family of easier tasks with the goal of obtaining a single policy that can perform well on a harder task. There are significant similarities to the recently published Distral algorithm, but I am convinced that this work offers a meaningful contribution beyond that work. Moreover, the authors performed a thorough comparison between their method and Distral and found that DnC performs better.
train
[ "r1A2hMtgz", "HJNRVMqez", "rycTQSqgG", "B1k6v2lQG", "HkH_PhlXM", "HkX5HhgmG", "H1KBH2x7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents a reinforcement learning method for learning complex tasks by dividing the state space into slices, learning local policies within each slice, while ensuring that they don't deviate too far from each other, while simultaneously learning a central policy that works across the entire state space in the process. The most closely related works to this one are Guided Policy Search (GPS) and \"Distral\", and the authors compare and contrast their work with the prior work suitably.\n\nThe paper is written well, has good insights, is technically sound, and has all the relevant references. The authors show through several experiments that the divide and conquer (DnC) technique can solve more complex tasks than can be solved with conventional policy gradient methods (TRPO is used as the baseline). The paper and included experiments are a valuable contribution to the community interested in solving harder and harder tasks using reinforcement learning.\n\nFor completeness, it would be great to include one more algorithm in the evaluation: an ablation of DnC which does not involve a central policy at all. If the local policies are trained to convergence, (and the context omega is provided by an oracle), how well does this mixture of local policies perform? This result would be instructive to see for each of the tasks.\n\nThe partitioning of each task must currently be designed by hand. It would be interesting (in future work) to explore how the partitioning could perhaps be discovered automatically.", "The submission tackles an important problem of learning highly varied skills. The approach relies on dividing the task space into subareas (defined by task context vectors) over which individual policies are trained, but are still required to operate well on tasks outside their context.\n\nThe exposition is clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. The experimental results show a convincing benefit over TRPO and Distral on a number of manipulation and locomotion tasks. I would like to have seen more discussion of the computational costs and scaling of the method over TRPO or Distral, as the pairwise KL divergence terms grow quadratically in the number of contexts. \n\nWhile the method is well-motivated, the division of tasks into subareas seems arbitrarily chosen. It would be very useful for readers to see performance of the algorithm under other task decompositions to alleviate the worries that the algorithm is not sensitive to the decomposition choice.\n\nI would also like to see more discussion of curriculum learning, which also aims at tackling a similar problem of reducing complexity in early stages of training by choosing on simper tasks and progressing to more complex. Would such progressive tasks decompositions work better in your framework? Does your framework remove the need for curriculum learning?\n\nOverall, I believe this is in interesting piece of work and I believe would be of interest to ICLR community.", "This paper presents a method for learning a global policy over multiple different MDPs (referred to as different \"contexts\", each MDP having the same dynamics and reward, but different initial state). The basic idea is to learn a separate policy for each context, but regularized in a manner that keeps all of them relatively close to each other, and then learn a single centralized policy that merges the multiple policies via supervised learning. 
The method is evaluated on several continuous state and action control tasks, and shows improvement over existing and similar approaches, notably the Distral algorithm.\n\nI believe there are some interesting ideas presented in this paper, but in its current form I think that the delta over past work (particularly Distral) is ultimately too small to warrant publication at ICLR. The authors should correct me if I'm wrong, but it seems as though the algorithm presented here is virtually identical to Distral except that:\n1) The KL divergence term regularizes all policies together in a pairwise manner.\n2) The distillation step happens episodically every R steps rather than in a pure SGD manner.\n3) The authors possibly use a TRPO type objective for the standard policy gradient term, rather than REINFORCE-like approach as in Distral (this one point wasn't completely clear, as the authors mention that a \"centralized DnC\" is equivalent to Distral, so they may already be adapting it to the TRPO objective? some clarity on this point would be helpful).\nThus, despite better performance of the method over Distral, this doesn't necessarily seem like a substantially new algorithmic development. And given how sensitive RL tasks are to hyperparameter selection, there needs to be some very substantial treatment of how the regularization parameters are chosen here (both for DnC and for the Distral and centralized DnC variants). Otherwise, it honestly seems that the differences between the competing methods could be artifacts of the choice of regularization (the alpha parameter will affect just how tightly coupled the control policies actually are).\n\nIn addition to this point, the formulation of the problem setting in many cases was also somewhat unclear. In particular, the notion of the contextual MDP is not very clear from the presentation. The authors define a contextual MDP setting where in addition to the initial state there is an observed context to the MDP that can affect the initial state distribution (but not the transitions or reward). It's entirely unclear to me why this additional formulation is needed, and ultimately just seems to confuse the nature of the tasks here which is much more clearly presented just as transfer learning between identical MDPs with different state distributions; and the terminology also conflicts with the (much more complex) setting of contextual decision processes (see: https://arxiv.org/abs/1610.09512). It doesn't seem, for instance, that the final policy is context dependent (rather, it has to \"infer\" the context from whatever the initial state is, so effectively doesn't take the context into account at all). Part of the reasoning seems to be to make the work seem more distinct from Distral than it really is, but I don't see why \"transfer learning\" and the presented contextual MDP are really all that different.\n\nFinally, the experimental results need to be described in substantially more detail. The choice of regularization parameters, the precise nature of the context in each setting, and the precise design of the experiments is all extremely opaque in the current presentation. 
Since the methodology here is so similar to previous approaches, much more emphasis is required to better understand the (improved) empirical results in this setting.\n\nIn summary, while I do think the core ideas of this paper are interesting: whether it's better to regularize policies to a single central policy as in Distral or whether it's better to use joint regularization, whether we need two different timescales for distillation versus policy training, and what policy optimization method works best, as it is right now the algorithmic choices in the paper seem rather ad-hoc compared to Distral, and need substantially more empirical evidence.\n\nMinor comments:\n• There are several missing words/grammatical errors throughout the manuscript, e.g. on page 2 \"gradient information can better estimated\".", "\nWe now address concerns regarding the differences between our method and Distral [1]. DnC and Distral not only have completely different motivations, but the technical differences between the two algorithms are substantial as well. It is worth noting that our experiments (with hyperparameter searches and multiple random seeds) over five varied tasks in the locomotion and manipulation settings clearly illustrate that the Distral method as described in prior work does not solve the challenging tasks in our evaluation, while our approach does. This extensive comparative evaluation already establishes a clear contribution over the prior work, as noted by the other two reviewers.\n\nThere are also significant conceptual differences. Distral considers a transfer learning setting, while the goal in our work is to obtain a single policy that succeeds on a single challenging task with stochastic structure. While both algorithms could be applied to both settings, we feel this conceptual difference is very important. Whereas our method is concerned with the performance of the central policy on the full state space, the Distral paper evaluates performance of the local policies on their respective domains.\n\nFurthermore, Distral does not propose nor analyze the potential to solve challenging continuous control tasks with stochastic initial state distributions. The observation that decomposing the initial state distribution in this way leads to drastically improved performance is not at all obvious, and is a key insight of our work. We believe that this contribution will be highly relevant to researchers interested in solving complex continuous control tasks, and this contribution is not present in the Distral paper. In the updated paper, we also describe how to automate the process of generating these decompositions, and present results in Appendix D. We find that DnC with this automated partitioning performs comparably to the manual partitions outlined in the paper, without the need for any manual specification of partitions.\n\nBoth DnC and Distral maintain the core idea that optimizing local or instance-specific policies can simplify many tasks. This idea is not new, and is popular in the RL community after works related to guided policy search [2]. In fact, ideas of the same flavor are present even in older works like target propagation [3] where an optimization method generates targets for a supervised learning network. 
From a bird’s-eye perspective, all these methods exploit the same principle, but a closer look at the technical details unveil significant differences.\n\nFor example, GPS observes that adding a regularization term to stay close to the central network helps with distillation and overall convergence. Distral rediscovers the exact KL regularization and supervised distillation procedure as GPS, albeit with neural networks as local policies. However, Distral’s key innovation comes from carefully choosing the algorithms for local training and applying the method to challenging visual transfer learning scenarios, something that the basic guided policy search algorithm does not do. In the same way, we propose a modified method with pairwise KL regularization terms and a varied distillation schedule, and apply it to challenging stochastic initial state continuous control tasks, as compared to the discrete control setup in Distral. Furthermore, while Distral uses soft Q-learning for their discrete action tasks, we use TRPO due to its stable performance in continuous control tasks. From this perspective, we believe that the difference between our method and Distral is comparable, if not greater than, the difference between Distral and GPS. \n\nIn motivation, technical detail, and empirical performance, DnC varies significantly from Distral. Thus, we believe that the proposed method, DnC, is quite different from previous methods, a sentiment that is shared by the other two reviews as well.\n\n\n[1] Teh. et al, Distral, NIPS 2017\n[2] Levine, et al, Guided Policy Search, ICML 2013\n[3] see references of Lee et al, Difference Target Propagation, ECML PKDD 2015\n", "Thank you for your valuable suggestions!\n\nWe have included specific experiment details in Appendix A. In particular, we ran an extensive penalty hyperparameter sweep for DnC, centralized DnC, and Distral on each task to select the appropriate parameter for each method. Since the initial version, we have also updated the experiments by conducting a finer hyperparameter sweep and by running experiments with 5 random seeds instead of 3. We have updated the paper with the results obtained from these searches (Figure 1,Table 1). We thus contend that the difference between the performance of the various methods is not contingent on the exact choice of hyperparameters, and is indeed a result of the algorithmic differences. If the reviewer has any other suggestions for how to address this concern, we would be happy to incorporate them. We have also included more comprehensive task information, which detail precisely what the contexts are in each task, in Appendix B. We have updated the paper to distinguish our use of the word “context” from contextual MDPs in Section 3. We also clarify in Section 5 that our analysis ports Distral to the TRPO objective. While the original Distral paper uses soft Q-learning, we adapt the algorithm to TRPO, since empirically TRPO exhibits better performance on high-dimensional continuous control tasks. If the reviewer has further recommendations, we would be happy to address these as well.\n", "Thank you for your very valuable comments. We address your questions below.\n\nIn regard to the choice of partitions: to address any potential concern regarding the partitions, we added additional experiments in Appendix D where the partitions are determined automatically, rather than being hand-specified. 
It is true that some care must be taken to get reasonable partitions, although our experiments suggest that even a simple K-means method can produce good results automatically. In Appendix D, we evaluate DnC on contexts generated by a K-means clustering procedure on the initial state distribution for the Picking task, which performs comparably to our manually designed contexts, indicating that performance of DnC is not particular to our choice of decomposition. We intend to extend this procedure to all the tasks for the final version. We further believe that it’s possible to find more sophisticated automatic methods to generate the decompositions, which would make for interesting future work. \n\n\nRegarding the complexity of the pairwise KL divergence, we have updated the paper to include a discussion of the computational cost in the fourth paragraph of Section 4.2. Empirically we find that the quadratic penalty is not a bottleneck for the problems we hope to address with DnC, since sampling the environment is by far the most computationally demanding operation.\n\nIn regard to the relationship with curriculum learning, we have now added some remarks at the end of the first paragraph of Section 2. Investigating the use of progressive decompositions with our method is an interesting direction for future work!\n", "Thank you for your very valuable feedback! \n\nWe have modified the paper to include comparisons between DnC and two different oracle-based ensembles of local policies in Appendix C. The first ablation of DnC never distills the policies together, training the local policies to convergence. This ablation performs poorly compared to DnC in most tasks: we hypothesize that the distillation step allows the local policies to escape the local minima that policy gradient methods generally suffer from. Similar observations have been noted in Mordatch et al. [1], where trajectory optimization without distillation to a central neural network underperforms. The other ablation runs DnC, but returns the final local ensemble instead of the final global policy. We observe that this final local ensemble with oracle context performs only marginally better than the final global policy in most tasks, indicating that there is little loss in performance during the distillation process. For both of these variants, the central policy, which must operate successfully for a wide range of contexts, generalizes better to contexts that are slightly different than the training distribution. Considering that training and testing conditions will almost always differ slightly in practice, even if one has oracle access to the context, it might be beneficial to use the central policy due to its better generalization capability.\n\nAutomatic ways to perform the partitioning is indeed an interesting future direction! As a step in this direction, we have updated the paper with a simple automated partitioning scheme in Appendix D. Partitions are automatically generated via a K-means clustering procedure on the initial state distribution to generate contexts, and find that DnC performs well in this case as well. We hope to pursue more elaborate partitioning schemes in future work.\n\n[1] Mordatch et al, Interactive Control of Diverse Complex Characters with Neural Networks, NIPS 2015\n" ]
[ 7, 7, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rJwelMbR-", "iclr_2018_rJwelMbR-", "iclr_2018_rJwelMbR-", "rycTQSqgG", "rycTQSqgG", "HJNRVMqez", "r1A2hMtgz" ]
iclr_2018_B1e5ef-C-
A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs
Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the “meaning” of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice.
accepted-poster-papers
Sadly, none of the reviewers seem to have been able to fully appreciate and check the proofs. But in the words of even the least positive reviewer: "In general, I find many of the observations in this paper interesting. However, this paper is not strong enough as a theory paper; rather, the value lies perhaps in its fresh perspective." I think we can all gain from fresh perspectives on LSTMs and DL for NLP :)
train
[ "SkzuQ_dlG", "SyURgFFlG", "H1kgYgogM", "HyxkZZhXM", "BkyahGuzM", "rytGHUTWM", "BkX26PhZf", "SkhuaD3Zf", "BJi42DnZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "The main insight in this paper is that LSTMs can be viewed as producing a sort of sketch of tensor representations of n-grams. This allows the authors to design a matrix that maps bag-of-n-gram embeddings into the LSTM embeddings. They then show that the result matrix satisfies a restricted isometry condition. Combining these results allows them to argue that the classification performance based on LSTM embeddings is comparable to that based on bag-of-n-gram embeddings.\n\nI didn't check all the proof details, but based on my knowledge of compressed sensing theory, the results seem plausible. I think the paper is a nice contribution to the theoretical analysis of LSTM word embeddings.", "The interesting paper provides theoretical support for the low-dimensional vector embeddings computed using LSTMs or simple techniques, using tools from compressed sensing. The paper also provides numerical results to support their theoretical findings. The paper is well presented and organized.\n\n-In theorem 4.1, the embedding dimension $d$ is depending on $T^2$, and it may scale poorly with respect to $T$.", "My review reflects more from the compressive sensing perspective, instead that of deep learners.\n\nIn general, I find many of the observations in this paper interesting. However, this paper is not strong enough as a theory paper; rather, the value lies perhaps in its fresh perspective.\n\nThe paper studies text embeddings through the lens of compressive sensing theory. The authors proved that, for the proposed embedding scheme, certain LSTMs with random initialization are at least as good as the linear classifiers; the theorem is almost a direction application of the RIP of random Rademacher matrices. Several simplifying assumptions are introduced, which rendered the implication of the main theorem vague, but it can serve as a good start for the hardcore statistical learning-theoretical analysis to follow.\n\nThe second contribution of the paper is the (empirical) observation that, in terms of sparse recovery of embedded words, the pretrained embeddings are better than random matrices, the latter being the main focus of compressive sensing theory. Partial explanations are provided, again using results in compressive sensing theory. In my personal opinion, the explanations are opaque and unsatisfactory. An alternative route is suggested in my detailed review.\nFinally, extensive experiments are conducted and they are in accordance with the theory.\n\nMy most criticism regarding this paper is the narrow scope on compressive sensing, and this really undermines the potential contribution in Section 5.\n\nSpecifically, the authors considered only Basis Pursuit estimators for sparse recovery, and they used the RIP of design matrices as the main tool to argue what is explainable by compressive sensing and what is not. This seems to be somewhat of a tunnel-visioning for me: There are a variety of estimators in sparse recovery problems, and there are much less restrictive conditions than RIP of the design matrices that guarantee perfect recovery.\n\nIn particular, in Section 5, instead of invoking [Donoho&Tanner 2005], I believe that a more plausible approach is through [Chandrasekaran et al. 2012]. There, a simple deterministic condition (the null space property) for successful recovery is proved. It would be of direct interest to check whether such condition holds for a pretrained embedding (say GloVe) given some BoWs. 
Furthermore, it is proved in the same paper that Restricted Strong Convexity (RSC) alone is enough to guarantee successful recovery; RIP is not required at all. While, as the authors argued in Section 5.2, it is easy to see that pretrained embeddings can never possess RIP, they do not rule out the possibility of RSC.\n\nExactly the same comments above apply to many other common estimators (lasso, Dantzig selector, etc.) in compressive sensing which might be more tolerant to noise.\n\nSeveral minor comments:\n\n1. Please avoid the use of “information theory”, especially “classical information theory”, in the current context. These words should be reserved to studies of Channel Capacity/Source Coding `a la Shannon. I understand that in recent years people are expanding the realm of information theory, but as compressive sensing is a fascinating field that deserves its own name, there’s no need to mention information theory here.\n\n2. In Theorem 4.1, please be specific about how the l2-regularization is chosen.\n\n3. In Section 4.1, please briefly describe why you need to extend previous analysis to the Lipschitz case. I understood the necessity only through reading proofs.\n\n4. Can the authors briefly comment on the two assumptions in Section 4, especially the second one (on n- cooccurrence)? Is this practical?\n\n5. Page 1, there is a typo in the sentence preceding [Radfors et al., 2017].\n\n6. Page 2, first paragraph of related work, the sentence “Our method also closely related to ...” is incomplete.\n\n7. Page 2, second paragraph of related work, “Pagliardini also introduceD a linear ...”\n\n8. Page 9, conclusion, the beginning sentence of the second paragraph is erroneous.\n\n[1] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, Alan S. Willsky, “The Convex Geometry of Linear Inverse Problems”, Foundations of Computational Mathematics, 2012.", "Dear Readers,\n\nWe have uploaded a revision. There are two main changes:\n\n1. An improvement to Lemma 4.1 that allows the embedding dimension $d$ in Theorem 4.1 to depend linearly (up to log factors) on the document length $T$. This directly addresses the concern of AnonReviewer1 that $d$ scales quadratically with $T$.\n\n2. Updates to Section 5 and the Appendix to a) clarify why basis pursuit is the natural choice for our setting and b) discuss weak sparse recovery conditions (including NSP/REP) in greater depth to see how they can help understand why BoW recovery improves when the sensing matrix consists of word embeddings. We hope these changes address the concern of AnonReviewer3 about the scope of compressed sensing considered in the paper.", "Thank you for following up.\n\n1) “Much weaker conditions than RIP that are sufficient for sparse recovery should be explicitly mentioned. The authors say \"These embeddings cannot fit into the usual compressed sensing worldview since the matrix defined by the embeddings cannot satisfy RIP.\" Compressed sensing is not RIP.” (Paraphrase)\n\nYou are right; as discussed later in the paper an explanation should likely come from some other (weaker) condition. We chose to focus on the polytope condition but agree that discussion of others (like REC) is warranted and will include it in revision. \n\n\n2) “Rather than verify NSP (NP-hard), empirically check conditions that imply it. For example, the restricted singular/eigenvalue condition (Theorem 3.2 of Chandrasekaran et al.) implies it. 
Try generating random directions in the tangent cone, computing their norms, and checking the constant distribution; notice that in this case RSC is equivalent to RS/EC.” (Paraphrase)\n\nWe computed such upper bounds on RE constants** for both pretrained and random embeddings and found that they were smaller for the former (note that larger constants are better for recovery). Unfortunately this does not settle the issue about RE property for pretrained embeddings as lower bounds for random vectors are vacuous when recovery isn’t perfect (e.g. when the dimension is small enough that pretrained embeddings do better) (Banerjee et al., 2014). Note that verifying RE is also NP-hard (Dobriban & Fan, 2016). We will discuss these points in revision.\n\n\n(** We don’t know how to sample directions uniformly from tangent cone ---rejection sampling doesn’t work well as the intersection of the cone with the unit sphere is very small compared to the unit sphere--- so these bounds were computed in a greedy manner.)\n\n\n3) “LASSO/Dantzig selector are common compressive sensing algorithms (LASSO is possibly more popular than Basis Pursuit). Hence it feels weird to equate compressive sensing to BP in Appendix A. Maybe the authors can simply mention that this paper focuses on the BP algorithm, and defer the others to future work.” \n\nAppendix A represents a brief overview of only the parts of compressed sensing needed in the main paper. We didn’t include LASSO/Dantzig because the word embeddings setting has no signal/measurement noise, in which case both methods are equivalent to BP. The revision will clarify.\n\n\n4) “In the response the authors mentioned that they had tried LASSO but it didn't work as well. I'm not sure what the authors exactly did, but I guess in this case the *constrained* LASSO would perform well.” \n\nWe have indeed tried constrained LASSO and it works quite well, but BP works slightly better. This does not seem surprising to us since we are in the noiseless setting. We’ll add a note to that effect.\n\n\n5) “The connection to \"classical information theory\" is still quite vague (you refer to a paper which was inspired by the MDL and used LZ77, and I see very little connection with this paper).” \n\nWe’re happy to omit the problematic phrase; it only refers to a past work of Paskov et al. (which uses compression ideas on Bag-of-n-Grams representation). \n\n\nA. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. “Estimation with Norm Regularization.” NIPS 2014.\nE. Dobriban and J. Fan. “Regularity Properties for Sparse Regression.” Communications in Mathematics and Statistics 2016.\n", "Thanks for the authors' detailed response. It seems that the authors missed some of my points, hence a few further clarifications below:\n\n1) Regarding the conditions that I posed (NSP, RSC, etc.), my main point is that there exist much weaker conditions than RIP which are sufficient for sparse recovery. This should at least be explicitly mentioned, otherwise researchers in compressive sensing will have doubts to many of the statements in this paper. \n\nFor instance, in Section 5, the authors wrote \"These embeddings cannot fit into the usual compressed sensing worldview since the matrix defined by the embeddings cannot satisfy RIP.\" Compressed sensing is not RIP. Moreover, this is certainly not surprising to most high-dimensional statisticians as RIP is a very strong condition. 
On the other hand, they would be much more surprised if the RSC fails to hold (which is suggested by the authors in the response; I'm much more interested in this fact).\n\n2) I did not ask the authors to verify the NSP, which is of course NP-hard, but rather to check empirically the conditions that imply NSP. For instance, the famous Restricted Singular/Eigenvalue Condition does the job (see Theorem 3.2 of [Chandrasekran et al. 2012]): Generate random directions in the tangent cone, compute their norms, and check the RSP constant distribution; notice that in this case RSC is equivalent to RS/EC. Again, it is very intriguing to me that the RSC fails not hold, as suggested by the authors.\n\n3) LASSO/Dantzig selector are common compressive sensing algorithms (LASSO is possibly more popular than Basis Pursuit). Hence it feels weird to equate compressive sensing to BP in Appendix A. Maybe the authors can simply mention that this paper focuses on the BP algorithm, and defer the others to future work.\n\n\nSome minor comments:\n\n4) In the response the authors mentioned that they had tried LASSO but it didn't work as well. I'm not sure what the authors exactly did, but I guess in this case the *constrained* LASSO would perform well. Maybe the authors can take a shot.\n\n5) The connection to \"classical information theory\" is still quite vague (you refer to a paper which was inspired by the MDL and used LZ77, and I see very little connection with this paper). I personally feel that using \"classical information theory\" only damages the credibility of this paper to the experts. The simplest way to remedy this is to remove the term; otherwise, making the connection explicit is also an option.", "Thank you for the positive review! We are currently preparing a revision incorporating these comments. We would also like to clarify that our paper concerns LSTM document embeddings, not word embeddings.", "Thank you for the thorough review! We’ll revise incorporating your comments.\n\nMain Responses:\n\n1) “instead of using [Donoho & Tanner 2005] it should be better to use [Chandrasekaran et al. 2012]’s deterministic condition, the null space property or NSP” (paraphrase)\n\nWe knew of NSP but turned to Donoho & Tanner (2005) because NSP is difficult to work with (no obvious method to check if local NSP holds; checking global NSP is NP-hard (Tillmann & Pfetsch, 2014)). Since NSP is equivalent to exact recovery, our experiments (Fig. 1-2) strongly suggest that local NSP holds, but we did not find a way to use it to gain intuition or proofs. While closely related to NSP, the polytope condition of Donoho & Tanner (2005) implies Corollary 5.1, which suggests both a nice property of word embeddings and an efficient method to check recovery of nonnegative signals.\n\n 2): “[the claim that] certain LSTMs with random initialization are at least as good as the linear classifiers… ...is almost a direction application of the RIP of random Rademacher matrices”\n\nThis is true for the unigram (BoW) case. The proof for the n-gram case necessitated constructing a design matrix with correlated entries for which RIP is not as obvious. We agree that the bigger technical contribution is in connecting these ideas to text embeddings.\n\n\nOther Responses: \n\nRestricted Strong Convexity (RSC): “it is proved in [Chandrasekran et al. 
2012] that Restricted Strong Convexity (RSC) alone is enough to guarantee successful recovery.”\n\nTo our knowledge RSC is used mostly for the case of signal/measurement noise (Negahban et al., 2010; Chandrasekaran et al., 2012), whereas we are in the noiseless setting. We know of work by Elenberg et al. (2016) using RSC to guarantee recovery with Orthogonal Matching Pursuit, but we have found that such greedy methods do not work well for pretrained embeddings (Section 5.1 paragraph 2), indicating that a sufficient RSC condition does not hold.\n\nLASSO/Dantzig Selector: “the same comments above apply to many other common estimators (lasso, Dantzig selector, etc.) in compressive sensing which might be more tolerant to noise.”\n\nLASSO was in fact the first approach we tried, with similar results as Basis Pursuit (we refer to it in Section 5.1 paragraph 2 as an “l_0-surrogate method”). However, as we are in the noiseless setting we do not need the robustness provided by LASSO; indeed, experiments show it performs somewhat worse for both pretrained and random vectors. Furthermore, to our knowledge guarantees for LASSO often have analogous results for Basis Pursuit, so the theoretical benefit to studying it is unclear. Although we did not try the Dantzig Selector, it can also be seen as a robust extension of Basis Pursuit and so similarly does not provide a clear advantage in our case.\n\n\nMinor Points:\n\n1. We use the phrase “classical information theory” only in connection with the scheme in Paskov et al., (2013) which is inspired by the Lempel-Ziv compression algorithm (Ziv & Lempel, 1977); 40 years old and directly inspired by Shannon’s works!\n2. In theory the regularization constant C is chosen to minimize the error bound; in practice it is chosen by cross-validation.\n3. We extend the analysis in order to handle logistic loss as it is commonly used in the NLP community and by supervised LSTMs. We do not need Theorem 4.2 to hold for all Lipschitz functions to get Theorem 4.1, but the function does need to be Lipschitz to control the error.\n4.1 This assumption is without loss of generality and is made to remove a spurious dependence on T in the error bound.\n4.2. There will sometimes be n-cooccurrences that contain a word more than once, e.g. (as, long, as), but they occur infrequently and can be removed by merging words as a preprocessing step. In the SST training corpus only 0.019% of bigrams and 0.75% of trigrams have this issue, the latter often due to words between two commas in a list.\n5-8. Will be addressed in revision.\n\n\nV. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. “The Convex Geometry of Linear Inverse Problems.” Found. of Comp. Mathematics 2012.\nD. L. Donoho and J. Tanner. “Sparse nonnegative solution of underdetermined linear equations by linear programming.” PNAS 2005.\nE. R. Elenberg, R. Khanna, A. G. Dimakis, and S. Negahban. “Restricted strong convexity implies weak submodularity.” arXiv 2016.\nS. Negahban, B. Yu, M. J. Wainwright, and P. K. Ravikumar. “A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers.” NIPS 2009.\nH. S. Paskov, R. West, J. C. Mitchell, and T. J. Hastie. “Compressive feature learning.” NIPS 2013.\nA. M. Tillmann and M. E. Pfetsch. “The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing.” IEEE Trans. on Info. Theory 2014.\nJ. Ziv and A. Lempel. 
“A Universal Algorithm for Sequential Data Compression.” IEEE Trans. on Info. Theory 1977.", "Thank you for the positive review! We are currently preparing a revision incorporating these comments. \n\nComment: “the embedding dimension $d$ is depending on $T^2$, and it may scale poorly with respect to $T$.”\nYes the bound may scale poorly with document length. At the moment many tasks in this area use short sentences (e.g. SST has avg. length < 20), and Fig. 4 indicates convergence of DisC to BonC performance even on the IMDB task (avg. length > 250) so perhaps our bound is too pessimistic. Note that in the unigram (BoW) case the scaling is (provably) linear in T because then the design matrix is an i.i.d. Rademacher ensemble." ]
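Editor's note: much of the exchange above revolves around recovering a sparse bag-of-words vector from its low-dimensional embedding with Basis Pursuit in the noiseless, nonnegative setting. The snippet below is a minimal sketch of that estimator, assuming cvxpy and a generic sensing matrix A (random here; columns of GloVe/word2vec in the paper's experiments); it illustrates the recovery problem being debated, not the authors' code.

```python
import numpy as np
import cvxpy as cp

def basis_pursuit(A, y):
    """min ||x||_1  s.t.  A x = y,  x >= 0   (noiseless, nonnegative recovery)."""
    x = cp.Variable(A.shape[1], nonneg=True)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
    return x.value

rng = np.random.default_rng(0)
d, V, k = 100, 500, 5                          # embedding dim, vocab size, document length
A = rng.standard_normal((d, V)) / np.sqrt(d)   # stand-in for an embedding (sensing) matrix
x_true = np.zeros(V)
x_true[rng.choice(V, k, replace=False)] = 1.0  # sparse bag of words
y = A @ x_true                                 # document embedding = linear measurement

x_hat = basis_pursuit(A, y)
print("exact recovery:", np.allclose(x_hat, x_true, atol=1e-3))
```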
[ 7, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1e5ef-C-", "iclr_2018_B1e5ef-C-", "iclr_2018_B1e5ef-C-", "iclr_2018_B1e5ef-C-", "rytGHUTWM", "SkhuaD3Zf", "SkzuQ_dlG", "H1kgYgogM", "SyURgFFlG" ]
iclr_2018_BkSDMA36Z
A New Method of Region Embedding for Text Classification
Representing a text as a bag of properly identified “phrases” and using that representation to process the text has proved useful. The key question is how to identify the phrases and how to represent them. The traditional method of utilizing n-grams can be regarded as an approximation of this approach; however, it can suffer from data sparsity, particularly when the n-gram length is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as “region embeddings”. Without loss of generality, we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts: the embedding of the word itself, and a weighting matrix that interacts with the local context, referred to as the local context unit. The region embeddings are learned and used in the classification task as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture the salient phrasal expressions in the texts.
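Editor's note: the following is a minimal NumPy sketch of the word-context variant outlined in this abstract (and discussed in the reviews below): each word carries an embedding plus a local context unit whose columns reweight the embeddings of neighbouring words, followed by element-wise max pooling over the region and summation over regions. Shapes, names, and initialization are assumptions for illustration, not the authors' TensorFlow implementation.

```python
import numpy as np

V, h, c = 10000, 128, 3                # assumed vocab size, embedding dim, half region size
rng = np.random.default_rng(0)
E = 0.1 * rng.standard_normal((V, h))             # word embeddings
K = 0.1 * rng.standard_normal((V, 2 * c + 1, h))  # local context units (one matrix per word)

def region_embedding(words, i):
    """Word-context region embedding centered at position i."""
    lo, hi = max(0, i - c), min(len(words), i + c + 1)
    unit = K[words[i]]                              # context unit of the middle word
    projected = [unit[j - i + c] * E[words[j]] for j in range(lo, hi)]
    return np.max(projected, axis=0)                # element-wise max pooling -> (h,)

def document_embedding(words):
    """Sum of region embeddings; fed to a fully connected classifier in the paper."""
    return sum(region_embedding(words, i) for i in range(len(words)))

doc = rng.integers(0, V, size=20)
print(document_embedding(doc).shape)                # (128,)
```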
accepted-poster-papers
Despite not having amazing scores, this is a solid paper. It created a lot of discussion and was found to be reproducible. We should accept it to let the ICLR community partake in the discussion and learn about this method of n-gram embeddings.
train
[ "ryjxrEwlM", "r1sXToOgf", "Sy6ClHqef", "B1b7od97z", "B1_2OO9QG", "rJGB_OqXz", "Sy29H_5Qz", "HyE1mxEGf", "SJphme4Mz", "BkQabg4MM", "HJvnFyVMG", "ryfTd1Nfz", "BkA08yEGG", "SklnG7Mff", "SyUEJzzfM", "Hk992WfGz", "B1QjRyMGG", "Sy-OR3-fM", "B1hPEvKlM", "H1cW4Kulf", "HJl5GqvlG", "ryecqWvxG", "H1AXIZSeG", "ryjWc4Jgz", "SkKu25KJM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public", "official_reviewer", "public", "official_reviewer", "public", "public", "public" ]
[ "The authors propose a mechanism for learning task-specific region embeddings for use in text classification. Specifically, this comprises a standard word embedding an accompanying local context embedding. \n\nThe key idea here is the introduction of a (h x c x v) tensor K, where h is the embedding dim (same as the word embedding size), c is a fixed window size around a target word, and v is the vocabulary size. Each word in v is then associated with an (h x c) matrix that is meant to encode how it affects nearby words, in particular this may be viewed as parameterizing a projection to be applied to surrounding word embeddings. The authors propose two specific variants of this approach, which combine the K matrix and constituent word embeddings (in a given region) in different ways. Region embeddings are then composed (summed) and fed through a standard model. \n\nStrong points\n---\n+ The proposed approach is simple and largely intuitive: essentially the context matrix allows word-specific contextualization. Further, the work is clearly presented.\n\n+ At the very least the model does seem comparable in performance to various recent methods (as per Table 2), however as noted below the gains are marginal and I have some questions on the setup.\n\n+ The authors perform ablation experiments, which are always nice to see. \n\nWeak points\n---\n- I have a critical question for clarification in the experiments. The authors write 'Optimal hyperparameters are tuned with 10% of the training set on Yelp Review Full dataset, and identical hyperparameters are applied to all datasets' -- is this true for *all* models, or only the proposed approach? \n\n- The gains here appear to be consistent, but they seem marginal. The biggest gain achieved over all datasets is apparently .7, and most of the time the model very narrowly performs better (.2-.4 range). Moreoever, it is not clear if these results are averaged over multiple runs of SGD or not (variation due to initialization and stochastic estimation can account for up to 1 point in variance -- see \"A sensitivity analysis of (and practitioners guide to) CNNs...\" Zhang and Wallace, 2015.)\n\n- The related work section seems light. For instance, there is no discussion at all of LSTMs and their application to text classificatio (e.g., Tang et al., EMNLP 2015) -- although it is noted that the authors do compare against D-LSTM, or char-level CNNs for the same (see Zhang et al., NIPs 2015). Other relevant work not discussed includes Iyyer et al. (ACL 2015). In their respective ways, these papers address some of the same issues the authors consider here. \n\n- The two approaches to inducing the final region embedding (word-context and then context-word in sections 3.2 and 3.3, respectively) feel a bit ad-hoc. I would have appreciated more intuition behind these approaches. \n\nSmall comments\n---\nThere is a typo in Figure 4 -- \"Howerver\" should be \"However\"\n\n*** Update after author response ***\n\nThanks to the authors for their responses. My score is unchanged.", "The authors present a model for text classification. The parameters of the model are an embedding for each word and a local context unit. The local context unit can be seen as a filter for a convolutional layer, but which filter is used at location i depends on the word at location i (i.e. there is one filter per vocabulary word). 
After the filter is applied to the embeddings and after max pooling, the word-context region embeddings are summed and fed into a neural network for the classification task. The embeddings, the context units and the neural net parameters are trained jointly on a supervised text classification task. The authors also offer an alternative model, which changes the role of the embedding an the context unit, and results in context-word region embeddings. Here the embedding of word i is combined with the elements of the context units of words in the context. To get the region embeddings both model (word-context and context-word) combine attributes of the words (embeddings) with how their attributes should be emphasized or deemphasized based on nearby words (local context units and max pooling) while taking into account the relative position of the words in the context (columns of the context units). \n\nThe method beats existing methods for text classification including d-LSTMs , BoWs, and ngram TFIDFs on held out classification accuracy. the choice of baselines is convincing. What is the performance of the proposed method if the embeddings are initialized to pretrained word embeddings and a) trained for the classification task together with randomly initialized context units b) frozen to pretrained embeddings and only the context units are trained for the classification task?\n\nThe introduction was fine. Until page 3 the authors refer to the context units a couple of times without giving some simple explanation of what it could be. A simple explanation in the introduction would improve the writing.\nThe related work section only makes sense *after* there is at least a minimal explanation of what the local context units do. A simple explanation of the method, for example in the introduction, would then make the connections to CNNs more clear. Also, in the related work, the authors could include more citations (e.g. the d-LSTM and the CNN based methods from Table 2) and explain the qualitative differences between their method and existing ones.\n\nThe authors should consider adding equation numbers. The equation on the bottom of page 3 is fine, but the expressions in 3.2 and 3.3 are weird. A more concise explanation of the context-word region embeddings and the word-context region embeddings would be to instead give the equation for r_{i,c}. \n\nThe included baselines are extensive and the proposed method outperforms existing methods on most datasets. In section 4.5 the authors analyze region and embedding size, which are good analyses to include in the paper. Figure 2 and 3 could be next to each other to save space. \nI found the idea of multi region sizes interesting, but no description is given on how exactly they are combined. Since it works so well, maybe it could be promoted into the method section? Also, for each data set, which region size worked best?\n\nQualitative analysis: It would have been nice to see some analysis of whether the learned embeddings capture semantic similarities, both at the embedding level and at the region level. It would also be interesting to investigate the columns of the context units, with different columns somehow capturing the importance of relative position. Are there some words for which all columns are similar meaning that their position is less relevant in how they affect nearby words? And then for other words with variation along the columns of the context units, do their context units modulate the embedding more when they are closer or further away? 
\n\nPros:\n + simple model\n + strong quantitative results\n\nCons:\n - notation (i.e. precise definition of r_{i,c})\n - qualitative analysis could be extended\n - writing could be improved ", "() Summary\nIn this paper, the authors introduced a new simple model for text classification, which obtains state of the art results on several benchmark. The main contribution of the paper is to propose a new technique to learn vector representation of fixed-size text regions of up to a few words. In addition to learning a vector for each word of the vocabulary, the authors propose to also learn a \"context unit\" of size d x K, where d is the embedding size and K the region size. Thus, the model also have a vector representation for pair of word and position in the region. Then, given a region of K words, its vector representation is obtained by taking the elementwise product of the \"context unit\" of the middle word and the matrix obtained by concatenating the K vectors of words appearing in the region (the authors also propose a second model where the role of word vectors and \"context\" vectors are exchanged). The max-pooling operation is then used to obtain a vector representation of size d. Then a linear classifier is applied on top of the sum of the region embeddings. The authors then compare their approach to previous work on the 8 datasets introduced by Zhang et al. (2015). They obtain state of the art results on most of the datasets. They also perform some analysis of their models, such as the influence of the region size, embedding size, or replacing the \"context units\" vector by a scalar. The authors also provide some visualisation of the parameters of their model.\n\n() Discussion\nOverall, I think that the proposed method is sound and well justified. The empirical evaluations, analysis and comparisons to existing methods are well executed. I liked the fact that the proposed model is very simple, yet very competitive compared to the state-of-the-art. I suspect that the model is also computationally efficient: can the authors report training time for different datasets? I think that it would make the paper stronger. One of the main limitations of the model, as stated by the authors, is its number of parameters. Could the authors also report these?\n\nWhile the paper is fairly easy to read (because the method is simple and Figure 1 helps understanding the model), I think that copy editing is needed. Indeed, the papers contains many typos (I have listed a few), as well as ungrammatical sentences. I also think that a discussion of the \"attention is all you need\" paper by Vaswani et al. is needed, as both articles seem strongly related.\n\nAs a minor comment, I advise the authors to use a different letter for \"word embeddings\" and the \"projected word embeddings\" (equation at the bottom of page 3). 
It would also make the paper more clear.\n\n() Pros / Cons:\n+ simple yet powerful method for text classification\n+ strong experimental results\n+ ablation study / analysis of influence of parameters\n- writing of the paper\n- missing discussion to the \"attention is all you need paper\", which seems highly relevant\n\n() Typos:\nPage 1\n\"a support vectors machineS\" -> \"a support vector machine\"\n\"performs good\" -> \"performs well\"\n\"the n-grams was widely\" -> \"n-grams were widely\"\n\"to apply large region size\" -> \"to apply to large region size\"\n\"are trained separately\" -> \"do not share parameters\"\n\nPage 2\n\"convolutional neural networks(CNN)\" -> \"convolutional neural networks (CNN)\"\n\"related works\" -> \"related work\"\n\"effective in Wang and Manning\" -> \"effective by Wang and Manning\"\n\"applied on text classification\" -> \"applied to text classification\"\n\"shard(word independent)\" -> \"shard (word independent)\"\n\nPage 3\n\"can be treat\" -> \"can be treated\"\n\"fixed length continues subsequence\" -> \"fixed length contiguous subsequence\"\n\"w_i stands for the\" -> \"w_i standing for the\"\n\"which both the unit\" -> \"where both the unit\"\n\"in vocabulary\" -> \"in the vocabulary\"\n\netc...", "Thank for your suggestions. We will explain your concerns point by point.\n\n1) \"What is the performance of the proposed method if the embeddings are initialized to pretrained word embeddings and a) trained for the classification task together with randomly initialized context units b) frozen to pretrained embeddings and only the context units are trained for the classification task?\"\nWe evaluated the experiments with pretrained word embeddings on Yelp.F and Yelp.P dataset:\n\n*Datasets* *Method* Best epoch(start from 0)* *Accuracy*\nYelp F. Random 1 0.649500\nYelp F. Finetune 1 0.638580\nYelp F. Frozen 2 0.633060\nYelp P. Random 2 0.963895\nYelp P. Finetune 1 0.962500\nYelp P. Frozen 2 0.960842\n\nThe word embeddings are pre-trained by Wikipedia+Gigaword 5 glove with 200 dimensions, region size is 7. The result of a) is similar with randomly initialized word embeddings, and result of b) is slightly worse.\nIntuitively, pre-trained word embeddings should have a role, but maybe should not be applied directly. In fact, we will explore the way to apply local context unit to semi-supervised and unsupervised learning in our future work.\n\n2) \"Until page 3 the authors refer to the context units a couple of times without giving some simple explanation of what it could be. A simple explanation in the introduction would improve the writing.\" && \"the authors could include more citations (e.g. the d-LSTM and the CNN based methods from Table 2) and explain the qualitative differences between their method and existing ones.\"\n\nWe have deeply rewrited the introduction section and added citations related to ours.\n\n3)\"The authors should consider adding equation numbers. The equation on the bottom of page 3 is fine, but the expressions in 3.2 and 3.3 are weird. A more concise explanation of the context-word region embeddings and the word-context region embeddings would be to instead give the equation for r_{i,c}. \"\n\nThanks for your suggestions, equation numbers have been added, and expressions in 3.2 and 3.3 have been updated. We also add explanations and discussions in 3.2 to make this paper clearer.\n \n4)\"I found the idea of multi region sizes interesting, but no description is given on how exactly they are combined. 
Since it works so well, maybe it could be promoted into the method section? Also, for each data set, which region size worked best?\"\n\nDetailed information about multi region sizes has been added at section 4.5.1. The gains of the multi region sizes method are not so large. As a natural extend for our core idea, we prefer to discuss it in the exploratory experiments sections. Performances for each data set with different region size have been reported in Appendix.A now.\n\n5)\"Are there some words for which all columns are similar meaning that their position is less relevant in how they affect nearby words? And then for other words with variation along the columns of the context units, do their context units modulate the embedding more when they are closer or further away? \"\n\nWe have discussed this issue at section 4.5.4 and added words whose positions are less relevant in how they affect nearby words, which is consistent with our previous hypothesis. As for the second question, we have evaluated the entire vocabulary from the perspective of statistics and no obvious differential distribution built on different columns. In fact, it seems size of half of region characters expression patterns more, instead of strong distance distinction.\n\n", "Thank you very much for suggestions and meticulous corrections for this paper. Your description of our work is accurate. We have addressed each of your comments:\n\n1) Didn't report training time.\nWe have reported the training time for each dataset with different region sizes in appendix A.\n\n2) Didn't report number of parameters.\nParameters have been discussed in 4.5.1, and reported in appendix A for different settings.\n\n3) Typos and ungrammatical sentences in the paper.\nWe have greatly improved the writing of this paper, including all the typos you pointed out and others textual errors. We will continue to improve the writing quality before the camera ready version.\n\n4) The lack of discussion of the \"attention is all you need\" paper.\nThis work has been discussed now in the related work section.\n\n5) Should use different letters for \"word embeddings\" and the \"projected word embeddings\"\nWe have improved the notations to make the paper clearer.\n", "Thanks for your valuable comments. We will explain your concerns point by point:\n\n1)\"The authors write 'Optimal hyperparameters are tuned with 10% of the training set on Yelp Review Full dataset, and identical hyperparameters are applied to all datasets' -- is this true for *all* models, or only the proposed approach? \"\n\nThis is true only for the proposed approach. Results of the previous methods are their best results reported in corresponding previous papers, in which the hyperparameters are not identical for each datasets. (reference section 3.3 in D-LSTM, Table 5 in VDCNN, Table 1 and second paragraph of section 3.1 in FastText, section 4.2 in char-CRNN).\n\n\n2) \"The gains here appear to be consistent, but they seem marginal. The biggest gain achieved over all datasets is apparently .7, and most of the time the model very narrowly performs better (.2-.4 range). Moreover, it is not clear if these results are averaged over multiple runs of SGD or not.\"\n\nThe result are averaged over multiple runs. We have experimented the performance variance in independent tries on yelp datasets, the results are reported in Appendix. 
A.\n\nIn this paper, we want to show that, with the ability of word-specific contextualization given by our proposed local context unit, our simple model can consistently beat or achieve the state-of-the-art results on almost all text classification tasks compared to previous methods (traditional and deep models). This gives us an insight into representing and understanding natural language with word-specific context units in our future work. Therefore, we didn't use any trivial tricks (e.g. multi-region-size, which has been shown to improve performance) or extra regularization methods. In fact, the gains are not so marginal, since the best previous method's gains are similar to and even smaller than ours on some datasets.\n\n3) The related work section seems light.\n\nWe have improved and completed the related work section.\n\n4) The two approaches to inducing the final region embedding (word-context and then context-word in sections 3.2 and 3.3, respectively) feel a bit ad-hoc. I would have appreciated more intuition behind these approaches. \n\nThe main intuition behind composing the region embeddings in two ways is the following: we consider that the semantics of a given region are derived from the mutual influences of the words in that region. Since the regions can be regarded as snapshots of a window sliding over a document, whose middle words are contiguous, we can focus either on the middle word's influence on the context words, or on the context words' influence on the middle word. According to the property of the local context unit introduced in section 3.1, embeddings and units are used in two ways to address these influences, respectively. Finally, to extract the most predictive information and produce a fixed-length vector representation, a max pooling operation is used.\n\nWe also revised sections 3.2 and 3.3 of this paper.", "Thank you very much for reviewing our submission and making so many valuable comments. We also thank everyone for their attention to our work and their experiments in reproducing our results.\n\nWe have addressed all the issues you pointed out, and now the submission is of quite high quality. Could you please review our submission again? We hope that our submission will be accepted.\n\nWith the help of our colleagues, we have significantly improved the writing of the paper. The title has been modified to better describe the main contribution of the work. The abstract and body have also been significantly revised accordingly. \n\nWe further plan to have a native speaker proofread the paper, if it is accepted. \n\nBelow is a summary of the major changes.\n\n1. The title has been changed to “A new method of region embedding for text classification”.\n\n2. The abstract and introduction have been deeply revised. The writing has been improved, and a clearer explanation of the local context unit has been added to the introduction.\n\n3. In the related work section, a discussion of the \"Attention is all you need\" paper and citations including xx have been added. \n\n4. In the method section, we have improved the notation, including the notation for the projected embedding (e_{w_i}^j -> p_w{w_i}^j) and the region size (c -> 2 * c + 1). We also added equation numbers and refined the equations in 3.2 and 3.3. More explanations and discussion of the intuition behind the approaches by which we produce the region embeddings have been added in 3.2.\n\n5. 
We have added information about the datasets (average document lengths), implementation details (multi-region-sizes mode), more context-unit visualization cases, and experimental results (training time and parameter counts, best region sizes) to the experiments section and Appendix A. Figure 2 and Figure 3 are now placed next to each other to save space.\n", "Thank you very much for the reproduction experiments and suggestions about this paper. We have updated our code and discussed the common reproducibility issues. \nIn summary, the within-1% variance gap can be explained by the published configuration defaulting to 90% of the training data, while we used 100% for the results in the paper. \nPlease see our latest comments for more detailed information.\n\nThank you for pointing this out; we have fixed the stop condition issue in train.py and refined the code.\n", "Thank you very much for the reproduction experiments and suggestions about this paper. We have updated our code and discussed the common reproducibility issues. \nIn summary, the within-1% variance gap can be explained by the published configuration defaulting to 90% of the training data, while we used 100% for the results in the paper. \nThe slow convergence may be caused by the learning rate: in the example configuration it was 1e-5, whereas the paper declares 1e-4.\nThe significantly different reproduction results on DBPedia and AG News can be explained by a preprocessing bug in our published code, which we have now fixed.\nPlease see our latest comments for more detailed information.\n\nWe have added the scalar-mode context unit to our code; it should not take 2 days to train the model to convergence. Could you refer to our implementation or share yours so we can find out the problem?\n", "Thank you very much for the reproduction experiments and suggestions about this paper. We have updated our code and discussed the common reproducibility issues. \nIn summary, the within-1% variance gap can be explained by the published configuration defaulting to 90% of the training data, while we used 100% for the results in the paper. \nThe slow convergence may be caused by the learning rate: in the example configuration it was 1e-5, whereas the paper declares 1e-4.\nThe significantly different reproduction results on DBPedia and AG News can be explained by a preprocessing bug in our published code, which we have now fixed.\nPlease see our latest comments for more detailed information.\n\nCould you please share which datasets you applied FastText Uni. & Bigram to with different embedding sizes? Since some datasets, such as DBPedia, were preprocessed incorrectly, experiments on these datasets may lead to different conclusions. \n\nInterestingly, we found there is a hidden part in the .tex file (the %experiment notes part) of the original FastText paper (https://arxiv.org/abs/1607.01759, click the other formats link). Embedding size 10 is better than 100 for both unigram and bigram FastText.\n\nModel && AG & Sogou & DBP & Yelp P. & Yelp F. & Yah. A. & Amz. F. & Amz. P. 
\\\\\n%Ours, $h=100$ && 91.0 & 92.6 & 98.2 & 92.9 & 59.6 & 70.7 & 55.3 & 90.9 \\\\\n%Ours, $h=100$, bigram && 92.4 & 96.4 & 98.5 & 95.7 & 63.7 & 71.9 & 59.2 & 94.5 \\\\\n\\texttt{fastText}, $h=10$ && 91.5 & 93.9 & 98.1 & 93.8 & 60.4 & 72.0 & 55.8 & 91.2 \\\\\n\\texttt{fastText}, $h=10$, bigram && 92.5 & 96.8 & 98.6 & 95.7 & 63.9 & 72.3 & 60.2 & 94.6 \\\\\n\nFrom this result we can make the consistent conclusion with our paper.\nWe have add multi-region size mode in our code and we will add more implement details in our paper, thank you for your suggestion! \nIs there a typo of the results you reported on Yahoo! Answers and Yelp Review? the numbers seems not similar with the results reported in this paper.\n", "Thank you very much for the reproducing experiments. We have updated our code and discussed the common issues about the reproducibility. \nIn summary, within 1% variance gap can be explained by the 90% training data in the published configure as default, while we use 100% in the paper results. \nThe problem of slow convergence may be caused by the learning rate. In the example configure it was 1e-5 which declared 1e-4 in the paper.\nPlease see our latest comments to get more detailed information.\nThank you for your suggestions, we have refined the code with more guild-lines and comments.", "Thank you very much for the reproducing experiments. We have updated our code and discussed the common issues about the reproducibility. \n\nIn summary, within 1% variance gap can be explained by the 90% training data in the published configure as default, while we use 100% in the paper results. \n\nThe problem of slow convergence may be caused by the learning rate. In the example configure it was 1e-5 which declared 1e-4 in the paper.\n\nPlease see our latest comments to get more detailed information.\n\nThank you for your suggestions about participating in some competitions, we will consider it!", "Thank you very much for the effort on reproducing experiments and suggestions for this paper. \n\nAlthough results on most datasets were reported reproducible, we have updated our code to reproduce more consistent experimental results straightly(including the exploratory experiments). Due to the time limit, the previous version of the shared code is not complete clear enough, we have update the code: 1)fixed a bug in the preprocess code which leads significant difference on DBPedia and AG News; 2)added exploratory experiments module, 3)published training configures for each dataset 4)added guild-lines and comments for the code. The latest code can be pulled from the same repository. We will also update this paper in a few days.\n \nWe reply issues about reproducibility here together:\n1. Significant difference of reproducing results on DBPedia and AG News:\nWe find a bug in the public version of prepare.py that we treat the raw csv input files as two-columns for all datasets, while some of them are not. \nThis bug is caused by our negligence during migrate the code from internal version(which worked as expected) to public version. Unfortunately, we only verified the public version code on Yelp datasets which are two-columns files. This bug may lead the significant different reproducing results on DBPedia and AG News. We are very sorry for this bug in the shared code and now it has been fixed and verified reproducible.\n\n2. 
1% variance on most reproducing result:\nAlthough similar results(variance within 1% on accuracy) have been reported on most datasets, the training data were defaultly set to 90% training data to tune the hyperparameters in the example configure. Since we only tuned the hyperparameters on Yelp F and applied these hyperparameters on all datasets, the results reported in this paper are trained by 100% training data, this can explain the variance on accuracy in reproducing results. And we have set 100% training data as default value in the new version of config.\n\n3. Training time:\nWe have listed the training time and best epoch with different region sizes of each dataset in our paper(will be upload in a few days). In our experiments, it usually converges at 2 or 3 epoch with learning rate 1e-4 instead of more than 20 epochs, we are not sure whether it misled people that the initial learning rate in the example configure was 1e-5 which declared 1e-4 in the paper. Ignored the extra look up operation, the computational complexities of the proposed methods are basically the same magnitude with CNN. However, the shared code was not well optimized which may lead somehow slower in practice.\n\n4. Hyperparameters:\nWe have tuned hyperparameters on Yelp. F. and applied them on all datasets, so there may be better hyperparameters for a given dataset. We chose the region size as 7 since we found that the performances with region size 7 and 9 were almost the same and 7 needed less model parameters, and similar with the embedding size.\n\n5. Reproduction for baseline methods:\nWe have implemented some baseline models and achieved similar results with small fluctuations, considering the consistency of the comparison and lack of implementation details of some models, we reported the best results from previous works instead of reproducing all the baseline models.\n", "This paper proposes two novel text classification models, Word-Context and Context-Word Region Embeddings, where every word is represented by a word embedding vector and a local context unit matrix. The local context unit is designed to capture the semantic and syntactic information of a word in a given context. The columns of the local context unit are used to interact with the embeddings of the words of the same region, creating projected word embeddings. Max pooling is applied to the projected word embeddings to create region embeddings which are context dependent. The Word-Context method is based on the interaction between the local context unit of a given word with the surrounding word embeddings, while the Context-Word is based on the interaction between the embedding of a given word with the surrounding local context units. A document is represented by a weighted summation of all its region embeddings, which is fed to an upper Fully-Connected layer for text classification. The proposed methods are designed to outperform some of the most commonly used methods in the literature such as bag of words, ngrams, bigram-FastText, D-LSTM among others. \n\nThe authors provide code to facilitate the reproduction of their results. Although mostly functional, it required some tweaking before it was ready to go. The code was used to reproduce the results found on the Yelp Full Review dataset, since the model's configuration parameters seem to have already been specified in the code. The results were found to be similar to those published by the authors (within 1\\% variance). 
However, training the model can be computationally expensive if the required hardware is not available. Using a machine with 24 vCPUs and no powerful GPUs, the model required a training time of 1.2 hours per epoch. This presented serious constraints for tweaking the model's parameters. \n\nCode for baseline implementations can be found in projects associated with the papers cited by the authors, but some of them have specific hardware requirements, and a modification of the input data format. The hyperparameters for implementing these baseline linear classifiers are in some cases left vague (such as the logistic regression of the n-gram TFIDF method). Thus, many baseline methods had to be inferred and implemented independently, and implementing the new methods required certain hardware resources to take advantage of the torch-driven parallelized implementation. It was thus challenging to reproduce the baseline results presented in this paper.\n\nFinally, we introduce a new dataset on sentiment analysis of movie reviews from Kaggle to evaluate if this method can generalize well to other tasks. The dataset was particularly interesting since it had an imbalanced class distribution, and since it is associated to an online competition. The best published accuracy for this competition is 76.5\\%. Although upon inspection this dataset had many noisy samples, word-context model resulted in a 54.02\\% accuracy after twenty epochs of training, putting it slightly above the majority class prediction baseline. However, the test accuracy was constantly increasing until the last epoch, so if left longer the model might have achieved better classification results, or the convergence would have been faster if the parameters were tweaked. The main limitation here was again, computational complexity. We encourage the authors to participate in such competitions.\n\nStrengths:\n+ Intuitive concept, clear paper supported by figures and graphs\n+ Open source code\nWeaknesses:\n- Code completeness and clarity\n- Computational Complexity\nSmall Comments:\n~ typos\n~ Data statistics for the amazon polarity and full datasets in table 1 are interchanged", "The authors behind the paper \"Bag of region embeddings via local context unit for text classification\" have come up with a new method to help classify text by looking at words' contextual effect on surrounding regions and how that changes with their relative positions. This is in contrast to just looking at the distribution of words and/or sequences of words. The authors were able to come up with some very good and state-of-the-art results. We have worked with the paper and the code made public by the team, in order to reproduce some of their results.\n\nThe report is very well structured, and it coherently develops the model contained within. Everything follows naturally with respect to these initial ideas. They show a lot of good examples of the power of their model, for example how some words contribute to the sentiment of the sentence, in different ways. The report does however have some grammatical errors, and some sections that did not read well.\n\nWe found that the results obtained by the authors were indeed reproducible, since we, using the same method and code, got very similar results. The authors made their code publicly available for all reviewers to see, in the interests of reproducibility, which is definitely an asset for their credibility. Furthermore, the authors used publicly available data that was very easy to find. 
They were also able to compare their results to other more established models, which have all classified and obtained results on the same data. The authors tried two different approaches using the same model, and found that one was much more effective than the other, by comparing the results. One of their models got state-of-the-art results on a range of the datasets.\n\nThe Github code given by the authors had some good general instructions on how to run the pre-training and the actual training. Using Python 2.7.1 with Tensorflow and standard libraries such as Scipy and Numpy, we think that it is very accessible to reproduce their results, hardware considerations included.\n\nThe whole idea makes a lot of sense intuitively and mathematically, and we give the authors a lot of credit for this. They make it simple to understand. Even though the authors explained the feature selection model with good detail, and strong mathematical reasoning, we do not think the detail on the actual implementation is appropriate. We consider the lack of detail on the implementation a shortcoming of the paper, and the reproducibility of the results. The code given by the authors is very good, especially the hyper-parameter configuration file, which made it possible for us to reproduce the results. We did not find that we could run the code provided straight from Github, due to a very simple problem that was easily fixed. To make the code more easily understandable it would have been beneficial to add some more comments, since it is a non-trivial pre-training model, particularly given the lack of implementation detail in the paper. We do not think it would have been possible for us to reproduce the results without having been given the code.\n\nOur results:\n\nWe tried to reproduce the results of the Word-context model which is the better one of the two models made by the authors. We tried only to reproduce the results on the two datasets, Yelp Full-Review and Yelp Polarity, and we obtained accuracies which were very comparable to the results published in the paper (within 0.5-0.8% of that published). We consider this result successful in our effort to reproduce the results. The key limitation of our work was the hardware, because the authors used very powerful hardware that was not available to us. With the hyperparameters given by the authors, and limited to one epoch with the hardware accessible to us, we found training to take around 12 hours. This forced us to slightly modify some of the hyperparameters, in order to run the code in a reasonable amount of time. ", "New Proposed method of Text Classification\nSummary\nWithin this paper, the author proposed two fresh text classification methods that learn task specific region embeddings without hand crafted features. In the model, there are two attributes for every word, which are generally word embeddings representing regions and local context unit which interacts with the word's context. After learning the bag of region embeddings which is either word-context embedding or context-word embedding, a linear classifier is used for classification. The authors implemented the context unit model and compared them with other 8 baseline methods using 8 datasets and showed the beating results over all previous models on all datasets except VDCNN on Amazon datasets. Besides, they explored the effects of a hyper-parameter of selecting region size since small region size loses patterns of long distances and large region size gets more noises. 
Additionally, they experimented with the effect of context window size and embedding size. Finally, they visualized the contribution of each word and selected phrase to the classification.\nDiscussion:\nThrough repeated validation of both the baseline methods and the proposed methods, we found that some baseline results are reproducible while others are not. To be more specific, we derived results comparable to those referenced in the paper using the FastText Bigram and Unigram methods. Nevertheless, when we tried to fit the DBPedia dataset with the Bag of Words or Ngrams-TFIDF model, the prediction accuracy was low, reaching only around 70% where the authors claimed it would reach 96.6%. Thus, we suggest the authors manually implement several baselines to ensure the reliability of the referenced results. After reproducing different embedding sizes on FastText Uni. & Bigram, we found that the prediction accuracy does not decline as the embedding size increases, so the claimed advantage of the proposed method over FastText in avoiding over-fitting is not obvious.\nFor the experiments using local context unit models, we reproduced the baseline run with the optimal parameters on four datasets and studied the effects of embedding sizes and window sizes. We obtained similar results on the Yahoo! Answers and Yelp Review Full datasets, with 64.6% and 68.9% respectively. When training the Word-Context model, the best result we obtained on DBPedia within 20 epochs was 71.7%, differing significantly from the claimed 98.9%. We carefully checked the source code and performed multiple trainings with different random initializations, but the accuracy did not improve. We are unsure whether this is caused by an implementation issue or by discrepancies in the training parameters. Moreover, we obtained 87% accuracy on AG's News within 39 epochs, 5% less than the claimed result.\nDue to limited computing power, when validating the effect of window sizes we obtained results with only 8-12 epochs, except for window sizes 1 and 7, which ran for more than 20 epochs. However, the rate of accuracy improvement between adjacent epochs drops towards the end of training on each dataset, and a distribution similar to the one illustrated in the paper is observed: 61.01%, 63.00%, 63.92%, 64.6%, 63.15%. We can thus conclude that the variation across window sizes is reproducible. However, the mixed region sizes are not reproduced because their implementation is not specified by either the paper or the source code.\nWe failed to run FastText (Win-pool) and the W.C. region embedding with a one-dimensional context unit because of implementation issues that were hard to overcome. We suggest the authors release the implementation for these variants or provide more detailed guidelines for customizing the code.\nStrength:\n+ The proposed model is very intuitive, and the presentation of the architecture of the C.W. & W.C. region embeddings is clear and easy to follow.\n+ The exploratory experiments on the effects of embedding size and region size, as well as the visualization, make the paper more vivid.\nWeakness:\n- The authors did not manually implement the baseline methods across the 8 datasets.\n", "This paper proposed a new method for the document classification task. In addition to a word embedding, it assigns a context unit, which is a matrix, to each word. Given a region, the method computes element-wise multiplications between this matrix's columns and the word embedding vectors within the region. The results are a set of projected vectors. 
Then they apply element-wise max-pooling to these projected vectors. In the end, the model extracts a feature vector from each window of N consecutive words and then computes the average of these vectors (or a weighted sum). This final vector can be regarded as the feature vector of the entire document. Finally, the model feeds the vector into a fully connected layer to obtain the classification results. This feature construction process is similar to that of the N-gram model, since both extract a feature vector from each region of fixed length, except that, in the N-gram model, each sequence of N consecutive words is used as a feature directly. \nThe authors justified the effectiveness of their model by comparing the test set accuracy to several well-known baseline models on 8 different data sets. Based on their results, the model outperformed all previous methods on all data sets except for VDCNN on two Amazon data sets. In general, the model achieved outstanding overall performance in terms of accuracy. The paper also carried out several exploratory experiments to study the effect of each hyper-parameter on the model. The authors compared the effectiveness of the model under various region sizes and embedding sizes and concluded from the results that their model is robust to over-fitting. The paper also investigated the effect of the context unit by successively removing each independent component and comparing the performance of each resulting model. It showed that accuracy increased every time one more component was added to the model, and a significant improvement occurred when the context unit was fully utilized. \n\nOur result:\nFor the Yelp Review Polarity data set, our test set accuracy does not match the reported accuracy, yet they are consistent to some extent. Despite the subtle disagreement, our result still exceeds all test accuracies from previous work by at least 0.6\%. When comparing the results on the Yelp Review Full data set, however, we observe a large discrepancy. The authors claimed that their model achieved a test set accuracy of 64.9% and as such beat all previous models, whereas our experiment yields an accuracy of 64.6%, which falls short of the claimed 64.9%. Moreover, based on our result, the proposed model also fails to outperform VDCNN. \nWe notice that, for all region sizes, we obtained a test set accuracy lower than the one reported by the authors. Based on our result, the best performance is achieved when the region size is set to 9, as opposed to the paper's finding that performance continues to increase with region size up to 7. Nevertheless, as the region size increases, model performance in both sets of results first improves dramatically and then begins to decline after some point.\n\nPros: \n-The paper is easy to understand. Moreover, the authors used a figure to illustrate their model structure, which is helpful.\n-The model and the idea are simple, which makes the model easy to interpret.\n-Code is provided, which makes it easy to reproduce the results. \n-The paper provided results and comparisons on several benchmarks against several state-of-the-art models. \nCons: \n-The computational time for each task was not shown in this paper. If the authors provided the running time for different choices of hyper-parameters, it would help readers choose the combination of hyper-parameters that best fits their computational environment when time is limited. 
\n-The hyper-parameter settings used to produce the results in the paper were not clearly stated. For example, in Table 2 of this paper the number of epochs is not given. There is also ambiguity in all the numerical results: the accuracy can be interpreted either as the average accuracy over several repeated runs of the task or as the highest accuracy obtained from running the same task repeatedly. \n-The open source code has one major issue where the stop condition in trainer.py/function train() causes the whole training process to be forcefully terminated with an error before the test and dev accuracy of the last epoch is computed. \n-The naming of variables in this paper is inconsistent and does not align with the variable names in the open source code; for example, the hyper-parameter \"region size\" is renamed \"window size\" in section 4.5.2.\n", "This paper proposes a new technique to learn a vector representation of local context information, where the model learns a separate target embedding and context embedding for each word in a sequence. Under the proposed framework, there are two models: Context-Word and Word-Context. Word-Context uses the same context embedding for the target word on each context word, and Context-Word uses the same target embedding for each context word. Max pooling is applied to get the final word representation. The authors evaluate the quality of the learned embeddings on text classification tasks, where the word and context embeddings are fed directly into a linear classifier as input. The embeddings for the target word and context words and the neural net parameters are trained jointly on the supervised classification task. \n\nWe conduct experiments using both Word-Context and Context-Word on 4 out of the 8 datasets: Yelp Full Review, Yelp Review Polarity, AG news and the DBP dataset. After running the experiments with the details provided, we compare our results against the paper's claims and conclude that the proposed model is mostly as effective as the paper claimed (variance within 1% in accuracy), while in some cases we get results worse than those reported and are not able to reproduce them. \n\nFirst, we reproduced the text classification tasks on the 4 datasets mentioned above. We find that there are small fluctuations (around 0.5% to 1%) in each result, but the results are in general consistent with the paper on all datasets except DBP and AG news. Surprisingly, our results on the DBP dataset and AG News are not consistent with the authors': our result is around 75% while the paper reports over 98% on DBP, and 88% while the paper reports 92% on AG news. We find no obvious reason for the model to fail, and thus the experiments on those datasets are not reproducible. \n\nNext, we test several reported baselines on the full datasets. Due to time and computation limits, we reproduce 4 out of 8 baselines: Bag of Words (BoW), Bag of Ngrams, Bag of Ngrams with TFIDF and FastText.\nThe BoW, Ngram and Ngrams with TFIDF baselines are easy to implement with the Scikit-learn library, but we find our results significantly lower than the reported results, by an average of 5-7% in classification accuracy. The code for FastText is publicly available online, and our reproduced results are in line with the claimed results for all datasets.\n\nIn addition, we conduct the experiments for analytical purposes as described in the paper. 
In the paper, the effect of region size and embedding size are studied separately, showing that a region size of 7 (3 words to the left of the target word and 3 words to the right) and an embedding of dimension 128 produces the best result. Here we also reproduce similar results on the same dataset following the instructions given in the paper. We find that the optimal region size is around 3-4 words on each side, and slight numerical difference (within 0.5% variation) might exist depending on the dataset and other hyperparameters such as embedding size. For example, on Word-Context model with embedding size 200 on Yelp Full Review, a region size of 9 turns out to yield the best results. \n\nFurthermore, we conduct the experiments on the settings of different embedding size up until our time and computation power limit, and we find that the accuracy score is about 0.5% worse than the claim on large embedding sizes but about 1% better than the score reported on the paper.\n\nLastly, we examine the effect of context unit by comparing the performance of the proposed model with FastText baseline. As described in the paper, we train the model to learn a scalar representation of the word and its context, and compare it with the normal vector version and baselines. However, in practice it takes more than 2 days to train (on GPU) the model until convergence. Therefore we are not yet able to draw conclusion about the effectiveness of the model with scalar representation model compared to the one with vector form.\n\nWe also record the average cost for each model on each dataset. On large dataset such as Yelp Review, both proposed models take about 7-10 hours to converge with the reported hyperparameters, while on small datasets such as AG News, the models converge in 2-3 hours. The model learns relatively slowly compared to some baseline models such as FastText, which converges within half an hour. This is expected, given the number of parameters the proposed models have.\n\nOverall, we conclude that the proposed methods are well justified and the empirical evaluations and analysis are well executed. Most results are reproducible within certain time and computation limit, but there also exist certain experiments that are not reproducible.", "Thank you for asking. \n\nIn this paper, previous works' experimental results in Table 2 are reported from Joulin et al.(2016). From our knowledge, we believe FastText and the character based methods did not use pre-trained word embeddings, and we are not very sure whether the embeddings are pre-trained in Discriminative LSTM from now on. Performances on these eight datasets of word based CNN can be found in Zhang & LeCun (2015)(with and without pre-trained word embeddings), which show the effect of whether the embeddings are pre-trained. \n\nMore details about those experiments under different conditions can be found in the original papers for each method. ", "Thank you so much for the clarifications! What about the embeddings of the competing methods, e.g. Fasttext? Are they pretrained? 
Are they retrained for the classification task at hand?", "Thanks for your question.\n\nIn this paper, we aim to produce task-related region embeddings that can be used to improve the performance of classification tasks, so the word embedding matrix E and the local context unit matrix U are randomly initialized and jointly trained as classification model parameters. Here we use the cross entropy loss as the classification loss.\n\nHaving noticed the power of the local context unit for learning task-related region embeddings, we are interested in exploring its applicability to semi-supervised or unsupervised learning in future work, but in this paper we focus only on developing a new mechanism to extract predictive features from small text regions.", "How are you training the local context units? Which loss are you optimizing to get the embeddings E and context parameters U?", "Thank you for your comments and suggestions. We generally agree with your understanding of this paper; if there are any problems with the implementation, please let us know.\n\nAbout your questions on the datasets:\n\n1) The average document lengths for the different datasets are as follows; we will add them in the next revision:\n\n *Dataset* *Average Document Length*\n Yelp P. 156.153767857\n Yelp F. 157.641587692\n Yah.A. 111.638106429\n Sogou 578.654484444\n DBP 55.3340017857\n Amz.P. 90.8814119444\n Amz.F. 92.7758653333\n AG 43.6560583333\n\nIt is worth mentioning that the motivation of our method is not to capture long-distance dependencies, but to capture the local features of the text from a new perspective (although more complex upper layers like RNNs can be applied to capture long-term dependencies in the document, we simply used a bag-of-region-embeddings upper layer).\n\n\n2) The datasets are built from real-world sources and are widely used to evaluate model performance on text classification tasks (see the references of this paper for details). On some datasets such as AG and Sogou, BoW and Bag-of-Ngrams can achieve 90%+ accuracy, which indeed shows that these datasets are naturally easy to separate. However, on some other more difficult datasets, such as multi-level sentiment analysis (Yelp.F., Amz.F) and QA matching (Yah.A), our method achieves state-of-the-art results against the baselines and other deep models (LSTMs, CNNs), and yields a significant performance gain over the BoW and ngrams baselines (5%-8%), which shows that our method captures local text features better.", "Hey authors\n\nThank you for sharing the implementation of the paper - it goes a long way towards ensuring reproducibility.\n\nMy understanding of the paper is the following: for each word in the vocabulary, along with learning a word vector, learn a context matrix. This context matrix would introduce a soft-attention kind of effect and is expected to be more powerful than using just a context vector for capturing the context. Please correct me if there is something wrong in my understanding :)\n\nIn the experiments section, the paper uses 8 different datasets. It would be helpful if the paper also mentioned the average document length for the different datasets. That could be a crude proxy for understanding how important it is to capture the long-term dependencies in the document. Further, the simple baselines of BoW and ngrams give decent performance (around 90% for 5 datasets). Could it be the case that the datasets are not very \"difficult\"? 
I have not used these datasets and would be glad to know what the authors feel about this.", "To make it easier for reviewers to reproduce our results, we share the implementation of our method here (https://github.com/text-representation/local-context-unit). We will formally open-source our code upon publication of the paper." ]
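The reviews above all summarize the same region-embedding computation: each word carries, besides its embedding, a local context unit matrix whose columns gate the embeddings of neighboring words element-wise; the gated vectors are max-pooled into a region embedding, region embeddings are averaged over the document, and a linear classifier produces the class scores. The sketch below is a minimal NumPy illustration of that word-context computation, included for orientation only; the array names, shapes, and initialization are our own assumptions and do not reproduce the authors' released code.

```python
# Minimal NumPy sketch of the word-context region embedding described in the reviews above.
# Shapes and names are illustrative assumptions, not the authors' released code.
import numpy as np

V, D, R, C = 1000, 128, 7, 5          # vocab size, embedding dim, region size (odd), classes
half = R // 2

rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, D))        # word embeddings
U = rng.normal(scale=0.1, size=(V, R, D))     # local context units: one (R, D) unit per word
W, b = rng.normal(scale=0.1, size=(D, C)), np.zeros(C)   # linear classifier

def document_logits(token_ids):
    regions = []
    for i in range(half, len(token_ids) - half):
        w = token_ids[i]
        # One D-dimensional slice of the middle word's context unit (what the reviews call
        # its "columns") gates each neighbor's embedding element-wise.
        projected = [U[w, j] * E[token_ids[i - half + j]] for j in range(R)]
        regions.append(np.max(projected, axis=0))        # element-wise max-pool over the region
    doc_vec = np.mean(regions, axis=0)                    # bag of region embeddings
    return doc_vec @ W + b                                # linear classifier scores

print(document_logits(rng.integers(0, V, size=30)).shape)  # -> (5,)
```

The context-word variant discussed in the reviews would instead apply each neighboring word's context-unit slice to the middle word's embedding before the same max-pooling step.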
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "r1sXToOgf", "Sy6ClHqef", "ryjxrEwlM", "iclr_2018_BkSDMA36Z", "B1QjRyMGG", "Sy-OR3-fM", "Hk992WfGz", "SyUEJzzfM", "SklnG7Mff", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z", "H1cW4Kulf", "HJl5GqvlG", "ryecqWvxG", "iclr_2018_BkSDMA36Z", "ryjWc4Jgz", "iclr_2018_BkSDMA36Z", "iclr_2018_BkSDMA36Z" ]
iclr_2018_S1Dh8Tg0-
Fix your classifier: the marginal value of training the last weight layer
Neural networks are commonly used as models for classification for a wide variety of tasks. Typically, a learned affine transformation is placed at the end of such models, yielding a per-class value used for classification. This classifier can have a vast number of parameters, which grows linearly with the number of possible classes, thus requiring increasingly more resources. In this work we argue that this classifier can be fixed, up to a global scale constant, with little or no loss of accuracy for most tasks, allowing memory and computational benefits. Moreover, we show that by initializing the classifier with a Hadamard matrix we can speed up inference as well. We discuss the implications for current understanding of neural network models.
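As a rough illustration of the abstract's proposal, the sketch below shows a final layer whose projection is a fixed Hadamard matrix while only a global scale and a per-class bias remain learnable. It is a minimal PyTorch-style sketch under our own assumptions (power-of-two feature dimension, simple row slicing of the Hadamard matrix), not the authors' implementation.

```python
# Minimal PyTorch-style sketch of a fixed (Hadamard) classifier layer, as described in the
# abstract above: the projection is frozen; only a global scale and per-class bias learn.
# Dimensions and the construction of the projection are illustrative assumptions.
import torch
import torch.nn as nn
from scipy.linalg import hadamard

class FixedClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        assert feat_dim >= num_classes and (feat_dim & (feat_dim - 1)) == 0, \
            "this sketch assumes feat_dim is a power of two and >= num_classes"
        H = torch.tensor(hadamard(feat_dim)[:num_classes], dtype=torch.float32)
        self.register_buffer("proj", H)                      # fixed, not a learned parameter
        self.scale = nn.Parameter(torch.tensor(1.0))         # learned global scale
        self.bias = nn.Parameter(torch.zeros(num_classes))   # learned per-class bias

    def forward(self, features):                             # features: (batch, feat_dim)
        return self.scale * features @ self.proj.t() + self.bias

logits = FixedClassifier(512, 10)(torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```

Because a Hadamard matrix contains only +1/-1 entries, the same product can in principle be computed with a fast Walsh-Hadamard transform using additions and subtractions only, which is the source of the inference speed-up the abstract refers to.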
accepted-poster-papers
This paper proposes an interesting new idea which has sparked a lively discussion.
train
[ "HyGVuO0ez", "rJX0wjUVz", "rJ3ZYFtxM", "S1kGhTKez", "Bkvocznzf", "SyPO9M2zz", "SJOW5M2fz", "By_g-D-xf", "SkwPDJbxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Revised Review:\n\nThe authors have largely addressed my concerns with the revised manuscript. I still have some doubts about the C > N setting (the new settings of C / N of 4 and 2 aren't C >> N, and the associated results aren't detailed clearly in the paper), but I think the paper warrants acceptance.\n\nOriginal Review:\n\nThe paper proposes fixing the classification layers of neural networks, replacing the traditional learned affine transformation with a fixed (e.g., Hadamard) matrix. This is motivated by the observation that classification layers frequently constitute a non-trivial fraction of a network's overall parameter count, compute requirements, and memory usage, and by the observation that removal of pre-classification fully-connected layers has often been found to have minimal impact on performance. Experiments are performed on a range of datasets and network architectures, in both image classification and NLP settings.\n\nFirst, I'd like to note that the empirical component of this paper is strong: I was impressed by the breadth of architectures and settings covered, and the experiments left me reasonably convinced that the classification layer can often be fixed, at least for image classification tasks, without significant loss of accuracy.\n\nI have two general concerns. For one, removing the fully connected classification layer is not a novel idea; All Convolutional Networks (https://arxiv.org/abs/1412.6806) reported excellent results without an additional fully connected affine transform (just a global average pooling after the last convolutional layer). I think it would be worth at least referencing/discussing differences with this and other all-convolutional architectures. Including a fixed Hadamard matrix for the classification layer is I believe new (although related to an existing literature on using structured matrices in neural networks).\n\nHowever, I have doubts about the ability of the approach to scale to problems with a larger number of classes, which arguably is a primary motivation of the paper (\"parameters ... grow linearly with the number of classes\"). Specifically, the idea of using a fixed N x C matrix with C orthogonal columns (such as Hadamard) is only possible when N > C. This is a critical point: in the N > C regime, a final hidden representation with N dimensions can be chosen to achieve *any* C-dimensional output, regardless of the projection matrix used (so long as it is full rank). This makes it seem fairly reasonable to me that the network can (at least approximately, and complicated by the ReLU nonlinearities) fold the \"desired\" classification layer into the previous layer, especially with a learned scaling and bias term. In fact it's not clear to me that the fixed classification layer accomplishes anything here, beyond projecting from N -> C (i.e., if N = C, I'd guess it could be removed entirely similar to all convolutional nets, as long as the learned scaling and bias were retained).\n\nOn the other hand, when C > N, it is not possible to have mutually orthogonal columns, and in general the output is constrained to lie in an N-dimensional subspace of the overall C-dimensional output space. Picking somewhat randomly a *fixed* N-dimensional subspace seems like a bad idea when N << C, since it is unlikely to select a subspace in which it is possible to adequately capture correlations between the different classes. 
This makes the proposed technique much less appealing for precisely the family of problems where it would be most effective in reducing compute/memory requirements. It also provides (in my view) a clearer explanation for the failure of the approach in the NLP setting. These issues were not discussed anywhere in the text as far as I can tell, and I think it's necessary to at least acknowledge that mutually orthogonal columns can't be chosen when C > N in section 2.2 (and probably include a longer discussion on the probable implications).\n\nOverall, I think the paper provides a useful observation that clearly isn't common knowledge, since classification layers persist in many popular recent architectures. But the notion of fixing or removing the classification layer isn't particularly novel, and I don't believe the proposed technique would scale well to settings with many classes. As is, I think the paper falls slightly short.", "Thank you for adding the additional experiments.\n\nI will not modify the score.\nI still believe the idea is interesting, but it is unclear how large the impact actually is on performance.\nIn my experiments, I observed a small loss in accuracy but no improvement in speed.\n\nCurrently, many papers on large batch training show close to linear scaling, especially the FB-in-1-hour approach, where the gradient updates for higher layers are communicated in parallel with the gradient computation for lower layers. So it is not clear how much of a difference not doing back-propagation would make.\n\nIdeally, I would suggest the authors implement a CUDA kernel for the Hadamard transform to show that the speed-up is effectively there.", "The paper proposes to use a fixed weight matrix to replace the final linear projection in a deep neural network.\nThis fixed classifier is combined with a global scaling and per-output shift that are learned.\nThe authors claim that this can be used as a drop-in replacement for standard architectures and does not result in reduced performance.\nThe key advantage is that it generates a reduction in parameters (e.g. for ResNet-50, 8% of parameters are eliminated).\n\nThe idea is extremely simple and I like it conceptually.\nCurrently it looks like my reimplementation on ResNet-50 is working. \nI lose about 1% in accuracy compared to my baseline learned-projection implementation.\nAre the scale and bias regularized?\n\nI have assigned a score of 6 for now, but I will wait to give my final rating until I get the actual results.\nOverall, the evaluation seems reasonably thorough: many tasks were presented and the model was applied to different architectures.\n\nI also think the manuscript could benefit from the following experiments:\n- how does the chosen projection matrix affect performance?\n- is the scale needed?\nI assume the authors did these experiments when they developed the method, but it is unclear how important these choices are. \nIncluding these experiments would make it a more scientific contribution.\n\nThe amount of computation saved seems rather limited? Especially since the gradient of the scale parameter has to go through the weight vector?\nTherefore my assumption is that only the application of the gradients saves a limited amount of time and memory?\nAt least in my experiments reproducing these results, the computational benefit is not there/obvious.\n\nWhile I like the idea, the way the manuscript is written is a bit strange at times. 
\nThe introduction appears to be there to be because you need a introduction, not to explain the background. \nFor this reason some of the cited work seems a bit out of place.\nEspecially the universal approximation and data memorization references.\nWhat I find interesting is that this work is the complement of the reservoir computing/extreme learning machines approach.\nThere the final output layer is trained but the network itself uses random weights.\n \nIt would be nice if Fig 2 had a better caption. Which dataset, model, ….\nIs there an intuition why the training error remains higher but the validation error is identical? This is difficult to get my head round.\nAlso, it would be nice if an analysis was provided where the computational cost of not doing the gradient update was computed.\n", "This paper proposes replacing the weights of the final classifier layer in a CNN with a fixed projection matrix. In particular a Hadamard matrix can be used, which can be represented implicitly.\n\nI'd have liked to see some discussion of how to efficiently implement the Hadamard transform when the number of penultimate features does not match the number of classes, since the provided code does not do this.\n\nHow does this approach scale as the number of classes grows very large (as it would in language modeling, for example)?\n\nAn interesting experiment to do here would be to look this technique interacts with distillation, when used in the teacher or student network or both. Does fixing the features make it more difficult to place dog than on boat when classifying a cat? Do networks with fixed classifier weights make worse teachers for distillation?\n", "We thank the reviewer for his detailed feedback on our paper and his suggestions. We hope to answer his questions below. We also made adjustments to latest revision accordingly.\n\n1) \"Are the scale and bias regularized?\" - Yes. We found that regularization can help with the final validation error in the same way it helps with common learned weights. Best results appeared when trained with weight decay for several epochs and removed later.\n\n2) \"how does the chosen projection matrix affect performance\" - We found no significance change in final accuracy when using different projection matrix. We do find slight change in convergence rate when initial scale is changed.\n\n3) \"is the scale needed\" - We added some experiments to show that the scale is not needed as a learned parameter, but this may help convergence.\n\n4) \"The amount of computation saved seems rather limited?\" - The compute saved is for the gradient of the classifier weights (which is not needed to get the gradient for the scale). This may be limited for the cases shown, but becomes more apparent when number of classes is larger. As we noted, these gradients and weights can now be avoided in communication over several nodes in distributed setting - saving precious bandwidth. Moreover using a Hadamard matrix we can replace all multiplication operations preformed by the classifier with additions which are far more hardware friendly. \n\n5)\"Is there an intuition why the training error remains higher but the validation error is identical?\" - Our conjecture is that with our new fixed parameterization, the network can no longer increase the norm of a given sample's representation - thus learning its label requires more effort. 
As this may happen for specific seen samples - it affects only training error.\n\nRegarding clarity and manuscript structure - we have taken the reviewer's comments into account and revised our paper accordingly.\n\n", "We thank the reviewer for his feedback and suggestions. We added an explanation as well as extended the supplementary code for the case where number of penultimate features does not match the number of classes.\nWe also added to the discussion the case where C >> N. Regarding distillation - we found no apparent difference when distilling a network with fixed classifier.", "\nWe thank the reviewer for his detailed feedback on our paper.\nWe hope to address the 2 main concerns raised:\n1) Novelty - \"removing the fully connected classification layer is not a novel idea; All Convolutional Networks (https://arxiv.org/abs/1412.6806) reported excellent results without an additional fully connected affine transform (just a global average pooling after the last convolutional layer)\"\n\nWe believe there is a slight misunderstanding here: in the \"All convolutional networks\" paper the fully-connected was not removed, as it just got replaced with a convolutional layer with the same number of parameters. This means there is still a final classifier (a conv layer) with number of parameters proportional to the number of classes.\nOur work introduces what we believe to be a novel idea - removing the classifier layer altogether making the number of network parameters independent from the number of classes. We added a clarification to this matter in our recent revision. \n\n2) Applicability of our method when C > N:\n\nThe reviewer is right in his claim that when C > N we can not have mutually orthogonal columns, but this is true even for a fully learned weight matrix. \nWe empirically verified that for the vision use-cases brought in the paper we achieve good performance for C > N (e.g., on imagenet, so C=1000, with either mobilenet 0.5 where N = 512 or resnet with N reduced to 256).\nWe do agree with the reviewer that this can be limiting when the classes have strong correlation with one another (as in the NLP case) and we add this as another possible explanation. We still, however, feel that this can be useful even for C >> N in other domains such as vision.", "Yes, the experiments are done with both hadamard matrix and scaling. ", "One thing that is not clear to me from the paper are the experiments done with the Hadamard version AND scaling?\n\n" ]
[ 6, -1, 6, 6, -1, -1, -1, -1, -1 ]
[ 4, -1, 5, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1Dh8Tg0-", "Bkvocznzf", "iclr_2018_S1Dh8Tg0-", "iclr_2018_S1Dh8Tg0-", "rJ3ZYFtxM", "S1kGhTKez", "HyGVuO0ez", "SkwPDJbxz", "iclr_2018_S1Dh8Tg0-" ]
iclr_2018_HyRnez-RW
Multi-Mention Learning for Reading Comprehension with Neural Cascades
Reading comprehension is a challenging task, especially when executed across longer or across multiple evidence documents, where the answer is likely to reoccur. Existing neural architectures typically do not scale to the entire evidence, and hence, resort to selecting a single passage in the document (either via truncation or other means), and carefully searching for the answer within that passage. However, in some cases, this strategy can be suboptimal, since by focusing on a specific passage, it becomes difficult to leverage multiple mentions of the same answer throughout the document. In this work, we take a different approach by constructing lightweight models that are combined in a cascade to find the answer. Each submodel consists only of feed-forward networks equipped with an attention mechanism, making it trivially parallelizable. We show that our approach can scale to approximately an order of magnitude larger evidence documents and can aggregate information from multiple mentions of each answer candidate across the document. Empirically, our approach achieves state-of-the-art performance on both the Wikipedia and web domains of the TriviaQA dataset, outperforming more complex, recurrent architectures.
accepted-poster-papers
The authors did a good job addressing reviewer concerns and analyzing and testing their model on interesting datasets with convincing results.
val
[ "rkKuj7zgz", "rJa0zH9xf", "B1jA_O5xM", "BJghy1hmz", "By5HYFnZM", "H1whTGi-f", "rJHomqP-M", "SkOozcw-f", "r15HG5DZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "author", "author", "author" ]
[ "The authors present a scalable model for questioning answering that is able to train on long documents. On the TriviaQA dataset, the proposed model achieves state of the art results on both domains (wikipedia and web). The formulation of the model is straight-forward, however I am skeptical about whether the results prove the premise of the paper (e.g. multi-mention reasoning is necessary). Furthermore, I am slightly unconvinced about the authors' claim of efficiency. Nevertheless, I think this work is important given its performance on the task.\n\n1. Why is this model successful? Multi-mention reasoning or more document context?\nI am not convinced of the necessity of multi-mention reasoning, which the authors use as motivation, as shown in the examples in the paper. For example, in Figure 1, the answer is solely obtained using the second last passage. The other mentions provide signal, but does not provide conclusive evidence. Perhaps I am mistaken, but it seems to me that the proposed model cannot seem to handle negation, can the authors confirm/deny this? I am also skeptical about the computation efficiency of a model that scores all spans in a document (which is O(N^2), where N is the document length). Can you show some analysis of your model results that confirm/deny this hypothesis?\n\n2. Why is the computational complexity not a function of the number of spans?\nIt seems like the derivations presents several equations that score a given span. Perhaps I am mistaken, but there seems to be n^2 spans in the document that one has to score. Shouldn't the computational complexity then be at least O(n^2), which makes it actually much slower than, say, SQuAD models that do greedy decoding O(2n + nm)?\n\nSome minor notes\n- 3.3.1 seems like an attention computation in which the attention context over the question and span is computed using the question. Explicitly mentioning this may help the reading grasp the formulation.\n- Same for 3.4, which seems like the biattention (Seo 2017) or coattention (Xiong 2017) from previous squad work.\n- The sentence \"We define ... to be the embeddings of the l words of the sentence that contains s.\" is not very clear. Do you mean that the sentence contains l words? It could be interpreted that the span has l words.\n- There is a typo in your 3.7 \"level 1 complexity\": there is an extra O inside the big O notation.", "This paper proposes a method that scales reading comprehension QA to large quantities of text with much less document truncation than competing approaches. The model also does not consider the first mention of the answer span as gold, instead formulating its loss function to incorporate multiple mentions of the answer within the evidence. The reported results were state-of-the-art(*) on the TriviaQA dataset at the time of the submission deadline. It's interesting that such a simple model, relying mainly on (weighted) word embedding averages, can outperform more complex architectures; however, these improvements are likely due to decreased truncation as opposed to bag-of-words architectures being superior to RNNs. \n\nOverall, I found the paper interesting to read, and scaling QA up to larger documents is definitely an important research direction. On the other hand, I'm not quite convinced by its experimental results (more below) and the paper is lacking an analysis of what the different sub-models are learning. 
As such, I am borderline on its acceptance.\n\n* The TriviaQA leaderboard shows a submission from 9/24/17 (by \"chrisc\") that has significantly higher EM/F1 scores than the proposed model. Why is this result not compared to in Table 1? \n\nDetailed comments:\n- Did you consider pruning spans as in the end-to-end coreference paper of Lee et al., EMNLP 2017? This may allow you to avoid truncation altogether. Perhaps this pruning could occur at level 1, making subsequent levels would be much more efficient.\n- How long do you estimate training would take if instead of bag-of-words, level 1 used a biLSTM encoder for spans / questions?\n- What is the average number of sentences per document? It's hard to get an idea of how reasonable the chosen truncation thresholds are without this.\n- In Figure 3, it looks like the exact match score is still increasing as the maximum tokens in document is increased. Did the authors try truncating after more words (e.g., 10k)?\n- I would have liked to see some examples of questions that are answered correctly by level 3 but not by level 2 or 1, for example, to give some intuition as to how each level works.\n- \"Krasner\" misspelled multiple times as \"Kramer\"", "This paper proposes a lightweight neural network architecture for reading comprehension, which 1) only consists of feed-forward nets; 2) aggregates information from different occurrences of candidate answers, and demonstrates good performance on TriviaQA (where documents are generally pretty long).\n\nOverall, I think it is a nice demonstration that non-recurrent models can work so well, but I also don’t find the results strikingly surprising. It is also a bit hard to get the main takeaway messages. It seems that multi-loss is important (highlight that!), summing up multiple mentions of the same candidate answers seems to be important (This paper should be cited: Text Understanding with the Attention Sum Reader Network https://arxiv.org/abs/1603.01547). But all the other components seem to have been demonstrated previously in other papers. \n\nAn important feature of this model is it is easier to parallelize and speed up the training/testing processes. However, I don’t see any demonstration of this in the experiments section.\n\nAlso, I am a bit disappointed by how “cascades” are actually implemented. I was expecting some sophisticated ways of combining information in a cascaded way (finding the most relevant piece of information, and then based on what it is obtained so far trying to find the next piece of relevant information and so on). The proposed model just simply sums up all the occurrences of candidate answers throughout the full document. 3-layer cascade is really just more like stacking several layers where each layer captures information of different granularity. \n\nI am wondering if the authors can also add results on other RC datasets (e.g., SQuAD) and see if the model can generalize or not. \n", "We thank for the reviewers for their valuable feedback and have made the following main improvements to the paper:\n\n-Content:\n1. Speed comparison (Figure 4 - right): We compare the speed of our approach to a vanilla bi-LSTM on a GPU. Because of our approach is trivially parallelizable, it gets relatively much faster compared to the LSTM as the document length increases (reaching ~45x speedup for a truncation limit of 10K tokens).\n2. 
Oracle statistics for truncation limit (Figure 3 - right): To justify our choice of truncation limit, we plot the oracle accuracy for various truncation thresholds.\n3. Analysis of each submodel. We provide:\n a. Figure 3 - left: A quantitative analysis showing the performance of the top K results of each submodel\n b. Table 3: A table of examples showing predictions of each submodel, and cases where our aggregation model (level 3) is able to do more than other submodels.\n\n-Writing:\n1. Introduction: We have clarified the novelty of our approach:\n a. Multi-loss formulation, which to our knowledge has not been used in question answering, before. Empirically, this factors to a 10pt difference in the dev EM, as demonstrated in Table 2.\n b. Aggregating multiple mentions of candidates at the representation level. We found this strategy to allow us to obtain high accuracy with simpler models when the multiple-mention assumption holds. Empirically, removing the aggregation level drops accuracy by 5.5 points in dev EM as shown in Table 2.\n c. Unlike existing approaches, our model is trivially parallelizable in that we can process all the O(nl) spans in the document in parallel, allowing it to handle larger documents.\n\n-Complexity: We have clarified that O(nl) is similar to O(n) since l = maximum span length and is restricted to 5, and is therefore not quadratic.\n\n-Additional citations: Kadlec et al. 2016 (attention sum reader network), Seo et al. 2017, Xiong et al. 2017\n\n-Analysis: Sample of predictions have been provided and analysed in Section 4.4.\n\n-Experimental Results (Leaderboard): We believe we have addressed AnonReviewer3’s concerns (see the responses below), and also added a clarifying footnote to the paper.\n", "I'm the administrator for the TriviaQA leaderboard on Codalab. I second Chris' comment. The leaderboard allows private submissions. In addition, the date field for each entry on the leaderboard refers to the date of submission (and not the date it was made public).", "I am the author of the \"chrisc\" submission on the TriviaQA Leaderboard.\n\nI just wanted to comment and confirm that our result on the leaderboard was not made public until after the ICLR deadline. The date listed on the leader board reflects the time we uploaded our test results, but we did not make that result public until after we had finished writing the paper and completed the rest of our evaluations, which occurred shortly after the ICLR submission deadline.", "Thank you for your comments! We will revise the paper based on your feedback but we would like to clarify some aspects beforehand:\n\nSuccess of the model:\nOur model benefits from both multi-mention reasoning and more document context. The ablation in Table 2 shows that without the Level 3 multi-mention aggregation, model performance drops from 52.18% to 46.52%. The only purpose of the level 3 model is to do aggregation across multiple mentions, and therefore this shows that multi-mention reasoning helps our model significantly. More document context allows the upper bound of the dev EM under our approach to be 92%, compared to the 83% in the baseline method from Joshi et. al. (2017). We can make this more clear in our revision. \n\nComputational efficiency:\nWe only allow spans up to a length l (where l=5). Therefore our computational complexity is O(nl) and not O(n^2). 
Moreover, our method trivially parallelizes across the length of the document, unlike recurrent network based approaches.\n\nNegation:\nOur model does not handle negation specifically but negation is not typically a key aspect of existing reading comprehension tasks (as it is in sentiment analysis for instance).\n\nAttention:\nWe will clarify the attention section and cite (Seo et al. 2017, Xiong et al. 2017). Unlike their approach, we use the word embeddings as input to the attention, not the LSTM states as they do.\n", "Thank you for your comments! We will revise our paper based on your feedback, particularly discussing more the contribution of each submodel and addressing your detailed comments. However, we would like to clarify some main points below:\n\nThe “chrisc” leaderboard submission:\nTriviaQA allows someone to submit privately and then make their result public later. Therefore while the web result for the chrisc model might have been submitted earlier, it was not publicly visible before the ICLR Oct 27 deadline and therefore was not included in our table. None of the other ICLR submissions we are aware of report this result either e.g. (https://openreview.net/forum?id=rJl3yM-Ab, https://openreview.net/forum?id=HJRV1ZZAW, https://openreview.net/pdf?id=B1twdMCab ) The paper itself was posted on arXiv on 29th Oct (https://arxiv.org/abs/1710.10723) and only contained Web (and not Wikipedia) results. We would also like to point out that the chrisc model involves a two-stage pipeline, many layers of recurrent neural nets and is procedurally more involved than ours.\n\nPruning spans:\nWe did try pruning spans, based on the levels, i.e. level 1 considered all spans in the document (up to truncation) and level 2 considered the top K spans from level 1, and so on. However, we found this decreased the accuracy by ~4-5 points. This could be attributed to the lower levels pruning away good candidates, because they did not have access to more information, such as sentence context, and attention with the question. We will revise the paper to include these results.\n\nUsing biLSTMs:\nRunning a biLSTM over the entire document of length n to obtain span representations is not parallelizable over the document length and therefore would be much slower (unlike our approach which trivially parallelizes the attention computation over the O(nl) spans). \n\nTruncation stats:\nTruncating documents to contain at most 6000 tokens gives us an upper bound of 92% on dev EM in the Wikipedia domain (avg number of sentences being 141). 87% of documents in the Wikipedia dataset are fully covered under this truncation limit. \n\nWe do not make any claims about the expressive power of BOW models vs RNNs. Our model performance can be attributed to the scalability of BOW architectures which can take advantage of longer documents, which RNN architectures are not well suited for. Furthermore, our other contributions (multi-loss + multi-mention learning) significantly boost the performance of these simple architectures as explored in the ablations in Table 2. \n", "Thank you very much for your comments, we will revise the paper over the next few weeks based on your feedback. Below, we make some clarifications.\n\nPrimary contributions of our work:\nOur work presents a novel and more scalable approach to question answering that is considerably different than the existing literature that is dominated by monolithic LSTM-based architectures. The takeaway messages regarding our approach are:\n\n1. 
Multi-loss formulation, which to our knowledge has not been used in question answering before. Empirically, this accounts for a 10-point difference in dev EM, as demonstrated in Table 2.\n2. Aggregating multiple mentions of candidates at the representation level. We found that this strategy allows us to obtain high accuracy with simpler models when the multiple-mention assumption holds. Empirically, removing the aggregation level drops accuracy by 5.5 points in dev EM, as shown in Table 2.\n3. Unlike existing approaches, our model is trivially parallelizable in that we can process all of the O(nl) spans in the document in parallel, allowing it to handle larger documents.\n\nWe believe that the fact that our approach can scale to much longer documents with only 1 GPU attests to its scalability. We also provide an asymptotic analysis of our runtime complexity.\n\nCascades:\nWe would like to point out that, in contrast to previous approaches that aggregate answers at the score level (e.g. the Attention Sum Reader Network, which we will add a reference for), our method aggregates mentions at the *representation* level (by adding the vector representations of mentions). While we agree that there could be more complex ways of realizing cascades, we chose the simplest approach that would show the efficacy of such an idea. More sophisticated ways to combine information may not be compatible with the trivial parallelizability of our model. \n\nSQuAD dataset:\nWhile it could run on SQuAD, our model was specifically designed for a setting that is different from SQuAD. In SQuAD, the evidence consists of only a paragraph (avg length 122 tokens; TriviaQA evidence is more than 20x longer), so scalability is not a concern. Furthermore, answers to SQuAD questions are almost always unique spans in the passage, hence many of our intuitions about multi-mention learning might not be relevant for this task.\n" ]
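Two points from the author responses above — candidate spans are limited to a maximum length l (so span scoring is O(nl) rather than O(n^2)), and repeated mentions of the same candidate are aggregated by summing their vector representations before scoring — can be illustrated with the toy sketch below. The span-representation and scoring functions are hypothetical placeholders, not the paper's submodels, and in the real model each mention would receive a context-dependent representation rather than the same bag-of-words vector.

```python
# Toy sketch of span enumeration and representation-level multi-mention aggregation,
# following the author clarifications above (spans of length <= l, summed mention vectors).
# The embedding and scoring functions are hypothetical placeholders.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(0)
D, MAX_SPAN_LEN = 16, 5
word_vecs = {}

def span_representation(tokens):
    # Placeholder: average of (random, cached) word vectors stands in for the real submodels.
    vecs = [word_vecs.setdefault(t, rng.normal(size=D)) for t in tokens]
    return np.mean(vecs, axis=0)

def score(candidate_vec, question_vec):
    return float(candidate_vec @ question_vec)          # placeholder scoring function

def best_answer(document_tokens, question_tokens):
    question_vec = span_representation(question_tokens)
    candidates = defaultdict(lambda: np.zeros(D))
    n = len(document_tokens)
    for start in range(n):                               # only O(n * MAX_SPAN_LEN) spans total
        for end in range(start + 1, min(start + MAX_SPAN_LEN, n) + 1):
            span = tuple(document_tokens[start:end])
            candidates[span] += span_representation(span)  # sum representations over mentions
    return max(candidates, key=lambda s: score(candidates[s], question_vec))

doc = "the nile is the longest river the nile flows north".split()
print(best_answer(doc, "which is the longest river".split()))
```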
[ 7, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyRnez-RW", "iclr_2018_HyRnez-RW", "iclr_2018_HyRnez-RW", "iclr_2018_HyRnez-RW", "H1whTGi-f", "rJa0zH9xf", "rkKuj7zgz", "rJa0zH9xf", "B1jA_O5xM" ]
iclr_2018_r1SnX5xCb
Deep Sensing: Active Sensing using Multi-directional Recurrent Neural Networks
For every prediction we might wish to make, we must decide what to observe (what source of information) and when to observe it. Because making observations is costly, this decision must trade off the value of information against the cost of observation. Making observations (sensing) should be an active choice. To solve the problem of active sensing we develop a novel deep learning architecture: Deep Sensing. At training time, Deep Sensing learns how to issue predictions at various cost-performance points. To do this, it creates multiple representations at various performance levels associated with different measurement rates (costs). This requires learning how to estimate the value of real measurements vs. inferred measurements, which in turn requires learning how to infer missing (unobserved) measurements. To infer missing measurements, we develop a Multi-directional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it sequentially operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. At runtime, the operator prescribes a performance level or a cost constraint, and Deep Sensing determines what measurements to take and what to infer from those measurements, and then issues predictions. To demonstrate the power of our method, we apply it to two real-world medical datasets with significantly improved performance.
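The abstract frames active sensing as trading the value of information against the cost of observation, with measurements chosen at runtime under a cost constraint. The schematic sketch below illustrates only that greedy selection loop; the confidence and gain estimators are hypothetical stand-ins and bear no relation to the paper's learned M-RNN components.

```python
# Schematic sketch of greedy cost-vs-value measurement selection, in the spirit of the
# abstract above. The confidence-gain estimator and predictor are hypothetical stand-ins
# for the paper's learned components, not its actual method.
import numpy as np

rng = np.random.default_rng(0)

def predicted_confidence(observed):
    # Placeholder: pretend confidence grows with the number of observed variables.
    return 1.0 - 0.5 ** (1 + sum(v is not None for v in observed.values()))

def expected_gain(observed, var):
    # Placeholder estimate of how much measuring `var` would raise prediction confidence.
    trial = dict(observed, **{var: 0.0})
    return predicted_confidence(trial) - predicted_confidence(observed)

def select_measurements(variables, costs, budget):
    observed = {v: None for v in variables}              # None = not yet measured
    spent = 0.0
    while True:
        # Greedy step: consider unmeasured variables whose gain exceeds their cost.
        candidates = [v for v in variables if observed[v] is None
                      and spent + costs[v] <= budget
                      and expected_gain(observed, v) > costs[v]]
        if not candidates:
            return [v for v in variables if observed[v] is not None]
        best = max(candidates, key=lambda v: expected_gain(observed, v) / costs[v])
        observed[best] = rng.normal()                     # stand-in for taking the measurement
        spent += costs[best]

print(select_measurements(["hr", "bp", "lactate"],
                          {"hr": 0.01, "bp": 0.02, "lactate": 0.3}, budget=0.5))
```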
accepted-poster-papers
This paper is well written, addresses an interesting problem, and provides an interesting solution.
train
[ "rkoifQKEM", "Bk4-YGplf", "HJyXsRtef", "HJg15Lhgz", "rJ4VNC1Mf", "ryg27A1ff", "ryzBXC1Gf", "B1lmXCJGf", "BJCe7A1GG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "I have raised my score from 6 to 8 to reflect the authors' thorough responses and significant improvements in the revised manuscript.\n\nFrom my own review, the authors addressed the following points in their revision:\n\n- expanded related work to cover active learning and submodular optimization\n- added a discussion of AND empirical comparison with Futoma [1] -- though I should note that I find the poor performance of Futoma on the MIMIC-III data set surprising (I wonder about hyperparameter tuning for your Futoma implementation)\n- improved clarify of the paper, including an explicit discussion of how the algorithm can be used to decide WHEN to measure and the fact that it uses a *greedy* search strategy, the details of the prediction tasks in the experiments, and the way in which the subsampled data set was created\n- added an ablation study to understand the relative contribution of each model component (imputation, interpolation, backward imputation, GRU hidden layer, etc.)\n- added algorithm pseudocode and discussion of the approximate confidence intervals to the appendix\n\nThis is cool work -- I look forward to the eventual code release to try it out.", "This is a very interesting submission that takes an interesting angle on clinical time series modeling, namely, actively choosing when to measure while simultaneously attempting to impute missing measurements and predict outcomes of interest. The proposed solution formulates everything as a giant learning problem that involves learning (a) an interpolation function that predicts a missing measurement from its past and present, (b) an imputation function that predicts a missing measurement from other variables at the same time step, (c) a prediction function that predicts outcomes of interest, including forecasting future measurements, (d) an error estimation function that estimates error of the forecasts in (c). These four pieces are then used in combination with a heuristic to decide when certain variables should be measured. This framework is used with a GRU-RNN architecture and in experiments with two datasets, outperforms a number of strong baselines.\n\nI am inclined toward accepting this paper due to the significance of the problem, the ingenuity of their proposed approach, and the strength of the empirical results. However, I think that there is a lot of room for improvement in the current manuscript, which is difficult to read and fully grasp. This will lessen its impact in the long run, so I encourage the authors to strive to make it clearer. If they succeed in improving it during the review period, I will gladly raise my score.\n\nNOTE: please do a thorough editorial pass for the next version -- I found at least one typo in the references (Yu, et al. \"Active sensin.\")\n\nQUALITY\n\nThis is solid research, and I have few complaints about the work itself (most of my feedback will focus on clarity). I will list some strengths (+) and weaknesses (-) below and try to provide actionable feedback:\n\n+ Very important problem that receives limited attention from the community\n+ I like the formulation of active sensing as a prediction loss optimization problem\n+ The learning problem is pretty intuitive and is well-suited to deep learning architectures since it yields a differentiable (albeit complex) loss function\n+ The results speak for themselves -- for adverse event prediction in the MIMIC-III task, DS improves upon the nearest baseline by almost 9 points in AUC! 
More interestingly, using Deep Sensing to create a \"resampled\" version of the data set improves the performance of the baselines. It also achieves much more accurate imputation than standard approaches.\n\n- The proposed approach is pretty complex, and it's unclear what is the relative contribution of each component. I think it is incumbent to do an ablation study where different components are removed to see how performance degrades, if at all. For example, how would the model perform with interpolation but not imputation? Is bidirectional interpolation necessary, or would forward interpolation work sufficiently well (the obvious disadvantage of the bidirectional approach is the need to rerun inference at each new time step). Is it necessary to use both the actual AND predicted measurements as inputs (what if we instead used actual measurements when available and predicted otherwise)?\n- The experiments are thorough with a nice selection of baselines, but I wonder if perhaps Futoma, et al. [1] would be a stronger baseline than Choi, Che, or Lipton. They showed improvements over similar magnitude over baselines for predicting sepsis, and their approach (a differentiable GP-approximating layer) is conceptually simpler and has other benefits. I think it could be combined with the active sensing framework in this paper.\n- The one question this framework appears incapable of answering in a straightforward manner is WHEN the next set of measurements should be made. One could imagine a heuristic in which predictive loss/gain are assessed at different points in the future, but the search space will be huge, particularly if one wants to optimize over measurements at different points, e.g., maybe the optimal strategy is to take roughly hourly vitals but no labs until 12 hours from now. Indeed, it might be impossible to train such a model properly since the sampling times in the available training data are highly biased.\n- One thing potentially missing from this paper is a theoretical analysis to understand and analyze its behavior and performance. My very superficial analysis is that the prediction loss/gain framework is related to minimizing entropy and that the heuristic for choosing which variables to measure is a greedy search. A theoretical treatment to understand whether and how this approach might be sub-optimal would be very desirable.\n- Are the measurement and prediction \"confidence intervals\" proper confidence intervals (in the formal statistical sense)? I don't think so -- I wonder if there are alternatives for measuring uncertainty (formal CIs or maybe a Bayesian approach?).\n\nCLARITY\n\nMy main complaint about this paper is clarity -- it is not difficult to read per se, but it is difficult to fully grok the details of the approach and the experimental setup. From the current manuscript, I do not feel confident that I could re-implement Deep Sensing or reproduce the experiments. This is especially important in healthcare research, where there is a minor reproducibility crisis, even for resarch using MIMIC (see [2]). Of course, this can be alleviated by publishing the code and using a public benchmark [3], but it can't hurt to clarify these details in the paper itself (and to add an appendix if length is an issue).\n\nHere are some potential areas for improvement:\n\n- The structure of the paper is a bit weird. In particular section 2 (pages 2-4) seems to be a grab bag of miscellaneous topics, at least by the headers. 
I think the content is fine -- perhaps section 2 can be renamed as \"Background,\" subsection 2.1 renamed as \"Notation,\" and subsection 2.2 renamed as \"Problem Formulation\" (or similar). I'd just combine subsection 2.3 with the previous one and explain that Figure 1 illustrates the problem formulation.\n- The active sensing procedure (subsection 2.2, page 3, equation 1 and the equations just above) is unclear. How are the minimization and maximization performed (gradient descent, line search, etc.)? How is the search for the subset of measurement variables performed (greedy search)? The latter is a discrete search, and I doubt it's, e.g., submodular, so it must be a nontrivial optimization.\n- Related, I'm a little confused about equation 1: C_T is the set of variables that should be measured, but C_T is being used to index prediction targets -- is this a typo?\n- The related work section is pretty extensive, but I wonder if it should also include work on active learning (Bayesian active learning, in particular, has been applied to sensing), submodular optimization (for sensor placement, which can be thought of as a spatial version of active sensing), and reinforcement learning.\n- I don't understand how the training data for the interpolation and imputation functions are constructed. I *think* that is what is described in the Adaptive Sampling subsection on page 8, but that is unclear. The word \"representations\" is used here, but that's an overloaded term in machine learning, and its meaning here is unclear from context. It appears that maybe there's an iterative procedure in which we alternate between training a model and then resampling the data using the model -- starting with the full data set.\n- The distinction between training and inference is not clear to me, at least with respect to the active sensing component. Is selective sampling performed during training? If so, what happens if the model elects to sample a variable at time t that is not actually measured in the data?\n- I don't follow subsection 4.2 (pages 8-9) at all -- what is it describing? If by \"runtime\" the authors refer to the computational complexity of the algorithm, then I would expect a Big-O analysis (none is provided -- it's just a rather vague discussion of what happens). I'd recommend removing this entire subsection and replacing it with, e.g., an Algorithm figure with pseudocode, as a more succinct description.\n- For the experiments, the authors provide insufficient detail about the data and task setup. Since MIMIC is publicly available, then readers ought (hypothetically) to be able to reproduce the experiments, but that is not currently possible. As an example, what adverse events are being predicted? How are they defined?\n- Figure 4 is nice, but it's not immediately obvious what the connection between observation rate and sampling cost. The authors should explain how a given observation rate is encoded as cost in the loss function.\n\nORIGINALITY\n\nWhile active sensing is not a new research topic per se, there has been very limited research into the specific question of choosing what clinical variables to measure when in the context of a given prediction problem. This is a topic that (in my experience) is frequently discussed but rarely studied in clinical informatics circles. 
Hence, this is a very original line of inquiry, and the prediction loss/gain framing is a unique angle.\n\nSIGNIFICANCE\n\nI anticipate this paper will generate significant interest and follow-up work, at least among clinical informaticists and machine learning + health researchers. The main blockers to a significant impact are the clarity of writing issues listed above -- and if the authors fail to publish their code.\n\nREFERENCES\n\n[1] Futoma, et al. An Improved Multi-Output Gaussian Process RNN with Real-Time Validation for Early Sepsis Detection. MLHC 2017.\n[2] Johnson, et al. Reproducibility in critical care: a mortality prediction case study. MLHC 2017\n[3] Harutyunyan, et al. Multitask Learning and Benchmarking with Clinical Time Series Data. arXiv.", "This paper presents a new approach to determining what to measure and when to measure it, using a novel deep learning architecture. The problem addressed is important and timely and advances here may have an impact on many application areas outside medicine. The approach is evaluated on real-world medical datasets and has increased accuracy over the other methods compared against.\n\n+ A key advantage of the approach is that it continually learns from the collected data, using new measurements to update the model, and that it runs efficiently even on large real-world datasets.\n\n-However, the related work section is significantly underdeveloped, making it difficult to really compare the approach to the state of the art. The paper is ambitious and claims to address a variety of problems, but as a result each segment of related work seems to have been shortchanged. In particular, the section on missing data is missing a large amount of recent and related work. Normally, methods for handling missing data are categorized based on the missingness model (MAR/MCAR/MNAR). The paper seems to assume all data are missing at random, which is also a significant limitation of the methods.\n\n-The paper is organized in a nonstandard way, with the methods split across two sections, separated by the related work. It would be easier to follow with a more common intro/related work/methods structure.\n\nQuestions:\n-One of the key motivations for the approach is sensing in medicine. However, many tests come as a group (e.g. the chem-7 or other panels). In this case, even if the only desired measurement is glucose, others will be included as well. Is it possible to incorporate this? It may change the threshold for the decision, as a combination of measures can be obtained for the same cost.", "This paper proposes a novel method to solve the problem of active sensing from a new angle (Essentially, the active sensing is a kind of method that decides when (or where) to take new measurements and what measurements we should conduct at that time or (place)). By taking advantage of the characteristics of long-term memory and Bi-directionality of Bi-RNN and M-RNN, deep sensing can model multivariate time-series signals for predicting future labels and estimating the values of new measurements. The architecture of Deep Sensing basically consists of three components: \n1. Interpolation and imputation for each of channels where missing points exist;\n2. Prediction for the future labels in terms of the whole multivariate signals (The signal is a time-series data and made up of multiple channels, there is supposed to be a measured label for each moment of the signal); \n3. Active sensing for the future moments of each of the channels. 
\n\nPros\n\nThe novelty of this paper lies in using a neural network structure to solve a traditional statistical problem which was usually done by a Bayesian approach or using the idea of the stochastic process. \n\nA detailed description of the network architecture is provided and each of the configurations has been fully illustrated. The explanation of the structure of the combined RNNs is rigorous but clear enough of understanding. \n\nThe method was tested on a large real dataset and got a really promising result based several rational assumptions (such as assuming some of the points are missing for evaluating the error of the interpolation & imputation).\n\nCons\n\nHow and why the architecture is designed in this way should be further discussed or explained. Some of the details of the design could be inferred indirectly. But somewhere like the structure of the interpolation in Fig.3 doesn't have any further discussion. For example, why using GRU based RNN, and how Bi-RNN benefits here. \n", "Answer 1: As it is written, Deep Sensing applies if data is missing completely at random (MCAR) or just missing at random (MAR). We will make this clearer in the revision. The setting in which measurements are missing not at random (MNAR) is important but the literature dealing with this setting is small; see for instance the discussion in [1]. Deep Sensing can also be applied in the MNAR framework as well by incorporating the mask vector (which indicates missingness) as an additional input. In the revised manuscript, we will discuss this point and provide additional experiments to highlight this point.\n \n[1] A. M. Alaa, S. Hu, and M. van der Schaar, \"Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis,\" ICML, 2017\n\nAnswer 2: We will revise the manuscript to conform to the more common format.\n\nAnswer 3: Yes this can be incorporated easily. For example: every set of tests that can be carried out as a single panel at the same cost can be considered as a single test.", "Answer 1: We will improve the explanation in two ways: (1) We will improve the discussion, going step-by-step. (2) We will follow a suggestion of Reviewer 1 and show how the accuracy of our method would be reduced if we carried out only imputation (no interpolation) or only interpolation (no imputation) or only operated in one direction (no bi-directionality). We will also improve and clarify the discussion of active sensing. \n\nAnswer 2: In many domains (e.g. the medical domain), the measurements display long-term correlations, and accurate prediction of current states requires capturing these long-term correlations. GRU-based RNN’s are well-known to be good for this purpose [1, 2]. Using a Bi-RNN rather than an ordinary unidirectional RNN (in the interpolation block) is important because it helps to capture the correlation of the current measurement with both previous and future measurements. To highlight these points, we will incorporate additional experiments in the revised manuscript that show the effects of Bi-RNN and GRU in comparison to the standard RNN framework.\n \n[1] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, 2014\n \n[2] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. Gated feedback recurrent neural networks. In International Conference on Machine Learning, 2015", "Answer 11: We will check carefully but that was certainly not intended. 
C_T is used to index prediction targets; the argmax is C^*_T.\n\nAnswer 12: We will add the suggested related works (active learning, submodular optimization, and reinforcement learning) in the revised manuscript.\n\nAnswer 13: We agree that the meaning of “representations” in this context is unclear and we will not use it in the revision. The training procedure is as follows. In the first step, we train the M-RNN architecture (the interpolation and imputation functions) using the original data set. In the second step, we fix a threshold and delete measurements (resampling the original data set) whose estimated “information gain – cost” is smaller than the fixed threshold; this procedure yields a resampled data set. In the third step, we re-train the M-RNN architecture using the resampled data set. We then increase the threshold and repeat the second and third steps, continuing through whatever set of thresholds are chosen. We will clarify this in the revision.\n\nAnswer 14: If the actual dataset is complete this is of course not a problem. If the actual dataset is not complete, we only consider measurements that are actually recorded in the dataset. For example, suppose the dataset records vital signs every hour but lab tests only every 12 hours and we are at a time T when both vital signs and lab tests are recorded. In determining what – if anything should be sampled one hour later, we consider only vital signs and not lab tests. Of course, after 11 more hours have elapsed, then lab tests at the next hour are possible and considered. We will clarify this in the revision.\n\nAnswer 15: We agree. We will remove this subsection and replace it with the pseudocodes for the algorithms.\n\nAnswer 16: We will clarify the discussion of the experiment; in particular we will clarify the discussion of the adverse events that are being predicted in each case (mortality in MIMIC-III dataset and admission to the ICU in the Wards dataset). \n\nAnswer 17: The cost of each possible measurement is well-defined. If all measurements were equally costly, we could identify the cost with the observation rate. If some measurements are most costly than others, we weight those measurements more heavily when expressing the cost in terms of the observation rate. We will explain this more thoroughly in the revision.\n\nAnswer 18: Of course, we will publish our code in Github. However, because the review process is anonymous, we will publish our code after the final decision.", "Answer 7: We think that modulo some (reasonable) assumptions, we do in fact use a proper confidence interval. However, we agree that more justification/discussion\nis warranted, and we will add it in the revised manuscript, along the line sketched below.\n\nDefine \\hat{x} = x + n. For the moment, assume that n is Gaussian noise and\nthat we can interpret the error e = | \\hat{x} - x | as the estimated standard deviation of the Gaussian noise n. Then, (\\hat{x} - \\lambda \\times e, \\hat{x} + \\lambda \\times e) is the proper confidence interval for x in the formal statistical sense.\n\nThe assumption of Gaussian noise is quite standard and probably needs no further comment. The interpretation of the error as the estimated standard\ndeviation of Gaussian noise is not standard but can be justified in the following\nway. If our estimate \\hat{x} is the expected value of x, then we will have x = \\hat{x} + n, where x is the observed measurement from a Gaussian distribution, \\hat{x} is the expected value of x and n is normal (Gaussian). 
In that case, the expected value of e = | \\hat{x} – x | = | n | is just the standard deviation of Gaussian noise, which is \\sqrt{E[n^2]}. Hence, we need two assumptions: (1) our estimate \\hat{x} is the expected value of x; (2) the observed measurement can be approximated as the sum of the expectation of x and Gaussian noise (approximate normality [1, 2, 3, 4]).\n\nTo justify these assumptions, we proceed as follows. Assume that the measurement x is sampled from an unknown distribution P_\\theta; i.e. x ~ P_\\theta. If P_\\theta is itself normal (Gaussian), then it follows that.\n\nx ~ P_\\theta \\Leftrightarrow x = E[x] + n\n\nwhere n ~ N(0, \\sigma^2). (This uses the observation that the expectation of the normal distribution is the mean). In general, we cannot assume that P_\\theta is normal, but it will be enough if it is approximately normal, which is a common assumption in the literature (see [1, 2, 3, 4] for instance.) In that case, following the literature we can obtain\n\nx \\approximate E[x] + n\n\nwhere n ~ N(0, E[(x-E[x])^2]). From this we obtain that \\hat{x} = E[x]. (In practice, we use \\hat{x} as the sample mean of x which converges to E[x]). In that case the distribution of the error e = |\\hat{x} – x| coincides with the distribution of the absolute value of samples generated by the normally distributed noise n:\n\nE[e] = E[|\\hat{x}-x|] = E[\\sqrt{(\\hat{x}-x)^2}]=E[\\sqrt{n^2}] = E[\\sqrt{(x-E[x])^2}]).\n\nThus, estimating e can be interpreted as estimating the standard deviation of the noise. \n\n[1] Rothenberg, Thomas J. \"Approximate normality of generalized least\nsquares estimates.\" Econometrica: Journal of the Econometric Society (1984):\n811-825. \n[2] Davison, A. C., and D. V. Hinkley. \"Bootstrap Methods and Their Application, Cambridge Univ.\" Press, Cambridge (1997).\n[3] Efron, Bradley, and Robert Tibshirani. \"Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy.\" Statistical science (1986): 54-75.\n[4] Bartlett, Maurice S. \"Approximate confidence intervals. II. More than one unknown parameter.\" Biometrika 40.3/4 (1953): 306-317.\n\nAnswer 8: We will of course publish the code in the Github after the paper is accepted. We will make every effort to clarify both the approach and the experimental aspects. \n\nAnswer 9: These are good suggestions and we will follow them.\n\nAnswer 10: With respect to the equations just above equation (1): We carry out the minimization and maximization by one-dimensional gradient descent. This is possible because the minimization and maximization problems for each feature are independent. With respect to the maximization in equation (1): What we wrote was (accidentally) misleading and we will correct it in the revision. If the number of possible measurements is large, and there are complementarities among measurements, then the actual optimization problem requires examining all possible subsets of measurements – which is an intractable problem. Instead, we follow a greedy procedure: we identify all the measurements with the property that the value of that measurement (by itself) exceeds its cost, and we take C^*_{T+1} to be that set of measurements. Thus we solve a tractable optimization problem that yields an approximation to the actual optimal set of measurements. We will clarify this in the revision.\n", "Answer 1: We will carefully revise the entire paper.\n\nAnswer 2: These are useful suggestions and we will follow them. 
To highlight the source of gain, we will carry out several additional experiments and report the results in the revised manuscript. In particular, we will carry out an experiment in which the model is restricted to interpolation (no imputation), another experiment in which the model is restricted to imputation (no interpolation) and a third experiment in which only forward interpolation (no backward interpolation) is performed.\n\nAnswer 3: Indeed this is exactly what we are doing: we use actual measurements when available and predicted measurements when actual measurements are not available. We will make this clearer in the revision. \n\nAnswer 4: This is also a good suggestion. In the revision, we will add discussion of Futoma, et al in the related works and will conduct experiments to compare the performance with that of Deep Sensing in various settings.\n\nAnswer 5: Deep Sensing does answer the question of when the next set of measurements should be made. At each time T, Deep Sensing asks whether there are any measurements to be made at time T+1 for which the benefit outweighs the cost. If the answer is “yes” then Deep Sensing recommends that those measurements should be made at time T+1. If the answer is “no” then Deep Sensing asks whether there are any measurements to be made at time T+2 for which the benefit outweighs the cost, and so forth. Thus Deep Sensing is recommending both a time at which the next measurements should be taken and which measurements should be taken at that time.\n\nThis is a greedy procedure, and it is conceivable that although it is beneficial to take measurements at time T+1, it would be even more beneficial to wait and take measurements at time T+2 instead. Deep Sensing could be modified to be forward-looking (rather than greedy), but this would expand the search space enormously. We will add a discussion of this point.\n \nAlternatively, one can imagine asking Deep Sensing to decide at time T whether to take one set of measurements at time T+1 and a second set of measurements at time T+2 and a third set of measurements at time T+3, and so forth. However, it is not clear what advantage would be obtained by doing this. As currently formulated, Deep Sensing can decide at time T to take a set of measurements at time T+1, and then at time T+1 – after the results of those measurements become available – it can decide what measurements to take at time T+2, and so forth. We will add a discussion of this point as well.\n\nAnswer 6: We will add a more theoretical treatment as suggested." ]
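The authors' responses above (Answers 5 and 10) describe the measurement-selection step as a greedy, per-variable comparison of estimated information gain against acquisition cost, applied at the earliest future time step at which anything is worth measuring. The sketch below only illustrates that verbal description and is not code from the paper; `estimate_gain` and `cost` are hypothetical stand-ins for the model's predicted gain and the user-supplied measurement costs.

```python
def next_measurements(candidates, estimate_gain, cost, horizon=24):
    """Greedy sketch of the 'when and what to measure next' rule described
    in Answers 5 and 10 above (illustrative pseudocode only)."""
    for dt in range(1, horizon + 1):
        # keep every variable whose individual gain at offset dt exceeds its cost
        selected = [m for m in candidates if estimate_gain(m, dt) > cost(m)]
        if selected:
            return dt, selected          # earliest offset worth acting on
    return None, []                      # nothing worth its cost within the horizon


# Toy usage with made-up gains and a flat cost of 0.1 per measurement.
gain = lambda m, dt: {"lactate": 0.4, "heart_rate": 0.05}[m] / dt
print(next_measurements(["lactate", "heart_rate"], gain, lambda m: 0.1))
# -> (1, ['lactate'])
```

As the responses note, such a greedy rule avoids the combinatorial search over measurement subsets at the price of ignoring complementarities between measurements.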
[ -1, 8, 6, 7, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "Bk4-YGplf", "iclr_2018_r1SnX5xCb", "iclr_2018_r1SnX5xCb", "iclr_2018_r1SnX5xCb", "HJyXsRtef", "HJg15Lhgz", "Bk4-YGplf", "Bk4-YGplf", "Bk4-YGplf" ]
iclr_2018_HkZy-bW0-
Temporally Efficient Deep Learning with Spikes
The vast majority of natural sensory data is temporally redundant. For instance, video frames or audio samples which are sampled at nearby points in time tend to have similar values. Typically, deep learning algorithms take no advantage of this redundancy to reduce computations. This can be an obscene waste of energy. We present a variant on backpropagation for neural networks in which computation scales with the rate of change of the data - not the rate at which we process the data. We do this by implementing a form of Predictive Coding wherein neurons communicate a combination of their state, and their temporal change in state, and quantize this signal using Sigma-Delta modulation. Intriguingly, this simple communication rule give rise to units that resemble biologically-inspired leaky integrate-and-fire neurons, and to a spike-timing-dependent weight-update similar to Spike-Timing Dependent Plasticity (STDP), a synaptic learning rule observed in the brain. We demonstrate that on MNIST, on a temporal variant of MNIST, and on Youtube-BB, a dataset with videos in the wild, our algorithm performs about as well as a standard deep network trained with backpropagation, despite only communicating discrete values between layers.
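The abstract describes neurons that communicate their temporal change in state as quantized integer "spikes" via Sigma-Delta modulation, so that communication scales with how quickly the input changes rather than with the frame rate. The snippet below is a deliberately simplified sketch of that idea (plain sigma-delta coding of temporal differences only; the paper's scheme also mixes in the state itself and uses decaying memory), and `scale` is an assumed precision parameter rather than anything taken from the paper.

```python
import numpy as np

def sigma_delta_encode(signal, scale=10.0):
    """Emit integer 'spike counts' for the scaled change of a 1-D signal,
    carrying the rounding error forward in a residual accumulator."""
    phi, prev, spikes = 0.0, 0.0, []     # input assumed to start near 0
    for x in signal:
        phi += scale * (x - prev)        # only the change is communicated
        s = int(round(phi))              # quantize to an integer count
        phi -= s                         # keep the quantization error
        spikes.append(s)
        prev = x
    return spikes

def sigma_delta_decode(spikes, scale=10.0):
    """Reconstruct the signal by accumulating the received spike counts."""
    acc, out = 0.0, []
    for s in spikes:
        acc += s / scale
        out.append(acc)
    return out

t = np.linspace(0.0, 1.0, 100)
x = np.sin(2 * np.pi * t)                # temporally redundant input
spikes = sigma_delta_encode(x)
x_hat = np.array(sigma_delta_decode(spikes))
print(sum(s != 0 for s in spikes), "non-zero messages out of", len(spikes))
print("max reconstruction error:", float(np.max(np.abs(x_hat - x))))
```

Because the quantization residual is carried over instead of discarded, a slowly varying input produces few non-zero messages while the reconstruction error stays bounded by half a quantization step (0.05 here).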
accepted-poster-papers
This paper provides an interesting synthesis of ideas. Although the results could be improved, this is a good paper.
train
[ "ryHEjLtgz", "Hy9zmitlG", "Syrb8hW-G", "SyWo3T5mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The principal problem that the paper addresses is how to integrate error-backpropagation learning in a network of spiking neurons that use a form of sigma-delta coding. The main observation is that static sigma-delta coding as proposed in OConnor and Welling (2016b), is not correct when the weights change during training, as past activations are taken into account with the old rather than the new weights.\n\nThe solution proposed in this work is to have past activations decay exponentially, to reduce this problem. The coding scheme then mimics the proporitional-integral-derivative idea from control-theory. The result, spikes having an exponentially decaying effect on the postsynaptic neuron, is similar to that observed in biological spiking neurons. \n\nThe authors show how spike-based learning can be implemented with spiking neurons using such coding, and demonstrate the results on an MLP with one hidden layer applied to the temporal MNIST dataset, and to the Youtube-BB dataset. \n\nThis approach is original and significant, though the presented results are a bit on the thin side. As presented, the spiking networks are not exactly \"deep\": I am puzzled by the statement that in the youtube-bb dataset only the top 3 layers are \"spiking\". The network for the MNIST dataset is similarly only 3 layers deep (input, hidden, output). Is there a particular reason for this? The presentation right now suggests that the scheme does in practise not work for deep networks...\n\nWith regard to the learning rule: while the rule is formulated in terms of spikes, it should be noted that for neuron with many inputs and outputs, this update will have to be computed very very often, even for networks with low average firing rates. \n\nThe paper is clear in most points, with some parts that could use further elucidation. In particular, in Sec 2.5 the feedback pass for weight updating is computed. It is unclear from the text that this is an ongoing process, in parallel to the feedforward pass. In Sec 2.6 e_t is termed the postsynaptic (pre-nonlinearity) activation, which is confusing as the computation is going the other way (post-to-pre). These two sections would benefit from a more careful layout of the process, what is going on in a forward pass, a backward pass, how does this interact. \n\nSection 2.7 tries to relate the spike-based learning rule to the biologically observed STDP phenomenon. While the formulation in terms of pre-post spike-times is interesting, the result is clearly different from STDP, and ignores the fact that e_t refers to the backpropagating error (which presumably would be conveyed by a feedback network): applying the plotted pre-post spike-time rule in the same setting as where STDP is observed will not achieve error-backpropagation. \n\nThe shorthand notation in the paper is hard to follow in the first place btw, perhaps this could be elaborated/remedied in an appendix, there is also some rather colloquial writing in places: \"obscene wast of energy\" (abstract), \"There's\" \"aren't\" (2.6, p5). \n\nThe correspondence of spiking neurons to sigma-delta modulation is incorrectly attributed to Zambrano and Bohte (2016), but is rather presented in Yoon (2017/2016, check original date of publication!). \n\n", "This paper presents a novel method for spike based learning that aims at reducing the needed computation during learning and testing when classifying temporal redundant data. 
This approach extends the method presented on Arxiv on Sigma delta quantized networks (Peter O’Connor and Max Welling. Sigma delta quantized networks. arXiv preprint arXiv:1611.02024, 2016b.). Overall, the paper is interesting and promising; only a few works tackle the problem of learning with spikes showing the potential advantages of such form of computing. The paper, however, is not flawless. The authors demonstrate the method on just two datasets, and effectively they show results of training only for Feed-Forward Neural Nets (the authors claim that “the entire spiking network end-to-end works” referring to their pre-trained VGG19, but this paper presents only training for the three top layers). Furthermore, even if suitable datasets are not available, the authors could have chosen to train different architectures. The first dataset is the well-known benchmark MNIST also presented in a customized Temporal-MNIST. Although it is a common base-line, some choices are not clear: why using a FFNN instead that a CNN which performs better on this dataset; how data is presented in terms of temporal series – this applies to the Temporal MNIST too; why performances for Temporal MNIST – which should be a more suitable dataset — are worse than for the standard MNIST; what is the meaning of the right column of Figure 5 since it’s just a linear combination of the GOps results. For the second dataset, some points are not clear too: why the labels and the pictures seem not to match (in appendix E); why there are more training iterations with spikes w.r.t. the not-spiking case. Overall, the paper is mathematically sound, except for the “future updates” meaning which probably deserves a clearer explanation. Moreover, I don’t see why the learning rule equations (14-15) are described in the appendix, while they are referred constantly in the main text. The final impression is that the problem of the dynamical range of the hidden layer activations is not fully resolved by the empirical solution described in Appendix D: perhaps this problem affects CCNs more than FFN. \nFinally, there are some minor issues here and there (the authors show quite some lack of attention for just 7 pages):\n-\tTwo times “get” in “we get get a decoding scheme” in the introduction;\n-\tTwo times “update” in “our true update update as” in Sec. 2.6;\n-\tPag3 correct the capital S in 2.3.1\n-\tPag4 Figure 1 increase font size (also for Figure2); close bracket after Equation 3; N (number of spikes) is not defined\n-\tPag5 “one-hot” or “onehot”; \n-\tin the inline equation the sum goes from n=1 to S, while in eq.(8) it goes from n=1 to N;\n-\tEq(10)(11)(12) and some lines have a typo (a \\cdot) just before some of the ws;\n-\tPag6 k_{beta} is not defined in the main text;\n-\tPag7 there are two “so that” in 3.1; capital letter “It used 32x10^12..”; beside, here, why do not report the difference in computation w.r.t. not-spiking nets?\n-\tPag7 in 3.2 “discussed in 1” is section 1?\n-\tPag14 Appendix E, why the labels don’t match the pictures;\n-\tPag14 Appendix F, explain better the architecture used for this experiment.", "This paper applies a predictive coding version of the Sigma-Delta encoding scheme to reduce a computational load on a deep learning network. Whereas neither of these components are new, to my knowledge, nobody has combined all three of them previously. The paper is generally clearly written and represents a valuable contribution. The authors may want to consider the following comments:\n\n1. 
I did not really understand the analogy with STDP in neuroscience because it relies on the assumption that spiking of the post-synaptic neuron encodes the backpropagating error signal. I am not aware of any evidence for this. Given that the authors’ algorithm does not reproduce the sign-flip in the STDP rule I would suggest revise the corresponding part of the paper. Certainly, the claim in the Discussion “show these to be equivalent to a form of STDP – a learning rule first observed in neuroscience.” is inappropriate.\n\n2. If the authors’ encoding scheme really works I feel that they could beef up their experimental results to demonstrate its unqualified advantage.\n\n3. The paper could benefit greatly from better integration with the existing literature.\na. Sigma-Delta model of spiking neurons has a long history in neuroscience starting with the work of Shin. Please note that these papers are much older than the ones you cite: \nShin, J., Adaptive noise shaping neural spike encoding and decoding. Neurocomputing, 2001. 38-40: p. 369-381. \nShin, J., The noise shaping neural coding hypothesis: a brief history and physiological implications. Neurocomputing, 2002. 44: p. 167-175. \nShin, J.H., Adaptation in spiking neurons based on the noise shaping neural coding hypothesis. Neural Networks, 2001. 14(6-7): p. 907-919.\nMore recently, the noise-shaping hypothesis has been tested with physiological data:\nChklovskii, D. B., & Soudry, D. (2012). Neuronal spike generation mechanism as an oversampling, noise-shaping a-to-d converter. In Advances in Neural Information Processing Systems (pp. 503-511). (see Figure 5A for the circuit implementing a Predictive Sigma-Delta encoder discussed by you)\n\nb. It is more appropriate to refer to encoding a combination of the current value and the increment as a version of predictive coding in signal processing rather than the proportional derivative scheme in control theory because the objective here is encoding, not control. Also, predictive coding has been commonly used in neuroscience:\nSrinivasan MV, Laughlin SB, Dubs A (1982) Predictive coding: a fresh view of inhibition in the retina. Proc R Soc Lond B Biol Sci 216: 427–459. pmid:6129637\nUsing leaky neurons for encoding and decoding is standard, see e.g.:\nBharioke, Arjun, and Dmitri B. Chklovskii. \"Automatic adaptation to fast input changes in a time-invariant neural circuit.\" PLoS computational biology 11.8 (2015): e1004315. \nFor the application of these ideas to spiking neurons including learning please see a recent paper:\nDenève, Sophie, Alireza Alemi, and Ralph Bourdoukan. \"The brain as an efficient and robust adaptive learner.\" Neuron 94.5 (2017): 969-977.\n\nMinor:\nPenultimate paragraph of the introduction section: “get get” -> get\nFirst paragraph of the experiments section: ”so that so that” -> so that\n", "Dear Reviewers,\n\nThank you for taking the time to read our paper in detail. Your feedback was very helpful to improving this work. In response to your suggestions, we have made the following changes to the paper:\n\n- We have reworked the Related Work section, as well as parts of the Abstract and Methods sections, and added Section B of the appendix: Relation to Predictive Coding, to make it clear that our algorithm makes use predictive coding. 
We’ve done the same for Shin’s work positing that neurons perform noise-shaping, of which sigma-delta modulation is an instance.\n\n- We’ve added additional explanations for clarity wherever they were asked for. We have added Section A of the appendix - which contains a dictionary of the various notations used throughout the paper. \n\n- We’ve toned down our claim on STDP, making clear that the rule is based on the temporal difference between presynaptic forward pass spikes and postsynaptic backwards pass spikes. \n\n- In response to reviewer R2 asking about training deeper networks, and also in response to the general request for a more extensive experimental analysis, we add a table of results in Appendix G training deeper and deeper networks on MNIST. The main conclusion is that there are no problems with training deeper networks. For the YTBB dataset we follow common practice in computer vision where only the last 3 layers need training when the lower layers have converged. There are no theoretical limitations, however, and assuming a GPU with large enough on-chip memory, the whole network can also be trained.\n\n- We’ve updated figures to make them more readable, and corrected a mistake wherein one of the Youtube-BB learning curves was not completely plotted, as well as the figure in the appendix where labels were not matched to the images.\n\n- We’ve corrected all the small mistakes you pointed out - thank you for those. \n" ]
[ 7, 6, 8, -1 ]
[ 5, 4, 4, -1 ]
[ "iclr_2018_HkZy-bW0-", "iclr_2018_HkZy-bW0-", "iclr_2018_HkZy-bW0-", "iclr_2018_HkZy-bW0-" ]
iclr_2018_ry-TW-WAb
Variational Network Quantization
In this paper, the preparation of a neural network for pruning and few-bit quantization is formulated as a variational inference problem. To this end, a quantizing prior that leads to a multi-modal, sparse posterior distribution over weights, is introduced and a differentiable Kullback-Leibler divergence approximation for this prior is derived. After training with Variational Network Quantization, weights can be replaced by deterministic quantization values with small to negligible loss of task accuracy (including pruning by setting weights to 0). The method does not require fine-tuning after quantization. Results are shown for ternary quantization on LeNet-5 (MNIST) and DenseNet (CIFAR-10).
accepted-poster-papers
The paper presents a variational Bayesian approach for quantising neural network weights and makes interesting and useful steps in this increasingly popular area of deep learning.
train
[ "By5wsMNxM", "S10EfvFxM", "BkIy2pKxf", "r1X08SpmM", "SkbsUrTQM", "S1ytLST7G", "SyWjrHa7f", "ryZUHSTXf", "Hyg1BgzJM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper proposes to use a mixture of continuous spikes propto 1/abs(w_ij-c_k) as prior for a Bayesian neural network and demonstrates good performance with relatively sparsified convnets for minist and cifar-10. The paper is building quite a lot upon Kingma et al 2015 and Molchanov et al 2017. \n\nThe paper is of good quality, clearly written with an ok level of originality and significance.\n\nPros:\n1. Demonstrates a sparse Bayesian approach that scales.\n2. Really a relevant research area for being able to make more efficient and compact deployment.\nCons:\n1. Somewhat incremental relative to the papers mentioned above.\n2. Could have taken the experimental part further. For example can we learn something about what part of the network has the biggest potential for being pruned and use that to come up with modifications of the architecture? ", "This paper presents Variational Network Quantization; a variational Bayesian approach for quantising neural network weights to ternary values post-training in a principled way. This is achieved by a straightforward extension of the scale mixture of Gaussians perspective of the log-uniform prior proposed at [1]. The authors posit a mixture of delta peaks hyperprior over the locations of the Gaussian distribution, where each peak can be seen as the specific target value for quantisation (including zero to induce sparsity). They then further propose an approximation for the KL-divergence, necessary for the variational objective, from this multimodal prior to a factorized Gaussian posterior by appropriately combining the approximation given at [2] for each of the modes. At test-time, the variational posterior for each weight is replaced by the target quantisation value that is closest, w.r.t. the squared distance, to the mean of the Gaussian variational posterior. Encouraging experimental results are shown with performance comparable to the state-of-the-art for ternary weight neural networks.\n\nThis paper presented a straightforward extension of the work done at [1, 2] for ternary networks through a multimodal quantising prior. It is generally well-written, with extensive preliminaries and clear equations. The visualizations also serve as a nice way to convey the behaviour of the proposed approach. The idea is interesting and well executed so I propose for acceptance. I only have a couple of minor questions: \n- For the KL-divergence approximation you report a maximum difference of 1 nat per weight that seems a bit high; did you experiment with the `naive` Monte Carlo approximation of the bound (e.g. as done at Bayes By Backprop) during optimization? If yes, was there a big difference in performance?\n- Was pre-training necessary to obtain the current results for MNIST? As far as I know, [1] and [2] did not need pre-training for the MNIST results (but did employ pre-training for CIFAR 10).\n- How necessary was each one of the constraints during optimization (and what did they prevent)? \n- Did you ever observe posterior means that do not settle at one of the prior modes but rather stay in between? Or did you ever had issues of the variance growing large enough, so that q(w) captures multiple modes of the prior (maybe the constraints prevent this)? How sensitive is the quantisation scheme?\n\nOther minor comments / typos:\n(1) 7th line of section 2.1 page 2, ‘a unstructured data’ -> ‘unstructured data’\n(2) 5th line on page 3, remove ‘compare Eq. 
(1)’ (or rephrase it appropriately).\n(3) Section 2.2, ’Kullback-Leibler divergence between the true and the approximate posterior’; between implies symmetry (and the KL isn’t symmetric) so I suggest to change it to e.g. ‘from the true to the approximate posterior’ to avoid confusion. Same for the first line of Section 3.3.\n(4) Footnote 2, the distribution of the noise depends on the random variable so I would suggest to change it to a general \\epsilon \\sim p(\\epsilon).\n(5) Equation 4 is confusing.\n\n[1] Louizos, Ullrich & Welling, Bayesian Compression for Deep Learning.\n[2] Molchanov, Ashukha & Vetrov, Variational Dropout Sparsifies Deep Neural Networks.", "\nThe goal of this work is to infer weights of a neural network, constrained to a discrete set, where each weight can be represented by a few bits. This is a quite important and hot topic in deep learning. As a direct optimization would lead to a highly nontrivial combinatorial optimization problem, the authors propose a so-called 'quantizing prior' (actually a relaxed spike and slab prior to induce a sparsity enforcing heavy tail prior) over weights and derive a differentiable variational KL approximation. One important advantage of the current method is that this approach does not require fine-tuning after quantization. The paper presents ternary quantization for LeNet-5 (MNIST) and DenseNet-121 (CIFAR-10).\n\nThe paper is mostly well written and cites carefully the recent relevant literature. While there are a few glitches here and there in the writing, overall the paper is easy to follow. One exception is that in section 2, many ideas are presented in a sequence without providing any guidance where all this will lead.\nThe idea is closely related to sparse Bayesian learning but the variational approximation is achieved via the local reparametrization trick of Kingma 2015, with the key idea presented in section 3.3.\n\n\n\nMinor\n\nIn the introduction, the authors write \"... weights with a large variance can be pruned as they do not contribute much to the overall computation\". What does this mean? Is this the marginal posterior variance as in ARD? \n\nThe authors write: \"Additionally, variational Bayesian inference is known to automatically reduce parameter redundancy by penalizing overly complex models.\" I would argue that \nit is Bayesian inference; variational inference sometimes retains this property, but not always.\n\nIn Eq (10), z needs also subscripts, as otherwise the notation may suggest parameter tying. Alternatively, drop the indices entirely, as later in the paper.\n\nSec. 3.2. is not very well written. This seems to be the MAP of the product of the marginals,\nor the mode of the variational distribution, not the true MAP configuration of the weight posterior. Please be more precise. \n\nThe abbreviation P&Q (probably Post-training Quantization) seems to be not defined in the paper.\n", "\nRegarding 2.: Connecting network compression with principled architecture search/optimization is a very interesting topic which has not received enough attention in the literature so far and the authors agree that there is promising potential. Unfortunately, our method might only be suitable for rather coarse statements. In order to provide interesting statements about parts of layers or even single neurons / convolutional filters, the method would need to be extended to include group-constraints as was done in Bayesian Compression or Structured Bayesian Pruning. 
This would allow statements about the relevance of certain sub-parts of networks. In contrast, our method only allows reporting sparsity-rates per layer, which could perhaps be used for high-level architecture exploration (layers with high sparsity can probably be made smaller).", "Did we observe that the posterior variance of weights grows large enough to cover multiple prior modes?\nYes. Weights which are close to the upper \\log \\sigma clipping boundary (see Figure 1b and 3) have a comparatively large posterior variance such that all prior modes have a non-negligible likelihood. Empirically we find that this is not problematic for our method since such large variance weights are pruned after training (via thresholding \\alpha, see Eq. 9 and the following sentence). A speculative explanation could be that these high-variance weights can essentially have arbitrary values since the information that they convey is discarded anyway downstream in the sparse network.\n\nHow sensitive is the quantization scheme?\nWe found that training on MNIST typically worked quite robustly and was not severely affected by different initializations or changes in the learning rate etc. Training on CIFAR-10 was more sensitive regarding the learning rate. Probably the most crucial aspects were the clipping constraints and using a lower learning-rate for learning the codebook levels. One interesting aspect about the probabilistic soft-quantization is that weights with large posterior variance can essentially have any value that has sufficiently high likelihood under the posterior - this could be beneficial for improving robustness against hardware errors (rounding errors, limited precision, analog effects). In theory this should also translate into being more robust against noisy activations (or even network input) which could be very interesting. We think this question would require proper investigation beyond the scope of this paper.\n\n\nResponse to minor comments:\n(1) Done.\n\n(2) Done.\n\n(3) Thanks for pointing it out, we have fixed this throughout the paper.\n\n(4) Done.\n\n(5) Another glitch, the equation should have been arranged differently (it should make more sense then) - we have updated the equation in the paper.", "We address the reveiewer's questions in their original order (due to limit in number of characters we respond with two separate entries)\n\nDid we try naive MC approximation of the bound?\nWe ran additional experiments to compare our results against a naive MC approximation of the KL divergence. To keep computational complexity comparable to our method, we use a single sample for the MC approximation. On MNIST we get the same accuracy and even higher pruning rates, however on CIFAR-10 we get catastrophic accuracy after quantization and even the non-quantized network has significantly lower accuracy. We have added these results to the appendix A 3.1, including a new table and two figures.\n\nWas pre-training necessary on MNIST?\nWe follow the same learning schedule as Sparse VD and train the first five epochs of a randomly initialized network without the KL penalization term and then gradually switch it on over the next epochs. We call the network after these first five epochs the \"pre-trained\" network, since five epochs suffice to get a decent MNIST classifier. We have run an additional experiment where we have a non-zero weight for the KL term already in the first epoch of training to start from a truly random network. 
Results were added to Table 1, training from scratch gets the same accuracy but slightly better pruning rates.\n\nHow necessary was each of the constraints?\nLower-bounding the log-variance helps avoiding numerical issues, upper-bounding the log-variance leads to higher accuracy during training - Bayesian Compression and the Multiplicative Normalizing Flows paper also report upper-bounding the posterior variance as it \"helps avoiding bad local optima of the variational objective\". Clipping the non-zero codebook levels at an absolute value of 0.05 to avoid getting collapsing codebooks was important since the objective implicitly favors close-to-zero codebook levels - particularly in the early stages of training such a collapse of the codebook needed to be prevented via clipping. Clipping weights that lie left to the left-most funnel or right to the right-most funnel helped with keeping accuracy after quantization. Without this clipping a small number of (seemingly important) weights are drawn to very large positive or negative values (particularly in the first layer). Since it is just a small number of weights, the impact on the objective is small, however quantizing such weights leads to significant accuracy loss. By clipping, the algorithm seems to find an alternative weight configuration that does not require such weights with large absolute values.\n\nDid we observe posterior means that do not settle at one of the prior modes?\nYes, such cases can be seen in our experiments Fig. 1b (conv_1) and more pronounced in the first and last layer of DenseNet (top-left and bottom-right panel of Fig. 3 in the appendix). A small number of weights (blue dots) do not lie on the prior modes (outside the \"funnels\" in the low-variance regime). During early stages of training, the number of such weights is typically higher and quantizing such a network leads to poor accuracy. After sufficient training, we find in our experiments that a small number of such weights is tolerable without much loss in accuracy.\n", "We address the reivewer's comments in the order in which they appear in the original review \n\n\nSection 2: no guidance where this will lead to - we added a short introduction to section 2 to tie the section together and provide an outline as a guidance to the reader. We also rewrote section 2.1 to be more focused.\n\nMinor comments:\n\nIntro: we write \"... weights with a large variance can be pruned as they do not contribute much to the overall computation\". What does this mean? Is this the marginal posterior variance as in ARD?\nYes, in that sentence we refer to the marginal (approximate) posterior variance which is also the pruning criterion in ARD - however in ARD typically parameters with low variance (or high precision) are pruned. This is due to the fact that ARD assumes a zero-mean Gaussian prior over weights (with a different precision per parameter or group of parameters, that is adjusted during training and regularized by a hyper-prior). Weights that differ significantly from zero get assigned a high variance or, dually, weights with low variance are very likely to lie close to zero (the prior mean) and can thus be pruned. ARD is very similar to the situation where we only have the central funnel (a zero-mean prior) which is the case in Sparse Variational Dropout (compare Eq. 10 in our paper). However in the latter, as in our method, the pruning criterion takes into account both, the marginal posterior mean and variance (see Eq. 
9) and also large-variance weights are pruned as long as the posterior mean is small enough (the intuition is that a high-variance weight can essentially have arbitrary values which implies that it most probably does not do anything sensible and can be pruned). To visualize the difference between the pruning criteria, consider the central funnel in the top-row plots of Figure 1: Sparse Variational Dropout and our method prune everything that lies within the area marked by the red dotted funnel. In contrast, thresholding the marginal posterior variance as in classical ARD would correspond to pruning everything that lies below a horizontal line in the \"funnel plots\" (which for the central funnel are precisely weights that lie close to zero). Note that of course different pruning criteria can also be used in ARD.\n\n\nIntro: Bayesian inference penalizes overly complex models, variational Bayesian inference does not necessarily do so - agreed, we have changed the sentence accordingly.\n\nEq 10. - z needs subscripts - agreed, we have added sub-scripts throughout the paper.\n\nSection 3.2: do not refer to 'MAP' but be more precise - agreed, we rephrased our writing to refer to 'maximizing likelihood under the approximate posterior'.\n\nClarify P&Q - P&Q refers to 'Pruning' and 'Quantization', we have clarified this in the corresponding table legends.", "We thank the reviewers for their feedback and constructive comments. Based on the feedback, we ran some additional experiments and made some changes to the paper. We have also re-ran our original experiments with two small modifications which produced slightly better results. We describe all changes below and respond to each reviewer individually with a separate comment on the corresponding review-entry in the forum.\n\n\nUpdated configuration for all experiments:\n\n-) Changed pruning threshold to \\log T_\\alpha = 2 (was 3 in the first submission). Leads to small improvements in accuracy.\n\n-) Gradient-stopping for clipping (applying gradients to a shadow weight at the clipping boundary that depends on the trainable codebook values and using the clipped weight only for the forward-pass). This helped improve results for CIFAR-10 experiment, particularly for quantizing the first layer without loss in accuracy. More details in Experiment section.\n\n\n\nAdditional experiments:\n\n-) Performed experiments with naive MC approximation of KL divergence (single sample only) to compare against our functional approximation of the KL divergence. Good results on MNIST (same accuracy, higher pruning rates) but catastrophic results for quantized network on CIFAR-10 with the MC approximation. Results are shown in Appendix A 3.1.\n\n-) Since the MC approximation cannot be used with local reparameterization, we performed a control experiment where we used our functional KL approximation but without local reparameterization (results in the Appendix, Table 3 and Figure 5a).\n\n\n\nPaper changelog:\n\n-) Updated results according to new experiment configuration (minor changes for MNIST in Table 1, better results for CIFAR-10 when quantizing the whole network shown in Table 2)\n\n-) Added results for using a randomly initialized network (without any pre-training) on MNIST to Table 1. 
Same accuracy, slightly better pruning rates.\n\n-) Added intro to section 2, to give some guidance to the reader\n\n-) Rewrote 2.1 for more clarity.\n\n-) No longer use the term 'MAP' but more accurately refer to 'maximizing likelihood under the approximate posterior' in 3.2.\n\n-) Added the naive MC approximation of the KL divergence to the discussion.\n\n-) Added detailed results of comparison between our KL approximation and the naive MC approximation to the appendix (A 3.1), including experiments on MNIST and CIFAR-10 (reported in Table 3) and two additional plots (Figure 5 and 6)\n\n-) Changed abstract to use passive form\n\n-) Fixed minor typos, glitches and other issues throughout the paper, including the ones pointed out by the reviewers.\n\n", "The authors would like to correct four typos in the current version of the manuscript:\n-) Table 1: Percentage of non-zero weights for Soft Weight-Sharing (P&Q) is 0.5 (not 3 as reported in the table) and bits for Deep Compression is 5 - 8 (not 10 - 13 as reported in the table)\n-) Last paragraph before 4.1: We ensure alpha >= 0.05 by clipping.\n-) Page 8, last sentence: we use a batch size of 64 samples\n-) Appendix, Figure 3: The validation accuracy of the network shown is 91.55% (corresponds to VNQ (no P&Q) in Table 1).\n\nWe additionally want to clarify that in Eq. (11) p_m denotes the prior over locations whereas p_k is a scalar (the mixture weight for component k).\n\nNote that the point of these corrections is to avoid potential confusion, our main results are not affected by these typos." ]
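The responses above spell out the deployment step that the abstract refers to: after training, each weight's Gaussian posterior is collapsed to the codebook value nearest its mean, and weights whose variational-dropout parameter alpha = sigma^2 / mu^2 exceeds a threshold (the authors mention log T_alpha = 2) are pruned to exactly zero. The following is a rough sketch of that post-training step only; the codebook levels shown are placeholders, since the method learns the non-zero levels during training.

```python
import numpy as np

def ternarize(mu, log_sigma2, codebook=(-0.3, 0.0, 0.3), log_alpha_thresh=2.0):
    """Collapse Gaussian weight posteriors to ternary values (illustrative sketch).

    mu, log_sigma2 : arrays of posterior means and log-variances.
    Weights with log(alpha) = log_sigma2 - log(mu^2) above the threshold are
    pruned to 0; the rest snap to the nearest codebook level."""
    codebook = np.asarray(codebook, dtype=float)
    log_alpha = log_sigma2 - np.log(mu ** 2 + 1e-8)
    w_q = codebook[np.argmin((mu[..., None] - codebook) ** 2, axis=-1)]
    w_q[log_alpha > log_alpha_thresh] = 0.0
    return w_q

rng = np.random.default_rng(0)
print(ternarize(rng.normal(0.0, 0.3, (4, 4)), rng.normal(-6.0, 1.0, (4, 4))))
```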
[ 7, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry-TW-WAb", "iclr_2018_ry-TW-WAb", "iclr_2018_ry-TW-WAb", "By5wsMNxM", "S1ytLST7G", "S10EfvFxM", "BkIy2pKxf", "iclr_2018_ry-TW-WAb", "iclr_2018_ry-TW-WAb" ]
iclr_2018_SJJySbbAZ
Training GANs with Optimism
We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Decent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror decent (OMD) can enjoy faster regret rates in the context of zero-sum games. WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror decent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.
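The abstract contrasts gradient-descent dynamics, which cycle on bilinear zero-sum games, with optimistic mirror descent, whose last iterate converges. In the unconstrained Euclidean case the optimistic update is commonly written as a gradient step plus a correction that cancels the previous gradient, w_{t+1} = w_t - 2*eta*g_t + eta*g_{t-1}. The toy sketch below (an illustration, not code from the paper) applies that update to the scalar bilinear game min_x max_y x*y, whose equilibrium is (0, 0); plain simultaneous gradient descent/ascent with a constant step instead spirals away from it.

```python
def optimistic_step(w, g, g_prev, lr):
    """w_{t+1} = w_t - 2*lr*g_t + lr*g_{t-1}  (descent form)."""
    return w - 2.0 * lr * g + lr * g_prev

x, y = 1.0, 1.0                  # start far from the equilibrium (0, 0)
gx_prev = gy_prev = 0.0
lr = 0.1
for _ in range(2000):
    gx, gy = y, x                # gradients of L(x, y) = x * y
    x_new = optimistic_step(x, gx, gx_prev, lr)      # minimizer: descent
    y_new = optimistic_step(y, -gy, -gy_prev, lr)    # maximizer: ascent
    gx_prev, gy_prev, x, y = gx, gy, x_new, y_new

print(x, y)   # both coordinates shrink toward 0 for this small step size
```

Optimistic Adam, introduced in the paper as an optimistic variant of Adam, applies the same kind of gradient correction on top of Adam-style adaptive updates.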
accepted-poster-papers
The reviewers thought the paper provides an interesting line of research.
train
[ "H1NffmKgz", "Syhxg_jgf", "S138CtnWf", "SkcwGOpQz", "rJ78fOT7z", "rkWJrG5QM", "rk8tNz5XM", "BJw8Efc7G", "BJhRfg8-z", "HJZkcmSJf", "rJ99FMSkf", "SJxehjl1G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANS). The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games. Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well. The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training. They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10. \n\nThis paper is relatively well-written and clear, and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games. Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs.The issue of last iterate versus average iterate for non convex-concave problems is also presented well. \n\nThe theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013. Moreover, the result is only at best a motivation for using OMD in WGAN training since the WGAN optimization problem is not a bilinear game. \n\nThe experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, although they seem less compelling than the prior discussion in the paper would suggest. In particular, they are matched by SGD with momentum when evaluated by last epoch performance (albeit while being less sensitive to learning rates). OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, but there doesn't seem to be even an attempt at explaining this in the paper. \n\nI found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5. This gives the impression that the experimental results might have been at least slightly \"gamed\". \n\nFor the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality.\n\nSpecific comments:\nPage 1, \"no-regret dynamics in zero-sum games can very often lead to limit cycles\": I don't think limit cycles are actually ever formally defined in the entire paper. \nPage 3, \"standard results in game theory and no-regret learning\": These results should be either proven or cited.\nPage 3: Don't the parameter spaces need to be bounded for these convergence results to hold? 
\nPage 4, \"it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm\": For completeness, this should probably either be (quickly) proven or a reference should be provided.\nPage 5, \"the unique equilibrium of the above game is...for the discriminator to choose w=0\": Why is w=0 necessary here?\nPage 6, \"We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A\": Why is this true? This should either be proven or cited.\nPage 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result. In the Initialization section, it appears simply to be out of convenience.\nPage 6, Theorem 1: It should be explicitly stated that this result doesn't provide a convergence rate, in contrast to the existing OMD results cited in the paper. \nPage 7, \"we considered momentum, Nesterov momentum and AdaGrad\": Why isn't Adam used in this section if it is used in later experiments?\nPage 7-8, \"When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.\": Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set? None of the theoretical arguments presented earlier in the paper seem to even hint at this. The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch. However, this doesn't even really hold, since there exist learning rates in which SGD with momentum matches the performance of SOMD.\nPage 8, \"Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants\": Learning rate sensitivity doesn't seem to be touched upon in the earlier discussion. Can these results be explained by theory?\nPage 8, \"we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training\": These results don't mean much without error bars.\nPage 8, \"we only trained the discriminator once after one iteration of generator training. The latter is inline with the intuition behind the use of optimism....\": Why didn't this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times?\n\n\nAfter reading the response of the authors (in particular their clarification of some technical results and the extra experiments they carried out during the rebuttal period), I have decided to upgrade my rating of the paper from a 6 to a 7. Just as a note, Figure 3b is now very difficult to read. \n\n", "The paper proposes to use optimistic gradient descent (OGD) for GAN training. Optimistic mirror descent is know to yield fast convergence for finding the optimum of zero-sum convex-concave games (when the players collaborate for fast computation), but earlier results concern the performance of the average iterate. This paper extends this result by showing that the last iterate of OGD also provides a good estimate of the value of bilinear games. Based on this new theoretical result (which is not unexpected but is certainly nice), the authors propose to use stochastic OGD in GAN training. 
Their experiments show that this new approach avoids the cycling behavior observed with SGD and its variants, and provides promising results in GAN training. (Extensive experiments show the cycling behavior of SGD variants in very simple problems, and some theoretical result is also provided when SGD diverges in solving a simple min-max game).\n\nThe paper is clearly written and easy to follow; in fact I quite enjoyed reading it. I have not checked all the details of the proofs, but they seem plausible.\nAll in all, this is a very nice paper.\n\nSome questions/comments:\n- Proposition 1: Could you show a similar example when you can prove the oscillating behavior?\n- Theorem 1: It would be interesting to write out the convergence rate of Delta_t, which could be used to optimize eta. Also, my understanding is that you actually avoid computing gamma, hence tuning eta is not straightforward. Alternatively, you could also use an adaptive OGD to automatically tune eta (see, e.g., Joulani et al, \"A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, and variational Bounds,\" ALT 2017). The non-adaptive selection of eta might be the reason that your method does not outperform adagrad SGD in 5 (b), although it is true that the behavior of your method seems quite stable for different learning rates).\n- LHS of the second line of (6) should be theta.\n- Below (6): \\mathcal{R}(A) is only defined in the appendix.", "This paper proposes a simple modification of standard gradient descent -- called “Optimistic Mirror Descent” -- which is claimed to improve the convergence of GANs and other minimax optimization problems. It includes experiments in toy settings which build intuition for the proposed algorithm, as well as in a practical GAN setting demonstrating the potential real-world benefits of the method.\n\n\nPros\n\nSection 3 directly compares the learning dynamics of GD vs. OMD for a WGAN in a simple toy setting, showing that the default GD algorithm oscillates around the optimum in the limit while OMD’s converges to the optimum.\n\nSection 4 demonstrates the convergence of OMD for a linear minimax optimization problem. (I did not thoroughly verify the proof’s correctness.)\n\nSection 6 proposes an OMD-like modification of Adam which achieves better results than standard Adam in a practical GAN setting (WGANs trained on CIFAR10) .\n\n\nCons/Suggestions\n\nThe paper could use a good deal of proofreading/revision for clarity and correctness. A couple examples from section 2:\n- “If the discriminator is very powerful and learns to accurately classify all samples, then the problem of the generator amounts to solving the Jensen-Shannon divergence between the true distribution and the generators distribution.” -> It would be clearer to say “minimizing” (rather than “solving”) the JS divergence. (“Solving” sounds more like what the discriminator does.)\n- “Wasserstein GANs (WGANs) Arjovsky et al. (2017), where the discriminator rather than being treated as a classifier is instead trying to simulate the Wasserstein−1 or earth-mover metric” -> Instead of “simulate”, “estimate” or “approximate” would be better word choices. And although the standard GAN discriminator is a binary classifier, when optimized to convergence, it’s also estimating a divergence -- the JS divergence (or a shifted and scaled version of it). 
Even though the previous paragraph mentions this, it feels a bit misleading to characterize WGANs as doing something fundamentally different.\n\nSec 2.1: There are several non-trivial but uncited mathematical claims hidden behind “well-known” or similar descriptors. These results could indeed be well-known in certain circles, but I’m not familiar with them, and I suspect most readers won’t be either. Please add citations. A few examples:\n- “If the loss function L(θ, w) ..., then standard results in game theory and no-regret learning imply that…”\n- “In particular, it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm with an L2 regularizer...”\n- “It is known that if the learner knew in advance the gradient at the next iteration...” \n\nSection 4: vectors “b” and “c” are included in the objective written in (14), but are later dropped without explanation. (The constant “d” is also dropped but clearly has no effect on the optimization.)\n\n\nOverall, the paper could use revision but the proposed approach is simple and seems to be theoretically well-motivated with solid analysis and benefits demonstrated in real-world settings.", "10) As we explain in the response to reviewer 1, unfortunately this stability of performance wrt to learning rate was only an artifact of a mistake in our implementation and we have removed this comment from the paper.\n11) We reran the experiment across 35 runs for 30 epochs (due to compute restrictions) for the two top-performing methods (optimAdam-ratio1 and Adam), and plot the results with 10-90 error bars in the Appendix of the paper, demonstrating that optimAdam indeed reliably performs better in terms of inception score. Once we can run the experiment 100 times for all 100 epochs, we plan to replace our main-text figure with this style of plot.\n12) It should indeed apply. We have included such results too. We thought of first comparing with existing proposals and hyperparameter settings in the literature to see the effect of simply adding optimism. However, we agree that we should have included this alternative 1:1 training in this experimental section too. The results in that section are inline again with this intuition and the ratio1 algorithms perform better with their corresponding 5:1 counterparts. (see figure 3b)", "We would like to thank you for your comments and suggestions and we explain below how we have addressed your concerns/questions in the updated revision of the paper.\n\n1) We have replaced limit cycles in the intro with \"limit oscillatory behavior\". We hope that this term is self explanatory, as we want to avoid further notation for formally defining a limit cycle.\n2) We have added references for the average converging to equilibrium result (Freund-Ssapire 1999)\n3) Indeed regret rates, as is typically, always require some boundedness of the optimization space. We have added a sentence on page 3 (\"theta and w lie in some bounded convex space\") to address this comment.\n4) We have added a reference on equivalence between GD and FTRL\n5) Since we are arguing about an equilibrium, it has to be that theta=v is a best response to w. If w > 0, then the best response for theta is minus infinity. If w < 0, then the best response for theta is infinity. If w=0, then any value for theta is a best response. Similarly, at an equilibrium, w also needs to be a best response to theta. If theta>v, then w=infinity is a best response and if theta < v then w=-infinity is a best response. 
None of the above can be a simultaneous best-response, and the only unique equilibrium is theta=v and w=0.\n6) We will add a quick sentence about this fact, which follows along the exact same lines as the example above: if y is not in the null space of A, then Ay has some non-zero coordinates. Then the best response for x is to set minus infinity on the positive coordinates of Ay and infinity on the negative coordinates. This will lead to a value of minus infinity, which can be avoided by the y player by choosing a y that lives in the null space of A, leading to a value of zero. Hence, at any equilibrium y is such that Ay = \\vec{0} (i.e. the null space of A). Similarly, we can argue for x having to lie in the null space of A^T.\n7) We have added a convergence rate for Theorem 1 as a function of eta. This result does show that the rate at which Delta_t and consequently the convergence of the solutions goes to the limit value. In particular, this convergence to the limit value of eta gamma^2 Delta_0, happens at an exponential rate of approximately exp{- eta^2 * t / gamma^2}, while this limit value depends linearly with eta. For the regret rates mentioned in section 2 typical values of eta are of the order of 1/T^{1/4} (see e.g. Syrgkanis et al. 2015). Hence, if one wants both regret rates and convergence to equilibrium, these are reasonable values of eta. \n8) We have added experiments of Adam and optimistic adam in this section too. We thought that Adam was a method particularly useful for image tasks and hence wanted to compare with simpler and more classical algorithms in this section. However, we do admit that we should have also compared with Adam in this section too and we augmented our experiments to include Adam. Indeed adam and optimistic adam performs better in this task too and not only in the image task. Still optimistic adam outperforms adam in this task too.\n9) Indeed the theoretical results do not imply that an out-of-sample early stopping would work better under optimism than under other methods. However, we wanted to test performance of OMD with an early stopping criterion, since typically, such criteria are used. We indeed are not explicitly doing an early stopping but rather using out-of-sample performance to choose the best iteration. We found this approach to be interesting in practice and grounded in the observations made in Arjovsky et al (as we note in the text before the figure) and we also found that since SOMD was also better performing than other methods other than Adam an interesting finding. Also in terms of last epoch, our updated results show that only Adam has comparable performance with the best performance of optimistic adam or OMD (i.e. with the best learning rates). Also for most learning rates, momentum and nesterov momentum have statistically significant lower performance (indeed comparable, but strictly worse).\n[Continued in the following comment due to character limit]", "We would like to thank you for your comments and suggestions and we explain below how we have addressed your concerns/questions in the updated revision of the paper.\n\n1) In these examples that we give that lead to divergence, it is easy to see that if one takes the step size to zero, then you get a limit cycle (i.e. continuously oscillating behavior). For any other non-zero step-size the behavior is still oscillatory but diverging (i.e. the radius of the cycle is constantly increasing). 
In some sense, the finite step size, makes the dynamics jump from one limit cycle of the continuous limit dynamics (stepsize=0), to another. \n\n2) We have added an explicit form of the convergence of Delta_t as a function of eta and gamma in the theorem. Analyzing the limit dynamics with a non-constant stepsize and extending theorem 1 seems feasible, but would complicate even more the inductive proof with extra notation. Hence, we defer such an extension to the full version. It is true that the condition depends on gamma, albeit only an upper bound on gamma is required. If any such upper bound on gamma is known then an appropriate step size can be chosen. Also in terms of an adaptive step size, we believe that our optimistic Adam algorithm is exactly a way of setting and adaptive step size that adapts to the variance of the problem, hence the reason why it out-performs the fixed step size optimistic mirror descent in both experimental sections. So you are right that adaptive step sizes can lead to improved practical performance even in the presence of optimism. Also we note here that in fact there was a small mistake in our implementation of OMD in the DNA experiment which lead to the stability of the performance of OMD across learning rates. We have fixed this mistake and the stability of the performance across learning rates was only an artifact. Still optimism performs better than most methods and optimistic adam leads to the bet last iterate loss, while OMD leads to best early stopping loss.", "Thank you for your comment, we have uploaded a revision of the paper that we believe will address these concerns:\n \n1) In the first version of our paper, Lemma 4 did indeed depend on the induction hypothesis for time t-2. This has been resolved in the latest revision, where we have unpacked this implicit use of the induction hypothesis and re-structured the proof a bit. Lemma 4 is currently proving a weaker statement which is still good enough for the final conclusion of the theorem. We have also cleaned up the proof in general and corrected some typos in other parts. \n2) You are also correct here. This was a typo and should have been 1/gamma rather than gamma. In the new version of the proof, the constants in the conditions and the rates of convergence has slightly changed and we have updated the correct conditions in the revision, while we have also augmented the convergence rates in the theorem statement to explicitly contain the dependence on gamma, which we omitted in the initial submission as we are treating gamma as a constant. \n3) Indeed we simplified and removed the max operator, since as you say spectral norm is the same for A and A^T. 
", "We would like to thank you for your comments and suggestions and we explain below how we have addressed your concerns/questions in the updated revision of the paper.\n1) We changed \"solving\" to \"minimizing\"\n2) We changed \"simulate\" to \"approximate\" and also added a small comment to stress that traditional GANs are also a form of metric between two distribution\n3) We have added a reference for the averages of both players converging to an equilibrium in zero-sum games, in particular Freund-Shapire1999\n4) We have added the Shalev-Swartz survey on online learning and online convex optimization for the claim that gradient descent is equivalent to FTRL with l_2 regularizer\n5) We have added the follow-the-perturbed-leader paper of Kalai-Vempala and the lecture notes of Philippe Rigollet for the claim that by knowing the gradient in the next iteration you get constant regret (a consequence of the be-the-leader lemma in these references)\n6) For vectors b,c,d as we state in the first paragraph of the section we work in the main body only with the simpler game x^TAy and we point that in Appendix D the analysis easily extends to the more complex games with the b, c and d vectors, hence we omitted these vectors in the theorem presented in the main paper. Indeed d is irrelevant for the optimization of both players.", "Dear authors, \n\nCould you please provide more details on why the inequalities on the top of page 23 hold true where you try to bound $||x_{t-2}||^2$ and $\\Delta^i_{t-2}$? Did the proof of Lemma 4 implicitly assume the induction hypothesis? Because it looks very weird to me. Thanks! \n\nAlso, in the proof of the Theorem 3, if we assume all the lemmas are correct, then wouldn't the step from ineq. (40) to ineq. (34) require $\\eta > \\gamma$ instead of $\\eta < \\gamma$? If so, then the condition in the main Theorem also should be changed? Correct me if I'm wrong. \n\nBTW, in Theorem 1, matrix A and A^T have the same spectral norm so I guess you can drop the op $\\max{A, A^T}$ directly.", "Thanks a lot for your reply!", "Our main theoretical result in Section 4 is that OMD exhibits last-iterate, rather than average-iterate, convergence in zero-sum games. In particular, its dynamics converges to equilibrium rather than cycling, as gradient descent does, around the equilibrium. We believe that this theoretical guarantee extends to general convex-concave settings.\n\nInspired by the better behavior of OMD in theory, we evaluate experimentally its performance outside of the convex-concave setting. We observe that it performs better than other methods (e.g. adagrad, adam, nesterov momentum) in adversarial training applications such as training on cifar10 and DNA sequence data. \n\nOn the theory front the non convex-concave setting is not well-understood yet. We believe that the theoretical part our analysis could generalize to show convergence to local minimax solutions, but defer this as an interesting open question for future work. We should note that even for non-adversarial training, the non-convex case is not well-understood. Here too methods only have good theoretical properties in the convex setting while still being used in the non-convex setting. One way to view our results it that they complement these results. We show that OMD has last-iterate convergence in the convex-concave setting, and propose using it in the non convex-concave setting where our experimental evaluation shows promising results.", " Dear authors,\n I have a question on the convergence of OMD. 
It claims that OMD has a faster convergence rate to the equilibrium of a zero-sum game. Does it hold for an objective function that is convex-concave, or any general objective function? Section 4 only shows the convergence results for a bilinear function. If similar convergence result does not hold for a general objective function, how does OMD help in the minimax game of GAN, which is generally not convex-concave in the generator and discriminator network parameters?" ]
[ 7, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJJySbbAZ", "iclr_2018_SJJySbbAZ", "iclr_2018_SJJySbbAZ", "rJ78fOT7z", "H1NffmKgz", "Syhxg_jgf", "BJhRfg8-z", "S138CtnWf", "iclr_2018_SJJySbbAZ", "rJ99FMSkf", "SJxehjl1G", "iclr_2018_SJJySbbAZ" ]
iclr_2018_SJA7xfb0b
Sobolev GAN
We propose a new Integral Probability Metric (IPM) between distributions: the Sobolev IPM. The Sobolev IPM compares the mean discrepancy of two distributions for functions (critic) restricted to a Sobolev ball defined with respect to a dominant measure mu. We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDF) of each coordinate on a leave one out basis. The Dominant measure mu plays a crucial role as it defines the support on which conditional CDFs are compared. Sobolev IPM can be seen as an extension of the one dimensional Von-Mises Cramer statistics to high dimensional distributions. We show how Sobolev IPM can be used to train Generative Adversarial Networks (GANs). We then exploit the intrinsic conditioning implied by Sobolev IPM in text generation. Finally we show that a variant of Sobolev GAN achieves competitive results in semi-supervised learning on CIFAR-10, thanks to the smoothness enforced on the critic by Sobolev GAN which relates to Laplacian regularization.
accepted-poster-papers
The paper provides a useful analysis of the role of gradient penalties and the performance of the proposed approach in semi-supervised cases.
val
[ "HyJ_7LFlG", "Hk4rkBRlz", "SJYrKxQ-f", "BJIPz57bG", "HJ5YP_p7f", "SkSg7XTmM", "rkeWfhZ7M", "rkANbHuzf", "HyWBqmufz", "H1stO7_zM", "H1pY73PGG", "HJrd-2wGz", "ByNaV2wGz", "rJsEWhDGM", "HJ4rCjPff", "B1e-piPfG", "SkWlhswff", "r1rajiwGG", "HJfPlp0WG", "r1Tvj8EZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "public", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "The paper deals with the increasingly popular GAN approach to constructing generative models. Following the first formulation of GANs in 2014, it was soon realized that the training dynamics was highly unstable, leading to significant difficulties in achieving stable results. The paper by Arjovsky et al (2017) provided a framework based on the Wasserstein distance, a distance measure between probability distributions belonging to the class of so-called Integral Probability Metrics (IPMs). This approach solved the stability issues of GANs and demonstrated improved empirical results. Several other works were then developed to deal with these stability issues, specifically the Fisher IPM. Both these methods relied on discriminating between distributions P and Q based on computing a function f, belonging to an appropriate function class {\\cal F}, that maximizes the deviation E_{x~P}f(x)-E_{x~Q}f(x). The main issue relates to the choice of the class {\\cal F}. For the Wasserstein distance this was the class of L_1 Lipschitz functions, while for the Fisher distance it was the class of square integrable functions. The present paper introduces a new notion of distance, where {\\cal F} is the defined through the Sobolev norm, based on the L_2 norm of the gradient of f(x), with respect to a measure \\mu(x), where the latter can be freely chosen under certain assumptions. \n\nThe authors prove a theorem related to the properties of the Sobolev norm, and express it in terms of the component-wise conditional distributions. Moreover, they show that the optimal critic f is obtained by solving a PDE subject to zero boundary conditions. They then use their suggested metric in order to develop a GAN algorithm, and present experimental results demonstrating its utility. The Sobolev IPM has two nice features. First, it is based on the component-wise conditional distribution of the CDFs, and, second, its relation to the Laplacian regularizer from manifold learning. Its 1D version also relates to the well-known von Mises Cramer statistics used in hypothesis testing. \n\nThe paper belongs to a class of recent papers attempting to suggest improvements to the original GAN algorithm, relying on the KL divergence. It is well conceived and articulated, and provides an interesting and potentially powerful new direction to improve GANs in practice. However, it is somewhat difficult to follow the paper, and would urge the authors to improve and augment their presentation of the following issues. \n1)\tOne often poses regularization schemes based on optimality criteria. Is there any optimality principle under which the Sobolev IPM is a desired choice? \n2)\tThe authors argue that their approach is especially well suited for discrete sequential data. This issue was not clear to me, and it would be good if the authors could expand on this issue and provide a clearer explanation. \n3)\tHow would the Sobolev norm behave under a change of coordinates or a homeomorphism of the space? Would it make sense to require some invariance in this respect? \n4)\tThe Lagrangian in eq. (9) contains both a Lagrange constraint on the Sobolev norm and a penalty term. Why are both needed? Why do the updates of \\lambda and p in Algorithm 1 used different schemes (SGD and ADAM, respectively). \n5)\tTable 2, p. 13 – it would be nice to see a comparison to the recently introduced gradient penalty approach, Gulrajani et al., Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017.\n6)\tThe integral defining F_p(x) on p. 
3 has x as an argument on the LHS and as an integrand of the RHS. Please correct this. Also specify that x=(x_1,\\ldots,x_d).\n", "Summary: The authors provide another type of GAN--the Sobolev GAN--which is the typical setup of a GAN but using a function class F for which f belongs to F iff \\grad f belongs to L^2(mu). They relate this MMD to the Cramer and Fisher distance and then produce a recipe for training GANs with this sort of function class. In their empirical examples, they show it has similar performance to the WGAN-GP.\n\nOverall, the paper has some interesting mathematical relationships to other MMDs. However, I finished reading the paper wondering why one would want to trust this GAN over any of the other GANs. I may have missed it, but I didn't see any compelling theoretical reason the gradients from this method would prove superior to many of the other GANs in existence today. The authors argue \"from [equation 5] we see that we are comparing CDFs, which are better behaved on discrete distributions,\" but I wasn't sure what exactly to make of this comment.\n\nNits:\n* The \"Stein metric\" is actually called the Stein discrepancy [see Gorham & Mackey (2015) Measuring Sample Quality using Stein's Method].", "The paper proposes a different gradient penalty for GAN critics.\nThe proposed penalty is forcing the expected squared norm of the gradient to be equal to 1.\nThe corresponding integral probability metric is well analysed.\n\nPros:\n- The paper provides a nice overview of WGAN-GP, Fisher GAN and Sobolev GAN.\nThe differences and similarities and mentioned.\n- The paper shows that Sobolev IPM is comparing coordinate-wise conditional CDFs.\n- The 1D example in Section 4.2 shows a limitation of Fisher GAN.\n\nCons:\n- The introduced gradient penalty is harder to optimize.\nAlgorithm 1 is using a biased estimate of the penalty.\nIf having independent samples in the minibatch,\nit would be possible to construct an unbiased estimate of the penalty.\n- An unbiased estimate of the gradient penalty\nwill be hard to construct when not having two independent real samples.\nE.g., when doing conditional modeling with a RNN.\n- The algorithm requires to train the critic well\nbefore using the critic.\nThe paper does not provide an improvement over WGAN-GP in this direction.\nMMD GAN and Cramer GAN may require less critic training steps.\n- The experimental results do not demonstrate an improvement over WGAN-GP.\n- Too much credit is given to implicit conditioning.\nThe Jensen Shanon divergence can be also written as a chain\nof coordinate-wise JS divergences. That does not guarantee non-zero gradients\nfrom the critic. A critic with non-zero gradients seems to be more important.\n\n\nMinor typos:\ns/pernalty/penalty/\ns/ccordinate/coordinate/", "This paper designs a new IPM(Integral Probability Metric) that uses the gradient properties of the test function. The advantage of Sobolev IPM over the Fisher IPM is illustrated by the insight given in Section 4.2. This is convincing. For comparing the true distribution and the generated distribution, it is much better to provide a quantitative measurement, rather than a 0-1 dichotomy. This target is implicitly achieved by the reproducing kernel methods and the original phi-divergence.\n\nOn the other side, the paper is hard to follow, and it contains many long sentences, some 4-5 lines. 
The formulation of the Sobolev norm could be improved at the top of page 6.", "Your experiments with curriculum conditioning can be interpreted also differently.\n1) The recurrent discriminator was helping only slightly.\nSo an explicit conditioning in the loss computation is not sufficient.\n\n2) The curriculum for the generator may be helping for a different reason.\nIf the discriminator is not able to perfectly recognize real and generated examples,\nthe generator gets a non-zero training signal, even if using Fisher GAN. ", "We thank the reviewer for their comment. We will consider rephrasing those sentences for clarity. The success of Fisher GAN with curriculum conditioning on text generation supports that conditioning is the missing ingredient (as we show in Figure 4 in the paper). The explicit curriculum conditioning parallels the implicit conditioning induced by the Sobolev IPM, where context modeling is induced by the metric.\nAnalysis using sequences of probability distributions, and investigating similar implicit conditioning for WGAN-GP would be interesting directions for future work.", "Thank you for your response and clarifications.\nThe analysis of Sobolev IPM would deserve to be published.\nI increased my rating to: \"6: Marginally above acceptance threshold\".\n\nThe paper would still benefit from rewording a few sentences.\nFor example, I do not find the following sentences helpful:\n\"Matching conditional dependencies between coordinates is crucial for sequence modeling.\"\n\"We validate that the conditioning implied by Sobolev GAN is crucial for the success and stability of GAN in text generation.\"\n\nYour illustrative example from Section 4.2 nicely shows that Fisher IPM does not provide a useful training signal\nwhen measuring the distance between two distributions with disjoint supports.\nI would not call this problem a lack of \"implicit conditioning\".\nThe Wasserstein GAN paper compared different divergences by looking at convergence of sequences of probability distributions. That seems to be a more general approach than requiring an implicit conditioning.\nIt is not clear what form of implicit conditioning is needed and which implicit conditioning is done by the Wasserstein distance.\n", "There are three meanings of convergence in the GAN context: \na- convergence of the min-max game to an equilibrium point in the parameter space of the generator and the critic. \nb- convergence of f_omega to sup_f in the inner loop - necessary to achieve equivalence to closed form probability metric.\nc- convergence of fake (generator) to real distribution \n\nWe are referring to type (b) convergence. What you are referring to as \"stability\" or \"convergence\" is type (a) convergence of the game to an equilibrium point (convergence of w and theta) to an equilibrium. Local convergence to an equilibrium is important for stability, otherwise we can have some oscillation and have no convergence of gradient descent to a stable saddle point of the cost function as explored in this paper https://arxiv.org/pdf/1711.00141.pdf. \n\nUnfortunately there is no formal guarantee that type (a) convergence implies type (c) convergence. Original Goodfellow paper, Arjovsky and bottou ICLR 2017, were concerned in type (c) convergence. Under some conditions of non vanishing gradient and some assumption on the generator one can show that type (b) convergence, ensure that we are minimizing the probability metric at hand . 
We think it is still an open question theoretically the interplay between types (a), (b) and (c) convergence and what are the implication on the inner loop optimization of the discriminator and on the architectures of the generator and discriminator.", " Just pass by and want to comment about your bullet point (2). In the original paper by Ian Goodfellow, they prove the theoretical convergence if the discriminator is maximized in the inner loop. Actually this is not needed. From the theoretical perspective, simultaneous gradient descent can guarantee convergence (under some conditions). Details can be found in the following two papers:\n\n1. Gradient descent GAN optimization is locally stable, NIPS 2017.\n\n2. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, NIPS 2017. ", "Thanks for your reply!", "We thank the reviewer for his encouraging and thoughtful comments and his suggestions that we already incorporated in the revision. We address below his main questions:\n\n1- It would be interesting to find a primal formulation that has sobolev IPM as a dual. Given the PDE that the critic of Sobolev satisfies, one way to find in which optimal way the Sobolev discrepancy is defined, is to attempt to write its \ndynamic form. While we don’t have yet a formal proof (we are working on it) we conjecture that sobolev IPM may relate to the following dynamic problem:\n\n inf_{f_t in W_2,0(q_t)} \\int _{0}^T \\nor{\\nabla_x f^t(x)} q_t(x) dx\n d(q_t) /dt = -div(q_t \\nabla_x f^t(x))\n q_0=Q , q_T=P\nNote that Wasserstein 2 distance has a similar form due to Benamou et al (\"A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem\" by Jean-David Benamou et al.) and has instead of \\nabla_x f , a function g \\in \\mathbb{R}^d in L_2(q_t). We think Sobolev IPM looks for “smooth flows” to move density masses. The effort for moving the mass is measured with the sobolev norm of critics and we wish to minimize this effort.\n\n2- When we wish to compare discrete sequences we face two problems: \n \na- The generator is continuous and the real data is discrete. Wasserstein distance is well known to address this issue of discrete/continuous matching. Divergences like KL or JS compare density functions and hence are not suited for this task. For sobolev IPM: by noising the discrete data (smoothing) we match two continuous distributions based on the coordinatewise conditional CDFs. Since the noising is annealed over training we will end up matching the discrete and continuous distributions. CDFs are better suited for the discrete nature of the problem as shown in Figure 1 in the revision of the paper. \n\nb - another problem is that we need for sequences a discrepancy between processes, rather than just distributions. For time processes, how the joint distributions factors in terms of conditioning is very important. Sobolev IPM gives a little advantage by comparing for each coordinate a unique quantity CDF(x_i| x^{-i})PDF(x^{-i}), since the Sobolev discrepancy captures some conditioning it has the ability of measuring coordinate dependencies, crucial for sequence/time process modeling.\n\nSee also our answer to Reviewer 1, part 3. We included Figure 1 in the paper revision to illustrate this.\n\n3- Behavior under coordinate change: \n\nDiscrepancy is unchanged but critic rotated: Assume we rotate P and Q and use mu=P+Q/2, the discrepancy won’t change but the critic gradients will be rotated. 
\n\nIncorporating invariance in mu: Since mu is free, we think a coordinate change might be a good place for incorporating desired invariances. for instance if we set d\\mu(gx) = \\frac{P( gx) + Q(g x) }{2} dg dx, g is an element in a lie group, our critic will be smooth and well defined on the support of an “augmented” distribution, which will probably benefit the semi-supervised learning with sobolev GAN. This is in the spirit of the recent work on invariances with GAN from Kumar et al NIPS 2017. \n\t\t \t \t \t\t\nKumar et al NIPS 2017. Semi supervised learning with GANs: Manifold invariance with improved inference.\n\t\t\t\t\n4- Augmented Lagrangian: We are using an augmented Lagrangian to enforce the equality constraint. Hence we have a lagrangian and a penalty term. Note that in Benamou paper an augmented lagrangian was also used. We used adam for optimizing the critic and the generator which is default practice, while the simple SGD for the lagrangian is following common practice in ADMM (convex optimization) where also our penalty coefficient is the learning rate for the lagrangian (see eg slide 9 of https://web.stanford.edu/~boyd/papers/pdf/admm_slides.pdf) \n\n5- The WGAN-GP paper did not provide SSL results; in our experiments we did not get WGAN-GP to work for SSL (same experimental setup). We now report those results in Table 2 page 15. \n\n6- This was fixed in the revision thanks for pointing it out.\n", "3- CDF for continuous/discrete matching: Note that when we want to generate text, we are facing a problem of a continuous generator and discrete real data. If we were to compare a continuous and a discrete distribution based on pdfs, this will fail (e.g. KL, JSD, etc). Wasserstein distance for instance in one dimension is known to be the comparison of inverse CDFs and this makes it possible to compare continuous and discrete distributions. For sobolev IPM: by noising the discrete data we match two continuous distributions based on the coordinatewise conditional CDFs. Since the noising is annealed over training we will end up matching the discrete and continuous distributions. We hypothesise that the implicit conditioning (coordinatewise conditional CDF matching) implied by the gradient regularizer allows end to end training of GAN text generation, since we are implicitly conditioning on the context for each coordinate. We added plots to illustrate this (Figure 1 in the revision).", "The main changes are: \n- added WGAN GP performance on SSL (Table 2 page 15)\n- moved Lemma 1 from appendix to the main paper for the approximation of the sobolev IPM in a hypothesis class (page 7) for showing that the approximation error is in the sobolev sense.\n- added Figure 1 to illustrate CDF versus PDF based matching ( Figure 1 page 8)", "We thank the reviewer for his comments and address their main concerns: (1) empirical performance and (2) compelling theoretical reasons for superior gradients. We uploaded a revision of the paper to incorporate the reviewer comments.\n\n- Empirical performance similar to WGAN-GP: agreed for text modeling, but not for SSL. (1) In text generation, our goal was to understand the success of WGAN-GP. The sobolev point of view gives us the insight that the gradient penalty is not only introducing stability to the training, but also providing implicit conditioning needed for sequence modeling. 
(2) In SSL, while WGAN-GP fails, we show how to use sobolev IPM in SSL and link it to laplacian regularization in manifold learning (Please see updated Table 2 page 15 where we added a comparison to WGAN-GP)\n\n- Gradients of sobolev critic : \n\n1- Convergence of critics in Sobolev Sense: One interesting property we show (Lemma 1 originally in appendix, we moved it now to the main paper) is about the approximation of the optimal critic f^*, when f is parameterized in a hypothesis class (i.e. neural network, f_H).\nMost GANs don’t have any guarantee.\n- Fisher GAN provides an approximation guarantee in Lebesgue sense, meaning norm(f^* - f_H)..\n- Sobolev GAN provides an approximation guarantee in Sobolev sense, meaning norm (grad f^* - grad f_H).\nHaving in mind that the gradient of the critic is the information that is passed on to the generator, we see that this convergence in Sobolev sense to the optimal critic is an important property for GAN training. We highlighted this in the revision of the paper .\n\n2- Meaningful Gradient Directions by fokker planck diffusion: We show that the optimal critic satisfies a PDE that relates to a deterministic fokker planck diffusion (https://en.wikipedia.org/wiki/Fokker–Planck_equation). Assume our goal is to move a density Q to a density P. we start by computing the sobolev critic between Q and P. If we have particles X_t whose initial distribution distributions Q_0= Q and we move those particles with the gradient of the critic (X_t=X_{t-1}+ epsilon \\nabla_x f^t_{q_t,P}(x) ). we do this process for T steps by recomputing the critic at each time between the particle distributions q_t and P. The relation of Sobolev IPM to fokker planck diffusion gives us the evolution of the density of the moving particles. At the end of the process we are guaranteed by fokker planck diffusion to converge to the distribution P. Note that this diffusion has two steps: compute the critic between q_t and P, and update the particles X_t. This is similar to the gradient descent applied to learning GANS: compute critic, and update particles, with the main difference that we are working here with densities, in GAN the generator distribution is degenerate and does not have a density.\n \nFor an illustration of how we transport Q to P using critic gradients, we provide these examples:\none is using sobolev IPM critic that has the diffusion property (https://www.dropbox.com/s/ymwmddvsdpj29lq/sobolev_near_mu_q_full_65.mp4?dl=0 ) and one with the MMD distance critic (https://www.dropbox.com/s/ehpmhfghh5wbo5n/mmd_near_65.mp4?dl=0) that does not have this diffusion property. In those videos we see how the pink particles (Q) move to have same density as the black particles (P). In each frame the particles move with the gradient of the critic. The critic is recomputed between 2 frames, between the new density q_t and P in black. The level sets of the new critic are shown in each frame.\nAfter 65 iterations with the same learning rate, we see that sobolev IPM succeeds at this task, while the MMD fails. One can see that the descent on sobolev discrepancy converges almost completely in just 65 iterations but the same is not true for MMD based descent method. Although, MMD based descent method takes almost 300 iterations to converge which can be seen in the video link given below : https://www.dropbox.com/s/shffwn3tffvb52r/mmd_near_full.mp4?dl=0 \n\n", "We thank the reviewer for his comments. 
We have uploaded a revision of the paper to incorporate the reviewer comments.\n\nFirst we stress that the goal of this paper is to better *understand* the gradient penalty and to know what it adds to the learning problem, rather than an emphasis on performance, although we show better performance than WGAN-GP in semi-supervised learning (we will expand on this later).\nWGAN-GP is the first paper to show text GAN working end to end, our goal was to explain what is behind this success.\n\nWe hope to answer the main concerns of the reviewer: \n\n1) Biased estimate of the penalty. Reviewer 4 thinks reusing the same samples between objective and constraint introduces bias, however the use of the same samples in the loss and the constraint is a well-studied problem and is not an issue theoretically and practically. See for example (Shivaswamy and Jebara JMLR 2010 http://www.jmlr.org/papers/volume11/shivaswamy10a/shivaswamy10a.pdf).\nThe Lagrangian formulation does not need another minibatch for an unbiased estimate, while the quadratic penalty theoretically needs one. But using some concentration inequalities for data dependent constraints as done in (Shivaswamy and Jebara, Section 4.6), one can show that this bias is very small and does not add any difficulty to the optimization. In Shivaswamy and Jebara, the term “landmarks” is used for the samples in the constraint. It is shown in this work that using the same samples for the loss and the constraints, introduces a small bias that vanishes with the number of samples. We will add a discussion in the paper.\nFurthermore we did not find empirically any difficulty in optimizing Sobolev GAN. (we added remark 1 under Algorithm 1 to refer to the small bias and to Shivaswamy and Jebara )\n\n\n2) From a theoretical perspective, all GAN formulations (including MMD GAN and Cramer GAN) require full maximization of the discriminator (critic) in the inner loop (Arjovsky and Bottou ICLR 2017). In practice, usually a small number of iterations n_c is used instead in the inner loop (disc maximization).\nAs stated in Appendix D, empirically we find that we can use n_c= 1 or 2 in Sobolev GAN.\n\n3) Performance: WGAN-GP has not shown any semi-supervised learning performance. In our experiments, WGAN-GP for semi supervised learning gives bad performance (see updated Table 2 page 15). We show in this paper how to make use of the sobolev norm to regularize SSL in IPM based SSL, and we show the connection to laplacian regularization in manifold learning.\n\n4A) Example in one D between 2 Diracs is not only a limitation of Fisher IPM , it is a limitation of any distance comparing PDFs as shown in Arjovsky et al. This is in line with the intuition of the reviewer on the importance of non zero gradients.\n\n4B) Implicit conditioning: the Reviewer may have missed the point of the implicit conditioning introduced by Sobolev IPM (coordinate-wise conditional CDF form, equation 5). Indeed the Bayes rule can be used to rewrite JS coordinate wise, nevertheless this conditioning acts on PDF (probability density functions): it compares PDF{x_i|x^{-i}} PDF(x^{-i}), which is equal for all coordinate to PDF(x_1,dots x_d). Note that the Sobolev IPM compares instead *unique quantities for each coordinate* CDF{x_i|x^{-i}} PDF(x^{-i}) (see Equation 5), making the model able to learn coordinate conditional distributions, in other words the ability to model context. 
The empirical evidence for the benefit of implicit conditioning is in Section 6.1 where we show when training Fisher GAN for text generation we need curriculum conditioning, while Sobolev GAN doesn’t. We highlight there the crucial role of conditioning in sequence learning, it is not only a vanishing gradient problem. The gradient penalty in Sobolev GAN and WGAN-GP imply this implicit conditioning that is crucial in end to end sequence/ text generation using GAN. \n", "We thank the reviewer for his encouraging and supportive comments. We have revised the paper and we improved the presentation of the sobolev IPM on top of page 6 and added additional theoretical results in this section, as well as more illustrative examples (Figure 1).", "Thank you for your interest and your questions!\n\n1 and 2 - Unfortunately there is no easy answer to your question “which GAN is better”. It will be an empirical question given the specific application at hand. For example it has been observed that sample quality and semi-supervised performance may negatively impact each other (Bad GAN: Dai et al. NIPS 2017).\n\n1- There are two big families of discrepancies: f-divergences and IPMs (Integral probability metrics). They are all valid for GAN training. For using them in GAN training how the critic is regularized impacts both the metric being computed and the stability of GAN training. For instance weight clipping performances poorly, while gradient penalty performs quite well. Variance control as in fisher gan performs well also in the continuous case. Spectral normalization introduced recently seems also to perform well. Our understanding until now is the main advantage of gradient penalty is 1) numerical/stability: a better control of the gradient of the critic that is passed on by backpropagation to the generator. 2) theoretical : introduces some factoring/ implicit conditioning crucial for sequence (for e.g text) modeling.\n\n2- It is possible to combine regularizers under the IPM objective, which we showed in this paper (variances and gradients i.e fisher and sobolev) to achieve good performance in SSL , without the need of any batch or layer normalization. Combining losses is a plausible future direction.\n\n3. We are comparing fisher to sobolev in the discrete case, the intuition being that when we have disjoint supports Fisher may not provide good gradients for the generator. We are not claiming that Wasserstein would not work, rather we are saying that to achieve the exact Wasserstein distance, the Lipschitz constraint is not easy to enforce, while for Sobolev IPM the constraint is computationally tractable. \n", "Thanks for your interest and your comment! Indeed we tried to summarize in TABLE 1 the recent discrepancies used in the GAN literature to put in context our contribution on Sobolev IPM. We did not put the primal formulation of wasserstein under closed form, meaning a formula that can be readily computed given the densities of the two distributions: The primal still needs to be solved using entropic regularization or a linear program (LP) to find the (regularized) wasserstein distance. We can probably add the primal and state that the regularized wasserstein can be solved using entropic regularization with the sinkhorn algorithm (Cuturi et al. 2013), there was some recent GANs using the primal formulation and automatic differentiation through the sinkhorn algorithm, we will add those to the paper. We updated Table 1 to reflect GANs using the primal formulation of optimal transport. 
", "Thanks to the authors for the paper, I really appreciate this attempt to unify ideas that have been around lastly (at least this is my interpretation after a not-in-depth look at the paper). \nMy question is about Table 1: It is a great table, but I wonder why there is a NA in the closed form expression for the Wasserstein IPM: why dont use what is known from Kantorovich duality? (i.e inf_{Z=(X,Y) in U(P,Q) E_Z(|X-Y|)) where U(P,Q) is the set of joints Z that are consistent with the marginals P,Q ?)\n\nThanks", " Dear authors,\n After I read this paper, I have a couple questions:\n 1. There are so many different divergent metrics for distributions, and each of them corresponds to a specific GAN. What is the advantage of one over another? Is there one that has a dominating performance?\n 2. Instead of evaluating each variant of GAN one by one (we already have 7 variants as on the list, and I can foresee more by changing the divergence metrics, say Wasserstein-p distance can result in a new GAN ), do we have a general framework that incorporates all these variants of GANs? For example, the framework has some parameters that we can tune to change the form of GANs. It is very useful in practice, because these parameters together with the network parameters can serve as hyperparameters in the performance tuning. \n3. It points out in Section 4.2 that the CDF comparison is more suitable than PDF for comparing distributions on discrete spaces, because W(P,Q) = |a1-a2| and F(P,Q) = 2. May I know the logic why F(P,Q) = 2 is better than W(P,Q) = |a1-a2|? Is there a toy example that shows Wasserstein GAN does not work, while Sobolev GAN works?\n\nThanks!" ]
[ 8, 6, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJA7xfb0b", "iclr_2018_SJA7xfb0b", "iclr_2018_SJA7xfb0b", "iclr_2018_SJA7xfb0b", "SkSg7XTmM", "rkeWfhZ7M", "HJ4rCjPff", "HyWBqmufz", "HJ4rCjPff", "SkWlhswff", "HyJ_7LFlG", "rJsEWhDGM", "iclr_2018_SJA7xfb0b", "Hk4rkBRlz", "SJYrKxQ-f", "BJIPz57bG", "r1Tvj8EZM", "HJfPlp0WG", "iclr_2018_SJA7xfb0b", "iclr_2018_SJA7xfb0b" ]
iclr_2018_H1sUHgb0Z
Learning From Noisy Singly-labeled Data
Supervised learning depends on annotated examples, which are taken to be the ground truth. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data. The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, even with only one annotation per example, our algorithm can estimate worker quality. We establish a generalization error bound for models learned with our algorithm and show theoretically that it is better to label many examples once (rather than fewer examples multiple times) when worker quality exceeds a threshold. Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.
accepted-poster-papers
This paper provides an important discussion of the relationship between training efficiency and label redundancy. The updates to the paper will improve it further. Reviewers found the paper interesting and well written, and agreed that it addresses an important problem.
train
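The abstract above describes an alternating scheme: estimate worker quality from disagreement with the current model, then retrain with a loss weighted by the posterior over true labels, and the author responses further below note that these weights are posterior probabilities of the true label given the noisy labels and the estimated noise parameters. The Python sketch below walks through one such round in a deliberately simplified setting (binary labels, a single symmetric accuracy per worker, one noisy label per example, and a stand-in for the current classifier); all names and numbers are demo assumptions, and this is not the authors' MBEM implementation.

```python
# Sketch of one round of the alternating scheme described in the abstract -- not the
# paper's MBEM implementation. Assumptions: binary labels, symmetric worker noise
# (a single accuracy per worker), one noisy label per example, and a stand-in "model"
# given by its current predicted probabilities.
import numpy as np

rng = np.random.default_rng(0)
n, n_workers = 10000, 20
true_y = rng.integers(0, 2, size=n)                       # unknown in practice
worker_of = rng.integers(0, n_workers, size=n)            # which worker labeled each example
worker_acc_true = rng.uniform(0.6, 0.95, size=n_workers)  # hidden worker qualities
flip = rng.random(n) > worker_acc_true[worker_of]
noisy_y = np.where(flip, 1 - true_y, true_y)              # the single observed label

# Stand-in for the current classifier: a noisy estimate of P(y=1 | x).
model_p1 = np.clip(true_y + rng.normal(0, 0.35, size=n), 0.01, 0.99)
model_pred = (model_p1 > 0.5).astype(int)

# Step 1: estimate each worker's accuracy from agreement with the model's predictions.
est_acc = np.array([
    (noisy_y[worker_of == w] == model_pred[worker_of == w]).mean()
    for w in range(n_workers)
])

# Step 2: posterior over the true label given the worker's label and its estimated
# accuracy; these soft targets would weight the next round's training loss.
a = est_acc[worker_of]
lik_y1 = np.where(noisy_y == 1, a, 1 - a)      # P(observed label | true y = 1)
lik_y0 = np.where(noisy_y == 0, a, 1 - a)      # P(observed label | true y = 0)
posterior_y1 = lik_y1 * 0.5 / (lik_y1 * 0.5 + lik_y0 * 0.5)   # uniform prior over y

print("corr(est_acc, true_acc):", np.corrcoef(est_acc, worker_acc_true)[0, 1].round(3))
print("example posteriors P(y=1):", posterior_y1[:5].round(3))
```

In a full round, these posteriors would serve as soft targets (or weights) in the classifier's loss, and the retrained model would in turn refine the worker-quality estimates in the next round.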
[ "BJhUuZDgf", "rJRdVJPNf", "SJpS_JYgz", "ry8emW9gf", "r18iu8g4M", "BJnLnZyEG", "S17KhF2Xz", "H1IVZo4XG", "Bk0D6FNQM", "rJkQat4mM", "BJ0EnF4mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "author" ]
[ "This paper proposes a method for learning from noisy labels, particularly focusing on the case when data isn't redundantly labeled (i.e. the same sample isn't labeled by multiple non-expert annotators). The authors provide both theoretical and experimental validation of their idea. \n\nPros:\n+ The paper is generally very clearly written. The motivation, notation, and method are clear.\n+ Plentiful experiments against relevant baselines are included, validating both the no-redundancy and plentiful redundancy cases. \n+ The approach is a novel twist on an existing method for learning from noisy data. \n\nCons: \n- All experiments use simulated workers; this is probably common but still not very convincing.\n- The authors missed an important related work which studies the same problem and comes up with a similar conclusion: Lin, Mausam, and Weld. \"To re (label), or not to re (label).\" HCOMP 2014.\n- The authors should have compared their approach to the \"base\" approach of Natarajan et al. \n- It seems too simplistic too assume all workers are either hammers or spammers; the interesting cases are when annotators are neither of these.\n- The ResNet used for each experiment is different, and there is no explanation of the choice of architecture.\n\nQuestions: \n- How would the model need to change to account for example difficulty? \n- Why are Joulin 2016, Krause 2016 not relevant?\n- Best to clarify what the weights in the weighted sum of Natarajan are. \n- \"large training error on wrongly labeled examples\" -- how do we know they are wrongly labeled, i.e. do we have a ground truth available apart from the crowdsourced labels? Where does this ground truth come from?\n- Not clear what \"Ensure\" means in the algorithm description.\n- In Sec. 4.4, why is it important that the samples are fresh?\n", "I read the rebuttal and I am leaning positive. I am going to update my score.", "This paper focuses on the learning-from-crowds problem when there is only one (or very few) noisy label per item. The main framework is based on the Dawid-Skene model. By jointly update the classifier weights and the confusion matrices of workers, the predictions of the classifier can help on the estimation problem with rare crowdsourced labels. The paper discusses the influence of the label redundancy both theoretically and empirically. Results show that with a fixed budget, it’s better to label many examples once rather than fewer examples multiple times.\n\nThe model and algorithm in this paper are simple and straightforward. However, I like the motivation of this paper and the discussion about the relationship between training efficiency and label redundancy. The problem of label aggregation with low redundancy is common in practice but hardly be formally analyzed and discussed. The conclusion that labeling more examples once is better can inspire other researchers to find more efficient ways to improve crowdsourcing.\n\nAbout the technique details, this paper is clearly written, but some experimental comparisons and claims are not very convincing. Here I list some of my questions:\n+About the MBEM algorithm, it’s better to make clear the difference between MBEM and a standard EM. Will it always converge? What’s its objective?\n+The setting of Theorem 4.1 seems too simple. 
Can the results be extended to more general settings, such as when workers are not identical?\n+When n = O(m log m), the result that \\epslon_1 is constant is counterintuitive, people usually think large redundancy r can bring benefits on estimation, can you explain more on this?\n+During CIFAR-10 experiments when r=1, each example only have one label. For the baselines weighted-MV and weighted-EM, they can only be directly trained using the same noisy labels. So can you explain why their performance is slightly different in most settings? Is it due to the randomly chosen procedure of the noisy labels?\n+For ImageNet and MS-COCO experiments with a fixed budget, you reduced the training set when increasing the redundancy, which is unfair. The reduction of performance could mainly cause by seeing fewer raw images, but not the labels. It’s better to train some semi-supervised model to make the settings more comparable.\n", "The authors proposed a supervised learning algorithm for modeling label and worker quality. Further utilize it to study one of the important problems in crowdsourcing - How much redundancy is required in crowdsourcing and whether low redundancy with abundant noise examples lead to better labels.\n\nOverall the paper was well written. The motivation of the work is clearly explained and supported with relevant related work. The main contribution of the paper is in the bootstrapping algorithm which models the worker quality and labels in an iterative fashion. Though limited to binary classification, the paper proposed a theoretical framework extending the existing work on VC dimension to compute the upper bound on the risk. The authors also showed theoretically and empirically on synthetic data sets that the low redundancy and larger set of labels in crowdsourcing gives better results. \n\nMore detailed comments\n1. Instead of considering multi-class classification as one-vs-all binary classification, can you extend the theoretical guarantee on the risk to multi-class set up like Softmax which is widely used in research nowadays.\n2. Can you introduce the Risk -R in the paper before using it in Theorem 4.1\n3. Is there any limit on how many examples each worker has to label? Can you comment more on how to pick that value in real-world settings? Just saying sufficiently many (Section 4.2) is not sufficient.\n4. Under the experiments, different variations of Majority Vote, EM and Oracle correction were used as baselines. Can you cite the references and also add some existing state-of-the-art techniques mentioned in the related work section.\n5. For the experiments on synthetic datasets, workers are randomly sampled with replacements. Were the scores reported based on average of multiple runs. If yes, can you please report the error bars.\n6. For the MS-COCO, examples can you provide more detailed results as shown for synthetic datasets? Majority vote is a very weak baseline. \n\nFor the novel approach and the theoretical backing, I consider the paper to be a good one. The paper has scope for improvement.\n\n ", "Thanks for the comment, and for pointing to relevant works in the literature. While we are familiar with and value these papers, there are substantial differences between their work and ours. Below, we describe the differences between our work and the papers that you mentioned: \n\nData Programming: Creating Large Training Sets, Quickly (NIPS 2016): \nWe agree that this is a relevant prior-work and we will add it to our related works section. 
Please note that there are two critical differences between their algorithm and ours: \n\n(a) They propose to minimize the expected loss for each training example and each noisy label (after estimating the noise parameters). In contrast, we propose to minimize a weighted loss function for each training example. Our weights are the posterior probabilities of the true label given all the redundant noisy labels and the estimated noise parameters. If there is only one label per example then the two loss functions are same. However, for more than one label per example, the two loss functions are significantly different. Up to this difference, their work when applied in our setting reduces to one of our baseline algorithms, weighted-EM.\n\n(b) A major gain of our algorithm is in its iterative approach where we use model predictions to refine the estimation of noise parameters and learn a better model iteratively. This approach allows us to learn noise parameters (confusion matrices of workers) even when we collect only one label per example. There is no such iterative approach proposed in their work. \n\nData fusion algorithms developed by database community:\nThese algorithms are relevant to standard crowdsourcing algorithms which do not use features of the tasks to train a supervised learning algorithm. Their end goal is to establish the true label of the tasks given multiple noisy labels (by assessing noise rate of the different noisy labelers). On the other hand, in our paper and in the Data programming paper, the end goal is to train a classifier given the noisy labels. \n", "How does this paper compare to recent work by Ratner et al. (see Data programming at NIPS 2016)? Also how does this work compare to all the data fusion work in the database community? Please see https://arxiv.org/abs/1505.02463 and https://dl.acm.org/citation.cfm?id=3035951.\n\nWhile the authors might not be aware of the work done by the database community on data fusion algorithms they should have compared against the work on data programming and weak supervision https://hazyresearch.github.io/snorkel/\n\n", "We have added the results of the EM algorithm as an additional baseline to the MSCOCO experiments.", "Replies to each point follow:\n\n1. Re “All experiments use simulated workers; this is probably common but still not very convincing.”\n\nPlease note that in experiments on MSCOCO, we procured the real noisy labels from the raw data. See in abstract: “Experiments conducted on … and MSCOCO (using the real crowdsourced labels)...”\n\n2. Re “The authors missed an important related work which studies the same problem and comes up with a similar conclusion: Lin, Mausam, and Weld. \"To re (label), or not to re (label).\" HCOMP 2014.”\n\nWe agree that this is one of the most relevant works. Note that we cited this work along with their 2016 paper along similar lines “Re-active learning: Active learning with relabeling.”. Also, note that unlike ours, their work does not use predictions of the supervised learning algorithm to estimate the true labels.\n\n3. Re “The authors should have compared their approach to the \"base\" approach of Natarajan et al.”\n\nTheir approach is designed for the binary classification setting when all the workers are identical. We study the multi-class classification setting where workers have varying qualities.\n\n4. 
Re “It seems too simplistic to assume all workers are either hammers or spammers; the interesting cases are when annotators are neither of these.”\n\nWe agree and point out (1) that we considered two other worker models. In synthetic dataset, e.g. we consider class-wise hammer spammer, where each worker is hammer for some of the classes and spammer for the other classes. (2) We report experiments on MSCOCO with labels collected by real workers. \n\n5. Re “The ResNet used for each experiment is different, and there is no explanation of the choice of architecture.”\n\nFor simulated worker experiments on CIFAR10 and ImageNet, we used the fewest possible layers. These choices are dictated by the ResNet implementation that we used “https://github.com/tornadomeet/ResNet/”. Smaller ResNet architectures save training time, enabling us to perform experiments on more baseline algorithms, worker noise models, and levels of redundancy. \n\nFor MSCOCO experiments, we used a 98-layer ResNet because this is a relatively small dataset. Also, we did not have various experiments to run for different worker noise models here.\n\n6. Re “How would the model need to change to account for example difficulty? ”\n\nWhen we include example difficulty in the model, there are three sets of latent parameters to be estimated: worker qualities, example difficulties and the true labels. A standard approach to learn these parameters is to use alternating maximum likelihood estimation where we initialize the two sets of parameters and estimate the third one and iterate over. In our algorithm, we would need to estimate example difficulties by maximizing the likelihood of the observed data given the intermediate estimate of worker qualities and the labeling function. \n\n7. Re “Why are Joulin 2016, Krause 2016 not relevant?”\n\nTwo important differences between these works & our setting: a) they have only one label per example - no redundancy. b) they do not aim to estimate worker qualities. \n\n8. Re “Best to clarify what the weights in the weighted sum of Natarajan are.”\n\nUpdate: We have provided the weights in Natarajan et al in the revised draft in the first paragraph of Section 4.1- “Learning with noisy labels”. \n\n9. Re “\"large training error on wrongly labeled examples\" -- how do we know they are wrongly labeled...?”\nYou are correct that we do not know which examples are wrongly labeled and we do not have ground truth available apart from the crowdsourced labels. We would humbly point out that the statement \"large training error on wrongly labeled examples\" is not a part of our algorithm. The purpose of the statement is to justify why comparing worker responses to the model prediction would give a good estimation of the worker qualities. It is further elaborated in the text below the line \"large training error on wrongly labeled examples\".\n\n10. Re “Not clear what \"Ensure\" means in the algorithm description.”\n\nIn the algorithmic package, “Input” and “Output” are expressed with “Require” and “Ensure” respectively. So “Ensure” just means the output of the algorithm. We agree that “Input” and “Output” are clearer and modified the latest version to use these terms.\n\n11. Re “In Sec. 4.4, why is it important that the samples are fresh?”\n\nAs we mention in the paper, fresh samples are required for the analysis to hold. It allows the estimated worker qualities and the predictor function learned in each step to be independent of each other which is required for the Theorem 4.1 to hold. 
We point out that practically, fresh samples are not required for the algorithm to succeed, and in our implementation we do not use fresh samples in each round.\n", "Thanks for the clear review and actionable recommendations. We have modified the draft per your feedback and reply to each point below:\n1. Re: “What’s [MBEM’s] objective?”: Thanks for spotting this oversight. The objective for MBEM is the maximum likelihood estimation of latent parameters under the Dawid-Skene model, where the true labels are replaced by the model predictions. We have added this to the revised draft in Section 4, Algorithm. Yes, the MBEM will converge under mild conditions when the worker quality is above a threshold and number of training examples is sufficiently large.\n2. Re: “Can the results be extended to more general settings, such as when workers are not identical?\nPlease note that the Theorem 4.1 includes the scenario when the workers are not identical. The two critical quantities $\\alpha$ and $\\beta$ that capture the average worker quality in the Theorem are defined for a general setting when the workers are not identical in the appendix. For simplicity and to illustrate the main idea of the theorem in the main paper we have defined them for the particular setting when all the workers are identical.\n\n3. Re: “When n = O(m log m), the result that \\epslon_1 is constant is counterintuitive, people usually think large redundancy r can bring benefits on estimation, can you explain more on this?”\nThe expression O(m log m) hides redundancy r as a constant. In the revised draft, we have modified the statement to “when n = O((m log m)/r) the epsilon_1 is sufficiently small.” That is if the redundancy r is large the number of training examples n required for achieving epsilon_1 to be a small constant decreases.\n\n4. Re “During CIFAR-10 experiments when r=1, each example only have one label. For the baselines weighted-MV and weighted-EM, they can only be directly trained using the same noisy labels. So can you explain why their performance is slightly different in most settings? Is it due to the randomly chosen procedure of the noisy labels? ” \n\nYes, you are correct. When r =1, the baselines weighted-MV and weighted-EM can only be trained using the same noisy labels. Please note that R=1 is only in the left-most figures for CIFAR10 experiments for the two settings of hammer-spammer and class-wise hammer-spammer, respectively. In these plots, the lines for weighted-MV and weighted-EM are nearly identical. The negligible differences owe only to random worker assignment and random initialization of parameters. In rest of the four figures of CIFAR10 experiments, the redundancy r varies along the x-axis.\n\n5. Re “For ImageNet and MS-COCO experiments with a fixed budget, you reduced the training set when increasing the redundancy, which is unfair. The reduction of performance could mainly cause by seeing fewer raw images, but not the labels. It’s better to train some semi-supervised model to make the settings more comparable.”\n\nWe agree that in principle, the strongest baseline to prove our point that labeling once is optimal would allow the redundant labelers to make used of the unlabelled data in a semi-supervised fashion. We note that this does not directly fall out of our theory, which addresses the supervised case (see Theorem 4.1) and thus may be beyond the scope of this paper. 
We also note that many current semi-supervised algorithms, such as Ladder Networks, show most significant improvements when the ratio of unlabeled to labeled data is quite large, and that it is not clear how advantageous current semi-supervised algorithms would be at a redundancy level of say 3. While answering these questions conclusively is a non-trivial task and left for future work, we think that this is a great point and plan to investigate in the future how the utility of unlabeled data for semi-supervised learning may complicate the picture. \n", "Thanks for the thoughtful review and clearly enumerated critical points. We reply to each below:\n\n1. We agree that it would be desirable to extend the theoretical guarantees to the multiclass-classification setting with cross-entropy loss and we plan to explore this question in future work. However, this extension is non-trivial under the current framework. Equation 22 in Lemma A.2 does not apply for cross-entropy loss and it is not obvious how to complete the guarantees without this result. \n\n2. In the initial draft, we introduced the Risk -R in the problem formulation section. We’re grateful for the feedback that this was not obvious when you arrived at theorem 4.1 and we have modified the draft to remind the reader at this point.\n\n3. Equation 7 in Theorem 4.1 states the condition on how many examples each worker has to label for the algorithm to succeed in estimating worker qualities. In particular, given m workers the algorithm needs to estimate O(m) latent parameters of their confusion matrices. From standard statistical analysis as reflected in Equation 7, we need O(m log m) independent observations to estimate O(m) parameters. Therefore, if we have n training examples, and redundancy is r then the total number of observations nr should satisfy: nr > m log m. Hence, each worker has to label O(log m) examples for the algorithm to succeed. \n\n4. We have compared our algorithm MBEM with four different algorithms and two oracle-based algorithms. Majority vote is a standard algorithm and Expectation Maximization (EM) is based on the classical Dawid Skene (1979) work, we have included reference to it in the revised draft. Weighted MV and weighted EM use a weighted loss function that is newly proposed in this work. The purpose of including these algorithms is to establish efficacy of weighted loss function over the standard loss function for noisy labels. Note that MBEM uses the weighted loss function in addition to the bootstrapping idea to estimate worker qualities.\n\nWe appreciate the request for a comparison against state-of-the-art techniques mentioned in the related work section. We are presently implementing the method from “Lean Crowdsourcing” (Branson, Van Horn, Petrona 2017) as an additional baseline method and will add the results to the experiments section as soon as they are available (http://openaccess.thecvf.com/content_cvpr_2017/html/Branson_Lean_Crowdsourcing_Combining_CVPR_2017_paper.html).\n\n5. Per your suggestions, in the new (current) draft we added the error bars for CIFAR10. In the initial draft, we were reporting averages across multiple runs for CIFAR10 and MSCOCO. For ImageNet, the experiments are too expensive, so we only execute one run. \n\n6. We will add the results of the EM algorithm and weighted EM algorithm for MSCOCO experiment. We are also working presently on adding the method due to Branson et al. 
as a baseline to the MSCOCO experiment and will post results when available.\n", "We would like to thank all of the reviewers for providing us with three clear and thoughtful reviews. We were encouraged both to see that the reviews were generally positive and that the recommendations were clear and actionable. We have acted on many of these recommendations and the current draft has been improved significantly. For example, in Section 4, the articulation of the objective for the MBEM algorithm is made explicit. Additionally we now report error bars for our CIFAR experiments. Additional experiments are underway and we will post these improvements as they become available. Please find specific replies to each review in the respective threads. " ]
[ 7, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1sUHgb0Z", "H1IVZo4XG", "iclr_2018_H1sUHgb0Z", "iclr_2018_H1sUHgb0Z", "BJnLnZyEG", "iclr_2018_H1sUHgb0Z", "BJ0EnF4mf", "BJhUuZDgf", "SJpS_JYgz", "ry8emW9gf", "iclr_2018_H1sUHgb0Z" ]
iclr_2018_H1Y8hhg0b
Learning Sparse Neural Networks through L_0 Regularization
We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the \emph{hard concrete} distribution for the gates, which is obtained by ``stretching'' a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.
accepted-poster-papers
The results in the paper are interesting, and the modifications improve the paper further. Reviewers found the paper interesting and potentially applicable to many models.
train
[ "rJUkD7vgf", "BJF5RpKgG", "S1tmowoeG", "rkDib-FQf", "ryTQxbK7f", "ryTrTgK7z", "rkNx5xYXG", "rJeSs7YZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical regularized minimization. The proposed continuous relaxation scheme allows for gradient based-stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter taking values of exact zeros and ones, with additional stretching and thresholding operations. Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components. The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods. \n\nPros: \n\n- The paper is clearly written, self-contained and a pleasure to read. \n- Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization\n\nCons: \n\n- It would be interesting to see how the induced penalty behaves in terms shrinkage comparing against ell_0 and other ell_p choices \n- It is unclear what properties does the proposed hard-concrete distribution have, e.g., closed-form density, convexity, etc. \n- If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant\n", "Learning sparse neural networks through L0 regularisation\n\nSummary: \n\nThe authors introduce a gradient-based approach to minimise an objective function with an L0 sparse penalty. The problem is relaxed onto a continuous optimisation by changing an expectation over discrete variables (representing whether a variable is present or not) to an expectation over continuous variables, inspired by earlier work from Maddison et al (ICLR 2017) where a similar transformation was used to learn over discrete variable prediction tasks with neural networks. Here the application is to learn sparse feedforward networks in standard classification tasks, although the framework described is quite general and could be used to impose L0 sparsity to any objective function in principal. The method provides equivalent accuracy and sparsity to published state-of-the-art results on these datasets but it is argue that learning sparsity during the training process will lead to significant speed-ups - this is demonstrated by comparing to a theoretical benchmark (standard training with dropout) rather than through empirical testing against other implementations. \n\nPros:\n\nThe paper is well written and the derivation of the method is easy to follow with a good explanation of the underlying theory. \n\nOptimisation under L0 regularisation is a difficult and generally important topic and certainly has advantages over other sparse inference objective functions that impose shrinkage on non-sparse parameters. \n\nThe work is put in context and related to some previous relaxation approaches to sparsity. \n\nThe method allows for sparsity to be learned during training rather than after training (as in standard dropout approaches) and this allows the algorithm to obtain significant per-iteration speed-ups, which improves through training. 
\n\nCons:\n\nThe method is applied to standard neural network architectures and performance in terms of accuracy and final achieved sparsity is comparable to the state-of-the-art methods. Therefore the main advance is in terms of learning speed to obtain this similar performance. However, the learning speed-up is presented against a theoretical FLOPs estimate per iteration for a similar network with dropout. It would be useful to know whether the number of iterations to achieve a particular performance is equivalent for all the different architectures considered, e.g. does the proposed sparse learning method converge at the same rate as the others? I felt a more thorough experimental section would have greatly improved the work, focussing on this learning speed aspect. \n\nIt was unclear how much tuning of the lambda hyper-parameter, which tunes the sparsity, would be required in a practical application since tuning this parameter would increase computation time. It might be useful to provide a full Bayesian treatment so that the optimal sparsity can be chosen through hyper-parameter learning. \n\nMinor point: it wasn’t completely clear to me why the fact (3) is a variational approximation to a spike-and-slab is important (Appendix). I don’t see why the spike-and-slab is any more fundamental than the L0 norm prior in (2), it is just more convenient in Bayesian inference because it is an iid prior and potentially allows an informative prior over each parameter. In the context here this didn’t seem a particularly relevant addition to the paper. \n\n", "The paper introduces a technique for optimizing an L0 penalty on the weights of a neural network. The basic problem is empirical risk minimization with a incremental penalty for each non zero weight. To tackle this problem, this paper proposes an expected surrogate loss that is then relaxed using a method related to recently introduced relaxations of discrete random variables. The authors note that this loss can also be seen as a specific variational bound of a Bayesian model over the weights. The key advantage of this method is that it gives a training time technique for sparsifying neural network computation, leading to potential wins in computation time during training. \n\nThe results presented in the paper are convincing. They achieve results competitive with previous methods, with the additional advantage that their sparse models are available during training time. They show order of magnitude reductions in computation time for small models, and more modest constant improvements for large models. The hard concrete distribution is a small but nice contribution on its own.\n\nMy only concern is the lack of discussion on the relationship between this method and Concrete Dropout (https://arxiv.org/abs/1705.07832). Although the focus is apparently different, these methods are clearly closely related. A discussion of this relationship seems really important.\n\nSpecific comments/questions:\n- The reduction of computation time is the key advantage, and it would have been nice to see a more thorough investigation of this. For example, it would have been interesting to see whether this method would work with structured L0 penalties that removed entire units (as opposed to single weights) or other subsets of the computation. This would give a stronger sense of the kind of wins that are possible in this framework.\n- Hard concrete is a nice contribution, but there are clearly many possibilities for these relaxations. 
Extra evaluations of different relaxations would be appreciated. At the very least a comparison to concrete would be nice.\n- In equation 2, the equality of the L0 norm with the sum of z assumes that tilde{theta} is not 0.", "We would first like to thank you for taking the time to review our submission; we will now address your comments:\n\n- We agree that comparing other L_p choices with the L_0 norm is beneficial; we would like to point out that the GL (Group Lasso) baseline method for the LeNet5-Caffe experiment employs L_1 regularization for pruning neurons and convolutional filters so we believe that it can serve as a way to measure differences between the L_0 norm and the most popular L_p alternative.\n\n- Due to lack of space we provided a bit of more information about the hard concrete distribution at the appendix; it has a closed-form density that involves the CDF and PDF of the concrete distribution. \n\n- The stretching parameters were initially chosen heuristically and kept fixed for all of the experiments. The heuristic was to approximately aim for clipping to zero if the value of the random variable is less than 0.1 or rounding to 1 if the value of the random variable is larger than 0.9. It should be noted that their choice is not particularly important due to their interplay with the temperature parameter of the concrete distribution; they collectively determine the probabilities of the endpoints {0, 1}, i.e. p(z=0) and p(z=1). As a result we believe that the choice of the stretching parameters is not very important, given the fact that the temperature of the concrete distribution is tuned appropriately.\n", "We would first like to thank you for the thorough and extensive review. Regarding whether the method converges in a similar way to standard networks; indeed this is the case. On the CIFAR task with WRNs the L0 regularized networks had similar learning curves with the dropout equivalent networks. We have updated the paper with an example plot on CIFAR 10.\n\nRegarding the lambda hyperparameter; this is true. Empirically, we didn’t have to tune this parameter a lot and considered a small set of values. Treating this parameter in a Bayesian way would indeed be a fruitful direction for future research.\n\nAs for the spike-and-slab connection; we agree that it is a minor point (hence it is in the appendix), but we still believe that it is a relevant addition to the paper. It provides an interpretation to the L0 objective that also allows for the incorporation of prior knowledge about the behaviour of the sparity in the form of a prior over the gates. This could then potentially allow for better regularization of the gating mechanism.", "We would first like to thank you for the constructive review; we revised the paper and now it contains a discussion of concrete Dropout. The main difference is that concrete dropout does not allow for values of exact zero (and one) thus precluding the benefits of sparsity during training time. One potential way to employ concrete Dropout in this case would be to use it as a biased surrogate for the optimization of eq. 3; this could still allow for potential sparsity at test time by pruning according to thresholds, but nevertheless would require evaluating the full original model during training. As for your other comments:\n\n- Perhaps it's not very prominent but all of our results employ structured penalties, i.e. we are removing either entire convolutional feature maps or entire hidden units. 
\n\n- For the reasons we previously mentioned, we believe that comparing with concrete dropout will not provide much extra information as the sparsity could only be achieved at test time and not during training (which was one of the main objectives of this work). An alternative that maintains the sparsity during training time, and we experimented with in a pilot study, was a smoothing mechanism that involved the hard-sigmoid of a Gaussian r.v.. This turned out to be worse than the hard concrete procedure, and we attribute this to the unimodality of the underlying Gaussian distribution (which cannot accurately capture the behaviour of a Bernoulli r.v.). We mentioned a couple of sentences about this in the related work section. We also included a comparison against “Generalized Dropout” (GD) that utilizes the straight-through estimator for the same LeNet-5 task we considered in the experiments; this can serve as a comparison against the proposed hard concrete smoothing procedure.\n\n- This is indeed true and we have updated the text accordingly.", "We would like to thank you for taking an interest in our work and pointing out yours, we were not aware of it. After reading the relevant papers we agree that there indeed are some similarities (but also a large amount of differences) and we have updated our submission accordingly. More specifically, we believe that the similarities between our works end at eq. 3 which provides the expected L0 regularized objective under a Bernoulli gating mechanism; your paper 1 proceeded in optimizing that objective with the biased straight through estimator for the gradients of the discrete gates. This is also what we mentioned as an alternative at the paragraph underneath eq. 3. Notice that this objective is not differentiable w.r.t. the parameters of the gates as you have to take the gradient of the Heaviside function. Our main contribution is to show how we can smooth the L0 regularized objective in a way that can make it differentiable, and thus allow for efficient gradient based optimization, without needing extra terms to make learning stable. 
The hard concrete distribution was then one potential instance of that framework, but certainly not the only choice.", "Hello all, \n\nThis paper has large overlap with my own work which was published previously.\nPaper 1 - \"Learning Sparse Neural Networks\" https://arxiv.org/abs/1611.06694 (published in 2017 CVPR workshop)\n\nThe section on group sparsity is also very similar to my earlier work.\nPaper 2 - \"Learning Neural Network Architectures using Backpropagation\" https://arxiv.org/abs/1511.05497 (published in BMVC 2016)\n\nSimilarities:\n1) We motivate the problem as an intractable L0 regularization problem, which is equation 1 in both of those papers (although we do not use the term L0 to describe it).\n2) We propose using binary gates for every weight and a regularizer which sums over 'smoothened' values of gates.\n3) We make a connection to spike-and-slab priors (in Paper 1).\n4) We view the process as an intractable monte-carlo sum (in Paper 1), but we additionally introduce a variance-reducing term.\n\nDifferences:\n1) We use a bernoulli distribution directly with a straight-through estimator[1] in the optimization whereas this paper uses a concrete distribution.\n2) We have an additional regularization term to make learning with bernoulli distributions stable (i.e.; variance reducing term).\n\nWe believe that both our papers (Paper 1 and 2) and this paper attempt to solve the same overall objective, but use slightly different relaxation methods.\n\nThanks,\nSuraj Srinivas\n\n[1]: Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through\nstochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013." ]
[ 6, 6, 7, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1Y8hhg0b", "iclr_2018_H1Y8hhg0b", "iclr_2018_H1Y8hhg0b", "rJUkD7vgf", "BJF5RpKgG", "S1tmowoeG", "rJeSs7YZz", "iclr_2018_H1Y8hhg0b" ]
iclr_2018_BkQqq0gRb
Variational Continual Learning
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge. Experimental results show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way.
accepted-poster-papers
The paper addresses the problem of continual learning and solutions based on variational inference. Updates to the paper have improved it and address many of the concerns raised by the reviewers during the discussion period.
train
[ "SyF0odSef", "H1T4epKeM", "BkgsE19xz", "Hy67jKomG", "rkgk9tsmG", "S1GDuYi7G", "SyQOwYi7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Overall, the idea of this paper is simple but interesting. Via performing variational inference in a kind of online manner, one can address continual learning for deep discriminative or generative networks with considerations of model uncertainty.\n\nThe paper is written well, and literature review is sufficient. My comment is mainly about its importance for large-scale computer vision applications. The neural networks in the experiments are shallow. \n", "This paper proposes a new method, called VCL, for continual learning. This method is a combination of the online variational inference for streaming environment with Monte Carlo method. The authors further propose to maintain a coreset which consists of representative data points from the past tasks. Such a coreset is used for the main aim of avoiding the catastrophic forgetting problem in continual learning. Extensive experiments shows that VCL performs very well, compared with some state-of-the-art methods. \n\nThe authors present two ideas for continual learning in this paper: (1) Combination of online variational inference and sampling method, (2) Use of coreset to deal with the catastrophic forgetting problem. Both ideas have been investigated in Bayesian literature, while (2) has been recently investigated in continual learning. Therefore, the authors seems to be the first to investigate the effectiveness of (1) for continual learning. From extensive experiments, the authors find that the first idea results in VCL which can outperform other state-of-the-art approaches, while the second idea plays little role. \n\nThe finding of the effectiveness of idea (1) seems to be significant. The authors did a good job when providing a clear presentation, a detailed analysis about related work, an employment to deep discriminative models and deep generative models, and a thorough investigation of empirical performance.\n\nThere are some concerns the authors should consider:\n- Since the coreset plays little role in the superior performance of VCL, it might be better if the authors rephrase the title of the paper. When the coreset is empty, VCL turns out to be online variational inference [Broderich et al., 2013; Ghahramani & Attias, 2000]. Their finding of the effectiveness of online variational inference for continual learning should be reflected in the writing of the paper as well.\n- It is unclear about the sensitivity of VCL with respect to the size of the coreset. The authors should investigate this aspect.\n- What is the trade-off when the size of the coreset increases?\n", "The paper describes the problem of continual learning, the non-iid nature of most real-life data and point out to the catastrophic forgetting phenomena in deep learning. The work defends the point of view that Bayesian inference is the right approach to attack this problem and address difficulties in past implementations. \n\nThe paper is well written, the problem is described neatly in conjunction with the past work, and the proposed algorithm is supported by experiments. The work is a useful addition to the community.\n\nMy main concern focus on the validity of the proposed model in harder tasks such as the Atari experiments in Kirkpatrick et. al. (2017) or the split CIFAR experiments in Zenke et. al. (2017). 
Even though the experiments carried out in the paper are important, they fall short of justifying a major step in the direction of the solution for the continual learning problem.", "Extensions to more complex tasks:\n\nIn the existing discriminative model experiments, we use shallow networks that are comparable to those considered in previous work (Kirkpatrick et al., 2017; Zenke et al., 2017) so that our reimplementation fairly represents the previous work. In the updated version of the paper, we have added an additional Split notMNIST experiment (see page 7 of the new version and Figure 5). The notMNIST dataset is much larger and more noisy than the MNIST dataset. It contains 400,000 images of 10 characters written in different font styles, where each character has 40,000 images. This dataset is considered more difficult than the MNIST dataset. In this new experiment, we investigate a deeper network with 4 hidden layers and our method also performs well compared to EWC and SI.\n\nExtension to computer vision applications:\n\nThe paper shows that VCL performs very well for MLPs in a variety of settings which we believe is an important contribution. To apply our method to many large-scale computer vision applications, the method needs to be extended to handle CNNs. In general, accurate approximate variational inference methods have not been developed for CNNs and this is an outstanding goal of the area of Bayesian Deep Learning. We therefore leave this development for future research. However, once a good general variational inference method has been developed for CNNs, it will be straightforward to apply the VCL framework.\n\nAlthough MC dropout (Gal & Ghahramani, 2016) is one candidate for Bayesian inference in CNNs, the nature of this approximation makes vanilla application of the VCL framework difficult. MC dropout uses a Gaussian prior over the weights and (the limit of) a mixture of Gaussians with shared parameters for the variational distribution. These two distributions are not of the same form and therefore a second approximation step would be required to apply VCL. Moreover, the impoverished representation of posterior uncertainty retained by MC dropout is likely to result in poor continual learning performance since nuanced and parameter specific information about parameter uncertainty is required in this setting. Approximations that employ a single global variance parameter in the q distribution, such as those employed by Kingma et al., 2015, will suffer similar problems.\n\nReferences:\n\nY. Gal and Z. Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML 2016.\n\nD.P. Kingma, T. Salimans, M. Welling. Variational Dropout and the Local Reparameterization Trick. NIPS 2015.", "New experiments showing that coresets can significantly improve VCL's performance:\n\nThe use of a coreset can *significantly* improve VCL over the vanilla version. We have added a more comprehensive comparison to the updated version of the paper to make this completely clear (see Figure 3 and the last paragraph on page 6). For example, on the permuted MNIST task when the coreset size is 200 examples per task, the final accuracy of VCL improves from 90% to 93% and when the coreset size is increased to 5,000 examples per task, the performance further improves to 95.5%. These are significant improvements for this dataset. Crucially, using just the coreset alone (and no online inference) still performs significantly worse. 
Thus, although we agree that VCL alone is effective for continual learning the combination with a coreset can be critical. \n\nMoreover, as now noted in the paper, from a more general perspective, coreset VCL is equivalent to a message-passing implementation of variational inference in which the coreset data point message updates are scheduled last, only after the contributions from other data have been incorporated. This opens the door to versions of VCL which revisit the coreset points several times through learning (rather than just at the end).\n\nNovelty of VCL and contributions of the paper:\n\nThe novelty of our VCL method compared to online variational inference (Broderich et al., 2013; Ghahramani & Attias, 2000) is two-fold. \n\nFirst, online VI has only previously been applied to simple conjugate models. Here instead we consider deep neural networks and variational auto-encoders. Indeed, a Bayesian treatment of the parameters of variational auto-encoders, in addition to the latent variables, is challenging in and of itself. These more complex models require a fusion of online VI and Monte Carlo VI which is technically challenging. \n\nSecond, previous work on online VI considers very simple tasks, most typically where the data arrive in iid fashion. Here instead, we consider much more general continual learning tasks that were not previously considered for online VI. The increased inhomogeneity in the data necessitated the development of coreset VI which is more natural and simpler than previous work on coresets for continual learning such as Lopez-Paz and Ranzato (2017) which requires an additional constraint on the optimization objective for every new task.\n\nAt a more general level, we also feel that it is important to point out to the continual learning community that standard methods of (approximate) Bayesian inference provide a rich mathematical and algorithmic framework for attacking continual learning that has hitherto been largely overlooked.\n\nAppropriateness of the Title:\n\nGiven the two points addressed in the above responses, we believe that the title is appropriate. We have endeavoured to explain the relationship to prior work in the first line of the abstract, “a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks”, which we hope clearly explains the positioning of the paper.\n", "New Experiment on a harder task:\n\nIn order to further assess the efficacy of VCL on larger scale and more complex tasks we have added an additional experiment to the paper: the new Split notMNIST task on page 7 of the updated paper and Figure 5. The notMNIST dataset is much larger and more noisy than the MNIST dataset. It contains 400,000 images of 10 characters written in different font styles, where each character has 40,000 images. This dataset is generally considered more difficult than the MNIST dataset. In this new experiment, we investigate a deeper network and show that VCL still performs well compared to EWC and SI.\n\nDeployment on tasks requiring CNNs:\n\nThe application of VCL to the Atari or Split CIFAR tasks is also a sensible suggestion. However, this requires the development of reliable variational inference methods for convolutional neural networks (CNNs). This is still an outstanding research goal of Bayesian Deep Learning and so we leave this for future research. 
However, once a good variational inference method has been developed for CNNs, it is straightforward to apply the VCL framework to the above tasks. \n\nPlease see more relevant discussions of the points above in the response to Reviewer 3.", "Dear Reviewers,\n\nMany thanks for your detailed reviews. We really appreciate the time and effort you have put into reading and commenting on our paper. \n\nSorry for not responding to your comments more quickly, but we have been working on a set of new experimental results that have been inspired by your suggestions and which we believe strengthen the paper.\n\nPlease also note that there were some errors in the original plots of EWC and K-center Coreset Only methods in Figure 2. We have corrected the plots in our updated paper. The updated results are now consistent with previous findings in Zenke et al. (2017), where EWC and SI are comparable in the Permuted MNIST experiment. The updated results do not change our conclusions in this paper.\n\nWe will now address each of your reviews individually." ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1, -1 ]
[ "iclr_2018_BkQqq0gRb", "iclr_2018_BkQqq0gRb", "iclr_2018_BkQqq0gRb", "SyF0odSef", "H1T4epKeM", "BkgsE19xz", "iclr_2018_BkQqq0gRb" ]
iclr_2018_H1-nGgWC-
Gaussian Process Behaviour in Wide Deep Neural Networks
Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented.
accepted-poster-papers
A clearly written paper. While the concerns about practical relevance that came up in the reviews remain, the analysis and discussion are important for a deeper understanding of the connections between these two important areas of machine learning.
train
[ "Hk4JEb5eM", "BykFw7cxz", "rJVpI8hgz", "HywWHWKGM", "Sy4PN-KGz", "Hk08QWFff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors study the limiting behaviour for wide Bayesian neural networks, comparing to Gaussian processes. \n\nThe paper is well written, and the experiments are enlightening. This work is a nice follow up to Neal (1994), and recent work considering similar results for neural networks with more than one hidden layer. It does add to our understanding of this body of work.\n\nThe weakness of this paper is in its significance and practical value. This infinite limit loses much of the interesting representation in neural networks because the variance of the weights goes to zero. Thus it’s unclear whether these formulations will have many of the benefits of standard neural networks, and whether they’re particularly related to standard neural networks at all. There also don’t seem to be many practical takeaways from the experiments, and the experiments themselves do not consider any predictive tasks at all. It would be nice to see some practical benefit for a predictive task actually demonstrated in the paper. I am not sure what exactly I would do differently in training large neural networks based on the results of this paper, and the possible takeaways are not tested here on real applications.\n\nThis paper also seems to erroneously attribute this limitation of the Neal (1994) limit, and its multilayer extensions, to Gaussian processes in the section “avoiding Gaussian process behaviour”. The problems with that construction are not a profound limitation of Gaussian processes in general. If we can learn the kernel function, then we can learn an interesting representation that does not have these limitations and still use a GP. We could alternatively treat the kernel parameters probabilistically, but the fact that in this case we would not marginally have a GP any longer is mostly incidental. The discussed limitations are more about specific kernel choices, and lack of kernel learning, than about “GP behaviour”.\n\nIndeed, while the discussion of related work is mostly commendable, the authors should also discuss the recent papers on “deep kernel learning”:\ni) http://proceedings.mlr.press/v51/wilson16.pdf\nii) https://papers.nips.cc/paper/6426-stochastic-variational-deep-kernel-learning.pdf\niii) http://www.jmlr.org/papers/volume18/16-498/16-498.pdf\n\nIn particular, these papers do indeed learn flexible representations with Gaussian processes by using kernels constructed with neural networks. They avoid the behaviour discussed in the last section of your paper, but still use a Gaussian process. The network structures themselves are trained through the marginal likelihood of the Gaussian process. This approach effectively learns an infinite number of adaptive basis functions, parametrized through the structural properties of a neural network. Computations are made scalable and practical through exploiting algebraic structure. \n\t\t\t\nOverall I enjoyed reading your paper.", "- Summary\n\nThe paper is well written and proves how deep, wide, fully connected NNs are equivalent to GPs in the limit. This result, which was well known for single-layer NNs, is now extended to the multilayer case. 
Although there was already previous work suggesting GP this behavior, there was no formal proof under the specific conditions presented here.\n\nThe convergence to a GP is also verified experimentally on some toy examples.\n\n\n- Relevance\n\nThe result itself does not feel very novel because variants of it were already available.\n\nUnfortunately, although making other researchers aware of this is worthy, the application of this result seems limited, since in fact it describes and lets us know more about a regime that we would rather avoid, rather than one we want to exploit. Most of the applications of deep learning benefit from strong structured priors that cannot be represented as a GP. This is properly acknowledged in the paper.\n\nThe lack of practical relevance combined with the not-groundbreaking novelty of the result makes this paper less appealing.\n\n\n- Other comments\n\nPage 6: \"It does mean however that our empirical study does not extend to larger datasets where such inference is prohibitively expensive (...) prior dominated problems are generally regarded as an area of strength for Bayesian approaches and in this context our results are directly relevant.\"\n\nAlthough that argument can hold for datasets that are large in terms of amount of data points, it doesn't for datasets that are large in terms of number of dimensions. The empirical study could have used very high-dimensional datasets with comparatively low amounts of training data. That would maintain a regime were the prior does matter but and better show the generality of the results.\n\nPage 6: \"We use rectified linear units and correct the variances to avoid a loss of prior variance as depth is increased as discussed in Section 3\" \n\nAre you sure this is discussed in Section 3?\n\nPage 4: \"This is because for finite H the input activations do not have a multivariate normal distribution\". \n\nCan you elaborate on this? Since we are interested in the infinite limit, why is this a problem?", "In part 1, the authors introduce motivation for studying wide neural networks and summarize related work. \nIn part 2, they present a theorem (main theoretical result) stating that under conditions on the weight priors, the output function of a multi-layer neural network (conditionally to a given input) weakly converges to a gaussian process as the size of the hidden layers go to infinity.\nremark on theorem 1: This result generalizes a result proven in 2015 stating that the normality of a layer propagates to the next as the size of the first layer goes to infinity. The result stated in this paper is proven by bounding the gap between the output distribution and the corresponding gaussian process, and by propagating this bound across layers (appendix). \nIn part 3, the authors discuss the choice of a nonlinearity function that enables easy computation of the kernels introduced in the covariance matrix of the limit normal distribution. Their choice lands on ReLU.\nIn part 4, the focus is on the speed of the convergence presented in theorem 1. Experiments are conducted to show how the distance (maximum mean disrepancy) between the output distribution and its theoretical gaussian process limit vary when the sizes of the hidden layers increase. 
The results show that the convergence (in MMD) happens consistently, although it is slower when the number of hidden layers gets bigger.\nIn part 5, the authors compare the distributions (finite Bayesian deep networks and their analogous Gaussian processes) in yet another way: by studying their agreement in terms of inference. For this purpose, the authors chose several criteria: the first two moments of the posterior, the log marginal likelihood and the predictive log-likelihood. The authors judge that the distributions agree on those criteria, but do not provide further analysis.\nIn part 6, now that it has been shown that the output distributions of Bayesian neural nets do not only weakly converge to Gaussian processes but also behave similarly in terms of inference, the authors discuss ways to avoid the gaussian process behaviour. Indeed, it seems that Gaussian processes with a fixed kernel cannot learn hierarchical representations, which are essential in deep learning.\nThe idea to avoid the Gaussian process behaviour is to contradict one of the hypotheses of the CLT (so that it does not hold anymore), either by controlling the size of intermediate layers, by using networks with infinite variance in the activities, or by choosing non-independent weights.\nIn part 7, it is concluded that the result that has been proven for the size of layers going to infinity (Theorem 1) seems to empirically be verified on finite networks similar to those used in the literature. This can be used to simplify inference in cases where the gaussian process behaviour is desired, and opens questions on how to avoid this behaviour the rest of the time.\n\nPros: The authors' line of thought is overall quite easy to follow. The main theoretical convergence result is stated early on, and the remainder of the article is dedicated to observing this result empirically from different angles (MMD, inference, predictive capability..). The last part contains a discussion concerning the extent to which it is actually a desired or an undesired result in classical deep learning use-cases, and the authors provide intuitive conditions under which the convergence would not hold. The stated theorem is a clear improvement on the past literature and is promising in a context where multi-layer neural networks are more and more studied.\nFinally, the work is well documented.\n\nCons: \nI have some concerns with the main result (Theorem 1) and found that some of the notations / formulas were not very clear.\n Concerns with Theorem 1:\n* at the end of the proof of Lemma 2, H_\mu is to be chosen large enough in order to get the \epsilon bound of the statement. However, I think that H_\mu is constrained by the statement of Proposition 2, not to be larger than a constant times 2^(H_{\mu+1}). Isn't that a problem?\n* In the proof of Lemma 4, it looks like matrix \Psi, from the Schur decomposition of \tilde f, actually depends on H_{\mu-2}, thus making \psi_max depend on it too, as well as the final \beta bound, which would contradict the statement that it depends only on n and H_{\mu}. 
Could you please double check?\n\nUnclear statements/notations:\n* end of page 3, notations are not entirely consist with previous notations\n* I do not understand which distribution is assumed on epsilon and gamma when taking the expectancy in equation (9).\n* the notation x^(i) (in the theorem and the proof notably) could be changed, for the ^(i) index refers to the depth of the layer in the rest of the notations, and is here surprisingly referring to a set of observations.\n* the statement of Theorem 1:\n * I would change \"for a countable input set\" to \"for any countable input set\", if this holds true.\n * does not say that the width has to go to infinity for the convergence to happen, which goes a bit in contradiction with the adjective \"wide\". However, the authors say that in practice, they use the identity as width function.\n* I understood that the conclusion of part 3 was that the expectation of eq (9) was elegantly computable for certain non-linearity (including ReLU). However I don't see the link with the \"recursive kernel\" idea (maybe it's just the way to do the computation described in Cho&Saul(2009) ?)\n\nSome places where it appears that there are minor mistakes:\n* 7th line from the bottom of page 3, the vector f^{(2)}(x) contains f_i^{(1)}(x) but should contain f_i^{(2)}(x)\n* last display of page 3: change x and x', and indicate upper limit of the sum\n* please double check variances C_w and/or \\hat{C}_w appearing in equations in (9) and (13).\n* line 2 of second paragraph after equations (8) and (9). The authors refer to equation (8) concerning the independence of the components of the output. I think they rather wanted to refer to (9). Same for first sentence before eq (14).\n* middle of page 12: matrix LY should be RY.", "Thank you for your detailed and thought provoking review. We will acknowledge your anonymous contribution in the final version of the paper. \n\n--On the deep kernel learning papers of Wilson et al and Al-Shedivat et al:\n\nWe agree that this deep kernel literature is useful and relevant in this context. We are sure you would agree that is not the only promising approach. In Section 6 we did point out that the “emergent kernels in our case are hyperparameter free” and that “any Gaussian process with a fixed kernel does not use a learnt hierarchical representation”. Therefore we respectfully disagree with your assessment that we “erroneously attributed” this behaviour to GP methods with a learnt kernel. Nevertheless, we agree that the paper would be clearer with more discussion of learnt representations and we have added additional material to Section 6 along with the citations you kindly suggested.\n\n--On significance and practical value:\n\nWe agree that, in your words: “this infinite limit loses much of the interesting representation in neural networks because the variance of the weights goes to zero.” Indeed, the careful extension of the mathematics underlying this intuition to networks with more than one hidden layer is part of the contribution of our paper. We view the cautionary message of the paper as one of its key scientific contributions. Furthermore, Neal's original 1996 work \"suffers\" from the same issue yet has become extremely influential and led to many invaluable insights. Our analysis moves the careful study of random networks beyond what was known. This requires considerable technical insight. The theoretical assumptions we make are less restrictive than for instance Daniely et al. 
(2016), which was (correctly in our opinion) regarded as impactful at that NIPS. \n\nAlthough, as we acknowledge, it is difficult to do exhaustive experiments in the fully Bayesian regime, our experiments with the base network architecture of Hernandez-Lobato and Adams (2015) suggest that the Gaussian process limit is relevant to wide finite Bayesian neural networks in the regime studied. ", "Thank you for your review which raises some important questions. We will endeavour here to answer them more clearly. We will acknowledge your anonymous contribution in the final version of the paper. \n\nIf you will excuse us we will start with your last question first since it relates to your criticism of the significance of the work. \n\nFrom your review: Page 4: ` \"This is because for finite H the input activations do not have a multivariate normal distribution\". Can you elaborate on this? Since we are interested in the infinite limit, why is this a problem?'\n\nThis is an important point. There is a general answer and a more specific answer: \n 1) In general, weak convergence is exactly that - many general manipulations that we might want to perform with it don't actually hold. For instance if the sequence of distributions ( a_n ) converges weakly/in-distribution to a and the sequence of distributions ( b_n ) converges weakly/in-distribution to b then the sequence of independent product distributions (a_n,b_n) doesn't necessarily converge weakly/in-distribution to (a,b). See Billingsley 1999 page 23. Care and rigour is required in this domain.\n 2) More specifically to this example, the rate at which the convergence of the activations occurs could have a ``knock on effect'' on the convergence of the activation distributions further through the network. We've added a comment about this second point to the main text just after the sentence you quote.\n\nAs an example of point 2). Suppose that the sequence of distributions (P_n) converges in distribution to some P_*. Consider the limit of a sequence of expectations (\\int \\psi_n d P_n ) where the integrand is also changing. This will not in general be the same as if we first substitute the limit measure (\\int \\psi_n d P_*) and then take the n limit of the new integral. The rate of convergence will in general matter. \n\nFrom your review: \"The result itself does not feel very novel because variants of it were already available.\" \n\nWe have already argued that improving rigorous results in this area is very desirable. Therefore we must respectfully disagree. To the best of our knowledge there are no rigorous results about convergence in this area since Neal (1996).\n \nIt is fair to point out that our empirical analysis does not extend to high dimensional functions- thank you. We've updated the discussion to reflect this. Note that the content of Theorem 1 does not depend on the dimensionality of the inputs.\n\nAlso from your review: \"Are you sure this is discussed in Section 3?\"\n\nYou are correct - we do not allude to this. Thank you for pointing this out. This is an orphaned cross reference to some material that did not make the cut because it is orthogonal to the main thrust of the paper. Essentially, carefully scaling the weight variances can help mitigate the onset of the depth pathologies discussed in Duvenaud et al (2014). We apologize and have now removed this. The exact code we used is available in our anonymous repository.", "We thank the reviewer for their careful reading of the paper. 
We will acknowledge your anonymous contribution in the final version of the paper. \n\nRegarding the technical query for the proof of Lemma 2, we now have slightly rearranged the material to make clear what constitutes a “sufficiently large H_\\mu for the bound to hold, and to show that this is consistent with the growth rates in the statement of Lemma 2. In fact, we require a rate which grows faster than 2^{n H_\\mu}, for all n to deal with this, and so have adjusted the stated growth rates in Lemma 2 to H_{\\mu-1} = O(2^{H_{\\mu}^2}).\n\nRegarding Lemma 4, the bound is actually independent of H_{\\mu-2}. This is because \\tilde g^{\\mu - 1} is a deterministic transformation of Z^{\\mu - 1} with the known n-dimensional normal distribution from Lemma~1, independent of H_{\\mu-2}. The original proof incorrectly bounded the norm of \\tilde g^{\\mu - 1} in terms of \\tilde f^{\\mu - 1} instead of Z^{\\mu - 1} which we noticed thanks to your comment.\n\nThese details have now been incorporated into the relevant sections of the appendix. We emphasise that the conclusion of Theorem 1 in the main paper remains unchanged, and thank the reviewer once again for their close attention to the details of the proof.\n\nRegarding notational issues:\nBottom of p3 - we have modified the notation to be consistent with the rest of the paper.\nEq (9) - we have now defined the distributions in the text.\nx^{(i)} notation - we now use notation of the form x[i] to refer to the different input points to the neural network.\nTheorem 1 - we have made the suggested change of wording, and added the phrase “strictly increasing” to emphasise that the convergence happens as layer widths go to infinity.\nRecursive kernel comment: Cho & Saul (2009) indeed solve the~integral recursion of Hazan and Jaakkola (2015); we have provided more details to make the link clearer.\n\n\nMinor mistakes:\nThank you for spotting these, we have made the relevant changes in the revised version of the paper." ]
[ 6, 6, 6, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_H1-nGgWC-", "iclr_2018_H1-nGgWC-", "iclr_2018_H1-nGgWC-", "Hk4JEb5eM", "BykFw7cxz", "rJVpI8hgz" ]
iclr_2018_H135uzZ0-
Mixed Precision Training of Convolutional Neural Networks using Integer Operations
The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular, FP16 accumulating into FP32 (Micikevicius et al., 2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output. We propose a shared exponent representation of tensors and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel are examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge, these results represent the first INT16 training results on GP hardware for the ImageNet-1K dataset using SOTA CNNs and achieve the highest reported accuracy using half precision.
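As an illustration of the shared-exponent idea described in the abstract above, the following is a minimal NumPy sketch of one plausible DFP round-trip. The function names and the per-tensor max-absolute-value rule for choosing the exponent are our own illustrative assumptions, not necessarily the exact scheme used in the paper.

    import numpy as np

    def dfp_quantize(tensor, bits=16):
        # Represent `tensor` as int_values * 2**shared_exponent with a single
        # exponent shared by the whole tensor (Dynamic Fixed Point).
        int_max = 2 ** (bits - 1) - 1
        max_abs = float(np.max(np.abs(tensor)))
        if max_abs == 0.0:
            return np.zeros(tensor.shape, dtype=np.int16), 0
        shared_exponent = int(np.ceil(np.log2(max_abs / int_max)))
        scaled = np.round(tensor / 2.0 ** shared_exponent)
        return np.clip(scaled, -int_max, int_max).astype(np.int16), shared_exponent

    def dfp_dequantize(int_values, shared_exponent):
        # Recover an approximate float32 tensor from the DFP representation.
        return int_values.astype(np.float32) * np.float32(2.0 ** shared_exponent)

    # Round-trip a random weight tensor and inspect the quantization error.
    w = np.random.randn(64, 64).astype(np.float32)
    w_int, e = dfp_quantize(w)
    print("max abs error:", np.max(np.abs(w - dfp_dequantize(w_int, e))))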
accepted-poster-papers
Mixed precision application of CNNs is being explored for e.g. hardware implementations of networks trained at full precision. Mixed precision at training time is less common. This submission primarily concerns itself with the practical implementation details of training with mixed precision, and focuses primarily on representation of mixed precision floating point and algorithmic issues for learning. In the end the support for the approach is primarily empirical, with the mixed precision approach giving a factor of two speedup with half the precision, while accuracies remain effectively statistically tied on the ImageNet 1k database. Table 1 should avoid the use of bold as there is likely no statistical significance. The reviewers appreciated the paper. The proposed approach is sensible, and appears correct.
train
[ "r1vIV-R1z", "HyIm4t7xz", "HJlhggcgM", "SJ7rEp6Xf", "Hykn4K6Wf", "SJHIVYpbM", "S1mqhBFWM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper describes an implementation of reduced precision deep learning using a 16 bit integer representation. This field has recently seen a lot of publications proposing various methods to reduce the precision of weights and activations. These schemes have generally achieved close-to-SOTA accuracy for small networks on datasets such as MNIST and CIFAR-10. However, for larger networks (ResNET, Vgg, etc) on large dataset such as ImageNET, a significant accuracy drop are reported. In this work, the authors show that a careful implementation of mixed-precision dynamic fixed point computation can achieve SOTA on 4 large networks on the ImageNET-1K datasets. Using a INT16 (as opposed to FP16) has the advantage of enabling the use of new SIMD mul-acc instructions such as QVNNI16. \n\nThe reported accuracy numbers show convincingly that INT16 weights and activations can be used without loss of accuracy in large CNNs. However, I was hoping to see a direct comparison between FP16 and INT16. \n\nThe paper is written clearly and the English is fine.", "This paper is about low-precision training for ConvNets. It proposed a \"dynamic fixed point\" scheme that shares the exponent part for a tensor, and developed procedures to do NN computing with this format. The proposed method is shown to achieve matching performance against their FP32 counter-parts with the same number of training iterations on several state-of-the-art ConvNets architectures on Imagenet-1K. According to the paper, this is the first time such kind of performance are demonstrated for limited precision training.\n\nPotential improvements:\n\t\n - Please define the terms like FPROP and WTGRAD at the first occurance.\n - For reference, please include wallclock time and actual overall memory consumption comparisons of the proposed methods and other methods as well as the baseline (default FP32 training).", "This work presents a CNN training setup that uses half precision implementation that can get 2X speedup for training. The work is clearly presented and the evaluations seem convincing. The presented implementations are competitive in terms of accuracy, when compared to the FP32 representation. I'm not an expert in this area but the contribution seems relevant to me, and enough for being published.", "We've added an updated revision of the paper which addresses the following :\n\n> Typos and grammatic errors throughout the paper\n> Added training throughput speedups (Section5)\n> Included discussion on performance implications (Section4.3)\n\nWe thank all the reviewers for their helpful comments and feedback. ", "We would like to thank the reviewer for the comments. ", "We would like to thank the reviewer for the comments.\n\nWe will shortly update the manuscript to fix the missing definitions for the terms pointed out and also a number of other minor typographical errors that we have identified since submission.\n\nWe intend to also include a more detailed discussion on performance (described in the comment below), in which we also include the baseline FP32 performance, along with a comparison with the INT16 variant in terms of various system aspects (memory footprint, performance profile...)", "Thanks a lot for your comments. \n\nWe do indeed have a Proof of Concept implementation for ResNet-50 on KNM 72c, 1.5GHz part with 16GB MCDRAM. 
\nOn this part a FP32 implementation using MKLDNN on Intel Caffe achieves 152 img/s while our POC (also using Intel Caffe + MKLDNN interface (but not MKLDNN code)) achieved 275 img/s while achieving SOTA. This is a ~1.8x speedup over FP32. We also believe that there is scope for further improvements. If the PC/Reviewers permit we can add these results to the paper. \n\nAlso the results in the paper are obtained using QVNNI-16 kernels on a 32 node KNM cluster as mentioned in Section 5. \n\nWe do admit that the performance and overflow related discussion has room for improvement. Specifically the statement you point out pertains to the fact that we can always have the following sequence of instructions: QVNNI-16, cvtepi32ps (convert INT32 to FP32), fmaddps (scale and accumulate FP32 results) which will almost never overflow. Unfortunately the above mentioned sequence is ~3x slower than pure QVNNI-16 (as it has 3x more instructions). Therefore we select a compromise point between number of sufficient number of QVNNI-16 instructions followed by the convert and accumulate sequence, which optimizes performance without compromising numerics.\n\nI hope this clarifies things a little more. We will rewrite this Section 4.3 to clarify more. \n\nAgain as per the breakup of performance lost per component of the mixed precision training methodology, if the reviewers/PC permits we can provide more details in the paper. \n\nWe will update the submission shortly for a bunch of typographical and grammar issues we have identified at our end, and other edits discussed here. \n" ]
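To make the overflow compromise described in the response above concrete, here is a hedged NumPy sketch of the arithmetic pattern only (not the actual QVNNI-16 / cvtepi32ps / fmaddps instruction sequence): a limited run of INT16 products is accumulated into an INT32 partial sum, which is then converted to FP32 and folded into a float accumulator. The block length and function name are illustrative assumptions; in practice the block length has to be chosen from the operand ranges so that the INT32 partial sum cannot overflow, which is exactly the performance/numerics trade-off described by the authors.

    import numpy as np

    def blocked_int16_dot(a_int16, b_int16, block=128):
        # Dot product of two INT16 vectors, mimicking: a run of INT16 FMAs
        # accumulating into INT32, followed by convert-to-FP32 and FP32 accumulate.
        assert a_int16.dtype == np.int16 and b_int16.dtype == np.int16
        acc_fp32 = np.float32(0.0)
        for start in range(0, a_int16.size, block):
            a_blk = a_int16[start:start + block].astype(np.int32)
            b_blk = b_int16[start:start + block].astype(np.int32)
            # Each product of two INT16 values fits in INT32; the block must be
            # short enough (given the data range) that this sum also fits in INT32.
            partial_int32 = np.sum(a_blk * b_blk, dtype=np.int32)
            acc_fp32 += np.float32(partial_int32)
        return acc_fp32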
[ 7, 7, 6, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_H135uzZ0-", "iclr_2018_H135uzZ0-", "iclr_2018_H135uzZ0-", "iclr_2018_H135uzZ0-", "HJlhggcgM", "HyIm4t7xz", "r1vIV-R1z" ]
iclr_2018_SkFqf0lAZ
Memory Architectures in Recurrent Neural Network Language Models
We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held-out perplexity. We also propose a generalization to existing continuous stack models (Joulin & Mikolov, 2015; Grefenstette et al., 2015) that allows a variable number of pop operations more naturally and further improves performance. We further evaluate these language models in terms of their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state of the art results using memory augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories claiming a context-free backbone for natural language.
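For readers unfamiliar with continuous stacks, the following is a minimal NumPy sketch of a soft push/pop update in the spirit of Joulin & Mikolov (2015), which the paper's multi-pop model generalizes by allowing several weighted pop steps per time step (not shown here). The function name, gating, and fixed stack depth are illustrative assumptions rather than the paper's exact equations.

    import numpy as np

    def soft_stack_update(stack, new_top, a_push, a_pop, a_noop):
        # stack: (depth, dim) array, row 0 is the top; action weights are
        # non-negative and sum to 1 (e.g. a softmax over controller logits).
        depth, dim = stack.shape
        pushed = np.vstack([new_top, stack[:-1]])            # shift everything down, insert new top
        popped = np.vstack([stack[1:], np.zeros((1, dim))])  # shift everything up, pad the bottom
        return a_push * pushed + a_pop * popped + a_noop * stack

    # Toy usage with a softmax over three controller logits.
    logits = np.array([0.2, 1.5, -0.3])
    a = np.exp(logits) / np.exp(logits).sum()
    stack = np.zeros((5, 4))
    stack = soft_stack_update(stack, new_top=np.ones(4), a_push=a[0], a_pop=a[1], a_noop=a[2])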
accepted-poster-papers
This paper provides a comparison of different types of memory-augmented models and extends some of them beyond their simple form. Reviewers found the paper to be clearly written, calling it a "nice introduction to the topic" and noting that they "enjoyed reading this paper". In general though there was a feeling that the "substance of the work is limited". One reviewer complained that experiments were limited to the small English datasets PTB and Wikitext-2 and asked why they didn't try "machine translation or speech recognition". (The authors note that they did try the Linzen dataset, and while the reviewers found the experiments impressive, the task itself felt artificial.) Another felt that the "multipop model" alone was not too large a contribution. The actual experiments in the work are well done, although given the fact that the models are known there was an expectation of "more "in-depth" analysis of the different models". Overall this is a good empirical study, which shows the limited gains achieved by these models; this is nevertheless a useful piece of information for those working in this area.
test
[ "SkiJh-5lM", "SkTEzjqgf", "H1gkDZaeM", "Hy_dpvVZz", "r1Wd8AC-z", "SJS58CA-G", "rkwnI0AZz", "SJyC7vQgM", "HJhJDmfxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "public" ]
[ "The authors propose to compare three different memory architecture for recurrent neural network language models:\nvanilla LSTM, random access based on attention and continuous stack. The second main contribution of the paper is to propose an extension of continuous stacks, which allows to perform multiple pop operations at a single time step.\nThe way to do that is to use a similar mechanism as the adaptive computation time from Graves (2016): all the pop operations are performed, and the final state of the continuous stack is weighted average of all the intermediate states. The different memory models are evaluated on two standard language modeling tasks: PTB and WikiText-2, as well as on the verb number prediction dataset from Linzen et al (2016). On the language modeling tasks, the stack model performs slightly better than the attention models (0-2 ppl points) which performs slightly better than the plain LSTM (2-3 ppl). On the verb number prediction tasks, the stack model tends to outperforms the two other models (which get similar results) for hard examples (2 or more attractors).\n\nOverall, I enjoy reading this paper: it is clearly written, and contains interesting analysis of different memory architecture for recurrent neural networks. As far as I know, it is the first thorough comparison of the different memory architecture for recurrent neural network applied to language modeling. The experiments on the Linzen et al. (2016) dataset is also interesting, as it shows that for hard examples, the different models do have different behavior (even when the difference are not noticeable on the whole test set).\n\nOne small negative aspect of the paper is that the substance might be a bit limited. The only technical contribution is to merge the ideas from the continuous stack with the adaptive computation time to obtain the \"multi-pop\" model. In the experimental section, which I believe is the main contribution of the paper, I would have liked to see more \"in-depth\" analysis of the different models. I found the experiments performed on the Linzen et al. (2016) dataset (Table 2) to be quite interesting, and would have liked more analysis like that. On the other hand, I found Figures 2 or 3 not very informative, as it is (would like to see more). For example, from Fig. 2, it would be interesting to get a better understanding of what errors are made by the different models (instead of just the distribution).\n\nFinally, I have a few questions for the authors:\n- In Figure 1. shouldn't there be an arrow from h_{t-1} to m_t instead of x_{t-1} to m_t?\n- What are the equations to update the stack? I assume something similar to Joulin & Mikolov (2015)?\n- Do you have any ideas why there is a sharp jump between 4 and 5 attractors (Table 2)?\n- Why no \"pop\" operations in Figure 3 and 4?\n\npros/cons:\n+ clear and easy to read\n+ interesting analysis\n- not very original\n\nOverall, while not groundbreaking, this is a serious paper with interesting analysis. Hence, I am weakly recommending to accept this paper.", "The authors propose a new stack augmented recurrent neural network, which supports continuous push, stay and a variable number of pop operations at each time step. 
They thoroughly compare several typical neural language models (LSTM, LSTM+attention mechanism, etc.), and demonstrate the power of the stack-based recurrent neural network language model at a similar parameter scale to other models, and especially show its superiority when the long-range dependencies are more complex in the NLP domain.\n\nHowever, the corpora they choose to test the ideas on, PTB and Wikitext-2, are quite small, so the variance of the estimate is high; similar conclusions might not be valid on large corpora such as the 1B token benchmark corpus. \n\nTable 1 only gives results with the same level of parameters, and the ppls are worse than some other models. Another angle might be to let the proposed model use a similar hidden layer size of 1500 plus the stack, and see how much ppl reduction it could get.\n\nFinally, the authors should do some experiments on machine translation or speech recognition and see whether the model could get a performance improvement.\n\n\n", "The main contributions of this paper are:\n(a) a proposed extension to the continuous stack model to allow multiple pop operations,\n(b) on a language model task, they demonstrate that their model gives better perplexity than comparable LSTM and attention models, and \n(c) on a syntactic task (non-local subject-verb agreement), again, they demonstrate better performance than comparable LSTM and attention models.\n\nAdditionally, the paper provides a nice introduction to the topic and casts the current models into three categories -- the sequential memory access, the random memory access and the stack memory access models. \n\nTheir analysis in section (3.4) using the Venn diagram and illustrative figures in (3), (4) and (5) provides useful insight into the performance of the model.", "As part of the ICLR 2018 Reproducibility Challenge, we are trying to replicate some of the experiments reported in this paper. We would like to contact the authors of this paper to discuss some of the technical details of the proposed model. We would be very grateful if the authors could get in touch with us at __________@_________.__", "Thank you for your thoughtful review. Based on your suggestion, we have added examples of mistakes made by competing models in the Linzen experiment instead of just the Venn diagram (Table 3).\n\nAnswers to your specific questions:\n- In Figure 1, shouldn't there be an arrow from h_{t-1} to m_t instead of x_{t-1} to m_t?\nThanks for pointing this out. You are correct, we have updated the figure to fix the arrow. \n\n- What are the equations to update the stack? I assume something similar to Joulin & Mikolov (2015)?\nThe equation to update the stack is given in Equation 1 (page 4).\n\n- Do you have any ideas why there is a sharp jump between 4 and 5 attractors (Table 2)?\nWe think that there are two main reasons that could explain the sharp jump. \nThe first one is that there are far fewer test examples in the dataset with 5 attractors (~150) compared to 4 attractors and above (400, 1100, 3800, ...), so the standard error on the reported accuracy is also higher (e.g., 91.6 +/- 1.2 and 88.0 +/- 2.6 for the stack model with 4 and 5 attractors respectively).\nAnother reason could be that sentences with more attractors are much longer than sentences with fewer attractors, so the difficulty increases non-linearly as the number of attractors increases.\n\n- Why no \"pop\" operations in Figure 3 and 4?\nThe \"pop\" operations are shown on the x axis (number of pops). 
Each pair of red and blue bars represents a single pop number.\n", "We would like to note that the main goal of the paper is to compare different memory architectures for RNN language models and analyze what kind of dependencies these models fail to learn.\n\nPerplexity is one metric to evaluate such models, so we use PTB and Wikitext-2---the two most commonly used language modeling datasets---to both compare these models and show that the memory models we implemented perform reasonably well compared to other work on these datasets.\n\nHowever, as noted in our paper, the overall perplexity on these datasets is strongly dominated by words that have few if any long term dependencies, making it difficult to assess when memory helps using perplexity alone.\nInstead of running these models on 1B corpus, which would have the same problem, we chose to include experiments on the Linzen dataset to be able to analyze these memory models further and get a better understanding of their strengths and limitations.\nWe think this set of experiments adds more value and offers a more useful insight into memory augmented RNN LM than another opaque perplexity result on a larger corpus.\n\nApplications to machine translation and speech recognition are beyond the scope of this paper.\n", "email sent! :)", "The reward used is the log probability of the sequence generated, conditional on the sampled stack control decisions. This is thus optimizing an EM-like bound on the marginal likelihood.\n", "Could you please elaborate on the reward that was used for REINFORCE in the Single Computation Discrete Stack?" ]
[ 6, 5, 8, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkFqf0lAZ", "iclr_2018_SkFqf0lAZ", "iclr_2018_SkFqf0lAZ", "iclr_2018_SkFqf0lAZ", "SkiJh-5lM", "SkTEzjqgf", "Hy_dpvVZz", "HJhJDmfxM", "iclr_2018_SkFqf0lAZ" ]
iclr_2018_ry_WPG-A-
On the Information Bottleneck Theory of Deep Learning
The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case. Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.
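The information-plane results summarized in this abstract, and the estimation debate in the reviews and responses below, rest on estimating mutual information from hidden activations. As background, here is a hedged NumPy sketch of the simple binning estimator commonly used in this line of work (discretize activations into equal-width bins and compute discrete mutual information); the bin count and the equal-width choice are exactly the kind of assumptions contested in the discussion, so this is illustrative rather than the paper's exact code.

    import numpy as np

    def binned_mutual_information(t, y, n_bins=30):
        # t: (n_samples, n_units) hidden activations; y: (n_samples,) non-negative
        # integer labels (class labels, or an index per input to estimate I(X; T)).
        edges = np.linspace(t.min(), t.max(), n_bins + 1)
        digitized = np.digitize(t, edges[1:-1])            # per-unit bin indices
        # Each distinct joint binning pattern is one discrete state of T.
        _, t_states = np.unique(digitized, axis=0, return_inverse=True)

        def entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        joint = t_states.astype(np.int64) * (int(y.max()) + 1) + y
        return entropy(t_states) + entropy(y) - entropy(joint)  # I(T; Y) in bits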
accepted-poster-papers
This submission explores recent theoretical work by Shwartz-Ziv and Tishby on explaining the generalization ability of deep networks. The paper gives counter-examples that suggest aspects of the theory might not be relevant for all neural networks. There is some uncertainty surrounding the results where mutual information is estimated empirically. Even state-of-the-art estimation methods might lead to misleading empirical results. However, the submission appears to follow reasonable practice following previous work, making the reported results at least suggestive. They warrant reporting for further study and discussion. The reviewers generally found the paper interesting enough for acceptance; however, strong objections were posted by Tishby. A lengthy public exchange resulted between the groups of authors. Not every part of this exchange is resolved. It is not clear whether Tishby's group would be able to fix the fully-connected ReLU demonstration in this paper, or whether the authors of this submission have anything to say about Tishby's ReLU+convnet demonstration. By accepting this work, we are not declaring where this debate will end. However, we felt the current submission is a constructive part of the ongoing discussion in the literature on furthering our theoretical understanding of neural networks.
train
[ "Bkzy2_YeG", "rJzOv7qxG", "rJdaeccgf", "ByOQJGsXf", "HkHYpbjXG", "HymkpZs7z", "rkeo2Zi7z", "BkGKibjmM", "rkdv9WsXf", "Skp7LbP-f", "ryJ6FjclG", "rJ53OMckM", "HJbi_G5Jf", "SkbXdzckf", "BksedMckM", "Byn5PG9yG", "S12WZqNyz", "S1lBxcE1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "author", "author", "author", "author", "author", "public", "public" ]
[ "This paper presents a study on the Information Bottleneck (IB) theory of deep learning, providing results in contrasts to the main theory claims. According to the authors, the IB theory suggests that the network generalization is mainly due to a ‘compression phase’ in the information plane occurring after a ‘fitting phase’ and that the ‘compression phase’ is due to the stochastic gradient decent (SDG). Instead, the results provided by this paper show that: the generalization can happen even without compression; that SDG is not the primary factor in compression; and that the compression does not necessarily occur after the ‘fitting phase’. Overall, the paper tackles the IB theory claims with consistent methodology, thus providing substantial arguments against the IB theory. \n\nThe main concern is that the paper is built to argue against another theoretical work, raising a substantial discussion with the authors of the IB theory. This paper should carefully address all the raised arguments in the main text. \n\nThere are, moreover, some open questions that are not fully clear in this contribution:\n1)\tTo evaluate the mutual information in the ReLu networks (sec. 2) the authors discretize the output activity in their range. Should the non-linearity of ReLu be considered as a form of compression? Do you check the ratio of ReLus that are not active during training or the ratio of inputs that fall into the negative domain of each ReLu? \n2)\tSince one of today common topics is the training of deep neural networks with lower representational precision, could the quantization error due to the low precision be considered as a form of noise inserted in the network layers that influences the generalization performance in deep neural networks? \n3)\tWhat are the main conclusions or impact of the present study in the theory of neural networks? Is it the authors aim to just demonstrate that the IB theory is not correct? Perhaps, the paper should empathize the obtained results not just in contrast to the other theory, but proactively in agreement with a new proposal. \n\nFinally, a small issue comes from the Figures that need some improvement. In most of the cases (Figure 3 C, D; Figure 4 A, B, C; Figure 5 C, D; Figure 6) the axes font is too small to be read. Figure 3C is also very unclear.\n", "The authors address the issue of whether the information bottleneck (IB) theory can provide insight into the working of deep networks. They show, using some counter-examples, that the previous understanding of IB theory and its application to deep networks is limited.\n\nPROS: The paper is very well written and makes its points very clearly. To the extent of my knowledge, the content is original. Since it clearly elucidates the limitations of IB theory in its ability to analyse deep networks, I think it is a significant \ncontribution worthy of acceptance. The experiments are also well designed and executed. \n\nCONS: On the downside, the limitations exposed are done so empirically, but the underlying theoretical causes are not explored (although this could be potentially because this is hard to do). Also, the paper exposes the limitations of another paper published in a non-peer reviewed location (arXiv) which potentially limits its applicability and significance.\n\nSome detailed comments:\n\nIn section 2, the influence of binning on how the mutual information is calculated should be made clear. 
Since the comparison is between a bounded non-linearity and an unbounded one, it is not self-evident how the binning in the latter case should be done. A justification for the choice made for binning the relu case would be helpful.\n\nIn the same section, it is claimed that the dependence of the mutual information I(X; T) on the magnitude of the weights of the network explains why a tanh non-linearity shows the compression effect (non-monotonicity vs I(X; T)) in the information plane dynamics. But the claim that large weights are required for doing anything useful is unsubstantiated and would benefit from having citations to papaers that discuss this issue. If networks with small weights are able to learn most datasets, the arguments given in this section wouldn't be applicable in its entirety.\n\nAdditionally, figures that show the phase plane dynamics for other non-linearities e.g. relu+ or sigmoid, should be added, at \nleast in the supplementary section. This is important to complete the overall picture of how the compression effect depends on having specific activation functions.\n\nIn section 3, a sentence or two should be added to describe what a \"teacher-student setup\" is, and how it is relevant/interesting.\n\nAlso in section 3, the cases where batch gradient descent is used and where stochastic gradient descent is used should be \npointed out much more clearly. It is mentioned in the first line of page 7 that batch gradient descent is used, but it is not \nclear why SGD couldn't have been used to keep things consistent. This applies to figure 4 too. \n\nIn section 4, it seems inconsistent that the comparison of SGD vs BGD is done using linear network as opposed to a relu network which is what's used in Section 2. At the least, a comparison using relu should be added to the supplementary section.\n\nMinor comments \nThe different figure styles using in Fig 4A and C that have the same quantities plotted makes it confusing.\nAn additional minor comment on the figures: some of the labels are hard to read on the manuscript.", "A thorough investigation on Info Bottleneck and deep learning, nice to read with interesting experiments and references. Even though not all of the approach is uncontroversial (as the discussion shows), the paper contributes to much needed theory of deep learning rather than just another architecture. \nEstimating the mutual information could have been handled in a more sophisticated way (eg using a Kraskov estimator rather than simple binning), and given that no noise is usually added the discussion about noise and generalisation doesn't seem to make too much sense to me. \n\nIt would have been good to see a discussion whether another measurement that would be useful for single-sided saturating nonlinearities that do show a compression (eg information from a combination of layers), from learnt representations that are different to representations learnt using double-sided nonlinearities. \n\nRegarding the finite representation of units (as in the discussion) it might be helpful to also consider an implementation of a network with arbitrary precision arithmetic as an additional experiment. \n\nOverall I think it would be nice to see the paper accepted at the very least to continue the discussion. ", "\n-1. That the compression of the representation (reducing I(T:X)) is due to the saturated non-linearity and is not appear with other non-linearity (RelU’s in particular). The authors don’t know how to estimate mutual information correctly. 
When properly done, there essentially the same fitting and compression phases with RelU’s and any other network we examined: Here are the Information Plane trajectories for the CIFAR-10 CNN networks with RelU’s non-linearity as shown in our presentations: Figure 1 (see attachment). One can easily see the two phases and the phase transition between them (where I(X;T) has its maximum). \n\nIn response to these concerns, we have run additional experiments and now include results using the state-of-the-art nonparametric KDE approach of Kolchinsky et al., (2017) as suggested in your prior comment, and the k-NN estimator of Kraskov et al, 2004. We find no compression with ReLUs or linear networks using (a) the exact MI estimation procedure used in “Opening the black box,” (b) the exact MI calculation with no approximations in linear networks, (c) the nonparametric KDE approach on the MNIST dataset, and (d) the Kraskov et al., (2004) k-NN based estimator. We note that, in addition to showing the robustness of our findings to the specific MI estimation method employed, these results also show the robustness to the particular dataset used: we find similar results using the original dataset of “Opening the….”, the linear student-teacher dataset, and now the MNIST dataset. \n\n-2. “that there is no evident causal connection between compression and generalization” We rigorously proved that compression leads to dramatic improvement in generalization, providing that the partitions remained homogenous to the label probability. In fact we argue that any bit of representation compression (under these conditions) is effective as doubling the size of the training data! Here is the sketch of our proof as given in our presentations: Figure 2 (see attachment).\n\nWe note that the caveat is critical (“providing that the partitions remained homogenous to the label probability”): reducing to a discrete representation in which all inputs associated with the same discrete value have the same class label would be of benefit if possible. However no rigorous argument is given for why deep networks might achieve this special label-homogeneous partition if they in fact do, and this would seem to be the core fact to be explained. We also note that we observe similar generalization performance between Tanh and ReLU networks despite different compression dynamics, indicating that compression is not a major factor in the empirical behavior we observe (for instance, the bound might apply but be too weak).\n\n-3. “that the compression is unrelated to the noisy (low SNR) phase of the gradients”, as we claim. Below are some figures that clearly show the precise relation between the beginning of the compression phase (argmax I(X;T) for the last hidden layer (green line on left) and the gradient-SNR transition (blue line on right): Figure 3 (see attachment) Moreover, when changing the min-batch size (from 32 to 4000) both transitions move together in perfect linear relationship (left). In fact we show (right) that the full batch case (BGD of the “paper”) lies on the same line (green point) which suggests that the reported compression here is exactly the same phenomena, for much weaker gradient noise (as we claimed). We believe these facts nullify the arguments given in this “paper” all together.\n\nWe have added plots of the gradient SNR to the appendix. As can be seen, in all cases (tanh, ReLU, and linear) we observe the two phases of gradient SNR (as is to be expected, e.g., Murata, 1998; Chee & Toulis, 2017). 
However we see no compression for ReLU or linear networks, indicating that these phenomena are unrelated. That is, because ReLU/linear networks do not compress, the plots given in the attachment would be vertical lines with no correlation between batch size and argrmax I(X;T) or between the gradient SNR phase transition and argmax I(X;T) (because argmax I(X;T) would be the final epoch of training in networks that do not compress). Hence, while we agree that this correlation exists for tanh networks, it does not for ReLU/linear networks, and therefore noise in the training process is not the causal mechanism behind compression. (We also note that there may be a plotting issue here—the x-axis of the left panel is labeled argmax I(X;T) and ranges from 0 to ~2000, while the y-axis of the right panel is also labeled argmax I(X;T) but ranges from 0 to ~200.) \n", "Please also see our comments to all reviewers above.\n\n-This paper presents a study on the Information Bottleneck (IB) theory of deep learning, providing results in contrasts to the main theory claims. According to the authors, the IB theory suggests that the network generalization is mainly due to a ‘compression phase’ in the information plane occurring after a ‘fitting phase’ and that the ‘compression phase’ is due to the stochastic gradient decent (SDG). Instead, the results provided by this paper show that: the generalization can happen even without compression; that SDG is not the primary factor in compression; and that the compression does not necessarily occur after the ‘fitting phase’. Overall, the paper tackles the IB theory claims with consistent methodology, thus providing substantial arguments against the IB theory.\n\nThank you!\n\n-The main concern is that the paper is built to argue against another theoretical work, raising a substantial discussion with the authors of the IB theory. This paper should carefully address all the raised arguments in the main text.\n\nThe revision now addresses these arguments in the main text. We believe the conclusions in our original submission still stand, and are now supported by additional experiments.\n\n-There are, moreover, some open questions that are not fully clear in this contribution:\n-1) To evaluate the mutual information in the ReLu networks (sec. 2) the authors discretize the output activity in their range. Should the non-linearity of ReLu be considered as a form of compression? Do you check the ratio of ReLus that are not active during training or the ratio of inputs that fall into the negative domain of each ReLu?\n\nOur discretization does consider the nonlinearity of ReLU, which could in principle lead to compression if ReLUs tended to inactivate over the course of training. However they do not seem to in practice, which can be seen from the histograms of activity over training in Fig. 17. The bottom-most bin contains zero, the ReLU saturation value. There is no consistent trend in the number of saturated ReLU activations over training, with most layers ending up about where they started, with neurons inactive on roughly 50% of examples.\n\n-2) Since one of today common topics is the training of deep neural networks with lower representational precision, could the quantization error due to the low precision be considered as a form of noise inserted in the network layers that influences the generalization performance in deep neural networks?\n\nThank you for the suggestion, we now point to this possibility in the discussion. 
For networks which explicitly incorporate noise in their architecture (either through quantization or noise injection), the broader information bottleneck theory may apply and yield potentially new training algorithms. Our point in this paper is that the specific claims of the information bottleneck theory of deep learning, which attempt to explain the performance of “vanilla” deep networks with no quantization or noise, do not in fact explain the generalization performance of these networks.\n\n-3) What are the main conclusions or impact of the present study in the theory of neural networks? Is it the authors aim to just demonstrate that the IB theory is not correct? Perhaps, the paper should empathize the obtained results not just in contrast to the other theory, but proactively in agreement with a new proposal.\n\nThere are a variety of theories (several cited in our introduction) which may be consistent with all of the results reported in this paper. Most directly, the results in Advani & Saxe, 2017 successfully account for generalization behavior in the linear models we study. However even there, it remains to be seen how those ideas might apply to deep nonlinear networks. It is outside the scope of this paper to provide strong support for any one of these theories, as singling out one theory as better would require experiments designed to specifically test them, which must be left for future work. Our aim rather was to carefully and fairly inspect an exciting and, it seemed to us, promising theory, and the result turned out to be somewhat negative. In our view, negative results are a critical component of a healthy research ecosystem, and on occasion science advances through falsification. The impact of the present study on the theory of neural networks is to help narrow the field of plausible candidate theories. We expect our results to be important to researchers currently building off of the information bottleneck theory of deep learning.\n\n-Finally, a small issue comes from the Figures that need some improvement. In most of the cases (Figure 3 C, D; Figure 4 A, B, C; Figure 5 C, D; Figure 6) the axes font is too small to be read. Figure 3C is also very unclear.\n\nWe apologize for this issue, we have increased the size of several figures and are working towards a revision with the rest corrected.\n", "-Some detailed comments:\n\n-In section 2, the influence of binning on how the mutual information is calculated should be made clear. Since the comparison is between a bounded non-linearity and an unbounded one, it is not self-evident how the binning in the latter case should be done. A justification for the choice made for binning the relu case would be helpful.\n\nFor ReLU, we simply space bins up to the largest activation value encountered over the course of training (this method places no a priori assumption on how large the activations might grow, and is equivalent to having bins stretching to infinity since all larger bins would never be used and have probability zero). We have added an extended discussion to the appendix which, in addition to these points, shows the results of alternative binning strategies.\n\n-In the same section, it is claimed that the dependence of the mutual information I(X; T) on the magnitude of the weights of the network explains why a tanh non-linearity shows the compression effect (non-monotonicity vs I(X; T)) in the information plane dynamics. 
But the claim that large weights are required for doing anything useful is unsubstantiated and would benefit from having citations to papaers that discuss this issue. If networks with small weights are able to learn most datasets, the arguments given in this section wouldn't be applicable in its entirety.\n\nWe have now included an appendix which justifies this claim. First, we note that nonlinearities like tanh are linear near the origin. Hence small weights place activities in this linear regime and the network can only compute a linear function of the input. As essentially all real world tasks are nonlinear, it is a virtual necessity for the weights to increase until the tanh nonlinearities saturate on some examples. More generally, we cite Rademacher complexity bounds which depend on the norm of the weights (implying that small weight networks can represent only simple functions). Finally, as an emiprical matter, we show that for the tanh, ReLU, and linear networks considered in this paper the weight norms increase in every layer over training.\n\n-Additionally, figures that show the phase plane dynamics for other non-linearities e.g. relu+ or sigmoid, should be added, at least in the supplementary section. This is important to complete the overall picture of how the compression effect depends on having specific activation functions.\n\nThank you, we have now added two more nonlinearities (softplus and softsign) to the appendix, which also show similar results.\n\n -In section 3, a sentence or two should be added to describe what a \"teacher-student setup\" is, and how it is relevant/interesting. Also in section 3, the cases where batch gradient descent is used and where stochastic gradient descent is used should be pointed out much more clearly. It is mentioned in the first line of page 7 that batch gradient descent is used, but it is not clear why SGD couldn't have been used to keep things consistent. This applies to figure 4 too.\n\nWe have now more fully described the student-teacher scenario, and more carefully labeled the batch size in our experiments (though we note that it made little difference on the information plane dynamics in our hands).\n\n-In section 4, it seems inconsistent that the comparison of SGD vs BGD is done using linear network as opposed to a relu network which is what's used in Section 2. At the least, a comparison using relu should be added to the supplementary section.\n\nWe now use the ReLU network in the main text, and have placed the linear network result in the appendix.\n\n-Minor comments: The different figure styles using in Fig 4A and C that have the same quantities plotted makes it confusing. An additional minor comment on the figures: some of the labels are hard to read on the manuscript.\n\nWe apologize for these issues, we intend to submit another revision with larger figure captions and consistent plotting styles.\n", "Please also see our comments to all reviewers above.\n\n-PROS: The paper is very well written and makes its points very clearly. To the extent of my knowledge, the content is original. Since it clearly elucidates the limitations of IB theory in its ability to analyse deep networks, I think it is a significant contribution worthy of acceptance. The experiments are also well designed and executed.\n\nThank you!\n\n-CONS: On the downside, the limitations exposed are done so empirically, but the underlying theoretical causes are not explored (although this could be potentially because this is hard to do). 
Also, the paper exposes the limitations of another paper published in a non-peer reviewed location (arXiv) which potentially limits its applicability and significance.\n\nWhile we agree that we have not been able to prove theoretically that, for instance, ReLUs will not compress, we do believe we have elucidated some of the theoretical causes: we present a minimal three neuron model that exhibits the compression phenomenon and give an explicit formula for the binning-based MI estimate; and we give exact calculations of the MI for the linear case, for which the generalization behavior is known. Finally, we now directly discuss the fact that SGD does not necessarily behave like BGD plus additive noise (and hence there is no stochastic relaxation to a Gibbs distribution).\n\nAlthough the information bottleneck theory of deep learning has appeared only as an arXiv paper, it has achieved attention through video lectures and articles in the popular press. Most importantly from our perspective, researchers are actively attempting to build new methods off of the ideas in the information bottleneck theory, and we believe our results could be significant to those efforts—this, in our view, is the main value in our present work.\n", "Please also note our comments to all reviewers above.\n\n-A thorough investigation on Info Bottleneck and deep learning, nice to read with interesting experiments and references. Even though not all of the approach is uncontroversial (as the discussion shows), the paper contributes to much needed theory of deep learning rather than just another architecture. \n\nThanks for the encouraging comments! \n\n-Estimating the mutual information could have been handled in a more sophisticated way (eg using a Kraskov estimator rather than simple binning), and given that no noise is usually added the discussion about noise and generalisation doesn't seem to make too much sense to me. \n\nWe now include the Kraskov estimator as well as a nonparametric KDE estimator, which show similar results to the binning-based estimate.\n\nWe have revised the text to clarify that the `''noise'' in the student-teacher section on generalization is fundamentally different from the noise added to representations for analysis. It represents approximation error (i.e., aspects of the target function which even the best neural network of a given architecture cannot model), and is part of generating an interesting dataset based on a teacher. The noise added to representations for analysis, by contrast, is an assumption which affects the student network itself, and is not part of the operation of the student network in practice.\n\n-It would have been good to see a discussion whether another measurement that would be useful for single-sided saturating nonlinearities that do show a compression (eg information from a combination of layers), from learnt representations that are different to representations learnt using double-sided nonlinearities. \n\nSo long as hidden activities are continuous, we believe that MI between the input and multiple layers simultaneously should show similar dynamics. Given our results, it seems that single-sided saturating nonlinearities do not in general compress, and this would carry through to measures that combine multiple layers (because these layers form a Markov chain).\n\n-Regarding the finite representation of units (as in the discussion) it might be helpful to also consider an implementation of a network with arbitrary precision arithmetic as an additional experiment. 
Overall I think it would be nice to see the paper accepted at the very least to continue the discussion. \n\nThank you for the suggestion, we considered doing an experiment with arbitrary precision but were able to rule out this concern through another route: if noise in batch gradient descent from numerical precision causes the weights to converge to a Gibbs distribution, and this in turn causes compression, then we should see compression in ReLU or linear networks trained with BGD. However we do not, as we now show in Fig. 5D, which makes this explanation unlikely in our eyes. Moreover, even the noise in SGD appears insufficient to cause compression for ReLU or linear networks, and hence is unlikely to be the source of compression more generally.", "We thank all reviewers for their thoughtful comments which have greatly improved the paper. We have just posted a revision which contains changes and additional experiments suggested by the reviewers. Most notably, we have replicated our basic results using the nonparametric KDE estimator (Kolchinsky et al., 2017) suggested for use in the OpenReview discussion, and using the popular k-NN based Kraskov et al., 2004 estimator. Again we find that ReLU networks do not compress while Tanh networks do. We also apply the KDE estimator to networks trained on MNIST, to show that the phenomenon also holds on (small) real world tasks. Additionally, we now include results for two other nonlinearities (soft plus and soft sign), where again only the double saturating nonlinearities show compression. We have also added results concerning the relationship between the two phases of gradient descent and compression: we show that networks exhibit these two phases regardless of the nonlinearity employed (for tanh, relu, or linear), and hence the SGD phases cannot be causally related to compression since networks that do and do not compress still exhibit them. We have endeavored to clarify points in the text, and now include extended discussions of several important points in the Appendix. We emphasize that all results reported in the original submission remain correct to our knowledge, and are left in place largely untouched—our additional results provide more thorough supporting evidence and control experiments that better establish the generality of our findings. \n", "The linked Medium post mentions \"the gradient phase transition\" because it is reporting Shwartz-Ziv & Tishby's paper. I don't see an independent verification there. The post does reference some other papers; does one of them contain such a verification?", "Naftali Tishby and Ravid Shwartz Ziv\n\nFinal public comment on the ICLR 2018 Conference Paper852 \nOn the Information Bottleneck Theory of Deep Learning\nThis “paper” attacks our work through the following flawed and misleading statements: \n\n1.\tThat the compression of the representation (reducing I(T:X)) is due to the saturated non-linearity and is not appear with other non-linearity (RelU’s in particular).\n\nThe authors don’t know how to estimate mutual information correctly. 
When properly done, there are essentially the same fitting and compression phases with ReLUs and any other network we examined:\n\nHere are the Information Plane trajectories for the CIFAR-10 CNN networks with the ReLU non-linearity as shown in our presentations:\n\nFigure 1 (see attachment) \nOne can easily see the two phases and the phase transition between them (where I(X;T) has its maximum).\n\n\n2.\t“that there is no evident causal connection between compression and generalization”\n\nWe rigorously proved that compression leads to a dramatic improvement in generalization, provided that the partitions remain homogeneous with respect to the label probability. In fact we argue that any bit of representation compression (under these conditions) is as effective as doubling the size of the training data! Here is the sketch of our proof as given in our presentations:\n \nFigure 2 (see attachment)\n3.\t“that the compression is unrelated to the noisy (low SNR) phase of the gradients”, as we claim. \n\nBelow are some figures that clearly show the precise relation between the beginning of the compression phase (argmax I(X;T)) for the last hidden layer (green line on left) and the gradient-SNR transition (blue line on right):\n\nFigure 3 (see attachment)\n\nMoreover, when changing the mini-batch size (from 32 to 4000) both transitions move together in a perfect linear relationship (left). In fact we show (right) that the full-batch case (the BGD of the “paper”) lies on the same line (green point), which suggests that the compression reported here is exactly the same phenomenon, for much weaker gradient noise (as we claimed).\n\nWe believe these facts nullify the arguments given in this “paper” altogether.\n\nSee attachment with figures: \nhttps://www.dropbox.com/s/6aotykw6py37z1h/Naftali%20Tishby%20and%20Ravid%20Shwartz%20Ziv-final%20comment.pdf?dl=0\n", "7. We have much to say about the linear analysis. It should be compared, as said in the paper, to the Linear Gaussian IB (GIB). Then one could nicely see the convergence to the GIB information curve through compression (projections onto the CCA space). In general, however, linear networks don’t capture the most interesting aspects of deep learning, in our opinion.\n\n\nLinear neural networks are studied here because they are a simple system (albeit high-dimensional and with non-linear learning dynamics) which we can understand fully and where compression cannot occur due to saturation of nonlinear units, a very important issue as we point out in this work. Linear networks also sidestep the issue of MI estimation methods, because the MI can be calculated exactly. The fact that we do not reliably observe compression, as defined in “Opening the Black Box of Deep Neural Networks via Information,” in linear systems where many of the input dimensions are irrelevant appears to be an important issue which would need to be addressed by such a theory before it is applied to complex systems where there could be multiple reasons for compression. We do observe compression of irrelevant dimensions, as should be expected, but interestingly do not see a compression phase when we consider all dimensions (as is done in “Opening the Black Box ...”), which suggests that saturation of nonlinearities seems to be crucial for observing the compression phase in the way it has been defined. 
Finally, our linear results reiterate that these phenomena arise even with batch training where there is no noise in the training procedure, and hence a stochastic relaxation is not responsible for the resulting information plane dynamics.\n\nOverall, with respect to the theory, we have shown that compression does not happen due to stochastic relaxation. And with respect to the empirical claims in “Opening the Black Box…”, we have shown that the observed compression results arise primarily from the double saturating nonlinearities and method of MI estimation, not stochasticity in SGD. We believe these are important results to communicate to the community.", "6. The main flaw/misconception of this work is in the estimate the mutual information in the RelU case. RelU Networks can’t converge without some regularization which limits the weight magnitude. This induces some distribution on the values of the units with a finite controlled variance, and the CDF of this distribution is the effective nonlinearity. This CDF should be binned equally (max entropy binning) as we do with the saturated tanh nonlinearity. This binning, or noise if you prefer, is NOT arbitrary! It has to to do with the ALWAYS FINITE precision of the units. The mutual information is bounded by both the inputs and layer entropies, and is always finite due to this inherent discretization of the units. When doing this correct quantization on the RelUs, we obtain , as shown in the talk [34:18], exactly the same compression phase as with saturated units. \n\nIn our simulations on ReLU networks in this paper, we have used the binning strategy described in “Opening the Black Box of Deep Neural Networks via information.” At a minimum, our results speak to the empirical methodology used in that paper (we note that max entropy binning, contrary to the comment, is not what is done in the “Opening the Black Box…” paper for tanh). Additionally we point out that in many cases ReLU networks can converge without regularization as has been found in simulations and described in a recent work examining generalization dynamics in neural networks https://arxiv.org/abs/1710.03667.\n\nDespite the fact that our binning strategy is the same one used in the paper in question, in response to the authors’ suggestion we have investigated other binning strategies as well (we emphasize that estimating mutual information in high dimensions is a notoriously difficult task and all binning or parametric methods are approximate, hence none are ‘correct’ or ‘incorrect’). Regarding max entropy binning, we note that this aims to use every bin equally frequently; and therefore the information is constant across training, so long as nonlinearities are invertible. Because of this we thought the even bin spacing used in the original paper was a fairer comparison. In particular, max entropy binning of the full joint CDF would typically yield constant information with respect to the input for linear and tanh networks, contradicting the results in the paper (intuitively, more bins are spaced in the saturation regime of the tanh units). It is possible that tanh units could show compression if they saturate hard enough due to machine precision, but this further highlights the importance of saturation and nonlinearity. We note that calculating the bin spacings from the marginal CDFs for each neuron separately would be far from the true max entropy binning of the joint CDF because it assumes independence between neurons. 
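For concreteness, a minimal sketch of the binning-based estimate being discussed here (illustrative only: the bin count, bin ranges, and the toy one-layer network below are assumptions, not the exact settings used in either paper):

```python
# Illustrative sketch of the binning-based I(X;T) estimate for a deterministic
# layer T = f(X): discretize activations, then I(X;T) = H(T_binned), because
# H(T|X) = 0 when each input always produces the same activation pattern.
import numpy as np

def binned_information(hidden, n_bins=30, lo=-1.0, hi=1.0):
    edges = np.linspace(lo, hi, n_bins + 1)
    digitized = np.digitize(np.clip(hidden, lo, hi), edges[1:-1])   # (n_inputs, n_units)
    _, counts = np.unique(digitized, axis=0, return_counts=True)    # one symbol per input
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())                           # H(T) in bits

rng = np.random.default_rng(0)
X = rng.normal(size=(4096, 12))
W = 4.0 * rng.normal(size=(12, 8))        # large weights -> strongly saturated tanh units
H = X @ W
print("tanh :", binned_information(np.tanh(H)))                     # saturation merges bin patterns
print("ReLU :", binned_information(np.maximum(H, 0.0), lo=0.0, hi=float(H.max())))
```

Under this kind of estimator the apparent compression depends directly on how the activations are discretized, which is the methodological point at issue in this exchange.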
For ReLU networks, max entropy binning may not be able to achieve equal frequency bins because the nonlinearity is not invertible (any negative preactivations map exactly to zero). However this clearly shows that compression, if it is measured under this method, would be due to the impact of saturation and nonlinearity; and furthermore, shows the sensitivity of observed compression in the information plane to the method of MI estimation. More broadly, estimating mutual information is a difficult task in high dimensions regardless of the estimation technique employed. Thankfully, we note that our linear networks permit exact calculation of the MI, sidestepping the necessity to estimate MI entirely. For the linear case we find that there is no compression, providing at least one counterexample showing that a two phase fitting/compression dynamic is not a universal phenomenon. We believe our results provide reason for caution that every deep network will undergo a two phase fitting/compression dynamic. \n\n \n", "4. We also showed in the talk [32:11]and paper that there are clearly and directly two phases of the gradients distribution. First, high SNR gradients follow by a sharp flip to low SNR gradients, which corresponds to the slow saturation of the training error. This clear gradients phase transition, which we see with all types of non-linearities and architectures, beautifully corresponds to the “knee” between memorization and compression phases in the information plane. This gradient phase transition was reported by several other people. See e.g. https://medium.com/intuitionmachine/the-peculiar-behavior-of-deep-learning-loss-surfaces-330cb741ec17. This can be explained as done by Poggio in his theory 3 paper, or by Riccardo Zecchina and his coworkers using statistical mechanics. \n\nPlease see our response to comment 5, which also addresses these comments.\n\n5. This transition has little to do with the saturation of the nonlinearities, but mainly with the complex nature of the training error surfaces in high dimension. The saturation of the non-linearities is directly related the “collapsing gradients” phenomenon, which is well understood and led to the usage of RelU and other non-saturating non-linearities. Our compression phase happens BEFORE this saturation, and the compression is not a consequence of the saturation. Indeed, as we also noted, some of the units are pushed to the hard binary limit eventually, which makes the partition of the encoder harder. This can only enhance the compression, as also shown in this paper (rather inconsistent with other claims in the paper). \n\nThese two phases of stochastic gradient descent are general, known variously as the transient and stochastic phases or search and convergence phases, and are not a result of the complex nature of the training error surface in high dimensions (see, eg, Murata, 1998; Chee & Toulis, 2017). For instance, our simple 3 neuron model is not high dimensional but nevertheless shows this behavior, which has a straightforward origin: the transient phase corresponds to forgetting the initialization (if weights are initialized to be small, all must be increased, yielding a consistent mean gradient); the stochastic phase corresponds to oscillating in the vicinity of the minimum (when weights are large and the training error is near zero, different examples need the weights to be increased or decreased, yielding higher variance gradients). However, these two phases are not the cause of the observed compression. 
These phases happen for ReLU and linear networks (plots we will add to the appendix), where no compression is observed. And we emphasize that we have shown directly that compression is indeed a consequence of saturation and the approach to saturation for the tanh networks (note that compression due to the tanh nonlinearity can happen well before the ‘hard binary limit’). That is, we agree that SGD has two phases in general; but we disagree that these phases are causally connected to the compression observed, which we have shown to be due to the nonlinearity and binning methodology.", "3. Also showed in these talks some of our newer simulations, which include much larger and different problems (MNIST, CIFAR-10 with RelU nonlinearties, different architectures, CNN, Linear networks, etc.). In ALL these networks we observe essentially the same picture: at least the last hidden layer first improves generalization error (which is actually proved in my Berlin talk [20:53] to be DIRECTLY bounded by the mutual information on Y) by fitting the training data and adding more information on the inputs, and then further improve generalization by compressing the representation and “forget” the irrelevant details of the inputs. During both these phases of training the information on the relevant components of the input increases monotonically, as we show in our paper and nicely verified in the last section of this paper. One can of course have input compression without generalization, when the training size is too small to keep the homogeneity of the cover. This we clearly show in the paper and talk ([28:34] top left), as follows from the theory. \n\nOur results differ with these findings for ReLU and linear networks. Regarding compression in “at least the last hidden layer,” if this is the final softmax output layer, the results would be consistent with ours. If this is the final ReLU/linear layer just before the softmax output, does this mean that compression is not observed in lower layers? Again, in our simulations, we see no compression in any ReLU or linear layer, only at the final softmax output (which, as we show, can be explained by the double saturating nonlinearity in the final layer). Because they are analytically tractable, our results in linear networks in particular show that no compression in any layer occurs in these instances. Regarding the comment “information on the relevant components of the input increases monotonically, as we show in our paper”, we are not sure what this refers to in “Opening the Black Box…”. We note that we can define relevant input components in our case only because we specify the data generation process. However, this is not possible for datasets like the ones used in “Opening the Black Box…”. \n\n", "Thank you for the comments, we have carefully investigated them and responded in full below. \n\n2. In the archive papers and much more in the YouTube talks [https://www.youtube.com/watch?v=bLqJHjXihK8&t=912s , https://www.youtube.com/watch?v=FSfN2K3tnJU&t=5781s] which followed it, we give two independent theoretical arguments on (1) why and how the compression of the representation dramatically improves generalization, and (2) how the stochastic relaxation, due to either noise of the SGD by mini batches, OR a noisy training energy surface which effectively adds smaller similar noise also to BGD, push the weights distribution to a Gibbs measure in the training error. 
This is an old argument used in the statistical mechanics of learning 25 years ago, and is used today by many (e.g. Poggio). We then argue that this weight Gibbs distribution leads directly (essentially through Bayes rule) to the IB optimal encoders of the layers. These theoretical results are the real core of our theory, not the numerical simulations.\n\nWe disagree with a core theoretical result of this theory, namely that stochastic relaxation is responsible for the compression phase. There are two proposed ways that a stochastic relaxation could arise: first, SGD could behave eventually like a constrained diffusion; and second, a “noisy training energy surface” could effectively add noise to BGD. With respect to SGD, we have shown that the compression phase occurs even without it by using BGD without adding noise. Moreover, the theory relies on the noise in SGD acting like a constrained diffusion, whereas the behavior of SGD is in fact far from this because updates are highly correlated. This was also pointed out in a recent ICLR submission [“On the inductive bias of stochastic gradient descent” https://openreview.net/forum?id=HyWrIgW0W ] which states: “SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components.” \n\nWith respect to the suggestion that there is a “noisy training energy surface which effectively adds noise also to BDG,” in our batch gradient descent setting the training energy surface contains no noise that would cause the weights to converge to a Gibbs distribution. The statistical mechanics of learning papers (eg, Seung, Sompolinsky, & Tishby (1992)) explicitly add isotropic noise to the gradient to obtain Langevin dynamics and a Gibbs distribution over weights as is done in Eqn. 9 of Poggio’s Theory 3 paper, which we point out does not claim an equivalence between SGD and these Langevin dynamics. We emphasize that adding noise to the learning rules is not the standard practice in deep networks. For our batch GD setting there is no noise in the training dynamics, no Gibbs distribution on the weights, and yet nevertheless we observe nearly identical dynamics in the information plane. \n\nUsing the simple three neuron model, we show clearly that nonlinearity and the binning procedure can cause compression in this instance. This is our main point, which addresses a core claim of the information bottleneck theory of deep learning: compression does not appear to happen through a stochastic relaxation because (a) the randomness in SGD does not behave like a diffusion, (b) we observe identical compression even with true batch GD, where there is no noise and no stochastic relaxation, and (c) we have identified a simple mechanism that explains the observed empirical results based on the neural nonlinearity. We disagree with the statements “the diffusion phase mostly adds random noise to the weights, and they evolve like Wiener processes...” and “The stochasticity of SGD methods is usually motivated as a way of escaping local minima of the training error. In this paper we give it a new, perhaps much more important role: it generates highly efficient internal representations through compression by diffusion” for the reasons outlined above.\n\n", "Part 2 of the response of Naftali Tishby and Ravid Shwartz-Ziv\n\n6. 
The main flaw/misconception of this work is in the estimate the mutual information in the RelU case. For RelU Networks often people use some regularization which limits the weight magnitude. Anyway, even without regularization, there is some distribution on the values of the units with a finite controlled variance, and the CDF of this distribution is the effective nonlinearity. This CDF should be binned equally (max entropy binning) as we do with the saturated tanh nonlinearity. This binning, or noise if you prefer, is NOT arbitrary! It has to to do with the ALWAYS FINITE precision of the units. The mutual information is bounded by both the inputs and layer entropies, and is always finite due to this inherent discretization of the units. When doing this correct quantization on the RelUs, we obtain , as shown in the talk [34:18], exactly the same compression phase as with saturated units. \n\nIn fact binning is only the simplest way to estimate the MI. For RelU units (and for much larger networks) we estimated it using more sophisticated parametric methods such as mixture of Gaussian. \nThere is a lot of detailed literature on how to estimate MI in DNNs in practice. See e.g. https://arxiv.org/abs/1705.02436 .\n\n7. We have much to say about the linear analysis. It should be compared, as said in the paper, to the Linear Gaussian IB (GIB). Then one could nicely see the convergence to the GIB information curve through compression (projections to the CCA space). In general, however, linear networks don’t capture the most interesting aspects of deep learning, in our opinion.", "1. We would like to thank the authors for taking the effort to repeat and verify many of our numerical experiments. Basically, this paper confirms our theory and strengthen it. Unfortunately, the paper ignores much of our theoretical and experimental results and is flawed and misleading in many ways.\n\n2. In the archive papers and much more in the YouTube talks [https://www.youtube.com/watch?v=bLqJHjXihK8&t=912s , https://www.youtube.com/watch?v=FSfN2K3tnJU&t=5781s] which followed it, we give two independent theoretical arguments on (1) why and how the compression of the representation dramatically improves generalization, and (2) how the stochastic relaxation, due to either noise of the SGD by mini batches, OR a noisy training energy surface which effectively adds smaller similar noise also to BGD, push the weights distribution to a Gibbs measure in the training error. This is an old argument used in the statistical mechanics of learning 25 years ago, and is used today by many (e.g. Poggio).\nWe then argue that this weight Gibbs distribution leads directly (essentially through Bayes rule) to the IB optimal encoders of the layers. These theoretical results are the real core of our theory, not the numerical simulations.\n\n3. Also showed in these talks some of our newer simulations, which include much larger and different problems (MNIST, CIFAR-10 with RelU nonlinearties, different architectures, CNN, Linear networks, etc.). \nIn ALL these networks we observe essentially the same picture: at least the last hidden layer first improves generalization error (which is actually proved in my Berlin talk [20:53] to be DIRECTLY bounded by the mutual information on Y) by fitting the training data and adding more information on the inputs, and then further improve generalization by compressing the representation and “forget” the irrelevant details of the inputs. 
During both these phases of training the information on the relevant components of the input increases monotonically, as we show in our paper and nicely verified in the last section of this paper. One can of course have input compression without generalization, when the training size is too small to keep the homogeneity of the cover. This we clearly show in the paper and talk ([28:34] top left), as follows from the theory.\n\n4. We also showed in the talk [32:11]and paper that there are clearly and directly two phases of the gradients distribution. First, high SNR gradients follow by a sharp flip to low SNR gradients, which corresponds to the slow saturation of the training error. This clear gradients phase transition, which we see with all types of non-linearities and architectures, beautifully corresponds to the “knee” between memorization and compression phases in the information plane. \nThis gradient phase transition was reported by several other people. See e.g. https://medium.com/intuitionmachine/the-peculiar-behavior-of-deep-learning-loss-surfaces-330cb741ec17.\nThis can be explained as done by Poggio in his theory 3 paper, or by Riccardo Zecchina and his coworkers using statistical mechanics. \n\n5. This transition has little to do with the saturation of the nonlinearities, but mainly with the complex nature of the training error surfaces in high dimension. The saturation of the non-linearities is directly related the “collapsing gradients” phenomenon, which is well understood and led to the usage of RelU and other non-saturating non-linearities. \nOur compression phase happens BEFORE this saturation, and the compression is not a consequence of the saturation. Indeed, as we also noted, some of the units are pushed to the hard binary limit eventually, which makes the partition of the encoder harder. This can only enhance the compression, as also shown in this paper (rather inconsistent with other claims in the paper). \n\nSee also part 2.\n\n" ]
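As a concrete reference point for the gradient-SNR discussion above, one simple way to track the quantity being argued about is sketched below (the grad_fn callable, the toy loss, and this particular SNR definition — norm of the mean minibatch gradient over the norm of its standard deviation — are assumptions for illustration, not the exact measurement used by either side):

```python
# Illustrative sketch: gradient signal-to-noise ratio across minibatches at fixed
# weights. A high-then-low SNR curve over training is the "two phases of the
# gradients" referred to in this exchange.
import numpy as np

def gradient_snr(grad_fn, params, batches):
    grads = np.stack([grad_fn(params, b) for b in batches])   # (n_batches, n_params)
    signal = np.linalg.norm(grads.mean(axis=0))
    noise = np.linalg.norm(grads.std(axis=0))
    return signal / max(noise, 1e-12)

# Toy usage with a linear least-squares loss; replace grad_fn with a network's gradient.
rng = np.random.default_rng(0)
w = np.zeros(5)
batches = [(rng.normal(size=(32, 5)), rng.normal(size=32)) for _ in range(50)]
grad_fn = lambda w, b: b[0].T @ (b[0] @ w - b[1]) / len(b[1])
print(gradient_snr(grad_fn, w, batches))
```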
[ 6, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry_WPG-A-", "iclr_2018_ry_WPG-A-", "iclr_2018_ry_WPG-A-", "ryJ6FjclG", "Bkzy2_YeG", "rkeo2Zi7z", "rJzOv7qxG", "rJdaeccgf", "iclr_2018_ry_WPG-A-", "S1lBxcE1z", "iclr_2018_ry_WPG-A-", "HJbi_G5Jf", "S12WZqNyz", "BksedMckM", "Byn5PG9yG", "S1lBxcE1z", "S1lBxcE1z", "iclr_2018_ry_WPG-A-" ]
iclr_2018_BJJ9bz-0-
Reinforcement Learning from Imperfect Demonstrations
Robust real-world learning should benefit from both demonstrations and interaction with the environment. Current approaches to learning from demonstration and reward perform supervised learning on expert demonstration data and use reinforcement learning to further improve performance based on reward from the environment. These tasks have divergent losses which are difficult to jointly optimize; further, such methods can be very sensitive to noisy demonstrations. We propose a unified reinforcement learning algorithm that effectively normalizes the Q-function, reducing the Q-values of actions unseen in the demonstration data. Our Normalized Actor-Critic (NAC) method can learn from demonstration data of arbitrary quality and also leverages rewards from an interactive environment. NAC learns an initial policy network from demonstration and refines the policy in a real environment. Crucially, both learning from demonstration and interactive refinement use exactly the same objective, unlike prior approaches that combine distinct supervised and reinforcement losses. This makes NAC robust to suboptimal demonstration data, since the method is not forced to mimic all of the examples in the dataset. We show that our unified reinforcement learning algorithm can learn robustly and outperform existing baselines when evaluated on several realistic driving games.
workshop-papers
I appreciate the experimental results, which include a comparison against several baselines; however, I echo some of the concerns raised by the reviewers that the formulation is unclear and hard to follow. Moreover, the novelty over [Nachum, 2017] and [Haarnoja, 2017] seems small, especially because [Nachum, 2017] also used expert trajectories to improve performance in their experiments. Detailed comment: the use of log-sum-exp state values is only valid for the optimal policy, so it is not clear how an on-policy state value is replaced with the log-sum-exp state value. Also, because the equations that you derive characterize the optimal policy, I am not sure if you need importance correction at all.
train
[ "rynqOnBez", "ryE3gOulz", "B1ornw9xG", "Hk_MXdpQM", "r1mUz_TXf", "HJA2l_67z", "BJL7Cv6mM", "Hym3W3ref" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public", "official_reviewer" ]
[ "Thanks for all the explanations on my review and the other comments. While I can now clearly see the contributions of the paper, the minimal revisions in the paper do not make the contributions clear yet (in my opinion that should already be clear after having read the introduction). The new section \"intuitive analysis\" is very nice.\n\n*******************************\n\nMy problem with this paper that all the theoretical contributions / the new approach refer to 2 arXiv papers, what's then left is an application of that approach to learning form imperfect demonstrations.\n\nQuality\n======\nThe approach seems sound but the paper does not provide many details on the underlying approach. The application to learning from (partially adversarial) demonstrations is a cool idea but effectively is a very straightforward application based on the insight that the approach can handle truly off-policy samples. The experiments are OK but I would have liked a more thorough analysis.\n\nClarity\n=====\nThe paper reads well, but it is not really clear what the claimed contribution is.\n\nOriginality\n=========\nThe application seems original.\n\nSignificance\n==========\nHaving an RL approach that can benefit from truly off-policy samples is highly relevant.\n\nPros and Cons\n============\n+ good results\n+ interesting idea of using the algorithm for RLfD\n- weak experiments for an application paper\n- not clear what's new", "This paper proposes a method to learn a control policy from both interactions with an environment and demonstrations. The method is inspired by the recent work on max entropy reinforcement learning and links between Q-learning and policy gradient methods. Especially the work builds upon the recent work by Haarnoja et al (2017) and Schulman et al (2017) (both unpublished Arxiv papers). \n\nI'm also not sur to see much differences with the previous work by Haarnoja et al and Schulman et al. It uses demonstrations to learn in an off-policy manner as in these papers. Also, the fact that the importance sampling ration is always cut at 1 (or not used at all) is inherited from these papers too. \n\nThe authors say they compare to DQfD but the last version of this method makes use of prioritized replay so as to avoid reusing too much the expert transitions and overfit (L2 regularization is also used). It seems this has not been implemented for comparison and that overfitting may come from this method missing. \n\nI'm also uncomfortable with the way most of the expert data are generated for experiments. Using data generated by a pre-trained network is usually not representative of what will happen in real life. Also, corrupting actions with noise in the replay buffer is not simulating correctly what would happen in reality. Indeed, a single error in some given state will often generate totally different trajectories and not affect a single transition. So imperfect demonstration have very typical distributions. I acknowledge that some real human demonstrations are used but there is not much about them and the experiment is very shortly described. ", "SUMMARY:\n\nThe motivation for this work is to have an RL algorithm that can use imperfect demonstrations to accelerate learning. 
The paper proposes an actor-critic algorithm, called Normalized Actor-Critic (NAC), based on the entropy-regularized formulation of RL, which is defined by adding the entropy of the policy as an additional term in the reward function.\nEntropy-regularized formulation leads to nice relationships between the value function and the policy, and has been explored recently by many, including [Ziebart, 2010], [Schulman, 2017], [Nachum, 2017], and [Haarnoja, 2017].\nThe paper benefits from such a relationship and derives an actor-critic algorithm. Specifically, the paper only parametrizes the Q function, and computes the policy gradient using the relation between the policy and Q function (Appendix A.1).\n\nThrough a set of experiments, the paper shows the effectiveness of the method.\n\n\nEVALUATION:\n\nI think exploring and understanding entropy-regularized RL algorithm is important. It is also important to be able to benefit from off-policy data. I also find the empirical results encouraging. But I have some concerns about this paper:\n\n- The derivations of the paper are unclear.\n- The relation with other recent work in entropy-regularized RL should be expanded.\n- The work is less about benefiting from demonstration data and more about using off-policy data.\n- The algorithm that performs well is not the one that was actually derived.\n\n* Unclear derivations:\nThe derivations of Appendix A.1 is unclear. It makes it difficult to verify the derivations.\n\nTo begin with, what is the loss function of which (9) and (10) are its gradients?\n\nTo be more specific, the choices of \\hat{Q} in (15) and \\hat{V} in (19) are not clear. For example, just after (18) it is said that “\\hat{Q} could be obtained through bootstrapping by R + gamma V_Q”. But if it is the case, shouldn’t we have a gradient of Q in (15) too? (or show that it can be ignored?)\n\nIt appears that \\hat{Q} and \\hat{V} are parameterized independently from Q (which is a function of theta). Later in the paper they are estimated using a target network, but this is not specified in the derivations.\n\nThe main problem boils down to the fact that the paper does not start from a loss function and compute all the gradients in a systematic way. Instead it starts from gradient terms, each of which seems to be from different papers, and then simplifies them. For example, the policy gradient in (8), which is further decomposed in Appendix A.1 as (15) and (16) and simplified, appears to be Eq. (50) of [Schulman et al., 2017] (https://arxiv.org/abs/1704.06440). In that paper we have Q_pi instead of \\hat{Q} though.\n\nI suggest that the authors start from a loss function and clearly derive all necessary steps.\n\n\n* Unclear relation with other papers:\nWhat part of the derivations of this work are novel? Currently the novelty is not obvious.\nFor example, having the gradient of both Q and V, as in (9), has been stated by [Haarnoja et al., 2017] (very similar formulation is developed in Appendix B of https://arxiv.org/abs/1702.08165).\nAn algorithm that can work with off-policy data has also been developed by [Nachum, 2017] (in the form of a Bellman residual minimization algorithm, as opposed to this work which essentially uses a Fitted Q-Iteration algorithm as the critic).\n\nI think the paper could do a better job differentiating from those other papers.\n\n\n* The claim that this paper is about learning from demonstration is a bit questionable. 
The paper essentially introduces a method to use off-policy data, which is of course important, but does not cover the important scenario where we only have access to (state,action) pairs given by an expert. Here it appears from the description of Algorithm 1 that the transitions in the demonstration data have the same semantic as the interaction data, i.e., (s,a,r,s’).\nThis makes it different from the work by [Kim et al., 2013], [Piot et al., 2014], and [Chemali et al., 2015], which do not require such a restriction on the demonstration data.\n\n\n* The paper mentions that to formalize the method as a policy gradient one, importance sampling should be used (the paragraph after (12)), but the performance of such a formulation is bad, as depicted in Figure 2. As a result, Algorithm 1 does not use importance sampling.\nThis basically suggests that by ignoring the fact that the data is collected off-policy, and treating it as an on-policy data, the agent might perform better. This is an interesting phenomenon and deservers further study, as currently doing the “wrong” things is better than doing the “right” thing. I think a good paper should investigate this fact more.", "We thank you for the feedback and comments. We clarify our contributions in General Clarifications to Common Misunderstandings. Also, we answer the question about 1. Should demonstrations include reward signals?\n2. Why NAC is not a simple application of an off-policy method to the learning from demonstration problem? in the General Clarifications.\n\nBesides the contributions that we respond to all reviewers, we would like to emphasize that although the technical details have appeared in [Haarnoja et al. 2017], our investigation of the method on the learning from demonstration problem and the better performance compared to the baselines are still novel. And as we explained in the common response, the NAC method is not simply taking an off-policy method and apply it on the learning from demonstration (LfD) problem. Many other off-policy methods such as Q learning doesn’t work at all on the LfD problem, while NAC has the correct prior that fits the LfD problems.\n \nThe reviewer also suggests more extensive discussions and analysis of why our method perform better than baselines. We had some mathematical discussions in Section 3. To be more concrete, we pose an intuitive example here. Suppose at some state s0, we observe the demonstration has taken an action a1, and receiving a reward of 1.0. The action space is {a0, a1, a2} for every state. Soft Q takes an update of \\nabla Q(s0, a1)*(Q(s0, a1) - \\hat{Q(s0, a1)}) for the parameter \\theta of the Q function, where \\hat{Q(s0, a1)} = 1.0+gamma*V(s0’) and s0’ is the next state after action a1. This update pushes the value of Q(s0, a1) close to 1.0+gamma*V(s0’). However, at the state s0, we didn’t observe other actions, as it’s usually the case in real world, there is no regressing target for Q(s0, a0) and Q(s0, a2). When we parametrize the Q function with a neural network, there is no guarantee of how Q(s0, a0) and Q(s0, a2) will compare to Q(s0, a1). As a result, the learning algorithm won’t give any meaningful Q values for the unobserved actions. The results in Figure 2 (left) has confirmed our analysis. A similar analysis applies to the hard Q method as well.\n \nNow we turn to analyze the proposed NAC method. The policy gradient update part is (\\nabla Q(s0, a1) - \\nabla V(s0))*(Q(s0, a1) - \\hat{Q(s0, a1)}), where V(s0) = log \\sum_a exp(Q(s0,a)). 
After manipulating the terms, the update is equal to \nabla \frac{\exp(Q(s0, a1))}{\sum_a \exp(Q(s0, a))} * (Q(s0, a1) - \hat{Q}(s0, a1)). Note that the first term becomes the gradient of a cross-entropy loss between a softmax layer output and a synthetic label of action a1. The logits are Q(s0, *). That is to say, when Q(s0, a1) - \hat{Q}(s0, a1) < 0, i.e. Q(s0, a1) is under-estimated, the method tries to increase Q(s0, a1) while pushing down Q(s0, a0) and Q(s0, a2). When Q(s0, a1) is over-estimated, the reverse process happens. It is worth noting that the proposed method naturally has a prior of reducing the undemonstrated Q(s0, a0) and Q(s0, a2) values when the current action a1 is good enough, i.e. when Q(s0, a1) is lower than the bootstrapped \hat{Q}(s0, a1) value. We also add this intuitive analysis in the newer version of our paper.\n \nDQfD works well when the demonstrated action is a good one. However, the supervised loss will deteriorate the performance when the actual demonstrated action is bad. The proposed NAC method, on the other hand, will learn from the bad demonstrations. In the case of a bad action, say the reward of doing a1 at state s0 is no longer 1.0 but -1.0. The second term in the NAC update, Q(s0, a1) - \hat{Q}(s0, a1), is then more likely to turn positive, because the bootstrapped value is lower. In that case, NAC will push down Q(s0, a1) and push up the other two values. Although there is no guarantee of the sign of the second term, it will definitely be more positive when the reward is lower. In contrast, DQfD does not have this adaptive behavior.\n \nThank you for the comments on typos and references. We have already modified the text and fixed the typos according to your suggestion. [Haarnoja et al., 2017] is already a published work at the International Conference on Machine Learning (ICML) 2017 and PMLR. We will correct the reference and cite the correct version.\n\n", "We thank you for the feedback and comments. We clarify our contributions in General Clarifications to Common Misunderstandings.\n \nAbout the baseline DQfD, we have already included the L2 regularization loss in the current version of our experiments, and we will clarify it in the paper. We did not include the prioritized replay buffer in our re-implementation because prioritized replay was not a DQfD component when we wrote the paper. Moreover, since prioritized replay is an independent component for both algorithms, adding it will likely have a similar effect on both NAC and DQfD. However, to make the comparison complete, we will add one more comparison with prioritized replay in the final version.\n \nThe reviewer also mentions the noisy data generation process. We generated data from a trained agent, but we did not simply add noise to the replay buffer. Instead, we corrupt the data while collecting it; the driving agent will therefore have the compound error you mentioned in your comments. To quantify the quality of a dataset, we use a trained agent because we can measure the amount of corrupted actions and use that to indicate how imperfect the dataset is.\n \nWe collected data from amateur human players with their natural capability of playing video games. We could intentionally let the players take wrong actions, but it would not be significantly different from trained agents. We also provide more detail about the human data and our experiments on the human dataset accordingly. \n \nThank you for correcting the reference. [Haarnoja et al., 
2017] is already a published work at International Conference of Machine Learning (ICML) 2017 and PMLR. We will correct the reference and cite the correct version.\n", "We thank you for the feedback and comments. We clarify our contributions in General Clarifications to Common Misunderstandings. Also, we answer the question about 1. Should demonstrations include reward signals?\n2. Why NAC is not a simple application of an off-policy method to the learning from demonstration problem? in the General Clarifications.\n \nWe clarify our derivation of our method in Appendix. A. We reorganize Appendix A to derive the loss and gradient from a single objective.\n \nIn the newer version of the paper, we also include the comparison with the PCL method [Nachum et al., 2017]. We only include this method in the first set of experiment and find that our method consistently outperforms the baseline. We also plan to include this baseline in other settings methods in the final version of this paper.\n \nThe reviewer also suggested the fact that NAC without important sampling is better than NAC with it deserves further study. We planned to study this phenomenon in depth before the final version. Specifically, we want to test the hypothesis that it is the high variance of the importance sampling gradient estimator that causes the instability. Specifically, at some parameter \\theta_0, we could calculate the gradient either with importance sampling, or without it. By repeatedly compute the gradient with different data mini-batch at the same \\theta_0 for a sufficient number of rounds, we could get two gradient estimators’ bias and variance respectively. With those statistics, one could compare the two convergence speeds with the theories of stochastic gradient descent. We also plan to incorporate the ideas from [Gu et al. 2017] to reach a better policy gradient estimator by combining the two methods.\n \n[1] Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Bernhard Schölkopf, Sergey Levine. “Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning”. NIPS 2017.\n", "Here we further differentiate our work from related ones. Both of [Haarnoja et al., 2017] and [Schulman et al., 2017] focus on soft Q learning, where the update is derived from L2 loss of the Bellman error. The soft Q formulation in [Haarnoja et al., 2017] and [Schulman et al., 2017] are exactly the same when the action space is discrete and we present it in our experiments as the \"soft Q\" method. Our proposed NAC algorithm can only be derived when we start from a soft policy gradient perspective, where the objective is the future expected discounted sum of rewards. As we’ve shown in the paper, the \\nabla V term is the key difference between soft Q and NAC, and that extra term normalizes the Q value. We have a similar formulation as the one described in Appendix. B. in [Haarnoja et al., 2017]. However, they didn’t implement the algorithm, not to say fully explore the properties of such a NAC method. It’s because the method doesn’t have any obvious benefit as an off-policy learning method. We discover that in the learning from demonstration setting, the NAC method is quite useful and outperforms all the baselines.\n \nWe summarize by restating our novelties and contributions as follows:\n1. 
\tWe are the first to explore the NAC method experimentally in the learning from demonstration scenario and discovered that the proposed method is not only theoretically plausible but also outperforms alternative methods experimentally.\n2. \tTo the best of our knowledge, we are the first to propose a unified method to learn from both demonstration and environment interaction.\n3. \tUnlike other methods that utilize supervised learning to learn from demonstration, our pure reinforcement learning method is not sensitive to noisy demonstrations and it outperforms prior methods that explicitly mix supervised learning and reinforcement learning objectives.\n \nReviewer 1 and 2 also mention that the proposed method is more about using off-policy data, rather than using demonstrations. We would like to clarify two aspects here: \n\n1. Should demonstrations include reward signals?\n2. Why NAC is not a simple application of an off-policy method to the learning from demonstration problem?\n\nFor the first question, some literatures take a demonstration set as a collection of (s, a) pairs, such as [Kim et al., 2013] mentioned by Reviewer 1. While other works, such as [Hester et al., 2017], also include the reward into the demonstrations, i.e. the (s, a, r) tuple. Learning from demonstration is a broad concept and we refer to the second setting in this paper. We also clarify this in the related work section of the paper.\n\nFor the second question, we first note the difference between the demonstration and the general off-policy data. Although one could view the demonstration set in our paper as some off-policy data, they are critically different because demonstrations are mostly good behaviors while that is not necessarily true for off-policy data. That’s why generic off-policy methods such as Q learning and soft Q learning don’t work on demonstration set at all, as described in Section 3 of the paper. Our method, on the other hand, has the normalization factor that causes the method to mimic the demonstration data when there is no evidence to the contrary. This implies an assumption that the prior off-policy data is generally good, or at least better than a random policy. This is a major conceptual distinction from Q-learning and soft Q-learning methods and, indeed, as shown in our experiments, our method substantially outperforms these prior methods.\n", "The paper proposes to employ an algorithm from Haarnoja 2017 and Schulman 2017 for reinforcement learning where some of the data comes from potentially adversarial user demonstrations.\nThe paper does not seem to be able to make up its mind whether the approach is novel or not. \"We propose a unified LfD approach\" but then all the algorithmic details are \"as shown by Haarnoja / Schulman 2017\". It gets a bit messy with arXiv papers (in this case they don't seem to have appeared yet in a peer-reviewed venue) but the authors are treating these two papers as published, so I'll also treat this paper as an extension of these. Which means that this paper boils down to \"just\" taking an RL approach from Haarnoja / Schulman 2017 that works well with data that is truly off-policy and employing it for learning from (imperfect) demonstrations. 
Hence we have essentially an experimental paper.\nThe approach does reasonably well, but I'd like to have seen more extensive discussions and an analysis WHY it performs better (or in some experiments actually worse) compared to the baselines.\n\nMinor comments\n===============\nEq (1): \\operatorname{argmax} https://en.wikibooks.org/wiki/LaTeX/Advanced_Mathematics\nEq (4): I found this very strange, the text reads like this result is from 2010 while the entropy formulation above is claimed to be from 2017...\nSect. 3: \"Equation 8\" -> \"Equation (8)\" \\eqref{}\nSect. 3.1: So what's new compared to Haarnoja / Schulman 2017? \nSect. 3: you talk about normalization but only explain in Sect 3.2 (and even there the discussion should be improved) which part of the update corresponds to this normalization\nSect. 5.2: \"method(Mnih\" => \"method (Mnih\"" ]
[ 5, 6, 5, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJJ9bz-0-", "iclr_2018_BJJ9bz-0-", "iclr_2018_BJJ9bz-0-", "rynqOnBez", "ryE3gOulz", "B1ornw9xG", "iclr_2018_BJJ9bz-0-", "iclr_2018_BJJ9bz-0-" ]
iclr_2018_HJjvxl-Cb
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
workshop-papers
The reviewers agree that the results are promising and there are some interesting and novel aspects to the formulation. However, two of the reviews have raised concerns regarding the exposition and the discussion of previous work. The paper would benefit from a detailed description of soft Q-learning, PCL, and off-policy actor-critic algorithms, and of how SAC differs from them. Instead of differentiating from previous work by saying that soft Q-learning and PCL are not actor-critic algorithms, discuss the similarities and differences and present an empirical evaluation.
test
[ "HJr5IZ5gM", "Bk2K_F9lz", "SJ1QyNJZf", "BJ8AjRxNz", "BJHwgKj7G", "SJ0JxFsXM", "Sk2hJKsXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a soft actor-critic method aiming at lowering sample complexity and achieving a new convergence guarantee. However, the current paper has some correctness issues, is missing some related work and lacks a clear statement of innovation. \n\nThe first issue is that augmenting reward by adding an entropy term to the original RL objective is not clearly innovative. The connections, and improvements upon, other approaches need to be made more clear. In particular, the connection to the work by Haarnoja is unclear. There is this statement: “Although the soft Q-learning algorithm proposed by Haarnoja et al. (2017) has a value function and actor network, it is not a true actor-critic algorithm: the Q- function is estimating the optimal Q-function, and the actor does not directly affect the Q-function except through the data distribution. Hence, Haarnoja et al. (2017) motivates the actor network as an approximate sampler, rather than the actor in an actor-critic algorithm. Crucially, the convergence of this method hinges on how well this sampler approximates the true posterior. In contrast, we prove that our method converges to the optimal policy from a given policy class, regardless of the policy parameterization.” The last sentence suggests that the key difference is that any policy parameterization can be used, making the previous sentences less clear. Is the key extension on the proof, and so on the use of the projection with the KL-divergence?\n\nFurther, there is a missing connection to the paper “Guided policy search”, Levine and Koltun. Though it is a different framework, it clearly mentioned it uses the augmented reward to learn the sub-optimal policies (for differential dynamic program). The DDPG paper mentioned that DDPG can be also used within the GPS framework. That work is different, but a discussion should nonetheless be included about connections. \n \nIf the key novelty in this work is an extension on the theory, to allow any policy parameterization, and empirical results demonstrating improved performance over Haarnoja et al., there appear to be correctness issues in both, as laid out below.\n\nThe key novelty in the theory seems to be to use a projection onto the space of policies, using a KL divergence. There are, however, currently too many unclear or misspecified steps to verify correctness.\n1. The definition of pinew in Equation (6) is for one specific state s_t; shouldn’t this be across all states? If it is for one state, then E_{pinew} makes sense, since pi is only specified as a conditional distribution (not a joint distribution); if it is supposed to be expected value across all states, then what is E_{pinew}? Is it with the stationary distribution of pinew?\n\n2. The proof for Lemma 1 is hard to follow, because Z is not defined. I mostly was able to guess, based on Haarnoja et al., but the step between (18) and (19) where E_{pinew} [Zpiold] - E_{piold} [Zpiold] is dropped is unclear to me. Zpiold does not depend on actions, so if the expectation is only w.r.t. to the action, then it cancels. This goes back to point 1, where it wouldn’t make much sense for the KL to only depend on actions. In fact, if pinew has to be computed separately for each state, then we are really back to tabular policies. \n\n3. 
“There is no need in principle to include a separate function approximator for the state value, since it is related to the Q-function and policy according to Qθ (st , at ) − log πφ (at |st )” This is not true, since you rely on the fact that you have separate network parameters to get an unbiased gradient estimate in (10). \n\nThe experimental results also appear to have some correctness issues. \n1. For the claim that the algorithm does better, this is also difficult to gauge because the graphs are unclear. In particular, it is not explained why the lines end early. How were multiple gradients incorporated into the update? Did you wait 4 (or 16) steps until computing a gradient update? This might explain why the lines end early, but then invalidates how the lines are drawn. Rather, the lines should be extended, where each point is plotted each 4 (or 16) steps. Doing this would remove the effect that the lines with 4 or 16 seem to learn faster (but really are just plotted on a different x-axis). \n\n2. There is a lack of experimental details. This includes missing details about neural network architectures used by each algorithm, parameter tuning details, how multiple gradients are used, etc. This omission makes the experiments not reproducible.\n\n3. Although DDPG is claimed to be very sensitive to parameter changes, and the proposed algorithm is more stable, there is no parameter sensitivity results showed. \n\nMinor comments:\n1. Graph font is much too small.\n2. Typo in (10), should be V(s_{t+1})\n3. Because the proof of Lemma 1 is so straightforward (just redefining reward), it would be better to actually spell it out, give a definition of entropy, etc. \n", "The paper presents an off-policy actor-critic method for learning a stochastic policy with entropy regularization. It is a direct extension of maximum entropy reinforcement learning for Q-learning (recently called soft-Q learning), and named soft actor-critic (SAC). Empirically SAC is shown to outperform DDPG significantly in terms of stability and sample efficiency, and can solve relatively difficult tasks that previously only on-policy (or hybrid on-policy/off-policy) method such as TRPO/PPO can solve stably. Besides entropy regularization, it also introduces multi-modal policy parameterization through mixture of Gaussians that enables diverse, on-policy exploration. \n\nThe main appeal of the paper is the strong empirical performance of this new off-policy method in continuous action benchmarks. Several design choices could be the key, so it is encouraged to provide more ablation studies on these, which would be highly valuable for the community. In particular,\n\n- Amortization of Q and \\pi through fitting state value function\n\n- On-policy exploration vs OU process based off-policy exploration\n\n- Mixture vs non-mixture-based stochastic policy\n\n- SAC vs soft Q-learning\n\nAnother valuable discussion to be had is the stability of off-policy algorithm comparing Q-learning versus actor-critic method.\n \nPros:\n\n- Simple off-policy algorithm that achieves significantly better performance than existing off-policy baseline algorithms\n\n- It allows on-policy exploration in off-policy learning, partially thanks to entropy regularization that prevents variance from shrinking to 0. It could be considered a major success of off-policy algorithm that removes heuristic exploration noise.\n\nCons:\n\n- Method is relatively simple extension from existing work in maximum entropy reinforcement learning. 
It is unclear what aspects lead to significant improvements in performance due to insufficient ablation studies. \n\n\nOther question:\n\n- Above Eq. 7 it discusses that fitting a state value function wrt Q and \\pi is shown to improve the stability significantly. Is this comparison with directly estimating state value using finite samples? If so, is the primary instability due to variance of the estimate, which can be avoided by drawing a lot of samples or do full integration (still reasonably tractable for finite mixture model)? Or, is the instability from elsewhere? By having SGD-based fitting of state value function, it appears to simulate slowly changing target values (similar role as target networks). If so, could a similar technique be used with DDPG and get more stable performance? \n\n", "Quality and clarity: \n\nIt seems that the authors can do a better job to improve the readability of the paper and its conciseness. The current structure of paper seems a bit suboptimal to me. The first 6.5 page of the paper is used to explain the idea of RL with entropy reward and how it can be extended to the case of parametrized value function and policy and then the whole experimental results is packed in only 1 page. I think the paper could be organized in a more balanced way by providing a more detailed description and analysis of the numerical results, especially given the fact that in my opinion this is the main strength of the paper. Finally some of the claims made in this paper is not really justified. For instance \"Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods\" not true, e.g., look at Deepmind AI recent work: https://arxiv.org/abs/1704.04651.\n\nOriginality and novelty:\n\nI think much of the ideas considered in this paper is already explored in previous work as it is acknowledged in the paper. However some of the techniques such as the way the policy is represented and the way the policy gradient formulation is approximated seems to be novel in the context of Deep RL though again these ideas have been explored in the literature of control and RL extensively. \n\nSignificance:\n\nI think the improvement on baseline in control suites is very impressive the problem is that the details of the implementation of algorithm e.g. architecture of neural network size of memory replay, the schedule the target network is implemented is not sufficiently explained in the paper so it is difficult to evaluate these results. Also the paper only compares with the original version of the baseline algorithms. These algorithms are improved since then and the new more efficient algorithms such as distributional policy gradients and NAF have been developed. So it would help to have a better understanding of this result if the paper compares with the state-of-the-art baselines. \n\nMinor:\nfor some reason different algorithms have ran for different number of steps which is a bit confusing. would be great if this is fixed in the future version. ", "They have added requested ablation studies in the main text (5.2) and some hyper-parameter sensitivity experiments in the appendix, with sufficient discussions on the results. While the entropy regularized actor critic is a direct extension on MaxEnt RL/soft Q-learning etc., the algorithm has shown impressive results and the follow-up studies are done sufficiently. I recommend for acceptance.\n ", "Thank you for your comments and feedback. 
We have added a number of additional experiments to address each of your concerns about the empirical results, and revised the paper to address all of your concerns regarding the theoretical results. We summarize these below.\n\nAs R3 notes, our method is novel when considered in the context of recent work in deep reinforcement learning. While the notion of entropy regularization is certainly not new (nor do we claim this), the particular method we propose is novel, and to our knowledge no prior work has proposed an off-policy actor-critic algorithm for optimizing the maximum entropy RL objective for continuous control. The empirical results show that this method substantially outperforms the previous state of the art in terms of sample efficiency on a range of very challenging continuous control tasks. We believe that state-of-the-art results on widely accepted benchmark tasks are of significant interest to the community, and merit publication in ICLR.\n\nTo address your concerns, we have revised the introduction to better communicate the contribution of our paper. Prior methods for maximum entropy policies have been formulated as Q-learning methods that learn the Q-function of the optimal policy directly, even though the optimal policy in continuous domains is intractable and the optimal behavior might not be reproducible. Another benefit of our formulation is that the practical approximation, the soft actor-critic algorithm, is simple to implement and does not rely on any biased approximations (such as estimation of the optimal value function in soft Q-learning) or approximate inference of the optimal policy (such as Stein variational gradient descent in soft Q-learning) which increases the time complexity. We have also cited the guided policy search work in the related work section.\n\nResponses to the comments on the theory:\n1. The minimization is indeed performed for each state independently. We have added clarification before Equation 4 (old Equation 6) to make it explicit.\n\n2. Thank you for pointing out the readability of the proof was insufficient. We have now revised the proofs to improve their clarity. To answer your specific question, Z is the partition function that, as you guessed, does not depend on actions, and therefore cancels out, which we now state explicitly in the proof.\n\n3. In fact, it is possible to estimate the value function in Equation 7 (was Equation 10) with Q - log \\pi evaluated at an action sampled from the current policy without introducing a bias. We have revised the second paragraph in Section 4.2 to better explain this point. We have also included an ablation in the experiment section where we estimate the value using the Q-function and policy directly, and we did not observe any significant difference in performance, but we found that the value function has an important role as a baseline for the off-policy policy gradient.\n\nResponses to the comment on the experimental results:\n1. We have addressed your concerns about the results. Indeed each of the experiments should have been run to convergence, we were unfortunately unable to do this for this submission due to time constraints. We have now updated the paper to include the full results as you requested. The current version of the paper has updated results, and now includes a soft Q-learning baseline (see Figure 1 on page 7), and we will also include NAF comparison in the final version of our paper. We took multiple gradient steps between sampling new evidence from the environment. 
Since the gradients are computed using samples from the replay buffer, they are not considered as additional steps in the learning curves, where the x-axes correspond to the number of environment steps. However, taking multiple gradient steps makes each step slower in terms of the wall-clock time--hence the experiment with a large number of gradient steps end earlier.\n\n2. We have added experimental details to Appendix C (how to apply our method to bounded action domains) and Appendix E (list of all hyperparameters we used in the experiments). We will also release a link to our code with the final version of our paper for reproducibility.\n\n3. We have added sensitivity study over the most important hyperparameters in Appendix D.\n", "Thank you for you constructive and useful suggestions. We have extended the experiment section to include all the suggested ablations and also added discussion and experiments regarding the sensitivity to hyperparameters in the appendix. In short, use of a separate value network has only a minor contribution to the stability of learning the Q-network, but it is crucial to be used as a baseline for the policy gradients and therefore cannot be excluded from the algorithm (see Figure 3 (c) in Section 5.2). One of the benefits of our formulation is that we can estimate the state value with a single action sample. This is possible since we estimate the value of the current policy instead of the value of the optimal policy as in soft Q-learning, where the estimation requires computing “log-sum-exp.” In our version, we only need to evaluate the expectation (“sum”), which can be estimated with a single sample without introducing a bias. In DDPG, the expectation is replaced with the evaluation of the Q-function at the current policy mean, which we found to be less robust and potentially the most important factor that makes our proposed algorithm work well.", "Thank you for your constructive comments and suggestions. We believe that our work does indeed compare to the state-of-the-art methods in terms of sample efficiency, and we have added a comparison to soft Q-learning (see the revised Figure 1 on page 7) and will add also NAF in the final to attempt to address your concerns. If there are specific other prior methods that the reviewer would like to see a comparison to, we would be happy to add them in the final. We address specific points raised by the reviewer below.\n\nTo address your concerns, we have extended the experiment section to include soft Q-learning as a baseline and will include NAF in the final. Distributional policy gradients is a concurrent submission to ICLR, and it therefore does not seem appropriate to require a comparison to this method, though we would be happy to discuss it in the final. \n\nOur paper has two major contributions: first, a novel theoretical framework, soft policy iteration, that is generally applicable for optimizing maximum entropy objectives, and second, a practical soft actor-critic algorithm that makes use of the theory and achieves state-of-the-art results in continuous benchmark tests. 
We think both of the contributions are of comparable importance, and therefore we have allocated a large part of the paper to the derivation and discussion of the theoretical framework.\n\nRegarding novelty: we are not aware of prior works that propose off-policy actor-critic algorithms within the maximum entropy framework for continuous control tasks, though if the reviewer has any suggestions for works of this kind, we would be happy to cite and discuss them. To our knowledge, the results reported in our experiments substantially improve on the state of the art (DDPG) in terms of sample efficiency, often by a very large margin. We believe that substantial improvements over the state of the art on sample efficiency, which is a crucial problem in deep RL, are of sufficient interest to the community to merit publication in ICLR.\n\nTo address your comment regarding the statement “Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods.” Note that this refers specifically to maximum entropy algorithms. To our knowledge, the only deep RL methods that optimize the entropy augmented objective are based on off-policy soft Q-learning (and similar prior methods that work in discrete domains), or on-policy policy gradients. Methods such as Gruslys et al. (referenced in your review) do not optimize the same objective, but rather use an entropy regularizer. We have revised the paper to address this and cited Gruslys et al. in the related work section, though we cannot compare to that method directly since it addresses discrete-action problems.\n" ]
[ 3, 7, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HJjvxl-Cb", "iclr_2018_HJjvxl-Cb", "iclr_2018_HJjvxl-Cb", "SJ0JxFsXM", "HJr5IZ5gM", "Bk2K_F9lz", "SJ1QyNJZf" ]
iclr_2018_rk6H0ZbRb
Intriguing Properties of Adversarial Examples
It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear. Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10. Our resulting architecture is more robust to white \emph{and} black box attacks compared to previous attempts.
workshop-papers
I am somewhat of two minds about this paper. The authors show empirically that adversarial perturbation error follows a power law and look for a possible explanation. The tie-in with generalization is not clear to me and makes me wonder how to evaluate the significance of the power-law finding. On the other hand, the authors present an interesting analysis, show that the finding holds in all the cases they explored, and also find that architecture search can be used to discover neural networks that are more resilient to adversarial attacks (the last should not be surprising if that was indeed the training criterion). All in all, I think that while the paper needs a further iteration prior to publication, it already contains interesting bits that could spur very interesting discussion at the Workshop. (Side note: there is a reference missing on page 4, first paragraph.)
test
[ "Bkm7cMvgf", "B17JC8dlf", "B1yz5XhgM", "H11B-xo7f", "ryQEy8JGG", "HJSmxwCbG", "r1DxkwA-G", "BJV6080bf", "SJTNVQ_1M", "H1mlAzf1f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper insists that adversarial error for small adversarial perturbation follows power low as a function of the perturbation size, and explains the cause by the logit-difference distributions using mean-field theory.\nThen, the authors propose two methods for improving adversarial robustness (entropy regularization and NAS with reinforcement learning).\n\n[strong points]\n* Based on experimental results over a broad range of datasets, deep network models and their attacks.\n* Discovery of the fact that adversarial error follows a power low as a function of the perturbation size epsilon for small epsilon.\n* They found entropy regularization improves adversarial robustness.\n* Their neural architecture search (NAS) with reinforcement learning found robust deep networks.\n\n[weak points]\n* Unclear derivation of Eq. (9). (What expansion is used in Eq. (21)?)\n* Non-strict argument using mean-field theory.\n* Unclear connection between their discovered universality and their proposals (entropy regularization and NAS with reinforcement learning).", "Very intriguing paper and results to say the least. I like the way it is written, and the neat interpretations that the authors give of what is going on (instead of assuming that readers will see the same). There is a well presented story of experiments to follow which gives us insight into the problem. \n\nInteresting insight into defensive distillation and the effects of uncertainty in neural networks.\n\nQuality/Clarity: well written and was easy for me to read\nOriginality: Brings both new ideas and unexpected experimental results.\nSignificance: Creates more questions than it answers, which imo is a positive as this topic definitely deserves more research.\n\nRemarks:\n- Maybe re-render Figure 3 at a higher resolution?\n- The caption of Figure 5 doesn't match the labels in the figure's legend, and also has a weird wording, making it unclear what (a) and (b) refer to.\n- In section 4 you say you test your models with FGSM accuracy, but in Figure 7 you report stepll and PGD accuracy, could you also plot the same curves for FGSM?\n- In Figure 4, I'm not sure I understand the right-tail of the distributions. Does it mean that when Delta_ij is very large, epsilon can be very small and still cause an adversarial pertubation? If so does it mean that overconfidence in the extreme is also bad?\n", "This work presents an empirical study aiming at improving the understanding of the vulnerability of neural networks to adversarial examples. Paraphrasing the authors, the main observation of the study is that the vulnerability is due to an inherent uncertainty that neural networks have about their predictions ( the difference between the logits). This is consistent across architectures, datasets. Further, the authors note that \"the universality is not a result of the specific content of these datasets nor the ability of the model to generalize.\"\n\nWhile this empirical study contains valuable information, its above conclusions are factually wrong. It can be theoretically proven at least using two routes. They are also in contradiction with other empirical observations consistent across several previous studies. \n\n1-Constructive counter-argument: Consider a neural network that always outputs a constant prediction. It (1) is by definition independent of any dataset (2) generalizes perfectly (3) has zero adversarial error, hence contradicting the central statement of the paper. 
\n\n2- Analysis-based counter-argument: Consider a neural network with one hidden layer and two classes. It is easy to show that the difference between the scores (logits) of the two classes is linear in the operator norm of the hidden weight matrix and linear in the L2-norm of the last weight vector. Therefore, the robustness of the model indeed depends on its capability to generalize because the latter is essentially governed by the geometric margin of the linear separator and the spectral norm of the weight matrix (see [1,2,3]). QED.\n\n3- Further, the lack of calibration of neural networks and its causes are well known. Among other things, it is due to the use of building blocks (such as batch-norm [4]), regularization (e.g., weight decay) or the use of softmax+cross-entropy during training. While this is convenient for optimization reasons, it indeed hurts the calibration. The authors should try to train a neural network with a large margin criteria and see if the same phenomenon still holds when they measure the geometric margin. Another alternative is to use a temperature with the softmax[4]. Therefore, the observations of the empirical study cannot be generalized to neural networks and should be explicitly restricted to neural networks using softmax with cross-entropy as criteria. \n\nI believe the conclusions of this study are misleading, hence I recommend to reject the paper. \n\n\n[1] Spectrally Normalized Margin-bounds Margin bounds for neural networks (Bartlett et al., 2017)\n[2] Parseval Networks: Improving Robustness to Adversarial Examples (Cisse et al., 2017) \n[3] Formal Guarantees on the Robustness of a classifier against adversarial examples (Hein et al., 2017)\n[4] On the Calibration of Modern Neural Networks (Guo et al., 2017)", "We thank the reviewers for their reviews. We would like to summarize our responses to individual reviewers. Our work shows two fundamental (and surprising) commonalities across datasets and models: logit differences and adversarial error have the same functional form across all models tested on MNIST, CIFAR10, and ImageNet. We show that these commonalities even hold for random data, and we theoretically derive the origin and its consequences under a mean-field approximation. Based on our observations we propose a counter-intuitive regularization term, entropy penalty, to reduce adversarial sensitivity. Since our results imply that better models are more robust, we use neural architecture search (NAS) to find a model that is adversarially more robust than previously available models. We can move the part on NAS to appendix if the reviewers see fit. In summary, our paper makes important contributions on three fronts: empirical findings, theoretical explanations of these findings, and practical results on adversarial robustness.\n\nMain criticism by AnonReviewer3 is based on a miscommunication. As explained below, our results agree with this reviewer: models that generalize better tend to be more robust. Furthermore, we implemented the experiments proposed by this reviewer, including a thought experiment. We show that the results of all of these experiments support our conclusions. Thanks to the suggestions by this reviewer, we have changed the wording to make our point more clear.\n\nAnonReviewer1 is concerned with the mean field approximation we employed, which we disagree with. Mean-field approximation has been used for more than a century to model complex systems, and its strengths as well as shortcomings are well understood[1,2,3,4,5]. 
We evaluated the validity of our approximations at every step. We are happy to discuss any particular step of the derivation, however we believe that the reviewer’s general criticism of mean-field approximation is not specific enough to warrant the rejection of the paper or for us to address the concern. The step of the derivation that the reviewer found unclear (Eq. 21) was just a Taylor expansion to smallest order, which we clarified in our revision.\n\nOverall, the reviewer reports do not have concrete disagreements with our results, and the reviewers found our experiments to be interesting over a broad range of datasets, models, and attacks. We have supported our arguments with concrete empirical evidence. In light of these, we hope that the AnonReviewer1 and AnonReviewer3 reconsider their scores. \n\n[1] Weiss, Pierre. \"L'hypothèse du champ moléculaire et la propriété ferromagnétique.\" J. phys. theor. appl. 6.1 (1907): 661-690.\n[2] Peterson, Carsten. \"A mean field theory learning algorithm for neural networks.\" Complex systems 1 (1987): 995-1019.\n[3] Kardar, Mehran. Statistical physics of fields. Cambridge University Press, 2007.\n[4] Poole, Ben, et al. \"Exponential expressivity in deep neural networks through transient chaos.\" Advances in neural information processing systems. 2016.\n[5] Schoenholz, Samuel S., et al. \"Deep Information Propagation.\" ICLR. 2017.\n", "We thank the reviewer for a careful reading of the manuscript and the helpful suggestions. \nWe are delighted that the reviewer thinks this topic deserves more research; we certainly agree! \n\nWe implemented the suggestions by the reviewer, as detailed below:\n- Re-rendered Fig. 3 at a higher resolution. We noticed that Fig. 3 may look pixelated on certain web browsers, but rendered correctly on all pdf viewers we have tried. \n- Corrected the typos and clarified the caption of Fig. 5. We appreciate the reviewer noticing this.\n- Experiment 1 networks were trained with stepll and Experiment 2 networks were trained with PGD. However, we did use FGSM accuracy on the validation set to choose the architectures. For this reason, we followed the reviewer's suggestion and plotted the same curves for FGSM attack in Figure 15. \n- Fig. 4 presents histograms where both axes are shown in log scale. The right-tail of the distributions signify that there are not many samples with as large \\Delta_{ij} values. \n\n", "We thank the reviewer for a careful reading of the manuscript and the helpful feedback. We are glad that the reviewer found several strong points about our paper. Below we respond point by point to the criticism:\n\n1) In Eq. (21) we expand both F(r+\\Delta_{1j}) and P(r+\\Delta_{1j}) to lowest order in \\Delta_{1j}, using regular Taylor expansion. We have added another step to the derivation to clarify. \n\n2) While mean field theory is an approximate framework, it has a long history of effective use across a wide range of fields studying complex behavior including machine learning. For example, there are papers that approach neural networks from a mean field perspective dating back to at least 1989 [1]. Here, at each step of our calculation we evaluate the validity of the mean field approximation in Fig. 3 and Fig. 4. If there is a specific point in the approximation that the reviewer objects to, we would be happy to address it further.\n\n3) Our proposed entropy regularization is directly related to the finding that logit difference distribution is universal. 
Since the adversarial error has a universal form due to the universal behavior of logit difference distribution, we tried to increase the logit differences to make our models more robust. As we show in Fig. 6, the entropy regularizer does increase the logit differences, as expected. Due to the increased logit differences, models that were trained with and without adversarial training are more robust to adversarial examples, as shown in Fig. 5 (MNIST) and Fig. 11 (CIFAR10). \n\nAs mentioned in the paper, although the functional form of the adversarial error is universal, better models are quantitatively more robust to adversarial examples (e.g. Figs 1a and 1b). Given this, we wanted to study whether architecture can be engineered to improve adversarial accuracy. As mentioned in our submission, recent papers have found that larger models are more robust, but left unanswered whether models that generalize better are less susceptible to adversarial examples [2,3]. Using NAS, we show that models that generalize better are more robust, however model size does not seem to correlate strongly with adversarial sensitivity. Our findings together present a unified analysis of a model’s sensitivity to adversarial examples: commonalities among datasets, cause of the commonalities, and dependence on architecture. \n\n[1] Peterson, Carsten. \"A mean field theory learning algorithm for neural networks.\" Complex systems 1 (1987): 995-1019.\n[2] Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. \"Adversarial machine learning at scale.\" arXiv preprint arXiv:1611.01236 (2016).\n[3] Madry, Aleksander, et al. \"Towards deep learning models resistant to adversarial attacks.\" arXiv preprint arXiv:1706.06083 (2017).", "Responses to specific points: \n1- This thought experiment actually agrees with our paper: a neural network that always outputs a constant has no uncertainty about its predictions, and thus has zero adversarial error. Furthermore, our theory assumed uncorrelated logits, but we empirically show that the power-law tails are robust to the the amount of correlation present in the logits of the commonly used neural networks. In the thought experiment suggested by the reviewer, the logits are maximally correlated. It is for this reason that our theory may not apply. Finally, we note that in this example the input-logit Jacobian is zero. In this case, our mathematical framework correctly predicts that \\hat\\epsilon\\to\\infty and so no amount of adversarial perturbation will change the predicted class. \n\n2- As mentioned above, we agree that models that generalize better have higher adversarial robustness. As can be seen in Fig. 1a and 1b, the models with best generalization (NASNet, Inception-ResNet v2, Inception v4) are also adversarially most robust, especially for small epsilon values. This is why we performed Neural Architecture Search to find adversarially robust architectures: although the qualitative form of the adversarial error is a power-law with similar exponents, the quantitative robustness can be improved via adversarial training, architecture engineering, and regularization. We have used all three of these techniques to increase the adversarial robustness in our study. \n\n3- The lack of calibration of neural networks and its causes may be well known, but our contribution is to point out that the functional form of the logit differences is universal across datasets and models, and unchanged after training. \n\nWe added two new figures to the appendix, Fig. 12 and Fig. 
13, which show that the reported universality is not restricted to neural networks using softmax with cross-entropy as loss. In Fig. 12, we trained a fully-connected network on MNIST with hinge-loss (as suggested by the reviewer). We attacked this network both by differentiating the hinge-loss and a cross-entropy loss (attacks with cross-entropy loss are more successful, as also observed in the submission “Certified Defenses against Adversarial Examples”). We show that both of these attacks lead to the same universal behavior, both for adversarial error and for logit differences. We repeat the same experiment using an L2-norm loss for training, and reach the same results as the experiments in the original submission. \n\nIn short, our original submission is in agreement with the reviewer’s perspective; and newly performed experiments as suggested by the reviewer obey the universality that is presented by our paper. ", "We would first like to thank the reviewer for their careful reading of our manuscript and thoughtful comments. We are glad that the reviewer believes our study contains valuable information and interesting experiments. Meanwhile, we would like to address the concerns raised.\n\nSummary: We are confident that our results and conclusions are not at odds with the perspective of the referee. We believe the main issue stems from some ambiguous language in the original text that we have now corrected. We have also implemented the additional experiments proposed by the referee and have found them to corroborate our original conclusions. \n\nDetails: We believe that there is some confusion regarding what was meant by our statement “the universality is not a result of the specific content of these datasets nor the ability of the model to generalize.\" We are not proposing that the susceptibility of a neural network to adversarial examples is independent of its ability to generalize. Instead, we are saying that the functional form of the adversarial error as a function of epsilon does not depend on generalization (i.e. that it should scale like A * \\epsilon regardless of the network’s ability to generalize, as shown by our experiments on randomly sampled logits and MNIST with randomly-shuffled labels). In fact, we agree with the referee that the constant, A, will depend on the spectral norm of the Jacobian (and hence the readout weight matrix) and on the network’s ability to generalize. We copy some excerpts from the original submission to corroborate this below. However, the reviewer’s concerns allowed us to realize that our original phrasing was ambiguous. We have therefore reworded our conclusions to be clearer by replacing the problematic statement with: “Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training.” We have also removed the sentence “Here we argue that the origin of adversarial examples is primarily due to an inherent uncertainty that neural networks have about their predictions.”\n\nExcerpts from the original text showing agreement with the referee:\n\n“We observe that although the qualitative form of logit differences and adversarial error is universal, it can be quantitatively improved with entropy regularization and better network architectures.” \n“...vanilla NASNet-A (best clean accuracy in our study) has a lower adversarial error than adversarially trained [models]…”\nIn eq. 
8 we find that the threshold for an adversarial error is proportional to J^TJ. This is clearly proportional to the spectral norm of the Jacobian.\n", "Thanks for the positive comment and the interesting question. Kurakin et al. did use non-integer values of epsilon during training. As mentioned in their paper, epsilon was sampled from a truncated normal defined in [0,16]. \n\nRegarding test-time: as we show in Fig 2c for MNIST, attacks with unit L2 norm have the same power-law form and exponent as FGSM, but allow for much larger change in each pixel value. For ImageNet, unit L2 norm attack has the same power-law form and exponent up to an epsilon of 70; this means that one pixel could change by as large as 70 due to adversarial distortion and still be in the power-law region. We will include an additional plot about this in the next version of our submission. ", "Very cool insights, I really enjoyed your paper. I had a question about your experiments with FGSM attacks for small epsilon (Figure 1 & 2). What is the rationale for considering non-integer values of epsilon here (especially epsilon < 1), since the resulting perturbed inputs do not actually represent valid RGB images? As I understand it, simply converting the image to a valid RGB representation would remove any perturbation with epsilon < 0.5.\nWhile it is interesting that adversarially trained models did not learn to be robust in that regime, is that really surprising given that Kurakin et al. (2016) seem to only consider integer values of epsilon in their paper?" ]
[ 5, 8, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rk6H0ZbRb", "iclr_2018_rk6H0ZbRb", "iclr_2018_rk6H0ZbRb", "iclr_2018_rk6H0ZbRb", "B17JC8dlf", "Bkm7cMvgf", "BJV6080bf", "B1yz5XhgM", "H1mlAzf1f", "iclr_2018_rk6H0ZbRb" ]
iclr_2018_BJy0fcgRZ
Capturing Human Category Representations by Sampling in Deep Feature Spaces
Understanding how people represent categories is a core problem in cognitive science, with the flexibility of human learning remaining a gold standard to which modern artificial intelligence and machine learning aspire. Decades of psychological research have yielded a variety of formal theories of categories, yet validating these theories with naturalistic stimuli remains a challenge. The problem is that human category representations cannot be directly observed and running informative experiments with naturalistic stimuli such as images requires having a workable representation of these stimuli. Deep neural networks have recently been successful in a range of computer vision tasks and provide a way to represent the features of images. In this paper, we introduce a method for estimating the structure of human categories that draws on ideas from both cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep representation learners. We provide qualitative and quantitative results as a proof of concept for the feasibility of the method. Samples drawn from human distributions rival the quality of current state-of-the-art generative models and outperform alternative methods for estimating the structure of human categories.
workshop-papers
This paper introduces a GAN-based framework for inferring human category representations. The reviewers agree that the idea is interesting and well-motivated, and the results are promising. The technical contribution is not significant, but the paper nevertheless combines existing ideas in an interesting way. The reviewers would also like to see more work on analyzing the results and extracting insights from them; without this, the paper feels somewhat incomplete.
train
[ "r1i3YtHgG", "H1jrxgclM", "Bk01UWclf", "H1cX123mM", "BJ5FaihmM", "ByxrasnQf", "HJp9njnQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Quality\n\nThis paper demonstrates that human category representations can be inferred by sampling deep feature spaces. The idea is an extension of the earlier developed MCMC with people approach where samples are drawn in the latent space of a DCGAN and a BiGAN. The approach is thoroughly validated using two online behavioural experiments.\n\nClarity\n\nThe rationale is clear and the results are straightforward to interpret. In Section 4.2.1 statements on resemblance and closeness to mean faces could be tested. Last sentences on page 7 are hard to parse. The final sentence probably relates back to the CI approach. A few typos.\n\nOriginality\n\nThe approach is a straightforward extension of the MCMCP approach using generative models.\n\nSignificance \n\nThe approach improves on previous category estimation approaches by embracing the expressiveness of recent generative models. Extensive experiments demonstrate the usefulness of the approach.\n\nPros\n\nUseful extension of an important technique backed up by behavioural experiments.\n\nCons\n\nDoes not provide new theory but combines existing ideas in a new manner.", "This paper presents a method based on GANs for visualizing how humans represent visual categories. Authors perform experiments on two datasets: Asian Faces Dataset and ImageNet Large Scale Recognition Challenge dataset.\n\nPositive aspects:\n+ The idea of using GANs for this goal is smart and interesting\n+ The results seem interesting too\n\nWeaknesses:\n- Some aspects of the paper are not clear and presentation needs improvement.\n- I miss a clearer results comparison with previous methods, like Vondrick et al. 2015.\n\nSpecific comments and questions:\n\n- Figure 1 is not clear. Authors should clarify how they use the inference network and what the two arrows from this inference network represent.\n- Figure 2 is also not clear. Just the FLD projections of the MCMCP chains are difficult to interpret. The legend of the figure is too tiny. The right part of the figure should be better described in the text or in the caption, I don't understand well what this illustrates.\n- Regarding to the human experiments with AMT: how do the authors deal with noise on the workers performance? Is any qualification task used? What are the instructions given to the workers?\n- In section 4.2. the authors state \"We also simultaneously learn a corresponding inference network, .... granular human biases captured\". This seems interesting but I didn't find any result on that in the paper. Can you give more details or refer to where in the paper it is discussed/tested?\n- Figure 4 shows \"most interpretable mixture components\". How this \"most interpretable\" were selected?\n- In second paragraph Section 4.3, it should be Table 1 instead of Figure 1. \n- It would be interesting to see a discussion on why MCMCP Density is better for group 1 and MCMCP Mean is better for group 2. To see the confusion matrixes could be useful.\n\nI like this paper. The addressed problem is challenging and the proposed idea seems interesting. However, the aspects mentioned make me think the paper needs some improvements to be published.\n", "The idea of using MCMCP with GANs is well-motivated and well-presented\nin the paper, and the approach is new as far as I know. 
Figures 3 and 5 are\nconvincing evidence that MCMCP compares favorably to direct sampling of\nthe GAN feature space using the classification images approach.\n\nHowever, as discussed in the introduction, the reason an efficient\nsampling method might be interesting would be to provide insight\non the components of perception. On these insights, the paper felt\nincomplete.\n\nFor example, it was not investigated whether the method identifies\nclassification features that generalize. The faces experiment is\nsimilar to previous work done by Martin (2011) and Kontsevich\n(2004) but unlike that previous work does not investgiate whether\nclassification features have been identified that can be added to an\narbitrary image to change the attribute \"happy vs sad\" or \"male vs female\".\n\nSimilarly, the second experiment in Table 1 compares classification\naccuracy between different sampling methods, but it does not provide\nany comparison as done in Vondrick (2015) to a classifier trained\nin a conventional way (such as an SVM), so it is difficult to discern\nwhether the learned distributions are informative.\n\nFinally, the effect of choosing GAN features vs a more \"naive\" feature\nspace is not explored in detail. For example, the GAN is trained\non an image data set with many birds and cars but not many\nfire hydrants. Is the method giving us a picture of this data set?", "Changes in response to initial reviews: clarifications, fixed typos, extended figure captions, and a small revision to Figure 1.", "Thank you for your comments. We have addressed some typos and unclear sentences and agree that additional experiments in the future to understand the nature of the gendered smile bias face results would be interesting.\n\nNote that our work can be viewed as engaging with the theoretical problem of estimating unobservable mental content. MCMCP in pixel space provides the perfect solution to this problem, yet is surely intractable. Here we propose that a tractable first step is to assume a reasonable approximation (using an invertible feature space), from which further iterative improvements can be made.", "Thank you for your comments and suggestions. We include many comparisons with the classification image method used by Vondrick et al. (2015) that focus on the mental content of the captured distributions, which is the goal of our paper (see Figures 2, 3, and 5, as well as Table 1). Like Vondrick et al. (2015), we show that classifiers derived from mental distributions do better than chance in predicting labels of real images. However, unlike Vondrick et al. (2015), note that we are not interested in augmenting computer vision methods to improve benchmark scores, but rather in developing innovative methods for modeling human mental representations.\n\nAnswers to specific comments and questions:\n\n- Our newest draft makes Figure 1 more informative. Note that the inference network does not need to be used, and was not used in Section 4.1, because we know which z vectors generated which images for each set of trials, and do not need to convert the generated images back to inferred z representations in practice. However, the inference network is necessary for any application of our method that requires the use of any image not rendered by the network (i.e., in order to classify new images such as in Section 4.3).\n\n- Figure 2 has been enlarged and the newest uploaded draft extends the caption. 
The FLD projections simply show that the chains for different categories are well-separated, meaning that they successfully characterize different featural content.\n\n- We told AMT workers that it was important that they answer as best they can and used stringent selection criteria. If a single image did not load, the data was thrown out and a new subject was recruited to continue the chain at its original entry point. \n\n- In order to include inference in our GAN network, we use BiGAN (Donahue et al., 2016).\n\n- In Figure 4, we show the means of the mixture components with the largest mixture weights. We excluded only a small set of components that were presumably useful in explaining holdout samples and classifying, but which had no discernable visual content (washed out brown color). This appears to happen whenever large numbers of samples are summarized by a single mean (the CI method often showed only this behavior).\n\n- Comparing MCMCP Density and MCMCP Mean for groups 1 and 2 tells us little because the individual categories do not always give the same results. More importantly, we see the variation in results as a lesson that an inflexible method may not be able to cope with particular categories and how they interact with the particular latent space learned by the network. Using MCMCP avoids having to make any such limiting assumptions, and we can simply choose the density with the best fit to human samples.", "Thank you for your comments and suggestions. We agree that more can be done to inspect the nature of the solution obtained by combining MCMCP with modern generative networks, but we see this as future application of the overall toolset we’ve designed and demonstrated in the current work. The suggested method of using classification features to change image attributes assumes a linear/additive feature space. Since we can learn any distribution with MCMCP, these simple methods do not apply to the general case.\n\nOne of the methods we used to compare Classification Images (CI) and MCMCP was to assess how learned mental distributions could predict class labels for images held out from the training set. However, it is important to note that any held out set of images suffers from the same dataset bias that we sought to avoid (see introduction). For this reason, while a better estimate of a mental concept may perform better than other methods in predicting held out sets, there is no guarantee that it will converge to a model that performs equally or better than classifiers trained on those biased datasets. In keeping with the specific goals of our paper, we included no such analysis in our paper. However, the reviewer may find it useful to know that classifiers trained on a similar training set to the held out images were more successful in predicting class labels for those held out images (the test set). Inspecting the samples from the captured mental distributions gives us good reason to believe mental and synthetic concepts are different because many images favored by humans appear a great deal more abstract than what would be expected from current generative models (e.g., see water bottle examples).\n\nIt is unclear how stratification of classes in the datasets used to train our networks detract significantly from the results presented in our paper (i.e., it is unlikely to interact with our finding regarding the improvement over CI). 
Also note that we strategically avoided classes that are most disproportionately represented in the ILSVRC12 dataset, such as “dog”, which makes up more than 10% of the dataset." ]
[ 6, 5, 5, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BJy0fcgRZ", "iclr_2018_BJy0fcgRZ", "iclr_2018_BJy0fcgRZ", "iclr_2018_BJy0fcgRZ", "r1i3YtHgG", "H1jrxgclM", "Bk01UWclf" ]
iclr_2018_Sy4c-3xRW
DropMax: Adaptive Stochastic Softmax
We propose DropMax, a stochastic version of the softmax classifier which, at each iteration, drops non-target classes with some probability for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are learned based on the input via regularized variational inference. This stochastic regularization has the effect of building an ensemble classifier out of a combinatorial number of classifiers with different decision boundaries. Moreover, learning the dropout probabilities for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains improved accuracy over the regular softmax classifier and other baselines. Further analysis of the learned dropout masks shows that our model indeed selects confusing classes more often when it performs classification.
workshop-papers
This paper proposes a general regularization algorithm that builds on the dropout idea. This is a very significant topic. The overall motivation is good, but it is not clear that the specific design choices are better motivated than, for example, ad-hoc alternatives. Some concerns remain after the post-rebuttal discussion with the reviewers: the improvement is incremental in terms of concepts and methodology, the clarity needs to be improved, and the experiments are somewhat weak. In summary, the main idea and research direction are interesting, but the attempted generality of the algorithm and the significance of the area call for a clearer and more convincing presentation.
train
[ "rkBcQHBgG", "ryl6gl5xM", "BkQfMZLNz", "Bkp1F5OlG", "ry0kYb6Xf", "Syd-ObpXM", "HkjrLl2Gf", "ry49UlnGz", "SJs2_xhGz", "BJbUYx3GM", "ryj2c77MG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper propose an adaptive dropout strategy for class logits. They learn a distribution q(z | x, y) that randomly throw class logits. By doing so they ensemble predictions of the models between different set of classes, and focuses on more difficult discrimination tasks. They learn the dropout distribution by variational inference with concrete relaxation. \n\nOverall I think this is a good paper. The technique sounds, the presentation is clear and I have not seen similar paper elsewhere (not 100% sure about the originality of the work though). \n\nPro:\n* General algorithm\n\nCon:\n* The experiment is a little weak. Only on CIFAR100 the proposed approach is much better than other approaches. I would like to see the results on more datasets. Maybe should also compare with more dropout algorithms, such as DropConnect and MaxOut.", "Pros\n- The proposed model is a nice way of multiplicatively combining two features :\n one which determines which classes to pay attention to, and other that\nprovides useful features for discrimination.\n\n- The adaptive component seems to provide improvements for small dataset sizes\n and large number of classes.\n\nCons\n- \"One can easily see that if o_t(x; w) = 0, then class t becomes neutral in the\n classification and the gradients are not back-propagated from it.\" : This does\nnot seem to be true. Even if the logits are zero, the class would have a\nnon-zero probability and would receive gradients. Do the authors mean\nexp(o_t(x;w)) = 0 ?\n\n- Related to the above, it should be clarified what is meant by dropping a\n class. Is its logit set to zero or -\\infty ? Excluding a class from the\nsoftmax is equivalent to having a logit of -\\infty, not zero. However, from the\nequations in the paper it seems that the logit is set to zero. This would not\nresult in excluding the unit. The overall effect would just be to raise the\nmagnitude of logits across the entire softmax.\n\n- It seems that the model benefits from at least two separate effects - one is\n the attention mechanism provided by the sigmoids, and the other is the\nstochasticity during training. Presently, it is not clear if only one of the\ncomponents is providing most of the benefits, or if both things are useful. It\nwould be great to compare this model to a non-stochastic one which just has the\nmultiplicative effects applied in a deterministic way (during both training and\ntesting).\n\n- The objective of the attention mechanism that sets the dropout mask seems to\n be the same as the primary objective of classifying the input, and the\nattention mechanism is prevented from solving the task by adding an extra\nentropy regularization. It would be useful to explain more why this is needed.\nWould it not be fine if the attention mechanism did a perfect job of selecting\nthe class ?\n\nQuality\nThe paper makes relevant comparisons and is overall well-motivated. However,\nsome aspects of the paper can be improved by adding more explanations.\n\nClarity\nSome crucial aspects of the paper are unclear as mentioned above.\n\nOriginality\nThe main contribution of the paper is similar to multiplicative gating. 
The\nadded stochasticity and the model ensembling interpretation is probably novel.\nHowever, experiments are insufficient to determine whether it is this novelty\nthat contributes to improved performance or just the gating.\n\nSignificance\nThis paper makes incremental improvements and would be of moderate interest to\nthe machine learning community.\n\nTypos :\n- In Eq 3, the numerator has z_t. Should that be z_y ?\n- In Eq 5, the denominator has z_y. Should that be z_t ?", "Thank you for your response which I have read. ", "The paper discusses dropping out the pre-softmax logits in an adaptive manner. This isn't a huge conceptual leap given previous work, for instance that of Ba and Frey 2013 or the sequence of papers by Gal and his coauthors on variational interprations of dropout. In the spirit of the latter series of papers on variational dropout there is a derivation of this algorithm using ideas from variational inference. The variational approximation is a bit odd in that it doesn't have any variational parameters, and indeed a further regulariser in equation (14) is needed to give the desired behaviour. A fairly small, but consistent improvement on the base model and other similar ideas is reported in Table 1. I would have liked to have seen results on ImageNet. I don't find (the too small) Figure 2 to be compelling evidence that \"our dropmax effectively prevents\noverfiting by converging to much lower test loss\". The test loss in question looks like a noisy version of the base test loss with a slightly lower mean. There are grammatical errors throughout the paper at a higher rate than would normally be found in a successful submission at this stage. Figure 3 illustrates the idea nicely. Which of the MNIST models from Table 1 was used?\n", "We really appreciate your effort on reproduction of the experimental results. Here we clarify what you have mentioned about the experimental setup.\n\n1. Experimental setup for Cifar10 and 100: The batch size is 128, and the number of epoch is 200. Weight decay is fixed at 1e-4. Learning rate starts from 0.1 and multiplied by 0.1 at 80, 120, and 160-th epoch. We used SGD optimizer with momentum of 0.9. The baseline model is resnet-34, which you can obtain from https://github.com/tensorflow/models/tree/master/official/resnet.\n\n2. Experimental setup for AwA: The batch size is 125 and the number of epoch is 100. Weight decay is fixed at 1e-4. Learning rate starts from 0.001 and is multiplied by 0.1 at 30 and 60 epochs. We used the SGD optimizer with the momentum of 0.9. You can obtain the pretrained model and code from https://github.com/kratzert/finetune_alexnet_with_tensorflow, with explanation from https://kratzert.github.io/2017/02/24/finetuning-alexnet-with-tensorflow.html.\n\n3. We used the same validation set for tuning of hyperparameters for all other models.\n\n4. S = 100 in MNIST, while S=30 for other dataset. However, we do not consider S as a significant factor.\n\n5. We updated the convergence plot in the revision.\n\n6. The variational term is essential for valid variational inference, and thus it should not be ignored. We checked it with our own experimental setting, that the variational term is also crucial for the performance.\n\n7. Instead of dropping out logits, in our revision, we drop out class exponentiation as you have suggested.\n", "We really appreciate the constructive comments from all reviewers and thank to the UC Irvine team for reproduction of the experimental results. 
Here we briefly mention what has been updated in the revision. For more detailed explanations, please refer to the response to each reviewer.\n\n1. Instead of dropping out class logits, exponentiations of logits are dropped in the revision, as suggested by AnonReviewer3.\n2. All the experimental results, corresponding figures and Dropmax contours are updated according to the change in 1 -- dropping out the exponentiations.\n3. We added Epsilon in Eq. (3) to prevent the denominator from becoming zero.\n4. We added Figure 1 to illustrate the concept of how Dropmax improves fine-grained recognition.\n5. We added in deterministic attention baseline, as suggested by AnonReviewer3.\n6. We updated the learning curve (Figure 3). Now it is more stable and easy to interpret.", "We really appreciate your comments.\n\n- It seems that the model benefits from at least two separate effects - one is the attention mechanism provided by the sigmoids, and the other is the stochasticity during training. Presently, it is not clear if only one of the components is providing most of the benefits, or if both things are useful. It would be great to compare this model to a non-stochastic one which just has the multiplicative effects applied in a deterministic way (during both training and testing).\n\n: As said, our model benefits from two separate effects - 1) adaptive input-dependant attention generation and 2) stochasticity during training. \n\nThe effect of 1) is clear since our adaptive dropmax significantly outperforms random dropmax. To show the effect of 2) we added in the results from the deterministic model in the revision, which we name as Deterministic-Attention, in Table 1. This model is almost identical to “Adaptive-Dropout”, except that the stochastic ‘z_t’ is replaced with deterministic ‘\\rho_t’.\n\nWe observe that stochasticity does indeed help improve the model performance, as Adaptive-Dropmax outperformed Deterministic-Attention by 0.59% in MNIST-1K, 0.22% in MNIST-5K and similarly on the other datasets (except on MNIST-55K). \nFurther, our deterministic attention model has both KL term and entropy regularizer as in “Adaptive-Dropout”, with \\lambda found via separate holdout set, such that the target class is strongly attended for each input while non-target classes are not. This design, which is also used in our Adaptive-Dropmax, is also a novelty of our model since a naive implementation of deterministic attention produces much worse results than the base model,\n\n\n- The objective of the attention mechanism that sets the dropout mask seems to be the same as the primary objective of classifying the input, and the attention mechanism is prevented from solving the task by adding an extra entropy regularization. It would be useful to explain more why this is needed. Would it not be fine if the attention mechanism did a perfect job of selecting the class?\n\n: The objective of the dropout mask generator is to stochastically rule out non-target classes such that the model can learn features for both coarse-grained and fine-grained classification. 
If we allow the dropout mask generator to become another classifier, then the original classifier has no problem to solve and will not learn anything useful, and thus we should differentiate the role of the classifier and the dropout mask generator.\n\nWe found that even in the case where it is easy enough for the mask generator to do a perfect job of selecting the target (See Figure 4(a) in the revision - Figure 3(a) in the original paper) the performance was the best when the non-target classes are not completely ruled out as \\lambda was found to be nonzero (0.1 ~ 0.0001).\n\nTo verify it, we experimented with Deterministic-Attention model, with the Sigm() in Eq. (4) replaced with Softmax(). It makes the mask generator to be another classifier, because generated masks become mutually exclusive, with only one of them close to 1 per each instance. The entropy regularizer (14) is removed for our purpose. We tested it on MNIST and Cifar-100, and the results are as follows:\nMNIST-1K: 7.13\nMNIST-5K: 2.57\nMNIST-55K: 1.09\nCifar-100: 30.38\nThe results are similar to or worse than the baseline, meaning that the role of the mask generator should be controlled in a principled way.\n\n\n- The main contribution of the paper is similar to multiplicative gating. Experiments are insufficient to determine whether it is just the gating that contributes to improved performance.\n\n: As mentioned above, the newly added in experimental results for the deterministic attention model shows that the stochasticity is still important for obtaining meaningful performance improvement, as it enables to obtain an ensemble of exponentially many classifiers in a single model training.\n", "We really appreciate your comments.\n\n- \"One can easily see that if o_t(x; w) = 0, then class t becomes neutral in the classification and the gradients are not back-propagated from it.\" : This does not seem to be true. Even if the logits are zero, the class would have a non-zero probability and would receive gradients. Do the authors mean exp(o_t(x;w)) = 0 ?\n\n: This is indeed correct and is a mistake caused by the explanation of a legacy model. We have experimented with two different versions of Dropmax (one that drops out the o_t and the other that drops out the exp(o_t) and opted to go with the former. \n\nIn the revision, we have corrected the inaccurate description of the model and added in new experimental results based on the dropout of the exponential term (including Figure 4). The results show that dropping exp(o_t) =0 yields similar classification errors to dropping o_t=0, except on Cifar-10, on which the former significantly outperforms the latter.\n\n\n- Related to the above, it should be clarified what is meant by dropping a class. Is its logit set to zero or -\\infty ? Excluding a class from the softmax is equivalent to having a logit of -\\infty, not zero. However, from the equations in the paper it seems that the logit is set to zero. This would not result in excluding the unit. The overall effect would just be to raise the magnitude of logits across the entire softmax.\n\n: Dropping class logits (o_t = 0) does not raise the magnitude of logits of negative classes. Rather, it is equivalent to setting class probabilities to neutral (p_t = 1/T), which is “neither certainly positive(+) nor negative(-)” for a given instance. However, we corrected it by setting exp(o_t)=0 to completely exclude a class from classification boundary as suggested. 
", "We really appreciate your comments.\n\n- The paper discusses dropping out the pre-softmax logits in an adaptive manner. This isn't a huge conceptual leap given previous work, for instance that of Ba and Frey 2013 or the sequence of papers by Gal and his coauthors on variational interpretations of dropout.\n\n: The main focus of this paper is not interpreting dropout (or adaptive dropout) wrt variational inference. Those are simply our choice of tools for solving the proposed problem, and the main novelty comes from stochastically ruling out classes from consideration at each iteration. None of the previous work exploits such idea.\n\n\n- The variational approximation is a bit odd in that it doesn't have any variational parameters.\n\n: Our decision of setting the q (or recognition) network the same as the p (or prior) network is motivated from (Sohn et al., 2015) (Section 4.2). Since we are training with q network while predicting with p network, the consistency between the two network is crucial in obtaining the desired performance. It is indicated by KL[q||p] term in the Eq. (7). \n\nSuppose use a different set of variational parameters \\phi for q(z|x,y). The problem in this case is that reconstructing y with q(z|x,y;\\phi) and reconstructing y with p(z|x;\\theta) are significantly different in their difficulties. The former is much easier because it learns trivial mapping y -> z -> y, where the dimension of z is the same as that of y. Thus, we decided to replace q(z|x,y;\\phi) with q(z|x,y;\\theta) that shares the same structure and the set of parameters with p(z|x;\\theta). In our preliminary experiment, we also experimented with the model that uses a separate parameter for q, but it did not work well.\n\n\n- a further regulariser in equation (14) is needed to give the desired behaviour.\n: Since regularized variational inference is a general framework and allows us to avoid the weird solution all z=0 or z=1, we argue that (14) is reasonable.\n\n\n- I would have liked to have seen results on ImageNet.\n: We will run the experiments on the ImageNet dataset and will include the results in the revision if we obtain the results by the rebuttal deadline. \n\n\n- I don't find (the too small) Figure 2 to be compelling evidence that \"our dropmax effectively prevents overfitting by converging to much lower test loss\". The test loss in question looks like a noisy version of the base test loss with a slightly lower mean.\n\n: The plot was not the most representative and we included in a more stable version in the revision. Also the main point we want to make with Figure 2 is that our model is still able to achieve lower test loss, while retaining the same convergence speed as the baseline.\n\n\n- There are grammatical errors throughout the paper at a higher rate than would normally be found in a successful submission at this stage.\n: We have corrected the grammatical errors in the revision. \n\n\n- Which of the MNIST models from Table 1 was used?\n: We used the MNIST-1K model.\n", "We really appreciate your comments\n\n- The experiment is a little weak. Only on CIFAR100 the proposed approach is much better than other approaches. I would like to see the results on more datasets. Maybe should also compare with more dropout algorithms, such as DropConnect and MaxOut.\n\n: We are experimenting on ImageNet 1K dataset, and will include the results if we obtain the results by the rebuttal deadline. 
\n\nDropConnect and MaxOut are not much relevant to our motivation of learning an ensemble of multiple classifiers in a single training stage, as they do not drop out classes.\n", "Rating: 7 It does produce a better result\nReview: Our review is based on reproducibility\n\nOverall the paper seemed to reasonably comply with the standards of reproducibility set out for this challenge. The data was very easily obtainable and its partitions were well defined. The DropMax paper mentioned what frameworks were used but did not give any code or pseudo-code. The DropMax paper’s hyperparameter selection for the adaptive DropMax model was well stated but was non-existent for the other models. Despite this lack of documentation, we were able to show the distinct improvement DropMax had over the base networks. The language of the DropMax paper was clear but could have been made drastically more clear by including a diagram of the network the paper was proposing. The DropMax paper did not mention any of the computing hardware used. The runtime of the experiments was significant but reasonable for an academic research setting. Based on the above compliance with the criteria of reproducibility set forth by Joelle Pineau we believe that DropMax is adequately reproducible. We give it a 7/10 overall on reproducibility.\n\n\nAs other commentators pointed out, DropMax, as proposed in the paper, drops class logits and not classes. This is the cause of z being identically 1 without regularization of rho; if o_t is negative for some non-target class, then ∂L/∂ρt\nis positive since dropping the logit for that class actually increases the predicted\nthe probability for that class. We confirm that this is the cause of failure experimentally\nby applying ReLU activation to o_t, and successfully avoid z = 1 without having to use\nregularization on ρ. However, our validation experiments show that regularization of ρ\ncan still be helpful, even when dropping classes.\n\nhttps://github.com/jamesal1/DropMax.git\n\nConfidence: 4 We are basing our opinions on the code we used to copy the results. " ]
[ 6, 6, -1, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sy4c-3xRW", "iclr_2018_Sy4c-3xRW", "SJs2_xhGz", "iclr_2018_Sy4c-3xRW", "ryj2c77MG", "iclr_2018_Sy4c-3xRW", "ryl6gl5xM", "ryl6gl5xM", "Bkp1F5OlG", "rkBcQHBgG", "iclr_2018_Sy4c-3xRW" ]
iclr_2018_SJUX_MWCZ
Predict Responsibly: Increasing Fairness by Learning to Defer
When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly. To fulfill these three requirements, a model must be able to output a reject option (i.e. say "I Don't Know") when it is not qualified to make a prediction. In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user. We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives. We propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline.
workshop-papers
This work proposes an approach for ensuring classification fairness through models that encapsulate deferment criteria. On the positive side, the paper provides ideas which are conceptually interesting and novel. On the other hand, the reviewers find the technical contribution to be limited and, in some cases, challenge the practicality of the method (e.g. the requirement for a second set of training samples). After extensive post-rebuttal discussion, the consensus is that the above issues make the paper fall below the threshold for acceptance – even if the “out-of-scope” issue is not taken into account.
train
[ "r1RTd8hgG", "HJMCQAFgf", "HJkk4w6lM", "BkIG-LYzf", "Bk8LlIYff", "BkLnyItzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The proposed method is a classifier that is fair and works in collaboration with an unfair (but presumably accurate model). The novel classifier is the result of the optimisation of a loss function (composed of a part similar to a logistic regression model and a part being the disparate impact). Hence, it can be interpreted as a logistic loss with a fairness regularisation.\n\nThe results are promising and the applications are very important for the acceptance of ML approaches in the society.
However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied.\nFinally, this paper used uncommon vocabulary (for the machine learning community) and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity).\n\nWhen reading the submitted paper, it was unclear (until section 6) how deferring could help fairness. Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss).\n\nTo conclude, although the application is of high interest and the numerical results encouraging, the methodological approach does not seem to be very novel.\n\nMinor comment : \n- The list of authors of the reference “Machine bias : theres software…” apperars incorrectly (some comma may be missing in the .bib file) and there is a small typo in the title.\n\nPossible extensions :\n- The proposed fairness aware loss could be made more general (and not only in the case of a logistic model) \n- It could also be generalised to a mixture of biased classifier (more than on DM).\n\nEdited :\nAs noted by a fellow reviewer, the paper is a bit out of the scope of ICLR and may be more in line with other ML conferences.", "I like the direction this paper is going in by combining fairness objectives with deferment criteria and learning. However, I do not believe that this paper is in the scope defined by the ICLR call for papers, as there is nothing related to learned representations in it.\n\nI find it quite interesting that the authors go beyond 'classification with a reject option' to learning to defer, based on output predictions of the second decision maker. However, the authors do not make this aspect of the work very clear until Section 6. The distinction and contribution of this part is not made obvious in the abstract and introduction. And it is not stated clearly whether there is any related prior work on this aspect. I'm not sure there is, and if there isn't, then the authors should highlight that fact. The authors should discuss more extensively how the real-world situation will play out in terms of having two different training labels per sample (one from the purported DM and another used for the training of the main supervised learning portion). Typically it is DM labels that are the only thing available for training and what cause the introduction of unfairness to begin with.\n\nThroughout the paper, it is not obvious or justified why certain choices are made, e.g. cross-entropy.\n\nIn the related work section, specifically Incorporating IDK, it would be good to discuss the work of Wegkamp et al., including the paper http://www.jmlr.org/papers/volume9/bartlett08a/bartlett08a.pdf and its loss function. Also, the work of https://doi.org/10.1145/2700832 would be nice to briefly discuss.\n\nAlso in the related work section, specifically AI Safety, it would be good to discuss the work of Varshney and Alemzadeh (https://doi.org/10.1089/big.2016.0051) --- in particular the 'safety reserves' and 'safe fail' sections which specifically address reject options and fairness respectively. \n\nThe math and empirical results all seem to be correct and interesting.", "Strengths: \n1. This paper proposes a novel framework for ensuring fairness in the classification pipeline. To this end, this work explores models that learn to defer. \n2. The work is conceptually very interesting. 
The idea of learning to defer (as proposed in the paper) as a means to fairness is not only novel but also quite apt. \n3. Experimental results demonstrate that the proposed learning strategy can not only increase predictive accuracy but also reduce bias in decisions. \n\nWeaknesses: \n1. While this work is conceptually quite novel and interesting, the technical novelty and contributions seem fairly minimal. \n2. The proposed formulations are essentailly regularized variants of fairly standard classification models and the optimization also relies upon standard search procedures.\n3. Experimental analysis on deferring to a biased decision maker (Section 7.3) is rather limited. \n\nSummary: This paper proposes a novel framework for ensuring fairness in the classification pipeline. More specifically, the paper outlines a strategy called learn to defer which enables the design of predictive models which not only classify accurately and fairly but also defer if necessary. Deferring a decision is used as a mechanism to ensure both fairness and accuracy. Furthermore, the authors consider two variants depending on if the model has some information about the decision maker or not. Experimental results on real world datasets demonstrate the effectiveness of the proposed approach in building an end to end pipeline that ensures accuracy and fairness. \n\nNovelty: The main novelty of this work stems from the idea of introducing learning to defer mechanisms in the context of fairness. While the ideas of learning to defer have already been studied in the context of classification models, this is the first contribution which leverages learning to defer strategy as a means to achieve fairness. However, beyond this conceptual novelty, the work does not demonstrate a lot of technical novelty or depth. The objective functions proposed are simple extensions of work done by Zafar et. al. (WWW, AISTATS 2017). The optimization procedures being used are also fairly standard. Furthermore, the authors do not carry out any rigorous theoretical analysis either. \n\nOther detailed comments:\n1. I would strongly encourage the authors to carry out a more in-depth theoretical analysis of the proposed framework (Refer to \"Provably Fair Representations\" McNamara et. al. 2017)\n2. Experimental evaluation can also be strengthened. More specifically, analysis in Section 7.3 can be made more thorough. Instead of just sticking to one scenario where the decision maker is extremely biased (how are you quantifying this?), how about plotting a graph where x axis denotes the extent of bias in decision-maker's judgments and y-axis captures the model performance?\n3. Overall, the paper is quite well written and is well motivated. There are however some typos and incorrect figure refernces (e.g., Section 7.2 first line, Figure 7.2, there is no such figure). \n\n\n", "Thank you for the review. We are glad you found the paper enjoyable and interesting. A couple of clarifying responses:\n\n-\"I do not believe that this paper is in the scope defined by the ICLR call for papers, as there is nothing related to learned representations in it.\"\n\nIn fact our paper does contains learned representations - our neural networks have a hidden layer. We have clarified this in the paper (by changing the phrase \"one-layer\" to \"one-hidden-layer\" i.e. one hidden layer of non-linear units and two layers of weights); we apologize for the miscommunication.\n\nAnd, we disagree about the out-of-scope critique. 
The scope of ICLR is far broader than papers directly concerning the specifics of learned representations. A quick glance at the papers chosen for oral presentations at ICLR 2017 reveals papers on optimization, generalization, and privacy. Fairness and learning with rejection are accepted, important, and popular topics with a rich literature in the machine learning and deep learning communities; they are well within the scope of this conference.\n\n-\"the authors do not make this aspect of the work very clear until Section 6. The distinction and contribution of this part is not made obvious in the abstract and introduction.\"\n\nWe have edited the introduction and abstract to clarify our contribution.\n\n-\"it is not stated clearly whether there is any related prior work on this aspect. I'm not sure there is, and if there isn't, then the authors should highlight that fact.\"\n\nWe have edited the Related Work and Introduction to clarify our contribution.\n\t\n-\"The authors should discuss more extensively how the real-world situation will play out in terms of having two different training labels per sample. Typically it is DM labels that are the only thing available for training and what cause the introduction of unfairness to begin with.\"\n\nSelective label bias is certainly a prominent problem in constructing datasets. However, a decision-maker can display significant bias above and beyond selective label bias, so our method can still be very useful. For instance, all models that train on the COMPAS dataset (a standard dataset in fairness research), by necessity only know the ground truth for defendants who received bail; we can never know if those who did not receive bail would have recidivated. Yet many papers publish with results on this dataset, and show useful results. This is an issue larger than our work, encompassing the field of machine learning as a whole - every paper must consider the effects of bias in the data generation process.\n\n-\"it is not obvious or justified why certain choices are made, e.g. cross-entropy.\"\n\nTo our knowledge, cross-entropy is a fairly standard choice for training a classifier to maximize accuracy when the classifier must be differentiable.\n\n-\"In the related work section, specifically Incorporating IDK, it would be good to discuss the work of Wegkamp et al., including the paper http://www.jmlr.org/papers/volume9/bartlett08a/bartlett08a.pdf and its loss function. Also, the work of https://doi.org/10.1145/2700832 would be nice to briefly discuss. Also in the related work section, specifically AI Safety, it would be good to discuss the work of Varshney and Alemzadeh (https://doi.org/10.1089/big.2016.0051) --- in particular the 'safety reserves' and 'safe fail' sections which specifically address reject options and fairness respectively.\"\n\nWe have included discussion of these papers in our related work section.\n\n\n", "Thank you for the comments. We're glad you agree this is an important and promising research direction. A couple of quick notes in response:\n\n-\"However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied.\"\n\nIn section 5.2, in which we describe how the \"learning to defer\" framework can be used in a Bayesian weight uncertainty setting. 
Our framework for learning to defer is extremely general - the models included do not have to be regularized logistic losses; they can be any type of model trained on any supervised loss, as long as they have some type of uncertainty/deferral output. In fact, it is the generality of the framework that makes theoretical analysis difficult.\n\n-\"this paper used uncommon vocabulary (for the machine learning community) and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity).\"\n\nWe are sorry the vocabulary was hard to follow. Figure 1 contains a diagram of a typical system containing an IDK model, and we describe the role of the DM in section 3; however, we have re-written the introduction and other sections to clarify the terms.\n\n-\"it was unclear (until section 6) how deferring could help fairness. Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss).\"\n\nThanks for the suggestion. We describe in section 3 how deferring can help accuracy, but we do not describe how deferring can help fairness. We have extended the example in section 3 to rectify this.\n\n-\"The list of authors of the reference ???Machine bias : theres software?????? apperars incorrectly (some comma may be missing in the .bib file) and there is a small typo in the title.\"\n\nThank you, we have corrected these typos.\n\n-\"The proposed fairness aware loss could be made more general (and not only in the case of a logistic model)\"\n\nAny of the fair learning and rejection learning methods can be used within the \"learning to defer\" framework; in this paper we demonstrated two possible options.\n\n-\"It could also be generalised to a mixture of biased classifier (more than on DM).\"\n\nWe agree that this would be an interesting future direction of research.\n", "Thank you for the comments. We're happy you found the work interesting, and we appreciate the constructive criticism. We'd like to clarify a couple of points regarding our contribution.\n\n-\"While the ideas of learning to defer have already been studied in the context of classification models, this is the first contribution which leverages learning to defer strategy as a means to achieve fairness.\"\n\t\nIt is true that we are the first to consider deferring as a means to achieve fairness. However, it is important not to confuse \"learning to punt\" with \"learning to defer\". As we mention in our Related Work section, what we call \"learning to punt\" has been studied extensively in the context of classification models (also known as rejection learning, KWIK learning, learning to abstain, IDK models, among others). The other main contribution of our paper is the \"learning to defer\" framework, which extends the concept of learning to punt to consider other decision makers in the pipeline. As we motivate in sections 1, 3, and 6, and demonstrate in section 7, learning to defer can greatly enhance the real-world performance of models which punt - this performance can be measured as accuracy, fairness, or any other supervised loss function.\n\n-\"the work does not demonstrate a lot of technical novelty or depth. The objective functions proposed are simple extensions of work done by Zafar et. al. 
(WWW, AISTATS 2017)\"\n\nThe objective function for learning to defer is novel, and unrelated to Zafar et al (https://people.mpi-sws.org/~mzafar/papers/disparate_impact.pdf and https://people.mpi sws.org/~mzafar/papers/disparate_mistreatment.pdf). The fair punting objectives described in equation 4 and 5 use a similar regularization approach - we mention in our Fair Classification section that this approach is not novel, citing Kamashima et al. and Bechavod & Ligett. We could have used any fair supervised learning algorithm in place of this, and the results would hold.\n\n-\"analysis in Section 7.3 can be made more thorough. Instead of just sticking to one scenario where the decision maker is extremely biased (how are you quantifying this?), how about plotting a graph where x axis denotes the extent of bias in decision-maker's judgments and y-axis captures the model performance?\"\n\nAs we state in our caption of figure 5, we train a biased DM by setting alpha (the coefficient on the fairness regularization term) to -0.1, thereby encouraging solutions with higher disparate impact.\n\nWe cannot plot model performance on y-axis due to the complexity of the evaluative metric - assessing the accuracy-fairness tradeoff requires a two-dimensional Pareto-front style visualization. We have produced the same graph for several values of alpha, and did not think it was too illuminating; but we could include it in the Appendix.\n\n-\"There are however some typos and incorrect figure references (e.g., Section 7.2 first line, Figure 7.2, there is no such figure).\"\n\nThank you, we have corrected the figure reference." ]
[ 5, 4, 6, -1, -1, -1 ]
[ 3, 5, 3, -1, -1, -1 ]
[ "iclr_2018_SJUX_MWCZ", "iclr_2018_SJUX_MWCZ", "iclr_2018_SJUX_MWCZ", "HJMCQAFgf", "r1RTd8hgG", "HJkk4w6lM" ]
iclr_2018_r1hsJCe0Z
Semantic Code Repair using Neuro-Symbolic Transformation Networks
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
workshop-papers
To summarize the pros and cons: Pro: * Interesting application * Impressive results on a difficult task * Nice discussion of results and informative examples * Clear presentation, easy to read. Con: * The method appears to be highly specialized to the four bug types. It is not clear how generalizable it will be to more complex bugs, and to the real application scenarios where we are dealing with open world classification and there is no fixed set of possible bugs. There were additional reviewer complaints that comparison to the simple seq-to-seq baseline may not be fair, but I believe that these have been addressed appropriately by the author's response noting that all other reasonable baselines require test cases, which is an extra data requirement that is not available in many real-world applications of interest. This paper is somewhat on the borderline, and given the competitive nature of a top conference like ICLR I feel that it does not quite make the cut. It is definitely a good candidate for presentation at the workshop however.
test
[ "rJvBlRteG", "SkSfxq9xM", "SynixA1WM", "r1XRhmeXf", "r1LW37xmM", "HkIRj7g7z", "rJtTxEgXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents a neural network architecture consisting of the share, specialize and compete parts for repairing code in four cases, i.e., VarReplace, CompReplace, IsSwap, and ClassMember. Experiments on the source codes from Github are conducted and the performance is evaluated against one sequence-to-sequence baseline method.\n\nPros:\n\n* The problem studied in this paper is of practical significance. \n* The proposed approach is technically sound in general. The paper is well-written and easy to follow.\n\nCons:\n\n* The scope of this paper is narrow. This paper can only repair the program in the four special cases. It leads to a natural question that how many other cases besides the four? It seems that even if the proposed method works pretty well in practice, it would not be very useful since it is effective to only 4 out of a huge number of cases that a program could be wrong.\n\n* Although the proposed architecture is specially designed for this problem, the components are a straight-forward application of existing approaches. E.g., The SHARE component that using bidirectional LSTM to encode from AST has been studied before and the specialized network has been studied in (Andreas et al., 2016). This reduces the novelty and technical contribution of this paper.\n\n* Many technical details have not been well-explained. For example, how to determine the number of candidates m, since different snippets may have different number of candidates? How to train the model? What is the loss function?\n\n* The experiments are weak. 1) the state-of-the-art program repair approaches such as the statistical program repair models (Arcuri and Yao, 2008) (Goues et al., 2012), Rule-Based Static Analyzers (Thenault, 2001) (PyCQA, 2012) should be compared. 2) the comparsion between SSC with and Seq-to-Seq is not fair, since the baseline is more general and not specially crafted for these 4 cases.\n", "This paper introduces a neural network architecture for fixing semantic bugs in code. Focusing on four specific types of bugs, the proposed two-stage approach first generates a set of candidate repairs and then scores the repair candidates using a neural network trained on synthetically introduced bug/repair examples. Comparing to a prior sequence-to-sequence approach, the proposed approach achieved dominantly better accuracy on both synthetic and real bug datasets. On a real bug dataset constructed from GitHub commits, it was shown to outperform human. \n\nI find the application of neural networks to the problem of code repair to be highly interesting. The proposed approach is highly specialized for the specific four types of bugs considered here and appears to be effective for fixing these specific bug types, especially in comparison to the sequence-to-sequence model based approach. However, I was wondering whether limiting the output choices (based on the bug type) is going a long way toward improving the performance compared to seq-2-seq, which does not utilize such output constraints. What if we introduce the same type of constraints for the seq-2-seq model? For example, one can simply modifying the decoding process such that for locations that are not in the candidate set, the network simply makes no change, and for candidate-repair locations, the output space is limited to the specific choices provided in the candidate set. This will provide a more fair comparison between the different models. 
\nRight now it is not clear how much of the observed performance gain is due to the use of these constraints on the output space. \n\nIs there any control mechanism used to ensure that the real bug test set do not overlap with the training set? This is not clear to me. \n\nI find the comparison result to human performance to be interesting and somewhat surprising. This seems quite impressive. The presented example where human makes a mistake but the algorithm is correct is informative and provides some potential explanation to this. But it also raises a question. The specific example snippet could be considered to be correct when placed in a different context. Bugs are context sensitive artifacts. The setup of considering each function independently without any context seems like an inherent limitation in the types of bugs that this method could potentially address. Some discussion on the limitation of the proposed method seems to be warranted. \n\n\n\n\nPro:\nInteresting application \nImpressive results on a difficult task\nNice discussion of results and informative examples\nClear presentation, easy to read.\n\nCon: \nThe comparison to baseline seq-2-seq does not seem quite fair\nThe method appears to be highly specialized to the four bug types. It is not clear how generalizable it will be to more complex bugs, and to the real application scenarios where we are dealing with open world classification and there is not fixed set of possible bugs. \n", "This paper describes the application of a neural network architecture, called Share, Specialize, and Compete, to the problem of automatically generating big fixes when the bugs fall into 4 specific categories. The approach is validated using both real and injected bugs based on a software corpus of 19,000 github projects implemented in python. The model achieves performance that is noticeably better than human experts.\n\nThis paper is well-written and nicely organized. The technical approach is described in sufficient detail, and supported with illustrative examples. Most importantly, the problem tackled is ambitious and of significance to the software engineering community.\n\nTo me the major shortcoming of the model is that the analysis focuses only on 4 specific types of semantic bugs. In practice, this is a minute fraction of what can actually go wrong when writing code. And while the high performance achieved on these 4 bugs is noteworthy, the fact that the baseline compared against is more generic weakens the contribution. The authors should address this potential limitation. I would also be curious to see performance comparisons to recent rule-based and statistical techniques.\n\nOverall this is a nice paper with very promising results, but I believe addressing some of the above weaknesses (with experimental results, where possible) would make it an excellent paper.\n\n", "Thanks for the review and questions. In our response, we briefly explain why the 4 classes of bugs we consider in this work are actually quite broad, and why other state-of-the-art program repair techniques are not applicable in our setting of identifying and repairing the programs without having access to test cases.\n\nQ. 
Scope of the paper is narrow and considers only 4 classes of bugs?\n\nFirst, we would like to point out that the 4 classes of semantic bugs that we chose were based on an extensive analysis of common classes of errors that programmers make, and which experienced programmers can potentially fix by only observing the program syntax without having access to any test cases or runtime information.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that use models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n\nQ. how to determine the number of candidates m, since different snippets may have different number of candidates? How to train the model? What is the loss function?\n\nFor each snippet, our model first uses the SHARE module to emit a d-dimensional vector for an AST node of the snippet, which are then encoded using a bi-LSTM to compute a shared representation H. Next, for each repair type, the SPECIALIZE module uses H and either an MLP or a Pooled Pointer module to produce an un-normalized scalar score for each of the m repair candidates. For a given snippet, we first identify the possible repair locations based on our 4 classes. For each repair location, the m candidates are computed depending on the AST node class. For example, if the repair location is of type comparison operator, it will consists of m=7 repair candidates, where 7 is the number of comparison operators we consider (==, <=, >=, <, >,!=,No-op). Similarly, for IsSwap and ClassMember there are 2 choices per location and a No-op. For VarReplace, the corresponding candidates for a variable node is computed by considering every other variable node defined in the program. Finally, a separate softmax is used for each candidate repair location to generate a distribution over all repair choices at that location (including No-Op).\n\nSince we train our model on a set of synthetically injected bugs, we know exactly for a given snippet which candidate repairs are applicable (if any). For each repair instance (snippet+repair location), we obtain a different training instance, and use the standard cross-entropy loss to get the softmax distribution as close as possible to the ground truth corresponding to the injected bug.\n\nQ. the state-of-the-art program repair approaches such as the statistical program repair models (Arcuri and Yao, 2008) (Goues et al., 2012), Rule-Based Static Analyzers (Thenault, 2001) (PyCQA, 2012) should be compared\n\nPlease note that the state-of-the-art statistical approaches for program repair such as (Arcuri and Yao, 2008) and (Goues et al. 2012) use a set of test-cases to perform evolutionary algorithm to guide the search for program modifications. Our goal in this work is to automatically generate semantic repairs only looking at the program syntax without any test cases. This requirement is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might.\n\nThe general rule based static analyzers only consider shallow syntactic errors and do not consider the class of semantic errors we are tackling in this work, so they would not produce any results.\n\nQ. 
the comparsion between SSC with and Seq-to-Seq is not fair, since the baseline is more general and not specially crafted for these 4 cases.\n\nAttention based seq-to-seq trained on the same training set is the closest state of the art model previously proposed in recent syntactic program repair approaches (Gupta et. al. AAAI 2017 and Bhatia et. al. 2016). \n\n\nPlease let us know if there are any more clarifications that might be needed. We would like to reinforce this again that one of the goals of our work is to develop new neural models that are able to identify a rich class of semantic bugs without any test cases.\n", "Thanks for the helpful comments and suggestions.\n\nQ. What if we add additional constraints on the output choices for seq2seq decoder to only candidate locations?\n\nThis constraint of only modifying the candidate locations is implicitly provided in our training set, where only bugs at candidate locations are provided and the remaining code is copied. When we analyze the baseline results, the seq2seq network is quite good at learning such a constraint of only modifying the candidate locations and it gets the right repair about 26% of cases (and 40% with some additional modifications). The remaining cases for which it makes mistakes in suggested repairs, it either predicts the wrong repair or chooses the wrong program location, but it performs such modifications only at the candidate locations, i.e. it already learns the constraint to only modify the candidate locations.\n\nQ. Is there any control mechanism used to ensure that the real bug test set do not overlap with the training set?\n\nFor the synthetic bug dataset (real code with synthetically injected bugs), we partition the data into training, test, and validation at the repository level, to eliminate any overlap between training and test. Moreover, we also filter out any training snippet which overlapped with any test snippet by more than 5 lines.\nThe real bug dataset (real code with real bugs) was obtained by crawling a different set of github repositories from the ones used in training. We also ensure there is no overlap of more than 5 lines with training programs.\n\nQ. Discussion about limitation of this work regarding not leveraging the context in which snippets are being used.\n\nThanks for the suggestion. We will add a new paragraph regarding this limitation and future work. Yes, our current model is trained on a dataset where we extracted every function from each Python source file as a code snippet. Each snippet is analyzed on its own without any surrounding context. Adding more context regarding usage of functions in larger codebases would be an interesting future extension of this work, which will involve developing more scalable models for larger codebases.\n\nQ. Specialized to only 4 classes of errors?\nFirst, we would like to point out that the 4 classes of semantic bugs that we chose were based on an extensive analysis of common classes of errors that programmers make, and which experienced programmers can potentially fix by only observing the program syntax without having access to any test cases or runtime information.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. 
Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that introduce new models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n", "We thank the reviewer for the helpful comments and suggestions.\n\nQ. Only 4 classes of semantic bugs?\n\nFirst, we would like to point out that the 4 classes of semantic bugs that we chose were based on an extensive analysis of common classes of errors that programmers make, and which experienced programmers can potentially fix by only observing the program syntax without having access to any test cases or runtime information.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that use models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n\n\nQ. Baseline is generic and weak?\n\nPlease note that in our problem setting, we do not have access to the set of test cases. Most of the previous semantic program repair techniques rely on the availability of a set of test cases to find a repair. The only input to our model is the buggy program (its Abstract syntax tree), and the model needs to learn to predict whether there is a semantic bug (amongst the 4 classes) present in the snippet and if yes, pinpoint the node location and suggest a repair. We chose the attentional seq-to-seq model because it is one of the common models that has previously been used in recent literature for syntactic program repair (Gupta et. al. AAAI 2017 and Bhatia et. al. 2016).\n", "We thank the reviewers for their helpful comments and feedback. It seems although the reviewers liked our neural network architecture for semantic program repair, there is a common concern regarding the generality and scope of the 4 classes of bugs we selected for evaluation. We are explaining this concern in a separate comment just to reinforce the fact that the 4 classes we consider are actually quite general and cover a large number of program bugs in our exploratory study of github codebases, especially compared to other recent work that only considers 1 class (out of our 4 classes) and show its prevalence in other codebases.\n\nFirst, we selected the 4 classes of semantic bugs based on an extensive analysis of popular Python codebases on github to identify common classes of errors that programmers make, and using the following criterion “Bugs\nthat can be identified and fixed by an experienced human programmer, without running the code or\nhaving deep contextual knowledge of the program.” This requirement of not having test cases is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might, and is also a great real-world test bed for developing models of understanding source code. Note that this requirement also disallows using majority of recent statistical semantic program repair techniques that relies on the availability of test cases.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. 
Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that use models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n" ]
[ 4, 6, 6, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1hsJCe0Z", "iclr_2018_r1hsJCe0Z", "iclr_2018_r1hsJCe0Z", "rJvBlRteG", "SkSfxq9xM", "SynixA1WM", "iclr_2018_r1hsJCe0Z" ]
iclr_2018_rk3pnae0b
Topic-Based Question Generation
Asking questions is an important ability for a chatbot. This paper focuses on question generation. Although there are existing works on question generation based on a piece of descriptive text, it remains a very challenging problem. In the paper, we propose a new question generation problem, which also requires the input of a target topic in addition to a piece of descriptive text. The key reason for proposing the new problem is that in practical applications, we found that useful questions need to be targeted toward some relevant topics. One almost never asks a random question in a conversation. Due to the fact that given a descriptive text, it is often possible to ask many types of questions, generating a question without knowing what it is about is of limited use. To solve the problem, we propose a novel neural network that is able to generate topic-specific questions. One major advantage of this model is that it can be trained directly using a question-answering corpus without requiring any additional annotations like annotating topics in the questions or answers. Experimental results show that our model outperforms the state-of-the-art baseline.
workshop-papers
The pros and cons of the paper under consideration can be summarized below: Pros: * Reviewers thought the underlying model is interesting and intuitive * Main contributions are clear. Cons: * There is confusion between keywords and topics, which is leading to a somewhat confused explanation and lack of clear comparison with previous work. Because of this, it is hard to tell whether the proposed approach is clearly better than the state of the art. * Typos and grammatical errors are numerous. As the authors noted, the concerns about the small dataset are not necessarily warranted, but I would encourage the authors to measure the statistical significance of differences in results, which would help alleviate these concerns. An additional comment: it might be worth noting the connections to query-based or aspect-based summarization, which also have a similar goal of performing generation based on specific aspects of the content. Overall, the quality of the paper as-is seems to be somewhat below the standards of ICLR (although perhaps on the borderline), but the idea itself is novel and results are good. I am not recommending it for acceptance to the main conference, but it may be an appropriate contribution for the workshop track.
train
[ "rk27E6PlG", "HyrNUBYlz", "B1chwjFlz", "r18wTowMG", "S1ZZ5jvff", "ByHq_sPMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper presents a neural network-based approach to generate topic-specific questions with the motivation that topical questions are more meaningful in practical applications like real-world conversations. Experiments and evaluation have been conducted on the AQAD corpus to show the effectiveness of the approach.\n\nAlthough the main contributions are clear, the paper contains numerous typos, grammatical errors, incomplete sentences, and a lot of discrepancies between text, notations, and figures making it ambiguous and difficult to follow. \n\nAuthors claim to generate topic-specific questions, however, the dataset choice, experiments, and examples show that the generated questions are essentially keyword/key phrase-based. This is also apparent in Section 4.1 where authors present some observation without any supporting proof or empirical evidence. Moreover, the example in Figure 1 shows a conversation, but, typically, in an ongoing multi-round conversation people do not tend to repeat the keywords or key phrases or named entities, and topic shifts might occur at any time. \n\nOverall, a misconception about topic vs. keywords might have led the authors to claim that their work is the first to generate topic-specific questions whereas this has been studied before by Chali & Hasan (2015) in a non-neural setting. \"Topic\" in general has a broader meaning, I would suggest authors to see this to get an idea about what topic entails to in a conversational setting: https://developer.amazon.com/alexaprize/contest-rules . I think the proposed work is mostly related to: 1) \"Towards Natural Question-Guided Search\" by Kotov and Zhai (2010), and 2) \"K2Q: Generating Natural Language Questions from Keywords with User Refinements\" by Zheng et al. (2011), and other recent factoid question generation papers where questions are generated from a given fact (e.g. \"Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus\" by Serban et al. (2016)).\n\nIt is not clear how the question types are extracted from the given sentences. Please provide details. Which keywords are employed to accomplish this? Also, please explain the absence of the \"why\" type question. \n\nFigure 3 and the associated descriptions are very hard to follow. Please draw the figure by matching it with the descriptions. Where are the bi-LSTMs in the figure? What are ac_t and em_t? \n\nMy major concern is with the experiments and evaluation. The dataset essentially contains questions about product reviews and does not match authors motivation/observation about real-world conversations. Moreover, evaluation has been conducted on a very small test set (just about 1% of the selected corpus), making the results unconvincing. More details are necessary about how exactly Kim's and Liu's models are used to get question types and topics. \n\nHuman evaluation results per category would have been more useful. How did you combine the scores of the human evaluation categories? Also, automatic evaluation and human evaluation results do not correlate well. Please explain.\n\n\n\n\n\n\n\n", "This paper proposed a topic-based question generation method, which requires the input of target topic in addition to the descriptive text. In the proposed method, the authors first extract the topic based on the similarity of the target question token and answer token using word embedding. 
Then, the author proposed a topic-specific question generation model by encoding the extracted topic using LSTM and a pre-decode technique that the second decoding is conditioned on the hidden representation of the first decoding result. The authors performed the experiment on AQAD dataset, and show their performance achieve state-of-the-art result when using automatically generated topic, and perform better when using the ground truth topic. \n\n[Strenghts]\n\nThis paper introduced a topic-based question generation model, which generate question conditioned on the topic and question type. The authors proposed heuristic method to extract the topic and question type without further annotation. The proposed model can generate question with respect to different topic and pre-decode seems a useful trick. \n\n[Weaknesses]\n\nThis paper proposed an interesting and intuitive question generation model. However, there are several weaknesses existed:\n\n1: It's true that given a descriptive text, it is often possible to ask many types of questions. But it also leads to different answers. In this paper, the authors treat the descriptive text as answers, is this motivation still true if the question generation is conditioned on answers, not descriptive text? Table 4 shows some examples, given the sentence, even conditioned on different topics, the generated question is similar. \n\n2: In terms of the experiment, the authors use AQAD to evaluate proposed method. When the ground truth topic is provided, it's not fair to compare with the previous method, since knowing the similar word present in the answer will have great benefits to question generation. \n\nIf we only consider the automatically generated topic, the performance of the proposed model is similar to the previous method (Du et al). Without the pre-decode technique, the performance is even worse. \n\n3: In section 4.2, the authors claim this is the theoretical explanation of the generalization capability of the proposed model (also appear in topic effect analysis). It is true that the proposed method may have better compositionality, but I didn't see any **theoretical** explantation about this. \n\n4: The automatically extracted topic can be very noisy, but the paper didn't mention any of the extracted topics on AQAD dataset. \n\n[Summary]\n\na topic-based question generation method, which requires the input of target topic in addition to the descriptive text. However, as I pointed out above, there are several weaknesses in the paper. Taking all these into account, I think this paper still needs more works to make it solid and comprehensive before being accepted.\nAdd Comment", "The authors propose a scheme to generate questions based on some answer sentences, topics and question types. Topics are extracted from questions using similar words in question-answer pairs. It is similar to what we find in some Q&A systems (like lexical answer types in Watson). A sequence classifier is also used to tag the presence of topic words. Question types correspond mostly to salient questions words. LSTMs are used to encode the various inputs and generate the questions. \n\nThe paper is well written and easy to follow. I would expect more explanations why sentence classification and labeling results presented in Table 2 are so low. \n\nExperimental results on question generation are convincing and clearly indicate that the approach is effective to generate relevant and well-structured short questions. 
\n\nThe main weakness of the paper is the selected set of question types that seems to be a fuzzy combination of answer types and question types (for ex. yes/no). Some questions type can be highly ambiguous; for instance “What” might lead to a definition, a quantity, some named entities... Hence I suggest you revise your qt set. \n\nI would also suggest, for your next experiments, that you try to generate questions leading to answers with list of values. ", "1. We are very sorry about the typos, grammatical errors, etc. We will fix them in the final version. And we will fix the incomplete Figure 3 in the new version.\n\n2. Thank you for pointing out the \"topic\" problem. The terms topic and keyword are fairly ambiguous. We can use the keyword in the new version. Our work is quite different from the paper that you mentioned. Our is not generating questions totally based on given keywords. Our motivation is based on the fact that several questions can be asked based on a given sentence. Hence, we want to generate questions about the given \"subject\" or \"theme\" conditioned on the given descriptive text. \n\n3. We simply use keywords such as \"what\",\"how\", etc. to extract question types from the questions in the training data set. We will add details in the new version. The absence of the “why” type is a mistake, we missed it. In that case, all the \"why\" data is divided into the \"other\" type. Especially, the experimental results for other types won't be changed. And our conclusion still holds.\n\n4. It is impossible for people to ask questions about some things but don't mention them in the question. For example: **question** \"are the ==tips== interchangeable ?\", **answer**: \"the ==tips== are one piece metallic construction solid glued in place .\". So our observation/motivation is still true. As for the size of the test set, we believe that thousands of samples are enough to test the model since those samples are randomly sampled from a big dataset. For example, the NIST test set for machine translation consists of thousands samples too and its training set can be 2 million sentence pairs (we can see that in lots of machine translation researches). But we will employ a larger dataset in the next experiment. We will add more details about Kim's and Liu's models in the revised version. \n\n5. For \"How did you combine the scores of the human evaluation categories?\", that is a problem. We also found it is hard to combine them, so we ask the participants to give one score according to the naturalness modality from an overall view. For \"why automatic evaluation and human evaluation results do not correlate well,\" in general, e.g. \"+how tall is+ the =lamp= itself ?\", regarding different ways to formulate questions, it needs many words (marked by ++) to express , while regarding to what subject/topic (marked by ==) to ask questions, it needs just one or two. Since BLEU favours longer references while humans judge based on overall expressions, that's the reason for different performances of automatic BLEU evaluation and human judgments.", "1. We believe our motivation still true even if the question generation is conditioned on answers. That is because: (1) quite many answers can’t be classified properly even by people. Please see the example for reviewer 1. (2) If there is a good correspondence among all the questions and answers, the accuracy of the sentence classification and labeling will not be so low (see Table 2 in the paper). 
(3) Actually , daily conversation can also be regarded as a kind of inquiry and answer. (4) Here, we given an example to explain that our data set still match our motivation/observation : **question**: \"are the =tips= interchangeable ?\", **answer**: \"the =tips= are one piece metallic construction solid glued in place .\". On the other hand, we believe the generated questions conditioned on different topics on Table 4 are not similar. The first question in the last three rows is about \"the manufacturing date\", and the second question is asking \"the origin of the bottle\", while the third one is about the \"manufacturer\". Of course, it is right to say they are similar if we talk about the similarity at a higher level since they are all about \"manufacture\". And this is because they have the same context \"bottle says 'made in usa'.\"\n\n2. Yes, we know and that is also the reason why we split the Table 1 into 2 parts. Let us only consider the automatically generated topic. It is true that our performance is even worse without the pre-decode technique (but we have pre-decode technique). (1) We give the reason for that in the paper. It is because we would like to build a system to generate controlled questions but the poor sentence classification and labeling accuracy lead our system to generate wrong sentences. (2) As there is no existing method that can perform our proposed new task, we compare with the conventional question generation (our model is not designed for that purpose). (3) The inconsistency of training and testing puts our model at a disadvantage. To achieve the proposed goal, we have to use the extracted ground truth to train, but to have a fair comparison with the conventional question generation method we then must test our model using auto-generated topics and question types.\n\n3. Thank you for pointing out this problem. We will correct that. What we have given is not a strict mathematical proof, it is a brief explanation. As assumed it is hard to give a mathematical proof in neural networks. \n\n4. We used several methods to tackle the noise problem. (1) We removed the stop words using NLTK(a natural language toolkit). (2) We employed the Bagging approach in the extraction process. (3) We proposed the pre-decode technique. At the same time we allow the topic to correspond to the empty value. According to our statistics, about 32.6% of the sentences in the training set failed to extract the topic. We would rather have it corresponding to the empty value than to introduce noise. Here, we given some examples: \n(1) **answer** : \"yes it is . it is a great product . it works with all devices except samsung 7 inch tablet . it is not the keyboard it is samsung . good luck.\" **question** : \"is this keyboard compatable with the acer a500 tablet ?\" **topics** : \"tablet keyboard\"\n(2) **answer** : \"yes . it is comfortable even with my glasses on .\" **question** : \"are these comfortable if you wear glasses ? do they hurt your ears from physical contact ?\" **topics** : \"comfortable glasses\" \n(3) **answer** : \"it does . but it is not as good as smart phone apps available today .\" **question** : \"this equipment translates from spanish to portuguese ?\" **topics** : \"\" (empty)\n", "We are also surprised about the poor accuracy of sentence classification and labeling. Based on observation and analysis of the result. We noticed that the difficulty lies in the fact that multiple questions can be asked based on one given sentence. 
It is also based on this fact that we proposed to ask questions targeted toward some relevant indicators. \n\nFor example, for a given answer \"it's sort of got a cardboard feel to it , but it feels very sturdy nonetheless.\" , the question might be \"how does it feel?\", \"what does it feel like?\" or \"is it sturdy enough?\", but the ground truth question is \"what material is it made out of ?\". So, it is quite challenging to generate questions consistent with the given ground truth based only on the given answer.\n\nWe will revise the qt set. Thanks. There are indeed ambiguities in some question types. Since we want to propose a scheme to generate controlled questions in an unsupervised manner, there is little information that can help us identify the question types in detail. But the \"answer types\" you mentioned greatly inspired us. We can do that by considering the answer types. Thanks.\n\nThanks for the experiment suggestion. It's an interesting idea. We will do that in the next experiment. We think a special memory mechanism can be designed to do that." ]
[ 3, 4, 8, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rk3pnae0b", "iclr_2018_rk3pnae0b", "iclr_2018_rk3pnae0b", "rk27E6PlG", "HyrNUBYlz", "B1chwjFlz" ]
iclr_2018_SJDJNzWAZ
Time-Dependent Representation for Neural Event Sequence Prediction
Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence. While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call. The time span between events can carry important information about the sequence dependence of human behaviors. In this work, we propose a set of methods for using time in sequence prediction. Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization. We also introduce two methods for using next event duration as regularization for training a sequence prediction model. We discuss these methods based on recurrent neural nets. We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks. The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings.
workshop-papers
I've summarized the pros and cons of the reviews below:\n\nPros:\n* The method for time representation in event sequences is novel and well founded\n* It shows improvements on several but not all datasets that may have real-world applications\n\nCons:\n* Gains are somewhat small\n* The task is also not of huge interest to ICLR in particular, and thus the paper might be of limited interest\n\nAs a result, because the paper is well done, but drew little excitement from any of the reviewers, I suggest that this not be accepted to the main conference, but encouraged to present at the workshop track.
train
[ "Sye5BLIyG", "S1edG-sxf", "rk0LDknlz", "ByyJ4OT7M", "SJ5WB_a7f", "S1jyLOpmG", "rknWGOaXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Quality above threshold.\nClarity above threshold.\nOriginality slightly below threshold.\nSignificance slightly below threshold.\n\nPros:\nThis paper proposed a RNN for event sequence prediction. It provides two constructed choices for combining time(duration) information to event. Experiments on various datasets were conducted and most details are provided.\n\nCons (concerns):\n\n1. Event sequence prediction is a hard problem as there’s no clear way to fuse the features about event and the information about the time. It is a nice attempt that in this work, duration is used for event representation. However, the choices are not “principled” as claimed in the paper. E.g., the duration is simply a scaler, but \"time mask\" approache converts that to a multi-dimensional vector while there’s not much information to regularize it.\n\n2. Event-time joint embedding sounds sensible as it essentially remaps from the original value to some segments. E.g., 10 minutes and 11 minutes might have same effect on next event in one dataset while 3 days and a week might have similar effect on next event prediction. But the way how the experiments are designed and analyzed do not provide such insights.\n\n3. The experimental results are not persuasive as no other baselines besides RNN-based methods are provided. Parametric and nonparametric methods both exist for this event prediction problem in previous work. In the results provided, no significant difference between the listed model choices is found, partly because only using event type and duration is not enough. Other info such as time of day, day of week matters a lot. ", "The authors present a model base on an RNN to predict marks and duration of events in a temporal point process. The main innovation of the paper is a new representation of a point process with duration (which could also be understood as marks), which allows them to use a \"time mask\", following the idea of word mask introduced by Choi et al, 2016. In Addition to the mask, the authors also propose a discretization of the duration using one hot encoding and using the event duration as a regularizer. They compare their method to several variations of their own method, two trivial baselines, and one state of the art method (RMTPP) using several real-world datasets and report small gains with respect to that state of the art method.\n\nOverall, the technical contribution of the paper is minor, the gains in performance with respect to a single state of the art are minimal, and the authors oversell their contribution specially in comparison with the related literature. More specifically, my concerns, which prevent me from recommending acceptance, are as follows:\n\n- The authors assume the point process contains duration and intervals, however, point processes generally do not have duration per event but they are discrete events localized in particular time points. Moreover, the duration in their representation (Figure 1) is sometimes an interevent time and sometimes a duration, which makes the whole construction inconsistent. Moreover, what happens then to the representation depicted in Figure 1 when duration is nonexistent or zero?\n\n- The use of \"time mask\" is not properly justified and the authors are just extending the idea of word mask to their setting -- it is unclear why the duration of an event is going to provide context and in any case this seems like a minor technical contribution. 
\n\n- The use of a time mask does not appear \"more principled\" than previous work (Du et al., Mei & Eisner, Xiao et al.). Previous work uses the framework of temporal point processes in a principled way; the current work does not. I would encourage the authors to tone down their language.\n\n- The regularization proposed by the authors uses a Gaussian on the \"prediction error\" of the duration or just cross entropy on a discretization of the duration. Given the inconsistency in the definition of the duration (sometimes it is duration, sometimes it is interevent time), the resulting regularization may lead to unexpected/undesirable results. Moreover, it is unclear why the authors do not model the duration time with an appropriate distribution (e.g., Weibull) and add the log-likelihood of the durations under that distribution as regularization. \n\n- The difference in performance with respect to a single nontrivial baseline (the remaining baselines are trivial or versions of their own model) is minimal. Moreover, the authors fail to compare with other methods, e.g., the method by Mei & Eisner, which beats RMTPP. This is especially surprising since the authors mention such work in the related work and there is available source code at https://github.com/HMEIatJHU/neurawkes.", "The paper proposes a set of methods for using temporal information in event sequence prediction. Two methods for time-dependent event representation are proposed. Also, two methods for using next event duration are introduced.\n\nThe motivation of the paper is interesting and I like the approach. The proposed methods seem valid. My only concern is that the proposed methods do not outperform others much with some level of significance. More advanced models may be needed.\n\n", "Regarding the baseline methods, previous work (e.g., Du et al's) has compared the performance of RNN-based and parametric approaches such as Hawkes processes, which showed that RNN-based models outperformed other alternatives. Thus, we built on top of the understandings of previous work and focused this work on how to improve RNN-based approaches for time-based sequences. \n\nAdditional features such as \"time of day\" and \"day of week\" are indeed useful. However, these features are already well discretized (e.g., 24 hours a day and 7 days a week), and can be directly fed to the embedding layer. Our focus in this work is to explore a good representation for continuous time that does not have a good way for tokenization yet.\n\nReviewer 1 brought up a good point (#2) about time representation. In the revision, we added a brief discussion about the learned time representation by our TimeJoint method.\n", "We agree with the reviewer that the performance difference seems small. However, the accuracy gain from our methods, especially TimeJoint, is quite consistent across datasets. On two of the three public datasets, the accuracy improvement is statistically significant (p<0.05). None of the time-based methods seems to help on the third dataset (MIMIC). This indicates that using time might not always help. However, when it does, our methods such as TimeJoint enable a more efficient representation of time than simply using the scalar value in RNN models. We have updated the paper to include these results. We also found further tuning (e.g., projection size) enabled additional performance gain for our methods, which we will add in future revisions.", "We updated the paper to address some of the reviewers' comments. 
The major changes include the following.\n\n1. Clarified that our goal with this work is to develop time representation methods rather than a new RNN model. Our methods can enhance existing RNN models to deal with continuous time;\n2. Added more experimental results, including statistical significance for the performance with three public datasets;\n3. Updated Figure 3 for the performance with the three public datasets resulted from better tuning;\n4. Added Figure 4 to discuss learned time representation.", "1. R2 is right that point processes do not concern event duration. However, in many real world sequences, such as app usage, duration does exist. How long an app is used can carry important information about the nature of the event, e.g., a short versus a long YouTube watch. To deal with both duration and interval, we introduced an idle event so that both types of time spans can be represented as duration. With our framing, the nature of the “duration” depends on the event it is associated with. We feel this framing simplifies the handling of two types of time span. When duration is nonexistent in the sequence or does not matter for the domain, we don’t need to introduce idle events.\n\n2. As discussed above, the duration of an event can provide rich information about an event. For example, in modeling app usage, a longer use of the Map Navigation app implies a more extended driving, which might lead to a different follow-up app usage. In the medical domain, the time length of a symptom is critical for predicting future symptoms and identifying the underlying cause of an illness. “time mask” is based on previous work. But it is a useful application of the contextual masking idea to continuous time that has not been explored before, which contributes new empirical evidence.\n\n3. We have rephrased the sentence. It is not our intention to say our work is more principled. Rather, our work is focused on different aspects while previous work had a different focus. Previous work directly feeds the scalable value of time into the model. Our motivation is that because it is not clear how to properly represent time as input, we simply let the model learn the representation, similar to embeddings for words. Our approach is essentially proposing a new way to represent time instead of arguing for a completely new model. In fact, our approach can be indeed combined with previous work for better performance, e.g., using time-dependent event representation in Du’s or Mei’s model.\n\n4. Regularization can help here because it provides additional information of next event duration in the back propagation process, which is less relevant to how duration is defined (we addressed the duration-vs-interval question above). It is important to note that we do not use Gaussian distribution to model duration. Rather we use the Gaussian distribution to model prediction errors of duration. Previous work by Hinton and Camp (COLT ’93) have discussed this approach. In fact, we do not assume any distribution for duration time in this work. Rather we hope the model will learn that from the data.\n\n5. Our work was developed in parallel to Mei & Eisner’s work, which was discovered as an upcoming NIPS 2017 paper right before the submission. Since both Mei & Eisner’s work and RMTPP use time directly as a scalar value, we can assume our time embedding approach would bring additional accuracy to both of these methods. 
Again, we want to emphasize that we are NOT proposing a new point process model but we contribute techniques that are add-on to the existing work. We look into ways to enhance event time representation with embedded times and improve training with time regularization. \n" ]
[ 4, 4, 5, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ "iclr_2018_SJDJNzWAZ", "iclr_2018_SJDJNzWAZ", "iclr_2018_SJDJNzWAZ", "Sye5BLIyG", "rk0LDknlz", "iclr_2018_SJDJNzWAZ", "S1edG-sxf" ]
iclr_2018_SkHl6MWC-
Regularization Neural Networks via Constrained Virtual Movement Field
We provide a novel thinking of regularization neural networks. We smooth the objective of neural networks w.r.t small adversarial perturbations of the inputs. Different from previous works, we assume the adversarial perturbations are caused by the movement field. When the magnitude of movement field approaches 0, we call it virtual movement field. By introducing the movement field, we cast the problem of finding adversarial perturbations into the problem of finding adversarial movement field. By adding proper geometrical constraints to the movement field, such smoothness can be approximated in closed-form by solving a min-max problem and its geometric meaning is clear. We define the approximated smoothness as the regularization term. We derive three regularization terms as running examples which measure the smoothness w.r.t shift, rotation and scale respectively by adding different constraints. We evaluate our methods on synthetic data, MNIST and CIFAR-10. Experimental results show that our proposed method can significantly improve the baseline neural networks. Compared with the state of the art regularization methods, proposed method achieves a tradeoff between accuracy and geometrical interpretability as well as computational cost.
workshop-papers
R1 thought the proposed method was novel and the idea interesting. However, he/she raised concerns with consistency in the experimental validation, the trade-off between accuracy and running time, and the positioning/motivation, specifically the claim about interpretability. The authors responded to these concerns, and R1 upgraded their score. R2 didn’t raise major concerns or strengths. R3 questioned the novelty of the work and the experimental validations. All reviewers raised concerns with the writing. Though I think the work is interesting, issues raised about experiments and writing make me hesitant to go against the overall recommendation of the reviewers, which is just below the bar. I think this is a paper that could make a good workshop contribution.
val
[ "HJY_d5iEG", "HJ6Svr8Ez", "H1g6cQsxM", "Sy_2ES9lG", "ryfzqnhxz", "S1cdKD7Gz", "SJenYvmfM", "B16EYDXGM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for your higher score.\n\nWe have tried two ways of setting wi's:\n (1) Fixed values, i.e. wi = 1/3.\n (2) Sampling wi' s from uniform distribution then normalizing them to a vector with unit length.\nAnd we find (2) is better than (1). \n\nHere are potential reasons: \nFirst, when we linearly combine different regularization operators, we get new operators in some sense. And when the operators are combined with different weights, we may say they are all different operators though they are highly related. Thus, compared with the fixed wi's, we can construct more regularization operators by randomly sampling wi's.\nSecond, randomly sampling wi's may introduce noise into the regularization terms. As shown in previous works, the generalization ability can be improved by introducing noise into the model in a proper way.\n\nOther choices, such as sampling wi's from different distributions may be better. But what we want to emphasize is that we believe randomly sampling wi's is better than fixing wi's.", "As, the authors have carefully responded to my comments I upgrade my score to 5. However, for eq. (23) it would be interesting to explain the motivation behind the choice of random sampling wi’s as a uniform random variables and compare to other ponderation choices. ", "This paper tackles the overfitting problem when training neural networks based on regularization technique. More precisely, the authors propose new regularization terms that are related to the underlying virtual geometrical transformations (shift, rotation and scale) of the input data (signal, image and video). By formalizing the geometrical transformation process of a given image, the authors deduce constraints on the objective function which depend on the magnitude of the applied transformation. The proposed method is compared to three methods: one baseline and two methods of the literature (AT and VAT). The comparison is done on three datasets (synthetic data, MNIST and CIFAR10) in terms of test errors (for classification problems) and running time.\n\nThe paper is well formalized and the idea is interesting. The regularization approach is novel compared to the methods of the literature. \n\nMain concerns: \n1)\tThe experimental validation of the proposed approach is not consistent:\nThe description of the baseline method is not detailed in the paper. \nA priori, the baseline should naturally be the method without your regularization terms.\nBut, this seems to be contrary with what you displayed in Figure 3. \nIndeed, in Figure 3, there is three different graphs for the baseline method (i.e., one for each regularization term). It seems that the baseline method depends on the different kinds of regularization term, why? Same question for AT and VAT methods. \nIn practice, what is the magnitude of the perturbations? \nPlease, explain the axis of all the figures. \nPlease, explain how do you mix your different regularization terms in your method that you call VMT-all? \nAll the following points are related to the experiment for which you presented the results in Table 2: \nPlease, provide the results of all your methods on the synthetic dataset (only VMT-shift is provided). What is VMF? Do you mean VMT? \nFor the evaluations, it would be more rigorous to re-implement also the state-of-the-art methods for which you only give the results that they report in their paper. Especially, because you re-implemented AT with L-2 constraint, so, it seems straightforward to re-implement also AT with L-infinite constraint. 
Same remark for the dropout regularization technique, which is easy to re-implement on the dense layers of your neural networks, within the Tensorflow framework. \nAs you mentioned, your main contribution is related to running time, thus, you should give the running time in all experiments. \n\n2)\tThe method seems to be a tradeoff between accuracy and running time:\nThe VAT method performs better than all your methods in all the datasets. \nThe baseline method is faster than all the methods (Table 3). \nThis being said, the proposed method should be clearly presented in the paper as a tradeoff between accuracy and running time. \n3)\tThe positioning of the proposed approach is not so clear: \nAs mentioned above, your method is a tradeoff between accuracy and running time. But you also mentioned (top of page 2) that the contribution of your paper is also related to the interpretability in terms of ‘’Human perception’’. Indeed, you clearly mentioned that the methods of the literature lacks interpretability. You also mentioned that your method is more ‘’geometrically’’ interpretable than methods of the literature. The link between interpretability in terms of “human perception” and “geometry” is not obvious. Anyway, the interpretability point is not sufficiently demonstrated, or at least, discussed in the paper. \n\n4)\tMany typos in the paper : \nSection 1: “farward-backward”\nSection 2.1: “we define the movement field V of as a n+1…”\nSection 2.2: “lable” - “the another” - “of how it are generated” – Sentence “Since V is normalized.” seems incomplete… - \\mathcal{L} not defined - Please, precise the simplifications like \\mathcal{L}_{\\theta} to \\mathcal{L} \nSection 3: “DISCUSSTION”\nSection 4.1: “negtive”\nFigure 2: “negetive”\nTable 2: “VMF”\nSection 4.2: “Tab 2.3” does not exist \nSection 4.3: “consists 9 convolutional” – “nerual networks”…\nPlease, always use the \\eqref latex command to refer to equations.\nThere is many others typos in the paper, so, please proofread the paper…\n", "This paper proposes to regularize neural networks by the invariance to certain types of transforms. This is framed into a minimax problem, which yields a closed form regularization when constrained to simple types of transforms. \n\nThe basic idea of using derivative to measure sensitivity has been widely known, and is related to tangent propagation and influence function. Please comment on the connection and difference. What is the substantial novelty of this current approach? \n\nThe empirical results are not particularly impressive. The performance is not as good as (and seems significantly worse than) AT and VAT on MNIST. Could you provide an explanation? On CIFAR10, VMT-all is only comparable with VAT. Although VMT is faster than VAT, it seems not a significant advantage since is not faster in a magnitude. \n\nThe writing need to be significantly improved. Currently there are lot of typos and grammar errors, e.g., \\citep vs. \\citet; randon, abouve, batchszie; \\mathcal{Z}^n is undefined when it first appears.\n\nIn VMT-all, how do you decide the relative importance of the three different regularizations? \n\nIs Figure 3 the regularization on the training or testing set? Could you explain why it reflects generalization ability? ", "Summary:\nThe paper propose a method for generating adversarial examples in image recognition problems. 
The adversarial scheme is inspired by the one proposed by Goodfellow et al. 2015 (AT), which introduces small perturbations to the data in the direction that increases the error. Such perturbations are random (they have no structure) and lack interpretation for a human user. The proposal is to limit the perturbations to just three kinds of global motion fields: shift, centered rotation and scale (zoom in/out). Since the motions are small in scale, the authors use a first-order Taylor series approximation (as in classical optical flow). This approximation allows obtaining closed formulas for the perturbed examples; i.e. the correction factor of the back-propagation-computed derivatives w.r.t. the original example. As a result, the method is computationally efficient with respect to AT and the perturbations are interpretable. \nExperiments demonstrate that on the MNIST database no improvement in error reduction is obtained, only a reduction of the computational time. However, on a more general recognition problem conducted with the CIFAR-10 database, the use of the proposed method improves both the error and the computational time, when compared with AT and Virtual Adversarial Training. \n\nComments:\n\n1. The paper presents a series of typos: FILED (title), obouve, freedm, nerual; please check carefully.\n\n2. The derivation of eq. (13) should be explained. It could be said that (12) can be cast as an eigenvalue problem [for example: $ \max_{\tilde v} \| \nabla_p L^T \tilde v \|^2 \;\; \text{s.t.} \;\; \| \tilde v \| = 1 $] and that (13) is the largest eigenvalue of $ \nabla_p L \nabla_p L^T $.\n\n3. The improvement in the error results on the CIFAR-10 database is good enough to see merit in the proposed approach. Maybe other perturbations with closed formulas could be considered, and linear combinations of them", "Thanks for your comments.\n1) * For Fig.(3), the x-axis means the number of training epochs. The y-axis means the values of the regularization term on the test set. The baseline is trained without any regularization term. But we can still evaluate the value of the corresponding regularization term of the baseline on the test set. Same for AT and VAT. Fig.(3a), Fig.(3b) and Fig.(3c) show the values of R_shift, R_rotation and R_scale respectively, no matter what regularization terms the model is trained with. So the baseline on each dataset is the same. We describe the baseline on CIFAR-10 in Appendix A.\n\n* The magnitude of the perturbations changes over datasets. For AT and VAT, \varepsilon ranges from 0.01 to 10. For VMT, \lambda ranges from 0.005 to 5. We do grid-search over their range.\n\n* We explain how to mix regularization terms in eq.(23) in our updated paper.\n\n* VMT-rotation can't be applied to the synthetic dataset because there is no rotation operator for 1D signals. VMT-scale is not suitable for this dataset. We explain the reason at the end of section 4.1.\n\n* VMF means VMT. This is our writing mistake.\n\n* We don't re-implement AT-L_2 because the performance of AT-L_inf is slightly worse than AT-L_2 in previous literature. Now, we re-implement AT-L_inf on the Synthetic dataset and MNIST. We also re-implement dropout on the Synthetic dataset. For dropout on MNIST, we still use the result from the literature, because finding the optimal dropout rates for a 4-layer network requires lots of time and our preliminary results are inferior to the result from the literature. So we think this result can approximate the best performance for dropout on MNIST.\n\n* We give the training time on the Synthetic dataset and MNIST in appendix B. 
In fact, we think running time is a minor contribution to our work. See following comments.\n\n2) Yes, currently, VMT is a tradeoff between accuracy and running time as well as geometrical interpretability. And it has been clearly presented in our updated paper.\n\n3) In fact, when I first write this paper, I am struggling with the position of our method. Your comments make me think about it deeply. Now, we summarize our main as follows:\n\n* The assumption of \"small perturbations are caused by the virtual movement filed\" is a completely new idea in the literature of adversarial training or adversarial examples. By this assumption, we introduce data dependent constraints into the space of perturbations. And we cast the problem of finding perturbations into the problem of finding movement field.\n\n* We develop a general framework to design regularization terms for neural networks trained with lattice structured data, i.e. solving a min-max problem associated with the movement field. \n\n* To make above min-max problem easier, we introduce strong geometrical constraints into the movement field. Those constraints have two effects: first, it makes the adversarial movement field and the corresponding regularization term solved in closed-form which yields lower computational costs; second, it makes the obtained adversarial movement field has much more geometrical interpretability.\n\nThe word “human perception” may be used inappropriately in the paper. However, we think the link between \"interpretability\" and “geometry” is obvious. For example, for VMT-shift, we can see which direction of movement of an image is most likely to fool the network. We can also see why VMT-scale is not suitable for synthetic dataset. (See section 4.1).\n\n4) Sorry about the typos. We check them carefully in our updated paper.", "Thanks for your comments.\n(1) For tangent propagation, the transformations are predefined while the transformations in VMT are defined by the movement field and are further obtained by solving a constrained min-max problem though the freedom of those transformations is low currently. \n\nFor influence function, there are no constraints in the space of perturbations. And the smoothness w.r.t small perturbations is mainly used to analyze the behaviors of a trained model instead of regularizing it during training. \n\nThe substantial novelties of our work are: \n* The assumption of “small perturbations are caused by the virtual movement filed” is a completely new idea in the literature of adversarial training or adversarial examples. By this assumption, we introduce data dependent constraints into the space of perturbations. And we cast the problem of finding perturbations into the problem of finding movement field.\n\n* We develop a general framework to design regularization terms for neural networks trained with lattice structured data, i.e. solving a min-max problem associated with the movement field. Close-form terms are obtained by introducing proper geometrical constraints to the movement field.\n\nWe have added above content on the introduction section and the related work section in our updated paper.\n\n(2) Although our work is inspired by AT and VAT, it is not an incremental work of AT or VAT. It is unfair to say VMT must be better than AT or VAT. In fact, we focus more on geometrical interpretability and computational efficiency. Our method achieves a tradeoff between accuracy and those two factors compared with AT and VAT. 
As a regularization method, VMT achieves similar or better performance on the synthetic dataset and MNIST compared with dropout, a widely used regularization technique for neural networks. This supports the effectiveness of VMT.\n\nAs mentioned in the paper, VMT finishes the training process in a single forward-backward loop while AT and VAT need at least two forward-backward loops (two in practice). Thus it is impossible to be faster by an order of magnitude. However, we still think such a reduction of computational cost is valuable when we train big neural networks. In fact, we think running time is a minor contribution to our work.\n\nMy personal view of the \"bad\" performance on MNIST is that: In VMT, we use finite differences to approximate the directional gradient of the inputs. This requires the local smoothness property of the inputs. But for MNIST, we think the local smoothness property is not well satisfied (it looks like binary values). AT and VAT do not rely on the local smoothness property. Thus VMT is inferior to AT and VAT on MNIST.\n\nWe believe our method can be further improved if we design more regularization terms by changing the constraints and combining those terms. Such combination is cheap in practice. See the performance and training time of VMT-all in Tab 3. Also, it is possible to design targeted regularization terms based on the properties of the data. \n\n(3) We define Z in eq.(2) in our updated paper.\n\n(4) We randomly combine the three terms in each batch. So, on average, the relative importance is equal. See eq.(23).\n\n(5) The values of Fig(3) are on the test set. The term \"generalization ability\" was used loosely. What we want to say is: the values of all regularization methods are lower than the baseline and the performance of all regularization methods is better than the baseline. We measure \"generalization ability\" by the difference between the empirical risks on the test set and training set in our updated paper. See section 4.1.", "Thanks for your recognition of our work. \n(1) Sorry about the typos. We have checked them carefully in our updated paper.\n\n(2) We provide the derivation of eq.(13) in Appendix C. It looks unnecessary to cast eq.(12) as an eigenvalue problem because there is just one unknown variable in eq.(12).\n\n(3) In fact, VMT-all is a linear combination of the other three terms (See eq.(23)) and it achieves better performance compared with the individual terms. We can expect that the performance could be further improved if we design and combine more closed-form terms, and such combination is cheap in practice (See running time of VMT-all in Tab 3). However, we think 'shift', 'rotation' and 'scale' used in the paper are enough to show the merit of our method." ]
[ -1, -1, 5, 5, 6, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1 ]
[ "HJ6Svr8Ez", "S1cdKD7Gz", "iclr_2018_SkHl6MWC-", "iclr_2018_SkHl6MWC-", "iclr_2018_SkHl6MWC-", "H1g6cQsxM", "Sy_2ES9lG", "ryfzqnhxz" ]
iclr_2018_rJWrK9lAb
Autoregressive Generative Adversarial Networks
Generative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively.
workshop-papers
The reviewers (all experts in this area) appreciated the novelty of the idea, though they felt that the experimental results (samples and Inception scores) did not provide convincing evidence value of this method over already established techniques. The authors responded to the concerns but were not able to address the issue of evaluation due to time constraints. The idea is likely sound but evaluation does not meet the bar, it may make a good contribution as a workshop paper.
train
[ "S1klbTulM", "BkuDb6tgf", "S13bO3cez", "ByWsMXCfz", "S1Ma2AnmM", "rJWGLRJmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a new GAN model whereby the discriminator (rather than being a binary classifier) consists of an encoding network followed by an autoregressive model on the encoded features. The discriminator is trained to maximize the probability of the true data and minimize the probability of the generated samples. The authors also propose a version that combines this autoregressive discriminator with a patchGAN discriminator. The authors train this model on cifar10 and stl10 and show reasonable generations and inception scores, comparing the latter with existing approaches. \n\nPros: This discriminator architecture is well motivated, intuitive and novel. the samples are good (though not better than existing approaches as far as I can tell). The paper is also well written and easy to read.\n\nCons: As is commonly the case with GAN models, it is difficult to assess the advantage of this approach over exiting techniques. The samples generated form this model look fine, but not better than existing samples. The inception scores are ok, but seem to be outperformed by other models (though this shouldn't necessarily be taken as a critique of the approach presented here as inception scores are an approximation to what we care about and we should not be trying to tune models for better inception scores). \n\nDetailed comments:\n- In terms of experiments, I think think paper is missing the following: (1) An additional dataset -- cifar and stl10 are very similar, a face dataset for example would be good to see and is commonly used in GAN papers. (2) the authors claim their method is stable, so it would be good to see quantitative results backing this claim, i.e. sweeps over hyper-parameters / encoding/generator architectures with evaluations for different settings. \n- the idea of having some form of recurrent (either over channels of spatially) processing in the discriminator seems more general that the specific proposal given here. Could the authors say a bit more about what they think the effects of adding recurrence in the discriminator vs optimizing the likelihood of the features under the autoregressive model?\n\nUltimately, the approach is interesting but there is not enough empirical evaluations.\n", "This work attempts to improve the global consistency of samples generated by generative adversarial networks by replacing the discriminator with an autoregressive model in an encoded feature space. The log likelihood of the classification model is then replaced with the log likelihood of the feature space autoregressive model. It's not clear what can be said with respect to the convergence properties of this class of models, and this is not discussed.\n\nThe method is quite similar in spirit to Denoising Feature Matching of Warde-Farley & Bengio (2017), as both estimate a density model in feature space -- this method via a constrained autoregressive model and DFM via an estimator of the score function, although DFM was used in conjunction with the standard criterion whereas this method replaces it. This is certainly worth mentioning and discussing. In particular the section in Warde-Farley & Bengio regarding the feature space transformation of the data density seems quite relevant in this work.\n\nUnfortunately the only quantitative measurements reporter are Inception scores, which is known to be a poor measure (and the scores presented are not particularly high, either); Frechet Inception distance or log likelihood estimates via AIS on some dataset would be more convincing. 
On the plus side, the authors report an average over Inception scores for multiple runs. On the other hand, it sounds as though the stopping criterion was still qualitative.", "This paper proposes an alternative GAN formulation that replaces the standard binary classification task in the discriminator with an autoregressive model that attempts to capture discriminative feature dependencies on the true data samples.\n\nSummary assessment:\nThe paper presents a novel perspective on GANs and an interesting conjecture regarding the failure of GANs to capture global consistency. However, the experiments do not directly support this conjecture. In addition, both qualitative and quantitative results do not provide significant evidence of the value of this technique over and above the established methods in the literature.\n\nThe central motivation of the method proposed in the paper is a conjecture that the lack of global consistency in GAN-generated samples is due to the binary classification formulation of the discriminator. While this is an interesting conjecture, I am somewhat unconvinced that this is indeed the cause of the problem. First, I would argue that other high-performing auto-regressive models such as PixelRNN and PixelCNN also seem to lack global consistency. This observation would seem to violate this conjecture. More importantly, the paper does not show any direct empirical evidence in support of this conjecture. \n\nThe authors make a very interesting observation in their description of the proposed approach. In discussing an initial variant of the model (Eqns. (5) and (6) and text immediately below), the authors state that attempting to maximize the negative log likelihood of the auto-regressive model on the generated samples results in unstable training. I would like to see more discussion of this point as it could have some bearing on the difficulty of GANs in modeling sequential data in general. Does the failure occur because the auto-regressive discriminator is able to \"overfit\" the generated samples?\n\nAs a result of the observed failure of the formulation given in Eqns. (5) and (6), the authors propose an alternative formulation that explicitly removes the negative likelihood maximization for generated samples. As a result, the only objective for the auto-regressive model is an attempt to maximize the log-likelihood of the true data. The authors suggest that this should be sufficient to provide a reliable training signal for the generator. It would be useful if the authors showed a representation of these features (perhaps via T-SNE) for both true data and generated samples. \n\nEmpirical results:\nThe authors' experiments show samples (qualitative comparison) and inception scores (quantitative comparison) for 3 variants of the proposed model and compare these to methods in the literature. The comparisons show the proposed model performs well, but does not exceed the performance of many of the existing methods in the literature.\n\nAlso, I fail to observe significantly more global consistency for these samples compared to samples of other SOTA GAN models in the literature. Again, there is no attempt made by the authors to make this direct comparison of global consistency either qualitatively or quantitatively. \n\nMinor comment:\nI did not see where PARGAN was defined. Was this the combination of the Auto-regressive GAN with Patch-GAN?\n", "Thanks for your review and feedback. We will include Denoising Feature Matching in our paper and make a clear comparison. 
Even though Denoising Feature Matching uses density estimation in the latent space, there are major differences which make the learning dynamics of our model totally different from theirs. (i) As you mentioned, their method is complementary to the GAN objective while our method can be learned standalone. (ii) More importantly, their discriminator (encoder + classifier) is trained as in the original GAN objective, which means that features learned from the data distribution are based on the classifier's feedback, not on the density model's. This crucial difference makes both works different from one another. (iii) In our model, feature co-occurrences are modeled explicitly. (iv) The motivations for both works are totally different.\n\nUnfortunately, we could not include a second score (FID) in the revision due to time limitations.\n", "Thanks for your review and feedback. (1) We have included the CelebA dataset at 64x64 resolution with the SW-ARGAN objective. (2) We did not claim stability over various architectures or hyperparameters. For the settings that we mention in the paper, the method works well without mode collapse or training instability. Also, our model is fairly simple and does not use any trick (like in improvedGAN) to improve performance. For your point (3), our autoregressive model can also be implemented with a CNN similar to PixelCNN, so it is not a specific proposal about recurrent modeling. One possible benefit of using autoregression instead of a recurrent discriminator is that it takes more bits of information from the objective (similar to EBGAN) instead of a single score from the last time step (real/fake score).", "Thanks for your review and feedback. PixelRNN and PixelCNN are pixel prediction methods. Since adjacent pixels in images are highly correlated and pixel values can be captured by local image statistics, such models might be using most of their capacity to model local information instead of global information. However, in our case, autoregressive modeling is in the latent space and adjacent feature values are less correlated than adjacent pixel values. Also, high-level abstract representations are globally related, such as the co-occurrence of different object parts in a scene. As a result, PixelRNN's and PixelCNN's lack of global consistency is not necessarily about autoregressive modeling but more about their pixel-level modeling.\n\nMaximizing Eqn. (5) by updating \"R\"'s parameters is unstable because the second term in the equation is unbounded. We mentioned this in the paper but did not go into detail. When the second term is unbounded, its gradient gets way bigger than the first term's gradient. As a result, optimization cares mostly about the second term, which means it decreases the probability of generated features while neglecting to increase the probability of real features. We see this phenomenon exactly in our experiment. After some iterations, the error of the first term starts to increase because the gradients care mostly about the second term. We tried a simple method to overcome this by using a margin loss, as in EBGAN, in the second term, which makes the second term bounded. However, this trick did not provide better results than simply discarding the second term from R's objective. We do not think it has anything to do with sequential data; rather, the issue is the term being unbounded.\n\n\"The authors suggest that this should be sufficient to provide a reliable training signal for the generator\". Empirical evidence, both qualitative and quantitative, shows that this is a reliable training signal for the generator. 
Even though the auto-regressor is not adversarial, the encoder is adversarial, which satisfies the distinguishability of real and fake samples. Our intuition is that as the auto-regressor fits only real features, it discovers feature co-occurrence statistics in the real data distribution. Repelling from the fake data distribution is not necessary since it is already satisfied by the encoder. As mentioned previously, a margin loss in the second term is not better than discarding it totally.\n\nFor Empirical results: We see that we emphasized global inconsistency a lot in the paper; however, it is just an observation about what is lacking in current GAN models and why it might be happening. The general theme of our model is learning a generative model by using feature co-occurrence statistics in the real data distribution which are not found in the generative distribution. Our C-ARGAN and S-ARGAN can learn both spatial layout and feature layout. Even though our model is not better than other GAN models, it is competitive on the DCGAN architecture and can be further improved with more advanced autoregressive modeling, as we mentioned in the conclusion section.\n\nSorry for the PARGAN confusion. It is simply the summation of the Auto-regressive GAN with Patch-GAN without any hyperparameters. It will be included in the revision." ]
[ 5, 3, 5, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1 ]
[ "iclr_2018_rJWrK9lAb", "iclr_2018_rJWrK9lAb", "iclr_2018_rJWrK9lAb", "BkuDb6tgf", "S1klbTulM", "S13bO3cez" ]
iclr_2018_Hy1d-ebAb
Learning Deep Generative Models of Graphs
Graphs are fundamental data structures required to model many important real-world data, from knowledge graphs, physical and social interactions to molecules and proteins. In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest. After learning, these models can be used to generate samples with similar properties as the ones in the dataset. Such models can be useful in a lot of applications, e.g. drug discovery and knowledge graph construction. The task of learning generative models of graphs, however, has its unique challenges. In particular, how to handle symmetries in graphs and ordering of its elements during the generation process are important issues. We propose a generic graph neural net based model that is capable of generating any arbitrary graph. We study its performance on a few graph generation tasks compared to baselines that exploit domain knowledge. We discuss potential issues and open problems for such generative models going forward.
workshop-papers
Predicting graphs is an interesting and important direction, and there exist essentially no (effective) general-purpose techniques for this problem. The idea of predicting nodes one by one, though not entirely surprising, is interesting and the approach makes sense. Unfortunately, I (and some of the reviewers) am less convinced by the evaluation:\n- For example, the evaluation on syntactic parsing of natural language is very weak. First of all, the metrics used -- perplexity and exact match -- are non-standard and problematic (e.g., optimizing exact match would largely correspond to ignoring longer sentences where predicting the entire tree is unrealistic). Also, the exact match scores are very low (~30%, whereas 45+ was achieved by models back in 2010).\n- A reviewer had, I believe, valid concerns about the comparison with GrammarVAE, which were not fully addressed.\nOverall, I believe that it is interesting work, which regretfully cannot be published as a conference paper in its current form.\n+ important / under-explored problem\n+ a reasonable (though maybe not entirely surprising / original) approach\n- issues with evaluation
train
[ "Sy6ZK8IEM", "S1crSKYgM", "ByZ8Tx9ez", "HkCWGa0xM", "SJ0xpMM4f", "HJH-GmaXz", "Hk9Ol76mG", "Hk8ilROQG", "B1GMUvUGf", "Bk1RrPUMz", "r1kajbsyG", "HkzAQO_kM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "public" ]
[ "Thanks for adding this comparison against the Grammar VAE model. I think it certainly allows for a better placement of your proposed model w.r.t related work.\n\nWhile I hoped this comparison would tell a clear story about whether a) decoding with a grammar or b) decoding directly into a graph representation would work better, it seems that the results that you report raise a few more questions.\n\nIt is unclear to me why your proposed LSTM decoder baseline (without respecting SMILES grammar) should work so much better (in terms of number of valid molecules) than the recurrent decoder of the grammar VAE. You mention that you suspect that the lack of full auto-regressiveness (output is not fed back as input to the next time step) might be a distinguishing factor. Since there is such a significant difference in model performance, this would certainly have to be experimentally verified in order to be an acceptable explanation for this difference. What happens if your LSTM decoder is trained with teacher forcing instead (like the original SMILES VAE paper https://arxiv.org/pdf/1610.02415v1.pdf) or without any new input at each time step? Will it similarly degrade performance and explain the difference?\n\nAll in all, I stick to my original evaluation of the paper as I think the paper offers a promising approach for generating (small) graphs which certainly deserves attention by the community. The experimental evaluation is extensive while some points (see above) still require clarification. I hope the authors can address my last few questions should the paper be accepted (or for some later later venue).", "The paper introduces a generative model for graphs. The three main decision functions in the sequential process are computed with neural nets. The neural nets also compute node embeddings and graph embeddings and the embeddings of the current graph are used to compute the decisions at time step T. The paper is well written but, in my opinion, a description of the learning framework should be given in the paper. Also, a summary of the hyperparameters used in the proposed system should be given. It is claimed that all possible types of graphs can be learned which seems rather optimistic. For instance, when learning trees, the system is tweaked for generating trees. Also, it is not clear whether models for large graphs can be learned. The paper contain many interesting contributions but, in my opinion, the model is too general and the focus should be given on some retricted classes of graphs. Therefore, I am not convinced that the paper is ready for publication at ICLR'18.\n\n* Introduction. I am not convinced by the discussion on graph grammars in the second paragraph. It is known that there does not exist a definition of regular grammars in graph (see Courcelle and Engelfriet, graph structure and monadic second-order logic ...). Moreover, many problems are known to be undecidable. For weighted automata, the reference Droste and Gastin considers weighted word automata and weighted logic for words. Therefore I does not seem pertinent here. A more complete reference is \"handbook of weighted automata\" by Droste. Also, many decision problems for wighted automata are known to be undecidable. I am not sure that the paragraph is useful for the paper. A discussion on learning as in footnote 1 shoud me more interesting.\n* Related work. I am not expert in the field but I think that there are recent references which could be cited for probablistic models of graphs.\n* Section 3.1. 
Constraints can be introduced to impose structural properties of the generated graphs. This leads to the question of cheating in the learning process.\n* Section 3.2. The functions f_m and g_m for defining graph embedding are left undefined. As the graph embedding is used in the generating process and for learning, the functions must be defined and their choice explained and justified.\n* Section 3. As said before, a general description of the learning framework should be given. Also, it is not clear to me how the node and graph embeddings are initialized and how they evolve along the learning process. Therefore, it is not clear to me why the proposed updating framework for the embeddings allows the generation of decision functions adapted to the graphs to be learned. Consequently, it is difficult to see the influence of T. Also, it should be said whether the node embeddings and graph embeddings for the output graph can be useful.\n* Section 3. A summary of all the hyperparameters should be given.\n* Section 4.1. The number of steps is not given. Do you present the same graph multiple times? Why T=2 and not 1 or 10?\n* Section 4.2. From table 2, it seems that all permutations are used for training, which is rather large for molecules of size 20. Do you use tweaks in the generation process?\n* Section 4.3. The generation process is adapted for generating trees, which seems to be cheating. Again the choice of T seems ad hoc and based on computational burden.\n* Section 5 should contain a discussion on complexity issues because it is not clear how the model can learn large graphs.\n* Section 5. The discussion on the difficulty of training should be emphasized and connected to the --missing-- description of the model architecture and its hyperparameters.\n* Acronyms should be expanded at their first use.", "The authors proposed a graph neural network based architecture for learning generative models of graphs. Compared with traditional learners such as LSTMs, the model is better at capturing graph structures and provides a flexible solution for training with arbitrary graph data. The presentation is clear, with detailed empirical studies. I support its acceptance.\n\nThe draft does need some improvements and here are my suggestions.\n1. Figure 1 could be improved using a concrete example like in Figure 6. If space allows, an example of different orderings leading to the same graph would also help.\n\n2. More details on how node embedding vectors are initialized. How do different initializations affect results? Why are nodes at different stages with the same initialization problematic?\n\n3. More details on how conditioning information is used, especially for the attention mechanism used later in parse tree generation.\n\n4. The sequence ordering is important. While the draft avoids the issue theoretically, it does have interesting results in the molecule generation experiment. I suggest the authors at least discuss the empirical over-fitting problem with respect to ordering.\n\n5. In Section 4.1, the choice of the ER random graph as a baseline is too simplistic. It does not provide a meaningful comparison. A better generative model for cycles and trees could help.\n\n6. When comparing training curves with the LSTM, it might be helpful to also include the complexity comparison of each iteration.", "The authors introduce a sequential/recurrent model for generation of small graphs. The recurrent model takes the form of a graph neural network. 
Similar to RNN language models, new symbols (nodes/edges) are sampled from Bernoulli or categorical distributions which are parameterized by small fully-connected neural networks conditioned on the last recurrent hidden state. \n\nThe paper is very well written, nicely structured, provides extensive experimental evaluation, and examines an important problem that has so far not received much attention in the field.\n\nThe proposed model has several interesting novelties (mainly in terms of new applications/experiments, and being fully auto-regressive), yet also shares many similarities with the generative component of the model introduced in [1] (not cited): Both models make use of (recurrent) graph neural networks to learn intermediate node representations, from which they predict whether new nodes/edges should be added or not. [1] speeds this process up by predicting multiple nodes and edges at once, whereas in this paper, such a multi-step process is left for future work. Training the generative model with fixed ground-truth ordering was similarly performed in [1] (“strong supervision”) and is thus not particularly novel.\n\nEqs.1-3: Why use recurrent formulation in both the graph propagation model and in the auto-regressive main loop (h_v -> h_v’)? Have the authors experimented with other variants (dropping the weight sharing in either or both of these steps)?\n\nOrdering problem: A solution for the ordering problem was proposed in [2]: learning a matching function between the orderings of model output and ground truth. A short discussion of this result would make the paper stronger.\n\nFor chemical molecule generation, a direct comparison to some more recent work (e.g. the generator of the grammar VAE [3]) would be insightful.\n\nOther minor points:\n- In the definition of f_nodes: What is p(y)? It would be good to explicitly state that (boldface) s is a vector of scores s_u (or score vectors, in case of multiple edge types) for all u in V. 
\n- The following statement is unclear to me: “but building a varying set of objects is challenging in the first place, and the graph model provides a way to do it.” Maybe this can be substantiated by experimental results (e.g. a comparison against Pointer Networks [4])?\n- Typos in this sentence: “Lastly, when compared using the genaric graph generation decision sequence, the Graph architecture outperforms LSTM in NLL as well.”\n\nOverall I feel that this paper can be accepted with some revisions (as discussed above), as, even though it shares many similarities with previous work on a very related problem, it is well-written, well-presented and addresses an important problem.\n\n[1] D.D. Johnson, Learning Graphical State Transitions, ICLR 2017\n[2] R. Stewart, M. Andriluka, and A. Y. Ng, End-to-End People Detection in Crowded Scenes, CVPR 2016\n[3] M.J. Kusner, B. Paige, J.M. Hernandez-Lobato, Grammar Variational Autoencoder, ICML 2017\n[4] O. Vinyals, M. Fortunato, N. Jaitly, Pointer Networks, NIPS 2015", "Thanks to the authors for the rebuttal and their modifications. In my opinion, the problem of generating large graphs remains. Also, the interplay between the intermediate node/graph representations and the generation process during learning remains unclear to me.", "Thank you for your review and for suggesting a comparison against grammar VAE.\n\nWe have tried grammar VAE on our dataset. Please see our latest comment above for more detail.", "We have tried the grammar VAE on our dataset and did a comparison with the results reported in our paper. The experiment was based on the published code for grammar VAE available here: https://github.com/mkusner/grammarVAE\n\nThe code did not work directly on our data, so we made a few tweaks:\n- The grammar they used was tailored to their dataset, and did not directly work on our dataset, i.e. many of our molecules cannot be generated by their grammar. We have added a few more grammar rules to make it work on our dataset.\n- They used a variant of the VAE loss where the weighting of the reconstruction part and the KL part of the loss did not directly correspond to the standard ELBO bound, so we changed it to make the VAE bound comparable to our likelihood estimates.\n- No sampling code was provided in the codebase, so we added our own implementation based on their code. The sampling process generates random latents from the prior N(0,1) and then pass them through the decoder with all the grammar handling described in the Algorithm 1 in the grammar VAE paper, this part was used for evaluation.\n\nAfter training, the grammar VAE achieves a negative ELBO of 11.98 on the test set, but out of 100,000 samples generated from the trained model, only 29.56% are valid SMILES strings. Note the 11.98 ELBO bound is considerably better than reported in our paper with the best numbers around 20, but the fraction of valid SMILES strings is a lot worse than our results where it is easy to get over 90% valid, but this result is on par with the reported numbers in the grammar VAE paper where around 31(+/- 7) % are valid after Bayesian optimization.\n\nThese results are a bit surprising, we try to interpret these results with the following explanations:\n- Our graph model and the LSTM baseline are capable of modeling a wider class of molecules than the grammar VAE due to the limited capability of the grammar. The grammar used in grammar VAE is a context-free grammar with a set of simple expansion rules, which is enough for modeling our datasets. 
But our models would still assign some probability to more complicated graphs (those with nested loops, for example), therefore leading to a lower likelihood number.\n- The grammar offers very strong domain knowledge that is very helpful for shaping the likelihood of a given string. In the implementation the grammar is used to zero-out inapplicable expansion rules and renormalize the rest, which can significantly boost the likelihood of a given sequence; our model and the LSTM baseline do not have access to any of this information.\n- However, when sampling, the grammar is still quite brittle as it can generate many invalid strings, for example unpaired digits for rings, and invalid valence for certain atoms. To capture these more complex behaviors, more complicated grammars need to be used, which requires significant expert knowledge. Our approach and the LSTM baseline do not use any such domain knowledge. In our evaluation the quality of the generated samples from the grammar VAE model is considerably worse than that of both our model and the LSTM baseline.\n- The decoder of the grammar VAE model is not fully auto-regressive, i.e. the output of one step is not fed back to the model as the input to the next step; making it fully auto-regressive may improve performance.\n\nOverall, our approach offers a very generic and powerful solution to the graph generation problem without the need for domain expertise, while the grammar VAE went the opposite route, which relies on expert knowledge (the grammar). Nevertheless, we can combine our graph generation model with domain knowledge, including grammars, to help us in the graph generation process and further improve performance.", "I would like to thank the authors for their detailed response and for adding a model description section in appendix A that clarifies implementation details. \n\nAs pointed out in my initial review, I still feel that the paper misses a direct experimental comparison against some related established work, which is why I am not willing to change my review score at this point. As mentioned in my review, I think it would be best to compare (or at least comment on why such a comparison was left out) against work such as the Grammar VAE (M.J. Kusner, B. Paige, J.M. Hernandez-Lobato, Grammar Variational Autoencoder, ICML 2017). ", "In the following we clarify a few other concerns raised by the reviewers:\n\nReviewer 4: comparison against Pointer Networks\n\nPointer networks provide a way to select and output items from a set of candidates. We used this pointer-style mechanism in our model in the node selection module f_nodes. Standard pointer nets assume a set of candidates is given, e.g. the input sequence as a set of candidate tokens in a seq2seq framework. In our model we learn to construct this set of candidates (a set of nodes in the graph) starting from an empty set, which is non-trivial.\n\nReviewer 2: model is too general, better focus on restricted classes of graphs\n\nThe primary goal of this work is to have a powerful generic model capable of generating arbitrary graphs. This is an important but not well-studied task, as recognized by Reviewers 3 and 4. For a long time we have had specific models designed for restricted classes of graphs, e.g. models of trees, and models that capture some properties of graphs like the random graph models discussed in the related work, but to our knowledge our work is the first generic model that is capable of generating any type of graph. 
The model is powerful and can adapt its graph generating behavior by learning from data. Comparing our proposed model to the previous graph generative models is in spirit analogous to the contrast between RNN language models and grammar-based or n-gram language models.\n\nReviewer 2: the model is tweaked for generating trees; this seems to be cheating\n\nIn the experiments in section 4.1, we used the exact same model to learn on three different datasets without any tweaking for each individual dataset, learning to generate cycles, trees and Barabasi-Albert graphs, and our proposed model can successfully adapt and generate graphs similar to each of these three datasets. In the parsing experiment in section 4.3, we removed the inner loop and always generate one edge for each new node. This simplified the model and introduced a bit more structure into our model, which resulted in a performance improvement. Note that the baselines we compared against also exploit the tree structure; in particular, the sequentialized trees encode the tree structure with opening and closing brackets, which is very effective, and this information is not available to the graph model as we trained exclusively on the very generic graph generating sequences.\n\nReviewer 2: discussion on graph grammars not clear\n\nThe questions about decidability in graph grammars are mostly orthogonal to the point of the paper. We included this discussion to provide context to the paper since graph grammars (of various classes) and automata have been widely used in attempts to formalize generative models of graphs. Courcelle’s undecidability/impossibility results are precisely why we are taking an alternative approach to modeling graphs in this work. \n", "We thank the reviewers for the thoughtful reviews and suggestions, and for recognizing the significance and novelty of this work. Graph generation is an important topic, and our work provides a generic framework that is capable of generating arbitrary graphs through learning from data.\n\nWe have updated the submission to address some of the comments we received so far, which helped us make this paper better, in particular:\n\n- We have added an entire section B in the appendix to describe model implementation details, which should help clarify confusions, as all reviewers raised this concern. In addition, we have added detailed hyperparameter settings for all tasks in appendix C to make the results more reproducible.\n\n- We added a reference to “Learning Graph State Transitions” [1] and discussed the relationship and differences between our work and [1] at the end of section 2. As reviewer 4 and the anonymous comment pointed out, both our work and [1] share some similarities. However, [1] mostly uses a graph as an intermediate representation to help solve reasoning tasks, while we aim to learn an unconditional or conditional probabilistic model of a distribution of graphs from a sample of representative graphs. As generative models of graphs, [1] assigns soft strengths for each node and edge, while in our generative model a node / edge in a sample either exists or does not exist. [1] also made a few strong assumptions about the graph generation process, while we don’t make any such assumptions. 
See the paper for more details.\n\n- We added a sentence at the end of the paragraph following equations (1)-(3) to address Reviewer 4’s comments on weight sharing, explaining that the parameters in different rounds of propagation don’t have to be tied, and in the experiments we always use different parameters in different propagation rounds which empirically is consistently better than tied weights. Reviewer 4 also suggested we may drop weight sharing in the outer recurrent loop as well, but this is hard as the graph generating sequences are not fixed length sequences. It is unclear how dropping weight sharing would work here.\n\n- We added some extra discussion on learning an ordering to the last paragraph of section 3.4. We thank Reviewer 4 for pointing out the related work of [2]. [2] described a way to match a set of ground truths to a sequence of candidates generated by a model, which is related to learning an ordering. We have added this reference in the paper. Applying such a matching-based solution seems challenging in our setting though, as it is unclear how this can be used to learn a distribution over graphs, and we don’t have a clear distance metric between the generated graph components and the reference graph. Learning such a distance metric by itself seems to be a nontrivial task. We are aware of some other literature on learning an ordering / permutation, in particular from the learning to rank community, we have added a few other references in this direction and hope this can provide some alternative insights on this problem.\n\n- We modified figure 1 and added another possible graph generating sequence to figure 6, which Reviewer 3 suggested could make the presentation clearer.\n\n- We added a paragraph discussing the effect of fitting the fixed canonical ordering to the end of Appendix C.2. As reviewer 3 pointed out, our model may overfit to a particular ordering if it is always trained with that ordering. In the experiments we do observe that the model assigns higher probability to the canonical ordering it is being trained on, and much less probability to other orderings. However, in some cases the canonical ordering does not have the highest probability under a trained model, as can be also seen from Table 3, where the likelihood under fixed ordering (the ordering being trained on) is not always the same as the likelihood under the best possible ordering. This indicates there may be potential in learning an ordering improving the canonical one.\n\n- We changed the paragraph on the “dependence on T” to focus more on the challenges w.r.t. scalability, as Reviewer 2 mentioned this could make the paper clearer.\n\n- We changed a few numbers in Table 2 and 3 to reflect our latest results after the deadline, which does not change the overall conclusion on the comparison between different approaches.\n\nWe will try to add more experimental results to the paper when they are ready.\n\n[1] Learning Graph State Transitions. Daniel D Johnson. ICLR 2017.\n[2] R. Stewart, M. Andriluka, and A. Y. Ng, End-to-End People Detection in Crowded Scenes. CVPR 2016\n", "Thanks for the comment and bringing up this related paper. 
We will update our paper with more discussion and citations to related work (we are not allowed to make changes to our submission at the moment).\n\nThe main difference between our work and Johnson (2017) is that our goal in this paper is to learn and represent unconditional or conditional densities on a space of graphs given a representative sample of graphs, whereas Johnson is primarily interested in using graphs as intermediate representations in reasoning tasks. However, Johnson (2017) does offer a probabilistic semantics for their graphs (the soft, real-valued node and connectivity strengths). But, as a generative model, Johnson (2017) did make a few strong assumptions about the generation process, e.g. a fixed number of nodes for each sentence, independent probability for edges given a batch of new nodes, etc., while our model doesn't make any of these assumptions.\n\nOn the other hand, as we are modeling graph structures, the samples from our model are graphs where an edge or node either exists or does not exist; whereas in Johnson (2017) the graph components, e.g. the existence of a node or edge, are all soft, and it is this form of soft node / edge connectivity that has been used for other reasoning tasks. Dense, soft representations may be good for some applications, while sparse discrete graph structures may be good for others. Potentially, our graph generative model can also be used in an end-to-end pipeline to solve prediction problems, like Johnson (2017).", "I've enjoyed reading this paper, but I'm wondering if the authors are aware of \"Learning Graphical State Transitions\" (Johnson, ICLR'17 oral). The work presented here feels like a generalization, but it shares many ideas with the earlier paper, and a discussion of the differences would definitely be very helpful." ]
[ -1, 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HJH-GmaXz", "iclr_2018_Hy1d-ebAb", "iclr_2018_Hy1d-ebAb", "iclr_2018_Hy1d-ebAb", "S1crSKYgM", "Hk8ilROQG", "iclr_2018_Hy1d-ebAb", "HkCWGa0xM", "Bk1RrPUMz", "iclr_2018_Hy1d-ebAb", "HkzAQO_kM", "iclr_2018_Hy1d-ebAb" ]
iclr_2018_r1lfpfZAb
Learning to Write by Learning the Objective
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
workshop-papers
I (and some of the reviewers) find the general motivation quite interesting (operationalizing the Gricean maxims in order to improve language generation). However, we are not convinced that the actual model encodes these maxims in a natural and proper way. Without this motivation, the approach can be regarded as a set of heuristics which happen to be relatively effective on a couple of datasets. In other words, the work seems too preliminary to be accepted at the conference. Pros: -- Interesting motivation (and potential impact on follow-up work) -- Good results on a number of datasets Cons: -- The actual approach can be regarded as a set of heuristics, not necessarily following from the maxims -- More serious evaluation needed (e.g., image captioning or MT) and potential better ways of encoding the maxims It is suitable for the workshop track, as it is likely to stimulate an interesting discussion and more convincing follow-up work.
train
[ "HkN9lyRxG", "ByGHp-S4M", "ByWqV4YlG", "BJFJrHcgz", "rks9NupQG", "SJrPVdp7f", "HkMz4Oamz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes to bring together multiple inductive biases that hope to correct for inconsistencies in sequence decoding. Building on previous works that utilize modified objectives to generate sequences, this work proposes to optimize for the parameters of a pre-defined combination of various sub-objectives. The human evaluation is straight-forward and meaningful to compensate for the well-known inaccuracies of automatic evaluation. \n\nWhile the paper points out that they introduce multiple inductive biases that are useful to produce human-like sentences, it is not entirely correct that the objective is being learnt as claimed in portions of the paper. I would like this point to be clarified better in the paper. \n\nI think showing results on grounded generation tasks like machine translation or image-captioning would make a stronger case for evaluating relevance. I would like to see comparisons on these tasks. \n\n---- \nAfter reading the paper in detail again and the replies, I am downgrading my rating for this paper. While I really like the motivation and the evaluation proposed by this work, I believe that fixing the mismatch between the goals and the actual approach will make for a stronger work. \n\nAs pointed out by other reviewers, while the goals and evaluation seem to be more aligned with Gricean maxims, some components of the objective are confusing. For instance, the length penalty encourages longer sentences violating quantity, manner (be brief) and potentially relevance. Further, the repetition model address the issue of RNNs failing to capture long-term contextual dependencies -- how much does such a modified objective affect models with attention / hierarchical models is not clear from the formulation. \n\nAs pointed out in my initial review evaluation of relevance on the current task is not entirely convincing. A very wide variety of topics are feasible for a given context sentence. Grounded generation like MT / captioning would have been a more convincing evaluation. For example, Wu et al. (and other MT works) use a coverage term and this might be one of the indicators of relevance. \n\nFinally, I am not entirely convinced by the update regarding \"learning the objective\". While I agree with the authors that the objective function is being dynamically updated, the qualities of good language is encoded manually using a wide variety of additional objectives and only the relative importance of each of them is learnt. ", "While the paper was improved, it didn't address my main concern, that it is unclear whether the model really implements Gricean maxims. Assessing the repetitions and finding that the language model repeats slightly more often is not much evidence in my opinion. Also, the re-ranker should be trained on appropriately generated data, as it happens with the approach proposed. Thus my assessment of the paper remains the same.", "This paper argues that the objective of RNN is not expressive enough to capture the good generation quality. In order to address the problems of RNN in generating languages, this paper combines the RNN language model with several other discriminatively trained models, and the weight for each sub model is learned through beam search. \n\nI like the idea of using Grice’s Maxims of communication to improve the language generation. Human evaluation shows significant improvement over the baseline. I have some detailed comments as follows:\n\n- The repetition model uses the samples from the base RNNs as negative examples. 
More analysis is needed to show it is a good negative sampling method.\n\n- As Section 3.2.3 introduced, “the unwanted entailment cases include repetitions and paraphrasing”. Does it mean the entailment model also handles the repetition problem? Do we still need a separate repetition model? How about a separate paraphrasing model?\n\n- Equation 6 and the related text are not very clearly presented. It would be better to add more intuition and explain them better. \n\n- In Table 2, the automated BLEU score of the L2W algorithm for TripAdvisor is very low (0.34 against 24.11). Is this normal? More explanation is needed here.\n\n- For human judgement, how many scores does each example get? It would be better to get multiple workers on M-Turk to label the same example, and compute the mean and variance. One score per example may not be reliable. \n\n- It would be interesting to see deeper analysis about how each model in the objective influences the actual language generation.", "This paper proposes to improve RNN language model generation using augmented objectives inspired by Grice's maxims of communication. The idea is to combine the standard word-by-word decoding objective with additional objectives that reward sentences following these maxims. The proposed decoding objective is not new; researchers in machine translation have worked on it, referring to it as loss-augmented decoding: http://www.cs.cmu.edu/~nasmith/papers/gimpel+smith.naacl12.pdf\nThe use of RNNs in this context might be novel though.\n\nPros:\n- Well-motivated and ambitious goals\n\n- Human evaluation conducted on the outputs.\n\nCons:\n- My main concern is that it is unclear whether the models introduced are indeed implementing the Gricean maxims. For example, the repetition model would not only discourage the same word occurring twice, but also discourage a similar word (according to the word vectors used) from following another one. \n\n- Similarly, for the entailment model, what is an \"obvious\" entailment? Not sure we have training data for this in particular. Also, entailment suggests textual cohesion, which is conducive to the relation maxim. If this kind of model is what we need, why not take a state-of-the-art model?\n\n- The results seem to be inconsistent. The working vocabulary doesn't help in the TripAdvisor experiment, while the RNN seems to work very well on the ROCstory data. While there might be good reasons for these, the point for me is that we cannot trust that the models added to the objective do what they are supposed to do.\n\n- Are the negative examples generated for the repetition model checked to ensure that they contain repetitions? Shouldn't be difficult to do. \n\n- It would be better to give the formula for the length model; the description is intuitive, but it is difficult to know exactly what the objective is.\n\n- In Algorithm 1, it seems like we fix in advance the max length of the sentence (max-step). Is this the case? If so, why? Also, the proposed learning algorithm only learns how to mix pre-trained models, so I am not sure I agree that they learn the objective. It is more of an ensembling.\n\n- As far as I can tell these ideas could have been more simply implemented by training a re-ranker to score the n-best outputs of the decoder. Why not try it? 
They are very popular in text generation tasks.", "Regarding our approach to implementing Grice's maxims:\n\n- Our repetition module is trained to recognize both exact repetitions and repetitions involving lexical paraphrases, as indicated by the cosine similarity between word embeddings. We believe that this is a more robust approach than placing hard constraints on repetition, and has the advantage that the model can learn to distinguish between desirable and undesirable similarity patterns in human-produced and machine-produced text. \n\n- For the entailment module, while there is a risk that relevant sentences will be penalized, in the training data most of the entailments are direct enough that they are not likely to occur in writing, while the neutral-class training examples still often contain relevant information. \nIn terms of the model, we chose to use a lightweight bag-of-words model for time and memory efficiency reasons (as it is expensive to do pairwise sentence comparisons to compute the entailment scores), even though a state-of-the-art model is likely to somewhat increase the performance. \n\nWe added an analysis to the paper of the frequency of repetitions in the training data, finding that they indeed occur more frequently in the samples from the language model, which are used as negative examples for training the repetition model, than in the reference endings. \n\nWe added an equation in the description of the length module in order to clarify its objective. \n\nThe purpose of the maximum length restriction is simply to guarantee that the beam search will terminate. In practice the generated sequence (which is the highest-scoring sequence ending with the termination token) is always shorter than the maximum length allowed. \n\nAs suggested, we include a reranker baseline in the results to re-score the n-best outputs after doing beam search decoding using only the language model; we found that it performs much worse due to a lack of diversity in the beam.\n\nWe added more details in the paper to support the claim that the objective is being learned. The scoring function learned in one stage informs the objective in the following stages. First, the expert classifiers are learned to improve the language model by using samples from the language model as negative training data. Subsequently, these expert classifiers are combined in a mixed objective where the weights of the classifiers are learned discriminatively. As a result, the overall objective function for training the generator changes dynamically as the mixture weights are updated because the objective itself depends on those weights. The mixture weights are learned to optimize a discriminative objective, which updates the overall generation objective; this in turn changes the discriminative objective for the next training iteration. ", "We added an analysis to the paper of the frequency of repetitions in the training data, finding that they indeed occur more frequently in the samples from the language model, which are used as negative examples for training the repetition model, than in the reference endings. \n\nEntailment examples in our training data are often but not always a form of paraphrasing, but usually not instances of direct repetition. Therefore we believe that we still need a separate repetition model to handle more direct repetitions at a lexical level. 
A separate paraphrasing model is an interesting suggestion for future work, although we believe that the repetition and entailment models together are able to capture most of the paraphrases we are aiming to avoid.\n\nWe improved the description of the entailment score formulation (eq 6). \n\nThe very low BLEU scores observed in our results in the TripAdvisor domain are an artifact of the BLEU metric’s length penalty. The average length of reference completions is 12 sentences, which is much longer than the average length of endings generated by our Learning to Write models. This forces the BLEU score's length penalty to drive down the scores, despite our observation that there is still a significant amount of word and phrase overlap. The completions generated by the base language model are longer on average (as it tends to repeat itself over and over) and therefore do not suffer from this problem. \n\nWhile we agree that more labels per example will be valuable, we believe that the test sets (of 1000 examples per domain) are large enough to obtain a reasonably accurate aggregate score, despite the fact that not all of the individual annotations will be reliable.", " We added more details in the paper to support the claim that the objective is being learned. The scoring function learned in one stage informs the objective in the following stages. First, the expert classifiers are learned to improve the language model by using samples from the language model as negative training data. Subsequently, these expert classifiers are combined in a mixed objective where the weights of the classifiers are learned discriminatively. As a result, the overall objective function for training the generator changes dynamically as the mixture weights are updated because the objective itself depends on those weights. The mixture weights are learned to optimize a discriminative objective, which updates the overall generation objective; this in turn changes the discriminative objective for the next training iteration. \n\nThe recommendation to tackle grounded language tasks is a great suggestion, and we are eager to explore this avenue for future work. We believe incorporating grounding introduces novel challenges and so falls outside the scope of this paper, which we have scoped to focus on open-ended, ungrounded generation. \n" ]
[ 6, -1, 5, 4, -1, -1, -1 ]
[ 5, -1, 4, 5, -1, -1, -1 ]
[ "iclr_2018_r1lfpfZAb", "rks9NupQG", "iclr_2018_r1lfpfZAb", "iclr_2018_r1lfpfZAb", "BJFJrHcgz", "ByWqV4YlG", "HkN9lyRxG" ]
iclr_2018_ryZ283gAZ
Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations
Deep neural networks have become the state-of-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LM-architecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress (>50%) the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.
workshop-papers
The reviewers agree that the proposed architecture is novel. However, there are issues in terms of the motivation. It would be helpful in future drafts to strengthen the argument about why the architecture is expected to be better than others. Most importantly, the gains at this stage are still incremental. A larger improvement from the new architecture would motivate more researchers to focus on this architecture.
train
[ "B15nEv7Sf", "Bk421ULNf", "ByjXORWlG", "ryMdpXref", "ByZSRnteG", "Byn56OfWM", "r1FRx5fWG", "HyG_XDfWM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for taking your time reviewing our manuscript and all your comments. Here are our responses to the \"cons\".\n\nThe reason behind making the current title is to indicate the potential benefit of thinking beyond finite layer neural networks by looking at the continuum, i.e. the underlying dynamic system. This helps to guide us in the design of new architectures such as the proposed LM-architecture. In other words, we would like to convey the idea that there is benefit in \"thinking analog, acting digital\".\n\nSimilar as above, the intuition of bridging discretization of ODEs with the shortcut designs of deep networks opens the possibility of designing new and more effective deep networks. We used LM-architecture as one example. This also enabled us to use concepts in numerical analysis to analyze some of the behavior of the networks. In our paper, we used the concept of modified equations to explain why the LM-architecture can significantly reduce network depth without dropping performance (at the end of Section 2, page 7-8) . \n\nWe do agree that, with same network complexity, we only marginally gain on accuracy. However, for the moment, the major advantage of the LM architecture is the compression of deep networks (and we used modified equation to explain why we have such compression), which is important in applications where heavy-duty networks cannot be used (such as on portable devices).", "Summary\n- This paper draws analogy from numerical differential equation solvers and popular residual network-like deep learning architectures. It makes connection from ResNet, FractalNet, DenseNet, and RevNet to different numerical solvers such as forward and backward Euler and Runge-Kunta. In addition, inspired by the Linear Multi-step methods (LM), the authors propose a novel LM-ResNet architecture in which the next residual block takes a linear combination of the previous two residual blocks’ activations. They also propose a stochastic version of LM-ResNet that resembles Shake-shake regularization and stochastic depth. In both deterministic and stochastic cases, they show a positive improvement in classification accuracy on standard object classification benchmarks such as CIFAR-10/100 and ImageNet.\n\nPros\n- The intuition is good that connects differential equation and ResNet-like architecture, also explored in some of the related work.\n- Building upon the intuition, the author proposes a novel architecture based on a numerical ODE solver method.\n- Consistent improvement in accuracy is observed in both deterministic and stochastic cases.\n\nCons\n- The title is a little bit misleading. “Beyond Finite Layer Neural Networks” sounds like the paper proposes some infinite layer neural networks but the paper only studies finite number of layers.\n- One thing that needs to be clarified is that, if the network is not targeted at solving certain ODEs, then why is the intuition from ODE matters? The paper does not motivate readers in this perspective.\n- Given the widespread use of ResNet in the vision community, the incremental improvement of 1% on ImageNet is less likely to push vision research to switch to a completely different architecture. Therefore, the potential impact of the this paper to vision community is probably limited.\n\nConclusion\n- Based on the comments above, I think the paper is a good contribution which links ODE with Deep Networks and derivation is convincing. The proposed new architecture can be considered in future architecture designs. 
Although the increase in performance is small, I think it is good enough to accept.", "The authors proposed to bridge deep neural network design with numerical differential equations. They found that many effective networks can be interpreted as different numerical discretizations of differential equations and provided a new perspective on the design of effective deep architectures. \n\nThis paper is interesting in general and it will be useful to design new and potentially more effective deep networks. Regarding the technical details, the reviewer has the following comments:\n\n- The authors draw a relatively comprehensive connection between the architecture of popular deep networks and discretization schemes of ODEs. Is it possible to show stability of the architecture of deep networks based on their associated ODEs? Related to this, can we choose the step size or the number of layers to guarantee numerical stability?\n\n- It is very interesting to consider networks as stochastic dynamic systems. Are there any limitations of this interpretation or discrepancy due to the weak approximation? ", "The authors cast some of the most recent CNN designs as approximate solutions to discretized ODEs. On that basis, they propose a new type of block architecture which they evaluate on CIFAR and ImageNet. They show small gains when applying their design to the ResNet architectures. They also draw a comparison between a stochastic learning process and approximations to stochastic dynamic systems.\n\nPros:\n(+) The paper presents a way to connect NN design with principled approximations to systems\n(+) Experiments are shown on compelling benchmarks such as ImageNet\nCons:\n(-) It is not clear why the proposed approach is superior to the other designs\n(-) Gains are relatively small and at a price of a more complicated design\n(-) Inconsistent baselines reported\n\nWhile the effort of presenting recent CNN designs as plausible approximations to ODEs is commendable, the paper does not try to draw connections among the different approaches, compare them, or prove the limits of their related approximations. In addition, it is unclear from the paper how the proposed approach (LM-architecture) compares to the recent works, or what the benefits and gains are from casting it as a direct relative of the multi-step scheme in numerical ODEs. How do the different approximations relate in terms of convergence rates, error bounds etc.?\n\nExperimentwise, the authors show some gains on CIFAR 10/100, or 0.5% (see ResNeXt Table1), while also introducing slightly more parameters. On ImageNet1k, comparisons to ResNeXt are missing from Table 3, while the comparison with the ResNets shows gains in the order of 1% for top-1 accuracy. \n\nTable 3 is concerning. With a single crop testing scheme, ResNet101 is yielding top-1 error of 22% and top-5 error of 6% (see Table 5 of Xie et al, 2017 (aka ResNeXt)). However, the authors report 23.6% and 7.1% respectively for their ResNet101. The performance stated by the authors of ResNe(X)t weakens the empirical results of the LM-architecture.", "Originality\n--------------\nThe paper takes forward the idea of correspondence between ResNets and discretization of ODEs. 
Introducing multi-step discretization is novel.\n\nClarity\n---------\n1) The paper does not define the meaning of u_n=f(u).\n2) The stochastic control problem (what is the role of controller, how is connected to the training procedure) is not defined\n\nQuality\n---------\nWhile the experiments are done in CIFAR-10 and 100, ImageNet and improvements are reported, however, connection/insights to why the improvement is obtained is still missing. Thus the evidence is only partial, i.e., we still don't know why the connection between ODE and ResNet is helpful at all.\n\nSignificance\n-----------------\nStrength: LM architectures reduce the layers in some cases and achieve the same level of accuracy.\nWeakness: Agreed that LM methods are better approximations of the ODEs. Where do we gain? (a) It helps if we faithfully discretize the ODE. Why does (a) help? We don't have a clear answer; which takes back to the lack of what the underlying stochastic control problem is.\n", "First of all, we appreciate the reviewer's effort in evaluating the manuscript and his/her comments. Our responses to the reviewer's comments are as follows. We have made minor modifications to the manuscript to improve clarity (especially Section 3). \n\nReviewer: It is not clear why the proposed approach is superior to the other designs\n\nOur Responses: We have already discussed this in the paper. Most recent work, like SENet, PolyNet, and Inception-v4, focuses on the improvement of the residual block, which means improving the right-hand-side of the dynamics f(u). We explore the other dimension, i.e. the ways to design shortcuts by bridging some of the existing designs of shortcuts with various temporal discretizations of dynamic systems. Then we introduced a new micro-structure, called the LM-structure, which can be combined with existing designs of f(u). For example, you could have LM-PolyNet, LM-Inception, etc. We take ResNet and ResNeXt as examples to show that the LM-structure can indeed improve accuracy and compress parameters over the original deep networks. We also showed that the LM-structure can be combined with a stochastic training strategy in Section 3.\n\nReviewer: Gains are relatively small and at a price of a more complicated design\n\nOur Responses: The design is SIMPLE! You only need to add one more shortcut to each block of ResNet (or other similar networks). We will comment on \"gains are relatively small\" in our later responses.\n\nReviewer: Inconsistent baselines reported\n\nOur Responses: This is because many of the settings are not the same; for example, the data augmentation setting is not the same as in [Table 5, Xie et al, 2017 (aka ResNeXt)] on ImageNet. We don't apply color jitter, lighting and color normalization. Thus we don't think using the baseline in the ResNeXt paper is a fair comparison. Moreover, the ResNeXt paper did not use the pre-act ResNet. Nonetheless, we will do more experiments to make the evaluation stronger.\n\nReviewer: Experimentwise, the authors show some gains on CIFAR 10/100, or 0.5% (see ResNeXt Table1), while also introducing slightly more parameters.\n\nOur Responses: Since ResNeXt is not a deep network, the acceleration of the LM-structure will not bring as much benefit as for deeper networks. If we use the LM-structure in deep neural networks, the accuracy will improve significantly. LM-ResNet56 achieves accuracy comparable to LM-ResNet110 (see ResNet Table 1) on CIFAR10. 
And LM-ResNet164 gains 1.4% accuracy over ResNet164 on CIFAR100, which is only 0.2% lower than the accuracy of ResNet1001. On the other hand, the reviewer commented that our LM-ResNeXt \"only\" gains a 0.5% accuracy boost by adding 0.7M parameters. Let's take a look at the results of ResNeXt again: ResNeXt29(16x64d), which adds 34M parameters to ResNeXt29(8x64d), only gains 0.46%. Therefore, we do not think 0.5% is a small improvement for ResNeXt.", "First of all, we appreciate the reviewer's effort in evaluating the manuscript and his/her comments. Our responses to the reviewer's comments are as follows. We have made minor modifications to the manuscript to improve clarity (especially Section 3). \n\nReviewer: Is it possible to show stability of the architecture of deep networks based on their associated ODEs?\n\nOur Responses: One of the cited references discussed stability [Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. arXiv preprint arXiv:1709.03698, 2017]. The authors discretized a stable ODE to construct a neural network. However, BN or other stochastic training may break the stability. In our opinion, the relevance of stability (of numerical schemes or the ODE itself) to deep architecture design is still a debatable topic. It definitely deserves further investigation, which will be our future work.\n\nReviewer: It is very interesting to consider networks as stochastic dynamic systems. Are there any limitations of this interpretation or discrepancy due to the weak approximation?\n\nOur Responses: It is a very interesting question. In the revised version of the manuscript, we included some discussions related to this question. In short, under suitable conditions on the parameters, we will have a weak limit. However, at this point, we are not sure whether all the conditions are indeed satisfied in practice. Nonetheless, the weak limit may shed light on the choice of the hyper-parameters, such as the drop probabilities of the stochastic depth ResNet (see our discussions on stochastic depth in Section 3.1).", "First of all, we appreciate the reviewer's effort in evaluating the manuscript and his/her comments. Our responses to the reviewer's comments are as follows. We have made minor modifications to the manuscript to improve clarity (especially Section 3). \n\nReviewer: The paper does not define the meaning of u_n=f(u)\n\nOur Response: There should be a subscript \"n\" on the variable \"u\". We apologize for the confusion. It is a standard notation for a discrete dynamic system. \n\nReviewer: The stochastic control problem (what is the role of controller, how is connected to the training procedure) is not defined\n\nOur Response: The reviewer may have missed our definition of stochastic control, which is given at the end of Section 3.1. The optimization problem is a stochastic control problem if we consider, for example, the ResNet with stochastic training as a stochastic dynamic system (stochastic ResNet for short). The \"control\" is the stochastic differential equation weakly approximated by the stochastic ResNet. Since there is an expectation in the objective function, we only need a weak approximation, which means we only need to approximate the distribution of the data instead of each individual data point (or trajectory). Stochastic control is a classical and yet important topic in applied mathematics, which has wide applications in various areas, especially finance. 
We recommend a popular tutorial given by L. Evans \"Evans L C. OPTIMAL CONTROL THEORY. Springer, 1974\" for further reference.\n\nReviewer: While the experiments are done in CIFAR-10 and 100, ImageNet and improvements are reported,\nhowever, connection/insights to why the improvement is obtained is still missing. Thus the evidence is\nonly partial, i.e., we still don't know why the connection between ODE and ResNet is helpful at all. \n\nOur Response: The reviewer may have missed our explanation on the performance boost of the proposed LM-structure. We explained the performance boost using the concept of \"modified equations\" from the bottom of page 6 to page 8. Basically, we argued, both analytically and experimentally, that the proposed LM-structure can be viewed as adding a momentum to the information propagation.\n\nReviewer: Agreed that LM methods are better approximations of the ODEs. Where do we gain? (a) It helps if we faithfully discretize the ODE. Why does (a) help? We don't have a clear answer; which takes back to the lack of what the underlying stochastic control problem is.\n\nOur Responses: As we stated a couple times in the manuscript, our purpose is not to show that we should seriously approximate dynamic systems. Our main objective is to point out that effective architectures (or topology of the networks) are similar (or identical for some cases) to discretizations of dynamic systems. This is not limited to ResNet. It is more general than what have been discovered in the past. More importantly, we are able to introduce the LM-structure, which is new and can be applied to any ResNet-like networks to significantly compress the number of parameters. If your application does not care about parameter compression, LM-structure can still further improve classification accuracy of heavy-duty networks. Finally, if stochastic training is applied, we identify it with the stochastic control problem, and it is still beneficial to apply the LM-structure to discretize the underlying stochastic differential equation. The performance boost shown by our experiments can be explained using modified equations, which to our best knowledge, is a new perspective to qualitatively evaluate deep networks that can be viewed as approximations of dynamic systems.\n" ]
[ -1, 7, 6, 5, 5, -1, -1, -1 ]
[ -1, 4, 1, 3, 1, -1, -1, -1 ]
[ "Bk421ULNf", "iclr_2018_ryZ283gAZ", "iclr_2018_ryZ283gAZ", "iclr_2018_ryZ283gAZ", "iclr_2018_ryZ283gAZ", "ryMdpXref", "ByjXORWlG", "ByZSRnteG" ]
iclr_2018_B14uJzW0b
No Spurious Local Minima in a Two Hidden Unit ReLU Network
Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple two-layer ReLU network with two hidden units, and show that all local minimizers are global. This, combined with recent work of Lee et al. (2017) and Lee et al. (2016), shows that gradient descent converges to the global minimizer.
workshop-papers
This submission is a continuation of a line of theoretical work that seeks to characterize optimization landscapes of neural networks by the presence or absence of spurious local minima. As the number of critical points grows combinatorially for larger networks, it is very challenging to show such results. The present submission extends slightly previous work by considering two hidden units and their proof technique goes beyond that of Brutzkus and Globerson, 2017, potentially leading to more interesting results if they can be extended to more complex networks. The setting of two hidden units is quite limited - far from any practical setting. If this were the stepping stone to proving optimality of certain optimization strategies for more complex networks, this may be of some interest, but it seems doubtful. One indication is given in Sec. 7 / Fig. 1 in which it is shown that for even quite small numbers of hidden units, spurious local optima do occur and are reached 40% of the time for random initializations even with only 11 nodes.
train
[ "By_zofR1z", "rky-kk_eG", "HykMheuxG", "SJh8JW3Gz", "BkkE1bhfG", "BJBbkWhfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary: \nThe paper considers the problem of a single hidden layer neural network, with 2 RELU units (this is what I got from the paper - as I describe below, it was not clear at all the setting of the problem - if I'm mistaken, I will also wait for the rest of the reviews to have a more complete picture of the problem).\nGiven this architecture, the authors focus on characterizing the objective landscape of such a problem.\nThe techniques used depend on previous work. According to the authors, this paper extends(?) previous results on a NN with a single layer with a single unit.\n\nOriginality: \nThe paper heavily depends on the approach followed by Brutzkus and Globerson, 2017. To this end, slighly novel.\n\nImportance: \nUnderstanding the landscape (local vs global minima vs saddle points) is an important direction in order to further understand when and why deep neural networks work. I would say that the topic is an important one.\n\nPresentation/Clarity: \nTo the best of my understanding, the paper has some misconceptions. The title is not clear whether the paper considers a two layer RELU network or a single layer with with two RELU units. In the abstract the authors state that it has to do with a two-layer RELU network with two hidden units (per layer? in total?). Later on, in Section 3, the expression at the bottom of page 2 seems to consider a single-layer RELU network, with two units. \nThese are crucial for understanding the contribution of the paper; while reading the paper, I assumed that the authors consider the case of a single hidden unit with K = 2 RELU activations (however, that complicated my understanding on how it compares with state of the art).\n\nAnother issue is the fact that, on my humble opinion, the main text looks like a long proof. It would be great to have more intuitions.\n\nComments:\n1. The paper mainly focuses on a specific problem instance, where the weight vectors are unit-normed and orthogonal to each other. While the authors already identify that this might be a restriction, it still does not lessen the fact that the configuration considered is a really specific one.\n\n2. The paper reads like a collection of lemmas, with no verbose connection. It was hard to read and understand their value, just because mostly the text was structured as one lemma after the other.\n\n3. It is not clear from the text whether the setting is already considered in Brutzkus and Globerson, 2017. Please clarify how your work is different/new from previous works.\n", "In this paper the authors studied the theoretical properties of manifold descent approaches in a standard regression problem, whose regressor is a simple neural network. Leveraged by two recent results in global optimization, they showed that with a simple two-layer ReLU network with two hidden units, the problem with a standard MSE population loss function does not have spurious local minimum points. Based on the results by Lee et al, which shows that first order methods converge to local minimum solution (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent techniques, including standard gradient descent methods. In general I found this paper clearly written and technically sound. I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restrictive to very simple NN architectures. 
\n\nContribution: \nAs discussed in the literature review section, apart from previous results that studied the theoretical convergence properties for problems that involves a single hidden unit NN, this paper extends the convergence results to problems that involves NN with two hidden units. The analysis becomes considerably more complicated, and the contribution seems to be novel and significant. I am not sure why did the authors mentioned the work on over-parameterization though. It doesn't seem to be relevant to the results of this paper (because the NN architecture proposed in this paper is rather small). \n\nComments on the Assumptions:\n- Please explain the motivation behind the standard Gaussian assumption of the input vector x. \n- Please also provide more motivations regarding the assumption of the orthogonality of weights: w_1^\\top w_2=0 (or the acute angle assumption in Section 6). \nWithout extra justifications, it seems that the theoretical result only holds for an artificial problem setting. While the ReLU activation is very common in NN architecture, without more motivations I am not sure what are the impacts of these results. \n\nGeneral Comment: \nThe technical section is quite lengthy, and unfortunately I am not available to go over every single detail of the proofs. From the analysis in the main paper, I believe the theoretical contribution is correct and sound. While I appreciate the technical contributions, in order to improve the readability of this paper, it would be great to see more motivations of the problem studied in this paper (even with simple examples). Furthermore, it is important to discuss the technical assumptions on the 1) standard Gaussianity of the input vector, and 2) the orthogonality of the weights (and the acute angle assumption in Section 6) on top of the discussions in Section 8.1, as they are critical to the derivations of the main theorems. ", "This paper considers a special deep learning model and shows that in expectation, there is only one unique local minimizer. As a result, a gradient descent algorithm converges to the unique solution. This works address a conjecture proposed by Tian (2017).\n\nWhile it is clearly written, my main concern is whether this model is significant enough. The assumptions K=2 and v1=v2=1 reduces the difficulty of the analysis, but it makes the model considerably simpler than any practical setting.\n\n", "We thank the reviewers for their thoughtful comments. We believe that our paper makes a meaningful contribution to understanding the global properties of neural networks, and in particular to understanding gradient-based optimization in deep learning. Although our setting is simple, it is already significantly more complicated than other works that analyze global geometry of neural networks, which have mostly focused on the case of linear networks or networks with a single filter. \n\n1.\tWe agree that the architecture is simpler than any practical setting. However, this simple architecture of two hidden units is already significantly more complicated than many previous works that analyze global geometry of neural networks, which focus on the case of linear networks or networks with a single filter.\n\n", "We thank the reviewers for their thoughtful comments. We believe that our paper makes a meaningful contribution to understanding the global properties of neural networks, and in particular to understanding gradient-based optimization in deep learning. 
Although our setting is simple, it is already significantly more complicated than other works that analyze global geometry of neural networks, which have mostly focused on the case of linear networks or networks with a single filter. \n\n1.\tBrutzkus and Globerson, 2017 showed learning No-Overlap Networks without some distributional assumption is NP-hard. However, the No-Overlap Networks can be also be learned for Gaussian Inputs. Following Brutzkus and Globerson 2017 and Tian 2017, we also make the Gaussian assumption on the input vector in our model. \n2.\tWe agree that we have a lot of conditions on the architecture of the network. Most of these conditions are used to simplify the proof, which are very involved even after these simplifications. However, we do believe that the conclusions hold more broadly or approximately hold (meaning gradient descent finds local optima that are nearly globally optimal), but we are unable to prove this now and leave it as a future work.\n\n", "We thank the reviewers for their thoughtful comments. We believe that our paper makes a meaningful contribution to understanding the global properties of neural networks, and in particular to understanding gradient-based optimization in deep learning. Although our setting is simple, it is already significantly more complicated than other works that analyze global geometry of neural networks, which have mostly focused on the case of linear networks or networks with a single filter. \n\n1.\tWe apologize for the confusion of the setting and model from the abstract. The abstract has been updated. As stated clearly in the first displayed equation of the paper (beginning of Section 3), the architecture in this paper is y= \\sigma(<w_1,x>) +\\sigma(<w_2,x>). \n\n2.\tThe only thing we use from Brutzkus and Globerson, 2017 is is the formula: “E[relu(w^T x) relu (v^Tx)]= formula” and the rest is novel. In Brutzkus and Globerson, 2017, they consider one-hidden-layer network with No Overlap. Due to this assumption, they can identify every critical point in their network. However, in our problem, it’s impossible to identify every critical point and we use a different method to analyze the landscape of the network. Our analysis is substantially different from Brutzkus and Globerson 2017 and this setting is NOT considered in Brutzkus and Globerson.\n3.\tWe have submitted a revision that clarifies our contributions and adds more intuition.\n\n" ]
[ 4, 6, 6, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1 ]
[ "iclr_2018_B14uJzW0b", "iclr_2018_B14uJzW0b", "iclr_2018_B14uJzW0b", "HykMheuxG", "rky-kk_eG", "By_zofR1z" ]
iclr_2018_Bki4EfWCb
Inference Suboptimality in Variational Autoencoders
Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints. We analyze approximate inference in variational autoencoders in terms of these factors. We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution. We show that this is due partly to the generator learning to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
workshop-papers
Thank you for submitting your paper to ICLR. This paper provides an informative analysis of the approximation contributions from the various assumptions made in variational auto-encoders. The revision has demonstrated the robustness of the paper’s conclusions; however, these conclusions are arguably unsurprising. Although the work provides a thorough and interesting piece of detective work, the significance of the findings is not quite great enough to warrant publication. Reviewer 1 was searching for a reference for work in a similar vein to section 5.4: the second problem identified in the reference below shows examples where using an approximating distribution of a particular form biases the model parameter estimates to settings that mean the true posterior is closer to that form. R. E. Turner and M. Sahani. (2011) Two problems with variational Expectation Maximisation for time-series models. Inference and Learning in Dynamic Models. Eds. D. Barber, T. Cemgil and S. Chiappa, Cambridge University Press, 104–123, 2011.
train
[ "HyXty1qlM", "rJ5VMfcxG", "r1_Ulf9gz", "rJ201f3QG", "By5gZbnmf", "Byip9x2XG", "S14LJgnmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "* EDIT: Increased score from 5 to 6 to reflect improvements made in the revision.\n\nThe authors break down the \"inference gap\" in VAEs (the slack in the variational lower bound) into two components: 1. the \"amortization gap\", measuring what part of the slack is due to amortizing inference using a neural net encoder, as compared to separate optimization per example. 2. the \"approximation gap\": the part of the slack due to using a restricted parametric form for the posterior approximation. They perform various experiments to analyze how these quantities depend on modeling decisions and data sets.\n\nBreaking down the inference gap into its components is an interesting idea and could potentially provide insights when analyzing VAE performance and for further improving VAEs. I enjoyed reading the paper, but I think its contribution is on the small side for a conference paper. It would be a good workshop paper. The main limitation of the proposed method of analysis I think is that the two parts of the inference gap are not really separable: Because the VAE encoder is trained jointly with the decoder, the different limitations of the encoder and decoder all interact. E.g. one could imagine cases where jointly training the VAE encoder and decoder finds a local optimum where inference is perfect, but which is still much worse than the optimum that could be achieved if the encoder would have been more flexible. The authors do seem to realize this and they provide experiments examining this interaction. I think these experiments should be elaborated on. For example: What happens when the decoder is trained separately using more flexible inference (e.g. Hamiltonian MC) and the encoder is trained later? What happens when the encoder is optimized separately for each data point during training as well as testing?", "=======\nUpdate:\n\nThe new version addresses some of my concerns. I think this paper is still pretty borderline, but I increased my rating to a 6.\n=======\n\nThis article examines the two sources of loose bounds in variational autoencoders, which the authors term “approximation error” (slack due to using a limited variational family) and “amortization error” (slack due to the inference network not finding the optimal member of that family).\n\nThe existence of amortization error is often ignored in the literature, but (as the authors point out) it is not negligible. It has been pointed out before in various ways, however:\n* Hjelm et al. (2015; https://arxiv.org/pdf/1511.06382.pdf) observe it for directed belief networks (admittedly a different model class).\n* The ladder VAE paper by Sonderby et al. (2016, https://arxiv.org/pdf/1602.02282.pdf) uses an architecture that reduces the work that the encoder network needs to do, without increasing the expressiveness of the variational approximation. That this approach works well implies that amortization error cannot be ignored.\n* The structured VAE paper by Johnson et al. (2016, https://arxiv.org/abs/1603.06277) also proposes an architecture that reduces the load on the inference network.\n* The very recent paper by Krishnan et al. (posted to arXiv days before the ICLR deadline, although a workshop version was presented at the NIPS AABI workshop last year; http://approximateinference.org/2016/accepted/KrishnanHoffman2016.pdf) examines amortization error as a core cause of training failures in VAEs. 
They also observe that the gap persists at test time, although it does not examine how it relates to approximation error.\n\nSince these earlier results existed, and approximation-amortization decomposition is fairly simple (although important!), the main contributions of this paper are the empirical studies. I will try to summarize the main novel (i.e., not present elsewhere in the literature) results of these:\n\nSection 5.1:\nInference networks with FFG approximations can produce qualitatively embarrassing approximations.\n\nSection 5.2:\nWhen trained on a small dataset, training amortization error becomes negligible. I found this surprising, since it’s not at all clear why dataset size should lead to “strong inference”. It seems like a more likely explanation is that the decoder doesn’t have to work as hard to memorize the training set, so it has some extra freedom to make the true posterior look more like a FFG.\n\nAlso, I think it’s a bit of an exaggeration to call a gap of 2.71 nats “much tighter” than a gap of 3.01 nats.\n\nSection 5.3:\nAmortization error is an important contributor to the slack in the ELBO on MNIST, and the dominant contributor on the more complicated Fashion MNIST dataset. (This is totally consistent with Krishnan et al.’s finding that eliminating amortization error gave a bigger improvement for more complex datasets than for MNIST.)\n\nSection 5.4:\nUsing a restricted variational family causes the decoder to learn to induce posteriors that are easier to approximate with that variational family. This idea has been around for a long time (although I’m having a hard time coming up with a reference).\n\nThese results are interesting, but given the empirical nature of this paper I would have liked to see results on more interesting datasets (Celeb-A, CIFAR-10, really anything but MNIST). Also, it seems as though none of the full-dataset MNIST models have been trained to convergence, which makes it a bit difficult to interpret some results.\n\n\nA few more specific comments:\n\n2.2.1: The \\cdot seems extraneous to me.\n\n5.1: What dataset/model was this experiment done on?\n\nFigure 3: This can be inferred from the text (I think), but I had to remind myself that “IW train” and “IW test” refer only to the evaluation procedure, not the training procedure. It might be good to emphasize that you don’t train on the IWAE bound in any experiments.\n\nTable 2: It would be good to see standard errors on these numbers; they may be quite high given that they’re only evaluated on 100 examples.\n\n“We can quantitatively determine how close the posterior is to a FFG distribution by comparing the Optimal FFG bound and the Optimal Flow bound.”: Why not just compare the optimal with the AIS evaluation? If you trust the AIS estimate, then the result will be the actual KL divergence between the FFG and the true posterior.", "This paper studies the amortization gap in VAEs. Inference networks, in general, have two sources of approximation errors. One due to the function family of variational posterior distributions used in inference and the other due to choosing to amortize inference rather than doing per-data-point inference as in SVI.\n\nThey consider learning VAEs using two different choices of inference networks with (1) fully factorized Gaussian and (2) normalizing flows. 
The former is the de-facto choice of variational approximation used in VAEs and the latter is capable of expressing complex multi-modal distributions.\n\nThe inference gap is log p(x) - L[q], the approximation gap is log p(x) - L[q^* ] and the amortization gap is L[q^* ] - L[q]. The amortization gap is easily evaluated. To evaluate the first two, the authors use estimates (lower bounds of log p(x)) given from annealed importance sampling and the importance sampling based IWAE bound (the tighter of the two is used).\n\nThere are several different observations made via experiments in this work but one of the more interesting ones is quantifying that a deep generative model, when trained with a fully factorized gaussian posterior, realizes a true posterior distribution that is (more) approximately Gaussian. While this might be (known) intuition that people rely on when learning deep generative models, it is important to be able to test it, as this paper does. The authors study several discrete questions about the aforementioned inference gaps and how they vary on MNIST and FashionMNIST. The concerns I have about this work revolve around their choice of two small datasets and how much their results are affected by variance in the estimators.\n\nQuestions:\n* How did you optimize the variational parameters for q^* and the flow parameters in terms of learning rate, stopping criteria etc.\n* In Section 5.2, what is \"strong inference\"? This is not defined previously.\n* Have you evaluated on a larger dataset such as CIFAR? FashionMNIST and MNIST are similar in many ways.\n* Which kind of error would using a convolution architecture for the encoder decrease? Do you have insights on the role played by the architecture of the inference network and generative model?\n\nI have two specific concerns:\n* Did you perform any checks to verify whether the variance in the estimators use to bound log p(x) is controlled (for the specific # samples you use)? I'm concerned since the evaluation is only done on 100 points.\n* In Section 5.2.1, L_iw is used to characterize encoder overfitting where the argument is that L_ais is not a function of the encoder, but L_iw is, and so the difference between the two summarizes how much the inference network has overfit. How is L_iw affected by the number of samples used in the estimator? Presumably this statement needs to be made while also keeping mind the number of importance samples. For example, if I increase the number of importance samples, even if I'm overfitting in Fig 3(b), wouldn't the green line move towards the red simply because my estimator depends less on a poor q?\n\nOverall, I think this paper is interesting and presents a quantitative analysis of where the errors accrue due to learning with inference networks. The work can be made stronger by addressing some of the questions above such as what role is played by the neural architecture and whether the results hold up under evaluation on a larger dataset.", "\nWe would like to thank Reviewer 3 for providing a detailed review and interesting suggestions for further experimentation. \n\nOverall, we acknowledge that the different limitations of the encoder and decoder all interact, e.g. in Section 5.3 we have quantitatively demonstrated that a VAE trained with a factorized Gaussian, typically have a true posterior that is more like a factorized Gaussian. Without doubt, we also agree that there could be cases where the generative model fits embarrassingly to the data, and yet inference is perfect. 
However, this does not hinder our analysis of the gaps according to our definitions (Section 3.1). We would also like to note that, although our calculations of the gaps are only estimates, such a amortization-approximation decomposition may be valuable to guiding improvements to approximate inference. \n\n\"What happens when the decoder is trained separately using more flexible inference (e.g. Hamiltonian MC) and the encoder is trained later? What happens when the encoder is optimized separately for each data point during training as well as testing?\"\n\nThese are interesting ideas. We performed local optimization of the variational parameters only for evaluation purposes. The quality of inference is an important factor for the optimization of the generator. Consequently, training with HMC would likely result in a better trained generator compared to training via amortized inference, especially early during training. We refer to [1] for experiments that explore optimizing the variational parameters in the inner loop of amortized inference during training.\n\n[1] R. G. Krishnan, D. Liang, and M. Hoffman. On the challenges of learning with inference networks on sparse, high-dimensional data.ArXiv e-prints, October 2017\n", "\nWe would like to thank Reviewer 2 for their thorough analysis of our work. We acknowledge their concerns and address their comments below:\n\n“How did you optimize the variational parameters for q^* and the flow parameters in terms of learning rate, stopping criteria etc.”\n\nThank you for asking. We have added the description in section 6.4 of the Appendix.\n\n\"What is \"strong inference\"? This is not defined previously.\"\n\nBy strong inference, we mean there is a small inference gap. We understand that this could be confusing given no prior explanation, thus we’ve changed the wording accordingly. \n\n“Have you evaluated on a larger dataset such as CIFAR? FashionMNIST and MNIST are similar in many ways.”\n\nWe acknowledge that both MNIST and Fashion-MNIST are similar datasets. To enhance our analysis, we performed some new experiments on CIFAR-10 whose result is added to Table 2 and analyzed in section 5.2. \n\n“Which kind of error would using a convolution architecture for the encoder decrease?\" \n\nAlthough we have not experimented extensively on the influence of the encoder architectures, more powerful encoders usually lead to lower amortization error. Our experiments with larger encoders demonstrates this (Section 5.2). \n\n\"Do you have insights on the role played by the architecture of the inference network and generative model?”\n\nThank you for the interesting question. Our most recent draft contains new results exploring the effect of increasing the capacity of the generative model. We observe that increasing the capacity leads to true posteriors that fit better to the choice of approximation. (Section 5.3)\n\n“Did you perform any checks to verify whether the variance in the estimators use to bound log p(x) is controlled (for the specific # samples you use)? I'm concerned since the evaluation is only done on 100 points.”\n\nWe acknowledge that the variance of the bounds can be quite large, and the numbers we obtained for evaluating 100 datapoints might suffer from this. Thus, we re-performed all experiments on MNIST and Fashion-MNIST with 1k datapoints to reduce the variance. 
The new results are consistent with our previous results.\n\n\"Presumably this statement needs to be made while also keeping mind the number of importance samples.\"\n\nThank you for pointing this out. Yes, this statement needs to be made while also keeping in mind the number of importance samples, since measuring the overfitting is dependent on the number of samples.\n\n", "\nWe would like to thank Reviewer 1 for their detailed comments regarding our contributions and providing citations of relevant work.\n\nWe address their comments below:\n\n“When trained on a small dataset, training amortization error becomes negligible. I found this surprising, since it’s not at all clear why dataset size should lead to 'strong inference' \"\n\nWe believe the explanation for better inference on a smaller dataset is mostly due to the encoder having fewer datapoints to memorize, reducing the amortization error. Our analysis with larger encoders in Section 5.2 is relevant to supporting this claim. \n\n\"It seems like a more likely explanation is that the decoder doesn’t have to work as hard to memorize the training set, so it has some extra freedom to make the true posterior look more like a FFG.\"\n\nThis idea is interesting. We believe that the decoder having to work less hard is related to reducing the approximation error. Our results of Section 5.3 explore this idea.\n\n\"Also, I think it’s a bit of an exaggeration to call a gap of 2.71 nats “much tighter” than a gap of 3.01 nats.\"\n\nYes, we agree. We have re-worded that statement.\n\n\"Using a restricted variational family causes the decoder to learn to induce posteriors that are easier to approximate with that variational family.\"\n\nYes, this idea has been around for a while. One example is demonstrated with visualizations in Appendix C of the IWAE paper, which we’ve noted in the Related Works section. Our results provide quantitative measurements of this intuition. \n\n“These results are interesting, but given the empirical nature of this paper I would have liked to see results on more interesting datasets (Celeb-A, CIFAR-10, really anything but MNIST). ”\n\nWe agree that more extensive empirical results are important. To this end, we performed new experiments on CIFAR-10 whose results are added to Table 2 and section 5.2. \n\n“The \\cdot seems extraneous to me.”\n\nThank you for pointing it out, we have fixed this. \n\n\"What dataset/model was this experiment done on?\"\n\nWe trained our VAE models on MNIST for the visualization. \n\n“It would be good to see standard errors on these numbers; they may be quite high given that they’re only evaluated on 100 examples.”\n\nWe acknowledge that the variance of the bounds can be quite large, and the numbers we obtained for evaluating 100 datapoints might suffer from this. Thus, we re-performed all experiments on MNIST and Fashion-MNIST with 1k datapoints to reduce the variance. The new results are consistent with our previous results.\n\n“Why not just compare the optimal with the AIS evaluation? If you trust the AIS estimate, then the result will be the actual KL divergence between the FFG and the true posterior.”\n\nThank you for pointing this out. We have corrected the analysis accordingly. \n\n", "\nWe’d like to thank the reviewers for the thoughtful and thorough reviews. \n\nThe consensus of the reviews is that the contribution of the original paper was limited, of which we completely agree. We’ve thus taken steps to extend our results with relevant experiments. 
We’ve used the same methodology as before in settings that we think highlight and strengthen important points about the paper. \n\nMain additions:\n\nCIFAR-10: we’ve run the same experiments on the CIFAR-10 dataset in order to gain a more comprehensive view of inference suboptimality. (see Section 5.2 and Table 2)\n\nMore datapoints: previously we evaluated the various gaps on a subset of 100 datapoints. We’ve increased the subset to 1000 datapoints for most experiments in order to make our results more reliable. The new results are consistent with our previous results.\n\nInfluence of flows on amortization: we demonstrate that the parameters used in increasing the expressiveness of the approximate distribution also contribute to reducing the amortization error. (see section 5.2.1 and Table 4)\n\nInfluence of decoder capacity on approximation gap: we demonstrate that increasing the number of hidden layers of the decoder leads to smaller approximation gaps. (see Section 5.3 and Table 5)\n\n\nOther modifications:\n\nTitle: We changed the title of the paper from ‘Inference Dissection in Variational Autoencoders’ to ‘Inference Suboptimality in Variational Autoencoders’ because we believe it better reflects the content of the paper.\n\nOrganization: We’ve moved some less relevant sections to the appendix, such as the description of AIS and our section on VAE under/overfitting.\n\nAbstract: We’ve updated the abstract given the new contributions.\n" ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Bki4EfWCb", "iclr_2018_Bki4EfWCb", "iclr_2018_Bki4EfWCb", "HyXty1qlM", "r1_Ulf9gz", "rJ5VMfcxG", "iclr_2018_Bki4EfWCb" ]
iclr_2018_B1lMMx1CW
THE EFFECTIVENESS OF A TWO-LAYER NEURAL NETWORK FOR RECOMMENDATIONS
We present a personalized recommender system using a neural network for recommending products, such as eBooks, audio-books, Mobile Apps, Video and Music. It produces recommendations based on a customer’s implicit feedback history such as purchases, listens or watches. Our key contribution is to formulate the recommendation problem as a model that encodes historical behavior to predict future behavior using a soft data split, combining predictor and auto-encoder models. We introduce a convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametric function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories, and we observe significant improvements in an online A/B test. We also discuss key enhancements to the neural network model and describe our production pipeline. Finally, we open-sourced our deep learning library, which supports multi-GPU model parallel training. This is an important feature in building neural network based recommenders with large dimensionality of input and output data.
workshop-papers
Meta score: 6\n\nThis is a thorough empirical paper, demonstrating the effectiveness of a relatively simple model for recommendations.\n\nPros:\n- strong experiments\n- always good to see simple models pushed to perform well\n- presumably of interest to practitioners in the area\n\nCons:\n- quite oriented to the recommendation application\n- technical novelty is in the experimental evaluation rather than any new techniques\n\nOn balance I recommend the paper be invited to the workshop.
train
[ "H1RrugAbf", "SJqZdRi1f", "r1rOlgOlz", "B1RdHXTef", "B1WDuZAWz", "SJ3gtkUWz", "SJOFHTH-M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank you for your feedback\nIn addition to your comments we would like to highlight several points:\n1. Two methods of integrating time decay of purchases into the learning framework were proposed:\n1.1 Convolutional layer for exploring the shape of the time decay function.\n1.2 We explored properties of different neural network based recommenders: predictor and auto-encoder models and proposed a method of combining their properties by integrating time decay of purchases into the learning framework. The final model (“soft” split) can be interpreted as a generalized auto-encoder which has time decay on input layer and time decay on cost function. The evaluation is done on both internal and public data sets.\n2 Our relatively simple model can capture seasonality changes with daily re-training.\n3 We designed and open-sourced the core library which was used on these experiments. This library supports multi-gpu model parallel training and allows us to train large neural networks based recommender (model size can be more than several GB) in timely manner.\n4 Our approach is successfully scaled and outperforms existing recommenders on four different categories of one of the largest retail catalog.\n\nSeveral reasons of emphasizing the online production A/B test results are presented below:\nI would like to highlight the importance of reporting the online A/B test results for recommender systems which was done in this paper. The standard evaluation of real recommender system estimates KPI gain (for example number of purchases) and confidence level (p-value). \nIf p-value is low and KPI gain is high it means that we are confident that KPI gain has low probability of being random. \nIf p-value is high it means that we are not confident in results and it is highly probable that KPI gain is random.\n\nThat is why if only offline metrics improvements of recommender system with no confidence evaluation is reported, then we do not know what is the probability of the offline gain being random. For example we report offline improvements on next week, but how about second, third week and etc. Even if we produce the full accuracy distribution over next several months it will not be real because of the second point below.\nSecond point about \"pure\" offline evaluation: it is done on purchases made by customers which were exposed to recommendations produced by different recommender (for example by legacy recommender). So again, offline metrics do not show real picture. In this case even if there is no gain in offline metrics we still can get KPI gain during online test and vice versa.\nDuring online A/B test the recommender loop is “closed”: we are evaluating KPI metrics on purchases which were done by customers who are exposed to the recommender which we are evaluating.\n\nIn conclusion offline evaluation is a preliminary test of the opportunity (which says that designed method can produce some recommendations) and only online A/B test shows real value of the designed approach. That is why we highlighted the last point in our paper: we demonstrated low p-value (less than 0.05) and increased KPI (number of purchases).", "The paper proposes a new neural network based method for recommendation.\n\nThe main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed.\n\nThis contribution is not bad for an empirical paper. 
There's certainly not that much here that's groundbreaking methodologically, though it's certainly nice to know that a simple and scalable method works.\n\nThere's not much detail about the data (it is after all an industrial paper). It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration. As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere.\n\nOther than that, it's nice to see an evaluation on real production data, and it's nice that the authors have provided enough info that the method should be (more or less) reproducible. There's some slight concern that maybe this paper would be better for the industry track of some conference, given that it's focused on an empirical evaluation rather than really making much of a methodological contribution. Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks.", "Authors describe a procedure of building their production recommender system from scratch, begining with formulating the recommendation problem, label data formation, model construction and learning. They use several different evaluation techniques to show how successful their model is (offline metrics, A/B test results, etc.)\n\nMost of the originality comes from integrating time decay of purchases into the learning framework. Rest of presented work is more or less standard.\n\nPaper may be useful to practitioners who are looking to implement something like this in production.", "This paper presents a practical methodology to use neural network for recommending products to users based on their past purchase history. The model contains three components: a predictor model which is essentially a RNN-style model to capture near-term user interests, a time-decay function which serves as a way to decay the input based on when the purchase happened, and an auto-encoder component which makes sure the user's past purchase history get fully utilized, with the consideration of time decay. And the paper showed the combination of the three performs the best in terms of precision@K and PCC@K, and also with good scalability. It also showed good online A/B test performance, which indicates that this approach has been tested in real world.\n\nTwo small concerns:\n1. In Section 3.3. I am not fully sure why the proposed predictor model is able to win over LSTM. As LSTM tends to mitigate the vanishing gradient problem which most likely would exist in the predictor model. Some insights might be useful there.\n2. The title of this paper is weird. Suggest to rephrase \"unreasonable\" to something more positive. ", "We would like to thank all the reviewers for their careful consideration of our paper, and very useful comments. \nWe provided detailed responses below and uploaded a new revision of the paper", "Thank you for your feedback.\n1. Comments about benchmarking on public data sets:\nYou raised a good point about evaluation on public data sets. 
It was not done because of several reasons:\n1.1 There is no public data sets which have the same properties with our data: implicit feedbacks(purchase events + date of purchase), large number of products, large number of customers. \n1.2 Most of the papers are reporting RMSE or precision@K on randomly held out data sets, whereas we measure precision@K at particular time (future week). So, that estimated metrics are as close as possible to real production environment.\n\nWe would like to alleviate your concern about evaluation on public data sets.\nWe are going to pick MovieLens data [http://files.grouplens.org/datasets/movielens/ml-20m.zip] because it is related to one of the categories we use in the paper. \nWe are going to convert all rating to watches events (implicit feedbacks) by thresholding the ratings: 1 if rating >= 3, 0 otherwise. The same implicit feedback conversion was used in the paper [Vito Claudio Ostuni et. Al Top-N recommendations from implicit feedback leveraging linked open data. RecSys '13]\nWe are going to split the MovieLens data into past and future purchases. Then use past purchase for training the models and future for evaluation.\nIn the end we will compare accuracy metrics of our method with existing techniques on MovieLens data sets and report precision@K and PCC@K on future week (as described in point 1.2).\nPlease let me know if above approach can alleviate your concern about bench-marking on public data sets.\n\n2. Comments about our contribution:\n2.1 Yes, one of the focus of this paper is scaling neural network based recommender on all digital categories in real production environment.\nWe also open sourced the core library which is used in our experiments. It supports multi-gpu model parallelization. It allows us to train a neural networks with million of input and output dimensionalities (so that model size can be more than several gigabytes) in timely manner.\n\n2.2 In addition to that we did methodological contribution (which was a key for success in running these models in production):\nWe proposed to use convolutional layer for exploring the shape of the time decay. \nWe proposed different methods for increasing precision@K and PCC@K of neural network based recommender. We presented results on different data sets such as video, audiobooks, ebooks, music\nThese data sets have different properties: \na). For example on video data sets (which are popularity biased) we showed that predictor model combined with time decay on input data improves precision@K only, we also showed how to combine predictor model with auto-encoder so that PCC@K can be increased 2 times without significant reduction of precision@K. \nb). On other data sets like ebooks and audio-books (which are less popularity biased then video data) we showed that combination of predictor and autoencoder models increases both precision@K and PCC@K in comparison with predictor model. \nThe goal of this project was to find a solution which can be scaled to all digital categories of retail catalog. It makes it different with other referred papers where one category is picked and then model is specifically designed for it.\n\nWe will add above comments in the next revision of the paper.\n", "Thank you for your feedback.\n\n1. Comments about vanishing gradient:\nWe acknowledged that we did not add detailed info about vanishing gradient of predictor model(feedforward neural network). 
More details with experimental results are presented below:\nWe use ReLU activation function to mitigate vanishing gradient in predictor model.\nWith increase of the depth (number of hidden layer) of predictor model, accuracy metrics can degrade significantly (vanishing gradient is one of the reason of such effect). That is why we measured the impact of the NN depth on Precision@1, and observed that with increasing the NN depth, Precision@1 is going down as follow (even with ReLU):\nDepth 1 2 3 4 5 6\nPrecision@1 0.072 0.072 0.07 0.068 0.067 0.065\nOne of the method of mitigating the accuracy degradation (due to depth of NN) is residual neural networks [K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.]. We explored residual NN with predictor model on our data sets, and observed that it mitigates vanishing gradient effect, so that precisoion@1 stayed the same regardless of the depth of the neural network: around 0.072. But it does not improve accuracy metrics in comparison with two layers NN. That is why we picked neural network model with number of hidden layer no more than 2.\nWe will add these comments with experimental results in the new paper revision.\n\n2. Comments about LSTM performance:\nLSTM is well applied on sequences like text, speech etc. These sequences has “strong” grammatical rules, which are well captured by LSTM. We explain lower accuracy of LSTM by our data properties (or lack of “strong” grammatical rules in sequences of purchases in our data). For example on ebooks data, if one customer buy books in order: “Harry Potter”, “Golden Compass”, “Inkheart”, another customer can buy these books in different order: “Inkheart”, “Harry Potter”, “Golden Compass” and another one in different order, etc. So these purchases can be in any order and “long” term dependencies can be noisy.\nAnother important properties of our data(video, ebooks) is the popularity of the recommended products at particular date. Our approach (predictor model) is modeling these properties by re-training the model every day and predicting the next purchases which are popular in the current week, whereas LSTM is recommending only next purchases (which are not necessary popular at current week). \nWe can expect better performance of LSTM on other categories of products (where order of purchases is more important), for example probability of buying a game for a cell phone after purchasing a cell phone is higher than probability of buying these products in reversed order. \n\n3. Comment about paper title:\nWe will rename it to: “THE EFFECTIVENESS OF A TWO-LAYER NEURAL NETWORK FOR RECOMMENDATIONS”\n\nWe will add above comments in the next paper revision." ]
[ -1, 6, 6, 7, -1, -1, -1 ]
[ -1, 3, 4, 3, -1, -1, -1 ]
[ "r1rOlgOlz", "iclr_2018_B1lMMx1CW", "iclr_2018_B1lMMx1CW", "iclr_2018_B1lMMx1CW", "iclr_2018_B1lMMx1CW", "SJqZdRi1f", "B1RdHXTef" ]
iclr_2018_rk8wKk-R-
Convolutional Sequence Modeling Revisited
This paper revisits the problem of sequence modeling using convolutional architectures. Although both convolutional and recurrent architectures have a long history in sequence prediction, the current "default" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks. The goal of this paper is to question this assumption. Specifically, we consider a simple generic temporal convolution network (TCN), which adopts features from modern ConvNet architectures such as dilations and residual connections. We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and sometimes even highly specialized approaches. We further show that the potential "infinite memory" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts. As a whole, we argue that it may be time to (re)consider ConvNets as the default "go to" architecture for sequence modeling.
workshop-papers
meta score: 5\n\nThis paper gives a thorough experimental comparison of convolutional vs recurrent networks for a variety of sequence modelling tasks. The experimentation is thorough, but the main point of the paper, that convolutional networks are unjustly ignored for sequence modelling, is overstated, as there are several areas where convolutional networks are well explored.\n\nPros:\n- clear and well-written\n- thorough set of experiments\n\nCons:\n- original contribution is not strong\n- it is not as radical to consider convolutional networks for sequence modeling as the authors seem to suggest
train
[ "SkdHpQDez", "HkUwN_Ylf", "HkTNLM5gM", "rkR0McB7M", "SyYmNNQzf", "S1FZN4mzf", "SyxkEEQfM", "rkehXEQff", "HkDIQN7ff", "ByC7mNQGf", "B1lh3uXbf", "Bk2tcOm-f", "BJ7tE5OCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public" ]
[ "In this paper, the authors argue for the use of convolutional architectures as a general purpose tool for sequence modeling. They start by proposing a generic temporal convolution sequence model which leverages recent advances in the field, discuss the respective advantages of convolutional and recurrent networks, and benchmark their architecture on a number of different tasks.\n\nThe paper is clearly written and easy to follow, does a good job of presenting both the advantages and disadvantages of the proposed method, and convincingly makes the point that convolutional architectures should at least be considered for any sequence modeling task; they are indeed still often overlooked, in spite of some strong performances in language modeling and translation in recent works.\n\nThe only part which is slightly less convincing is the section about effective memory size. While it is true that learning longer term dependencies can be difficult in standard RNN architectures, it is interesting to notice that the SoTA results presented in appendix B.3 for language modeling on larger data sets are architectures which focus on remedying this difficulty (cache model and hierarchical LSTM). It would also be interesting to see how TCN works on word prediction tasks which are devised explicitly to test for longer memory, such as Lambada (1) or Children Books Test (2).\n\nAs a minor point, adding a measure of complexity in terms of number of operations could be a useful hardware-independent indication of the computational cost of the architecture.\n\nPros:\n- Clearly written, well executed paper\n- Makes a strong point for the use of convolutional architecture for sequences\n- Provides useful benchmarks for the community\n\nCons:\n- The claims on effective memory size need more context and justification\n\n1: The LAMBADA dataset: Word prediction requiring a broad discourse context, Paperno et al. 2016\n2: The Goldilocks principle: reading children's books with explicit memory representation, Hill et al. 2016", "The authors claim that convolutional networks should be considered as possible replacements of recurrent neural networks as the default choice for solving sequential modelling problems. The paper describes an architecture similar to wavenet with residual connections. Empirical results are presented on a large number of tasks where the convolutional network often outperforms modern recurrent baselines or reaches similar performance.\n\nThe biggest strength of the paper is the large number of tasks on which the models are evaluated. The experiments seem sound and the information in both the paper and the appendix seem to allow for replication. That said, I don’t think that all the tasks are very relevant for comparing convolutional and recurrent architectures. While the time windows that RNNs can deal with are infinite in principle, it is common knowledge that the effective length of the dependencies RNNs can model is quite limited in practise. Many of the artificial task like the adding problem and sequential MNIST have been designed to highlight this weakness of RNNs. I don’t find it very surprising that these tasks are easy to solve with a feedforward architecture with a large enough context window. The more impressive results are in my opinion those on the language modelling tasks where one would indeed expect RNNs to be more suitable for capturing dependencies that require stack-like memory functionality. 
\n\nWhile the related work is quite comprehensive, it downplays the popularity of convolutional architectures throughout history a bit. Especially in speech recognition, RNNs have only recently started to gain popularity while deep feedforward networks applied to overlapping time windows (i.e., 1D convolutions) have been the state-of-the-art for years. Of course the recent successes of dilated convolutions are likely to change the landscape in this application domain yet again.\n\nThe paper is well-structured and written. If anything, it is perhaps a little bit wordy at times but I prefer that over obscurity due to brevity.\n\nThe ideas in the paper are not novel and neither do the authors claim that they are. Unfortunately, I also think that the impact of the work is also somewhat limited due to the enormous success of the wavenet architecture. I do think that the results on the real-world tasks are valuable and worthy of publication. However, I feel that the authors exaggerate the extent to which researchers in this field still consider RNNs superior models for sequences. \n\n+ Many experiments and tasks.\n+ Well-written and clear.\n+ Good results\n- Somewhat exaggerated claims about the extent to which RNNs are still being considered more suitable sequence models\n than dilated convolutions. Especially in light of the success of Wavenet.\n- Not much novelty/originality.\n", "This paper argues that convolutional networks should be the default\napproach for sequence modeling.\n\nThe paper is nicely done and rather easy to understand. Nevertheless, I find\nit difficult to assess its significance. In order to support the original hypothesis,\nI think that a much larger and more diverse set of experiments should have\nbeen considered. As pointed out by another reviewer please add https://arxiv.org/abs/1703.04691\nto your references.", "We thank the reviewers and other discussants for their comments. In order to address points discussed in OpenReview reviews, comments, and our responses, we have updated our paper. The key changes are as follows:\n\n1. We’ve added content to the Related Work section. This content elaborates on the relationship to prior work (e.g., non-dilated gated ConvNets, convolutional models for sequence to sequence prediction, etc.), in accordance with our responses to OpenReview reviews and comments. As highlighted in the revision, the TCN model we focus on avoids much of the specialized machinery present in prior work and is evaluated on an extremely diverse set of tasks rather than a specific domain or application.\n\n2. We have added experiments on the LAMBADA dataset, as suggested by Reviewer 3, which in fact show very strong performance for the TCN models. LAMBADA is an especially challenging task where each data sample consists of a long context segment (4.6 sentences on average) and a target sentence, the last word of which needs to be predicted. In this setting, a human can perfectly predict the last word when given the context, but most of the existing models (e.g., LSTM, vanilla RNN) fail to do so. As shown in Table 1 of Section 4 in the revision, without much tuning (due to limited rebuttal time), TCN can achieve a perplexity of < 1300 on LAMBADA, substantially outperforming LSTMs (~4000 ppl) and vanilla RNNs (~15000 ppl), as listed in prior works. This is a strong result that suggests that TCNs are able to recall from a much larger context than recurrent networks, and thus may be more suitable for tasks where long dependencies are required.\n\n3. 
The appendix now includes a new section that compares the baseline TCN to a TCN that uses a gating mechanism. This mainly serves as a comparison point to the Dauphin et al. paper, which one reviewer pointed out was not sufficiently addressed in our original draft. Our experiments show that a gating mechanism can indeed be useful on certain language modeling tasks, but such benefits may not generalize well to other tasks (e.g., polyphonic music and other benchmark tasks). Thus, while we do absolutely agree with the relevance of the Dauphin et al. paper, and stress this more in the update, we also feel that much the same considerations apply here as to e.g., the WaveNet paper, where the focus of the previous work was really on a single domain, whereas our paper stresses the generality of convolutional sequence models.\n\n4. The revision includes the latest results on certain large experiments (e.g., Wikitext-103). Specifically, as mentioned in our responses, the TCN achieves a perplexity of 45.2 on this dataset (the only change from our original result was simple optimizing the model for longer), compared to an LSTM that achieves 48.4 perplexity.\n", "Thanks for your note.  We will certainly update the paper to include this arXiv report.  However, we also believe that the precise conclusions of this report are somewhat orthogonal as it applies an architecture virtually identical to WaveNet to one particular time series prediction task; thus, from an architectural standpoint, we think that the WaveNet paper is the more relevant prior work, which of course we do cite and discuss.  In contrast, the goal of our current work is to highlight a simpler architecture and empirically study it across a wide range of sequence modeling tasks.  But as mentioned, we're happy to include the reference and explain this connection.\n", "Thank you very much for the review, we agree with virtually all your points.   As per your suggestion, we are currently integrating experiments on the LAMBADA dataset into the paper, and will post a revision with these results shortly.\n", "Thank you very much for this review.  We agree on most points, except in the ultimate conclusions and assessment of the current \"default\" mindset of temporal modeling in RNNs.\n\nFirst, we agree that speech data in particular (or perhaps audio data more broadly), is indeed one instance where CNNs do appear to have a historical edge over recurrent models, and we can emphasize this in the background section.  Indeed, as you mention, the success of WaveNet has certainly made clear the power of CNNs in this application domain.\n\nThe question, then, is to what extent the community already feels that the success of WaveNet in the speech setting is sufficient to \"standardize\" the use of CNNs across all sequence prediction tasks.  And our genuine impression here is that these ideas have yet to permeate the mindset of the community for generic sequence prediction.  Numerous resources (e.g., Goodfellow et al.'s deep learning book, with its chapter \"Sequence Modeling: Recurrent and Recursive Nets\", plus virtually all current papers on recurrent networks), still highlight LSTMs and other similar architectures as the \"standard\" for sequence modeling.  The precise goal of our work is to highlight the fact that WaveNet-like architectures (though substantially simplified too, as we describe below) can indeed work well across the many other settings we consider.  
And we feel that this is an important point to make empirically, even if the results or conclusion may seem \"unsurprising\" to people who are very familiar with CNN architectures.\n\nThe second point, also, is that the architecture we consider is indeed simpler than WaveNet in many respects: e.g. no gated activation but just ReLUs (which, as we highlighted in our response to a previous reviewer, we will include more experimentation on in a forthcoming update), no context stacks, etc; and residual units and dilation structure that more directly mirror the corresponding \"standard\" architectures in convolutional image networks.  Thus, a practitioner wishing to apply WaveNet-style architectures to some new sequence prediction task may be unclear about which elements of the architecture are really necessary, and we attempt to distill this as much as possible in our current paper.\n\nOverall, therefore, we agree that the significance of our current work is largely making the empirical point that TCN architectures are not just for audio, but really for any sequence modeling problem.  But we do feel that this is an important point to make and thoroughly substantiate, even given the success of WaveNet.\n", "Thanks for your note, though we honestly found it a bit surprising.  The entire point of our paper _is_ to evaluate the improved TCN performance over a large and diverse set of experiments, and on this point it is by far the single _most diverse_ study of CNN vs. RNN performance that we are aware of.  And while many of the particular benchmarks are indeed \"small-sized\" in and of themselves, they are standard benchmarks for evaluating the performance of recurrent networks (see appendix A for some references to papers that used these benchmark tests); and we include experiments on domains such as Wikitext-103, which is certainly not a small dataset.\n\nRegarding arXiv:1703.04691, see our comments in the response to the discussant who originally brought this up.\n", "Thanks for the note.  We believe this note is addressing the same points as the note above (with a few additional follow-on points), so we refer to our comment above.", "Thanks very much for your note.  We absolutely agree with your general comments about the related work. We respond to two different points here, because in our mind there are two different categories in the papers you mention.\n\nFirst, the Kalchbrenner et al., and Gehring et al., papers both relate to convolutional sequence to sequence models. While we absolutely agree that this work is related to our topic, we made the explicit choice not to consider seq2seq models in this paper.  The rationale for us is that these models differ in substantial ways from \"pure\" temporal convolutional models.  Since the input to the model is the entire input sentence (captured by non-causal convolutions), and only the autoregressive output network needs to follow causal generation, the task itself is quite different from pure temporal sequence modeling, even if it may be an extension.  
Specifically, the two-stage encoder/decoder architecture (first to encode the entire input sentence, then to autoregressively generate the translation) of typical seq2seq models seems so fundamental to these approaches that we felt it was substantially more specialized than the generic temporal modeling problem.\n\nHowever, we also of course concede that the work is related, especially given the machine translation community's departure from pure recurrent networks to convolutional (or even pure attention-based) models.  Thus we will edit the paper to cite these works and address these points (we'll be posting a revised version within a week or so).\n\nSecond, there is the work of Dauphin et al., which more directly relates to a language modeling task.  And while we _do_ cite this work, we believe your point combined with the point in the comment below is more that we don't devote sufficient attention to this previous work.  We agree that the relationship is not clarified enough in the paper and are currently revising to fix this, but let us briefly mention here the connections and how we see this relationship.\n\nFirst, we should mention that while we did include the 48.9 PPL figure on one GPU, running the TCN model for more epochs (still on one GPU) actually achieves a PPL of 45.2, which isn't far off from Dauphin’s 44.9. (Note that we use a network approximately half the size of Dauphin et al.’s, and little tuning.) We'll naturally update the paper on this point.  Second, the main technical contribution of the paper of Dauphin et al. is the combination of (non-dilated) convolutional networks with a gating mechanism.  We experimented quite extensively with this gating mechanism combined with our generic TCN architecture, but didn’t see significant overall performance improvements due to the gating mechanism.  We can include these results in an appendix.  Indeed, a main characteristic of our work is simply the claim that the generic TCN architecture (which is quite simple in nature, as we highlight) is _sufficient_ to achieve most of the benefits proposed by more complex convolutional architectures, without the need for attention, gating mechanisms, and other architectural elaborations.  We believe that the comparison to the Dauphin et al. work actually supports this conclusion, and we will update the paper accordingly (we will post a follow-up note here once the paper has been updated).\n", "(1) It is already known that convolutional architectures perform well on sequence modeling tasks (for example, Oord et al 2016, Wavenet). This paper does not discuss many related works on convolutional sequence modeling for text that should be addressed specifically in the related work section, for example Kalchbrenner et al 2016, Dauphin et al 2017, and Gehring et al, 2017. \n\n(2) This paper tests on Wikitext-103 but it does not cite that https://arxiv.org/pdf/1612.08083.pdf already published better results on Wikitext-103 with a very similar convolutional model (44.9 PPL on 1 GPU/37.2 PPL on 4 GPU, compared to the reported 48.9 PPL here, a significant difference). \n", "This paper does not acknowledge most very prominent recent work on CNNs for text generation, e.g.,\n\nKalchbrenner et al. \"Neural Machine Translation in Linear Time\". 2016.\nGehring et al. \"Convolutional Sequence to Sequence Moedeling\". 2017.\n\nThose papers make precisely the same points and have much stronger empirical results. The authors cite Dauphin et al. 
(2016) at the very end of the paper but do not acknowledge that many of the points made are already covered by recent other work.", "This work very closely resembles the work that was first done by the authors in https://arxiv.org/abs/1703.04691. The network structure employed seems almost identical. Furthermore the conclusion that CNNs can be an efficient alternative to RNNs was also already reached in the above mentioned paper. Thus it would be advisable to cite this work in your paper. " ]
[ 8, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rk8wKk-R-", "iclr_2018_rk8wKk-R-", "iclr_2018_rk8wKk-R-", "iclr_2018_rk8wKk-R-", "BJ7tE5OCb", "SkdHpQDez", "HkUwN_Ylf", "HkTNLM5gM", "Bk2tcOm-f", "B1lh3uXbf", "iclr_2018_rk8wKk-R-", "iclr_2018_rk8wKk-R-", "iclr_2018_rk8wKk-R-" ]
iclr_2018_Hkfmn5n6W
Exponentially vanishing sub-optimal local minima in multilayer neural networks
Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with \Omega(N) units. Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N\rightarrow\infty datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d_0=\tilde{\Omega}(\sqrt{N}), and a more realistic number of d_1=\tilde{\Omega}(N/d_0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d_0 = 16 hidden neurons.
workshop-papers
The paper analyzes a neural network with a hidden layer of piecewise linear units, a single output, and a quadratic loss. The reviewers find the results incremental and not "surprising", and also complain about the comparison with previous work. I think the topic is very pertinent, and definitely more relevant than studying multi-layer linear networks. Hence, I recommend the paper be presented in the workshop track.
train
[ "BJG3EUIVz", "ryA1wwKgz", "HkgrJeEgM", "Skq3uQKxG", "SJtwo9DMf", "r1k7o5vGz", "HkRqc9DfM", "HyQgq9wGf" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "After our revision, the only remaining major concern of the reviewer is that\n\"the main message is mostly covered by existing works.\"\n\nHowever, it is not clear to us what existing work the reviewer is referring to. If the reviewer is saying that similar results were proved earlier, then we disagree. In any case, would like to know which results the reviewer is referring to (even so that we can revise the paper to address any such concerns).\n\nIf the reviewer is saying instead that our results on \"vanishing bad local minima\" are not surprising because many previous papers already conjectured this (or proved this under unrealistic conditions), then we believe that: \n(1) There is some value in advancing towards rigorous proofs of \"unsurprising\" important conjectures (e.g., P != NP). \n(2) This issue is not well understood as the reviewer suggests, since, under similar conditions, optimization with respect to the expected loss (instead of the empirical loss, which we used here) can converge to local minima: https://arxiv.org/abs/1712.08968. This suggests we still do not understand the complexity of this issue, even in the most basic settings.\n", "This is a theory paper. The authors consider networks with single hidden layer. They assume gaussian input and binary labels. Compared to some of the existing literature, they study a more realistic model that allows for mild overparametrization and approximately speaking d_0=d_1=sqrt(N). The main result is that volume of suboptimal local minima exponentially decreases in comparison to global minima.\n\nIn my opinion, paper has multiple drawbacks.\n1) Lack of surprise factor: There are already multiple papers essentially saying similar things. I am not sure if this contributes substantially on top of existing literature.\n2) Lack of algorithmic results: While the volume of suboptimal DLM being small is an interesting result, it doesn't provide substantial algorithmic insight. Recent literature contains results that states not only all locals are global but also gradient descent provably converges to the global with a good rate. See Soltanolkotabi et al.\n3) Mean squared error for classification problem (discrete labels) does not sound reasonable to me. I believe there are already some zero error results for continuous labels. Logistic loss would have made a more compelling story.\n\nMinor comments:\ni) Results are limited to single hidden layer whereas the title states multilayer. While single hidden layer is multilayer, stating single hidden layer upfront might be more informative for the reader.\nii) Theorem 10 and Theorem 6 essentially has the same bound on the right hand side but Theorem 10 additionally divides local volume by global which decreases by exp(-2Nlog N). So it appears to me that Thm 10 is missing an additional exp(2Nlog N) factor on the right hand side.\n\nRevision (response to authors): I appreciate the authors' response and clarification. I do agree that my comparison to Soltanolkotabi missed the fact that his result only applies to quadratic activations for global convergence (also many thanks to Jason for clarification). Additionally, this paper appeared earlier on arXiv. In this sense, this paper has novel technical contribution compared to prior literature. On the other hand, I still think the main message is mostly covered by existing works. I do agree that squared-loss can be used for classification but it makes the setup less realistic. 
Finally, while introduction discusses the \"last two layers\", I don't see a technical result proving that the results extends to the last two layers of a deeper network. At least one of the assumptions require Gaussian data and the input to the last two layers will not be Gaussian even if all previous layers are fixed. Consequently, the \"multilayer\" title is somewhat misleading.", "This paper studies the question: Why does SGD on deep network is often successful, despite the fact that the objective induces bad local minima?\nThe approach in this paper is to study a standard MNN with one hidden layer. They show that in an overparametrized regime, where the number of parameters is logarithmically larger than the number of parameters in the input, the ratio between the number of (bad) local minima to the number of global minima decays exponentially. They show this for a piecewise linear activation function, and input drawn from a standard Normal distribution. Their improvement over previous work is that the required overparameterization is fairly moderate, and that the network that they considered is similar to ones used in practice. \n\nThis result seems interesting, although it is clearly not sufficient to explain even the success on the setting studied in this paper, since the number of minima of a certain type does not correspond to the probability of the SGD ending in one: to estimate the latter, the size of each basin of attraction should be taken into account. The authors are aware of this point and mention it as a disadvantage. However, since this question in general is a difficult one, any progress might be considered interesting. Hopefully, in future work it would be possible to also bound the probability of starting in one of the basins of attraction of bad local minima.\n\nThe paper is well written and well presented, and the limitations of the approach, as well as its advantages over previous work, are clearly explained. As I am not an expert on the previous works in this field, my judgment relies mostly on this work and its representation of previous work. I did not verify the proofs in the appendix. \n", "## Summary\nThis paper aims to tackle the question: \"why does standard SGD based algorithms on neural network converge to 'good' solutions?\" \n\nPros: \nAuthors ask the question of convergence of optimization (ignoring generalization error): how \"likely\" is that an over-parameterized (d1d0 > N) single hidden layer binary classifier \"find\" a good (possibly over-fitted) local minimum. They make a set of assumptions (A1-A3) which are weaker (d1 > N^{1/2}) than the ones used earlier works. Previous works needed a wide hidden layer (d1 > N).\n\nAssumptions (d0=input dim, d1=hidden dim, N=n of datapoints, X=datapoints matrix):\nA1. Datapoints X come from a Gaussian distribution \nA2. N^{1/2} < d0 =< N\nA3. N polylog(N) < d0d1 (approximate n of. parameters) and d1 =< N\n\nThis paper proves that total \"angular volume\" of \"regions\" (defined with respect to the piecewise linear regions of neuron activations) with differentiable bad-local minima are exponentially small when compared with to the total \"angular volume\" of \"regions\" containing only differentiable global-minimal. The proof boils down to counting arguments and concentration inequality.\n\nCons: \nNon-differentiable stationary points are left as a challenging future work on this paper. Non-differentiability aside, authors show a possible way by which shallow neural networks might be over-fitting the data. 
But this is only half the story and does not completely answer the question. First, exponentially vanishing (in N) volume of the \"regions\" containing bad-local minima doesn't mean that the number of bad local minima are exponentially small when compared to number global minima. \nSecondly, as the authors aptly pointed out in the discussion section, this results doesn't mean neural networks will converge to good local minima because these bad local minimas can have a large basins of attraction.\nLastly, appropriate comparisons with the existing literature is lacking. It is hinted that this paper is more general as the assumptions are more realistic. However, it comes at a cost of losing sharpness in the theoretical results. It is not well motivated why one should study the angular volume of the global and local minima. \n\n## Questions and comments\n1. How critical is Gaussian-datapoints assumption (A1)? Which part of the proof fails to generalize? \n2. Can the proof be extended to scalar regression? It seems hard to generalize to vector output neural networks. What about deep neural networks? \n3. Can you relate the results to other more recent works like: https://arxiv.org/pdf/1707.04926.pdf.\n4. Piecewise linear and positively homogeneous (https://arxiv.org/pdf/1506.07540.pdf) activation seem to be important assumption of the paper. It should probably be mentioned explicitly.\n5. In the experiments section, it is mentioned that \"...inputs to the hidden neurons converge to a distinctly non-zero value. This indicates we converged to DLMs.\" How can you guarantee that it is a local minimum and not a saddle point?\n", "We thank the reviewer for his positive review. We hope our main response in the submission forum clarified some of the uncertainty regarding our novelty.", "## Reply to general comments \n\n[“Exponentially vanishing (in N) volume of the \"regions\" containing bad-local minima doesn't mean that the number of bad local minima are exponentially small when compared to number global minima.”, “It is not well motivated why one should study the angular volume of the global and local minima.”]\n\nA explained in section 3, the “number” of local minima is not a well-defined, in the over-parameterized regime. In this case local minima are not points, but linear manifolds (e.g., lines, hyperplanes) within each differentiable region, since there are certain directions in which we can change the weights and do not modify the loss. Instead, one can try to count the numbers of “local minima manifolds” of each type (which are equal the number of differentiable regions containing bad/good minima). However, this can be misleading since some minima occupy much larger regions than others. To take this into account we therefore chose to bound the total (angular) volume of the regions for each type (the regular volume is infinite). Incidentally, we also bound the number of regions (equal to the number of “local minima manifolds”) in the derivation of the total volume bounds, since we use a product of the two worst case bounds on (number of regions)*(single region volume). This is the reason we focused on the “angular volume” of local minima, as the strongest possible interpretation we could think of the “number” of local minima. We can clarify this further in the paper, if needed.\n\n[“Appropriate comparisons with the existing literature is lacking.” “It is hinted that this paper is more general as the assumptions are more realistic. 
However, it comes at a cost of losing sharpness in the theoretical results.” “As the authors aptly pointed out in the discussion section, this results doesn't mean neural networks will converge to good local minima because these bad local minimas can have a large basins of attraction.”]\n\nPlease see our main response in the submission forum. We believe that advancing theory on realistic models is more important then proving strong claims on highly non-realistic models. In other words, though other papers prove seemingly stronger results, this was always at the high price of being unrealistic and therefore very far from practical usage, as we review in the introduction and the our main response in the submission forum. \n\n## Reply to specific questions and comments \n\n1) The Gaussian assumption could be relaxed to other near-isotropic distributions (e.g., sparse-land model, (Elad, 2010, Section 9.2)), as scale constants do not affect any of the calculations. If the input is non-isotropic then it could harm several probabilistic proofs: First, the bound on P(WX>0) in Lemma 16 could be much worse, which can harm the proof of theorem 6 (upper bound on sub-optimal local minima). Second, the bound on probability for a certain angular margin (Lemma 22 and 23) could also become worse, which will harm the proof of Theorem 9 (lower bound on global minima).\n\n2) Extension to scalar regression mainly requires the extension of the Theorem 8 to this case. Which we believe is quite possible, yet outside the scope of this paper. Our results apply also to multilayer neural network with more then single hidden layer if only the last two layers are trained, and our assumptions hold with respect to those two layers, as we discuss in the introduction. Therefore, it suggests that reaching zero training error might be easy even in more complicated neural nets with over-parameterization in the two last layers (e.g., Alexnet). We believe that extending our results to deep networks where all the layers are optimized, and to multi-output case, is challenging, yet possible, and requires much more work, as we mention in the discussion.\n\n3) Please see our main response in the submission forum. \n\n4) Please see our response on the Haeffele & Vidal paper in main submission forum. We state explicitly both in the abstract and introduction that we focus on neural nets “with one hidden layer of piecewise linear units” (this is in the first line to our discussion of the results in both cases). Since essentially all piecewise linear units used in practice (e.g., ReLU) are “positively homogeneous”, we did not mention this explicitly. \n\n5) Good point. It is easy to show that saddle points in a single hidden layer network must have zero weights in the last layer, and we can verify numerically this is not the case. However, the main point in this paragraph was to show we do not converge to a non-differentiable critical point, so we simply changed the phrasing in the last sentence to “This indicates we did not converge to converged to non-differentiable critical points.\"", "## Reply to general comments \n\n[“Results of similar flavor already exists”, “There are already multiple papers essentially saying similar things”, “ I believe there are already some zero error results for continuous labels.”, “Recent literature contains results that states not only all locals are global but also gradient descent provably converges to the global with a good rate. 
See Soltanolkotabi et al.“]\n\nWe believe there has been a misunderstanding: no “vanishing bad local minima”/“zero error”/”convergence to global minimum” results have been proven without using highly unrealistic assumptions, as we clarify in our or main response (detailed in the submission forum). If we understood correctly, all major concerns of the reviewer stem from this misunderstanding. We hope we clarified this issue.\n\n## Reply to Minor comments: \n\n[ Theorem 10 and Theorem 6 essentially has the same bound on the right hand side but Theorem 10 additionally divides local volume by global which decreases by exp(-2Nlog N). So it appears to me that Thm 10 is missing an additional exp(2Nlog N) factor on the right hand side.]\n\nThis is not an error. There are two bounds on the global minima volume in Theorem 9. To prove theorem 10, we use the left bound (exp(-d_1^*d_0 logN) which is better than the right bound exp(-2Nlog N). Specifically, this left bound becomes negligible in comparison to the bound of Theorem 6 (from assumption 4) so it has no effect on the final bound of Theorem 10. \n\n[ Logistic loss would have made a more compelling story.]\n\nYes, logistic loss is indeed better for classification. However, note that (1) almost all previous theory paper use quadratic error, (2) yet, to the best of our knowledge, as we clarified in our main response, there are no zero error results for continuous labels with realistic assumptions. (3) It is possible to do binary classification also with quadratic loss. In this paper we aimed to find the simplest case where the property of vanishing “bad” local minima could be proved for the first time under reasonably realistic conditions. We believe it will not be very hard to extend our results to logistic loss, as we write in the discussion, but this analysis is outside the scope of this paper. \n\n[Results are limited to single hidden layer whereas the title states multilayer. While single hidden layer is multilayer, stating single hidden layer upfront might be more informative for the reader.]\n\nWe can make this modification if the reviewer insists, but as this information is already written in the abstract, we believe that changing “multilayer” to “single hidden layer” will make the title a bit too long (also, using “two-layer” instead is a bit vague, as some people call such a network “three-layer”). Furthermore, it will somewhat undersell this paper, as our results relate also to multilayer neural network with more then single hidden layer if only the last two layers are trained, and our assumptions hold with respect to those two layers, as we discuss in the introduction. This suggests that reaching zero training error might be easy even in more complicated neural nets with over-parameterization in the two last layers (e.g., Alexnet).", "We sincerely thank the reviewers for their feedback on our paper. We believe that the major concerns may have been a result of a misunderstanding of previous literature. Specifically, in several previous papers the results may appear stronger then they truly are, if one misses an unrealistic assumption buried in the mathematical details, as we explain below, for all the papers mentioned by the reviewers.\n\nFirst, the reviewers mention Soltanolkotabi et al. (https://arxiv.org/pdf/1707.04926.pdf., which we already cite in the paper, and originally appeared after us) as a previous paper that proved stronger results. 
However, these results require highly unrealistic assumptions regarding the initialization or activation functions. Specifically, as we mentioned in our paper, the main result in this paper (Theorem 2.5) unrealistically assumes that the weights are initialized very close to the target weights of the teacher generating the labels: see eq. 2.4 in this paper, and recall that k (# neurons)>=d (input dimension) << n (number of samples), so this distance is ~ (d/n)^(1.5)), which is typically very small. Other theorems in this paper assume quadratic activation functions, which is also unrealistic (e.g., such network can only approximate quadratic functions). We confirmed this in a personal communication with an author of Soltanolkotabi et al., who also agreed the assumptions in our paper are significantly more realistic then this paper, and also in comparison to other related results. Lastly, in case the reviewers had in mind another paper by Soltanolkotabi (https://arxiv.org/abs/1705.04591), it only examined the case of a *single* ReLU neuron. \n\nSecond, in the paper by Haeffele & Vidal (https://arxiv.org/pdf/1506.07540.pdf) the main result requires unrealistically wide neural layers: the condition r>card(D) in Theorem 17, when applied to a neural net with single hidden layer, implies that the number of neurons is larger then (input dimension)*(number of samples). For such extremely large layers, it is easy to get zero error by optimizing the last (linear) layer alone (where #variables > #samples), like we discuss in the introduction in the case of extremely wide layers (d_{L-1}>N). We now cite it together with the list of other works that also assumed extremely wide layers (this list was not meant to be exhaustive).\n\nWe hope our answers below will help clarify any such misunderstandings. To emphasize, no previous paper rigorously proved similar results without requiring highly unrealistic assumptions (either a heavily modified neural net model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with more units then data samples). This prevents these previous results from being used in practice, and indicates the inherent difficulty of such proofs. In contrast to previous works, the results in our paper are applicable in *some* situations (e.g., Gaussian data) where a neural net trained using SGD might be used and be useful (e.g., have a better performance then a linear classifier). Therefore, we feel that our results are a step in the direction of a global convergence proof, for a reasonably realistic models. We feel that trying to close the gap towards a “convergence to global minimum” proof for such models is a worthy goal, given that such a proof seems far from reach, despite many years of research and many papers on the subject. In the discussion we suggest how to extend our results towards this final goal.\n\nRevision summary: Added references by Haeffele & Vidal, and clarified a sentence in the experimental section, following a a comment by reviewer 3.\n\nAdditional comments are answered individually for each reviewer." ]
[ -1, 5, 7, 6, -1, -1, -1, -1 ]
[ -1, 3, 2, 3, -1, -1, -1, -1 ]
[ "ryA1wwKgz", "iclr_2018_Hkfmn5n6W", "iclr_2018_Hkfmn5n6W", "iclr_2018_Hkfmn5n6W", "HkgrJeEgM", "Skq3uQKxG", "ryA1wwKgz", "iclr_2018_Hkfmn5n6W" ]
iclr_2018_ByxLBMZCb
Learning Deep Models: Critical Points and Local Openness
With the increasing interest in deeper understanding of the loss surface of many non-convex deep models, this paper presents a unifying framework to study the local/global optima equivalence of the optimization problems arising from training of such non-convex models. Using the "local openness" property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of matrix multiplication mapping in its range. Then we use our characterization to: 1) show that every local optimum of two layer linear networks is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y, and input data matrix X. 2) develop almost complete characterization of the local/global optima equivalence of multi-layer linear neural networks. We provide various counterexamples to show the necessity of each of our assumptions. 3) show global/local optima equivalence of non-linear deep models having certain pyramidal structure. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions and can go beyond "full-rank" cases.
workshop-papers
The paper nicely unifies previous results and develops the property of local openness. While interesting, I find the application to multi-layer linear networks extremely limiting. There appears to be a sub-field in theory now focusing solely on multi-layer linear networks, which is meaningless in practice. I can appreciate that this could give rise to useful proof techniques and hence I am recommending it to the workshop track with the hope that it can foster more discussion and help researchers move away from studying multi-layer linear networks.
train
[ "BkL0g3a1f", "rkimHPzbz", "SJtc2C4bz", "SkSlUunQf", "HJImbOhmM", "HkNlXO37M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary: The paper focuses on the characterization of the landscape of deep neural networks; i.e., when and why local minima are global, what are the conditions for saddle critical points, etc. The paper covers a somewhat wide range of deep nets (from shallow with linear activation to deeper with non-linear activation); it focuces only on feed forward neural networks.\nAs the authors state, this paper provides a unifying perspective to the subject (it justifies the results of others through this unifying theory, but also provides new results; e.g., there are results that do not depend on assumptions on the target data matrix Y).\n\nOriginality: The paper provides similar results to previous work, while removing some of the assumptions made in previous work. In that sense, the originality of the results is weak, but definitely there is some novelty in the methodology used to get to these results. Thus, I would say original.\n\nImportance: The paper deals with the important problem of when and why training algorithms might get to global/local/saddle critical points. While there are no direct connections with generalization properties, characterizing the landscape of neural networks is an important topic to make further steps into better understanding of deep learning. It will attract some attention at the conference.\n\nClarity: The paper is well-written - some parts need improvement, but overall I'm satisfied with the current version.\n\nComments:\n1. If problem (4) is not considered at all in this paper (in its full generality that considers matrix completion and matrix sensing as special cases), then the authors could just start with the model in (5).\n\n2. Remark 1 has a nice example - could this example be shown with Y not being the all-zeros vector?\n\n3. In section 5, the authors make a connection with the work of Ge et al. 2016. They state that the problems in (10)-(11) constitute generalizations of the symmetric matrix completion case, considered in Ge et al. 2016. However, in that work, the main difficulty of proving global optimality comes from the randomness of the sampling mask operator (which introduces the notion of incoherence and requires results in expectation). It is not clear, and maybe it is an overstatement, that the results in section 5 generalize that work. If that is the case, could the authors describe this a bit further?", "Summary:\n\nThis paper studies the geometry of linear and neural networks and provides conditions under which the local minima of the loss are global minima for these non-convex problems. The paper studies locally open maps, which preserve the local minima geometry. Hence a local minima of l(F(W)) is a local minima of l(s) when s=F(W) is a locally open map. Theorem 3 provides conditions under which the multiplication X*Y is a locally open map. For a pyramidal feed forward net, if the weights in each layer have full rank, input X is full rank, and the link function is invertible, then that local minima is a global minima. \n\nComments:\n\nThe locally open maps (Behrends 2017) is an interesting concept. However I am not convinced that the paper is able to show stronger results about the geometry of linear/neural networks. Further the claims all over the paper, comparing with the existing works. are over the top and not justified. I believe the paper needs a significant rewriting.\n\nThe results are not a strict improvement over existing works. For neural networks, Nguyen and Hein (2017) assume the link function is differentiable. 
This paper assumes the link function is invertible. Both papers can handle sigmoid/tanh, but cannot handle ReLU.\n\nResults for linear networks are not an improvement over existing works. Paper claims to remove assumption on Y, but they get much weaker results as they cannot differentiate between saddle points and global minima, for a critical point. Results are also written in a confusing way as stating each critical point is a saddle or a global minima. Instead the presentation can be simplified by just discussing the equivalency between local minima and global minima, as the proposed framework cannot handle critical points directly.\n\nProof of Lemma 7 seems to have typos/mistakes. What is \\bar{W_i}? Why are the first two equations just showing d_i \\leq d_i ? How do you use this to conclude locally openness of \\mathcal{M}?\n\nAuthors claim their result extends the results for matrix completion from Ge et al. (2016) . This is false claim as (10) is not the matrix completion problem with missing entries, and the results in Ge et al. (2016) do not assume any non-degeneracy conditions on W.", "The paper studies the local optima of certain types of deep networks. It uses the notion of a locally open map to draw equivalences between local optima and global optima. The basic idea is that for fitting nonlinear models with a convex loss, if the mapping from the weights to the outputs is open, then every local optimum in weight space corresponds to a local optimum in output space; by convexity, in output space every local optimum is global. \n\nThis is mostly a “theory building” work. With an appropriate fix, lemma 4 gives a cleaner set of assumptions than previous work in the same space (Nguyen + Hein ’17), but yields essentially the same conclusions. \n\nThe notion of local openness seems very well adapted to deriving these type of results in a clean manner. The result in Section 3 on local openness of matrix multiplication on its range (which is substantially motivated by Behrends 2017) may be of independent interest. I did not check the proof of this result in detail, but it appears to be correct. For the linear, deep case, the paper corrects imprecisions in the previous work (Lu + Kawaguchi). \n\nFor deep nonlinear networks, the results require the “pyramidal” assumption that the dimensionality is nonincreasing with respect to layer and (more restrictively) the feature dimension in the first layer is larger than the number of input points. This seems to differ from typical practice, in the sense that it does not allow for wide intermediate layers. This seems to be a limitation of the methodology: unless I'm missing something, this situation cannot be addressed using locally open maps. \n\n\n\nThere are some imprecisions in the writing. For example, Lemma 4 is not correct as written — an invertible mapping \\sigma is not necessarily locally open. Take $\\sigma_k(t) = t for t rational and -t for t irrational$ as an example. This is easy to fix, but not correct as written. \n\nDespite mentioning matrix completion in the introduction and comparing to work of Ge et. al., the paper does not seem to have strong implications for matrix completion. It extends results of Ge and collaborators for the fully observed symmetric case to non-symmetric problems. But the main interest in matrix completion is in the undersampled case — in the full observed case, there is nothing to complete. \n\n\n", "Thank you for the detailed feedback and understanding our contributions. 
We significantly revised the manuscript considering the reviewer's concerns. In what follows we list the concerns raised by the reviewer and provide our detailed replies:\n\n-- Comment: The paper provide similar results to the previous work.\n-- Response: We significantly revised the presentation and clarified our contributions. We also used our framework to include additional results. In short, our contributions are summarized as follows:\n• Formally state the local openness property and its use in studying local/global equivalence of optimization problems arising from training non-convex deep models.\n• Provide a complete characterization of the local openness of the matrix multiplication mapping.\n• Show that every local optimum of a two layer linear network optimization problem is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y , and input data matrix X\n• Develop “almost complete” characterization of the local/global optima equivalence of multi- layer linear neural networks, and provide various counterexamples to show the necessity of each assumption.\n• Show global/local optima equivalence of non-linear deep models having certain pyramidal struc- ture. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions and can go beyond “full-rank cases. In this case, we do agree with the reviewer that we do not allow wide intermediate layers. We explicitly mentioned this in our revised manuscript.\n\n-- Comment: If problem (4) is not considered at all in this paper (in its full generality that considers matrix completion and matrix sensing as special cases), then the authors could just start with the model in (5).\n-- Response: We revised accordingly. \n\n-- Comment: Remark 1 has a nice example - could this example be shown with Y not being the all-zeros vector?\n-- Response: For the given dimensions (m=2, k=1, n=2), it is not possible. The reason is that if both vectors are non-zero, then they are both full rank. Hence, according to our main result on local openness of the matrix product, our mapping is locally open. However, one can easily con- struct other non-zero examples for larger dimensions using our main result as our theorem provides a complete characterization.\n\n--Comment: In section 5, the authors make a connection with the work of Ge et al. 2016. They state that the problems in (10)-(11) constitute generalizations of the symmetric matrix completion case, considered in Ge et al. 2016. However, in that work, the main difficulty of proving global optimality comes from the randomness of the sampling mask operator (which introduces the notion of incoherence and requires results in expectation). It is not clear, and maybe it is an overstatement, that the results in section 5 generalize that work. If that is the case, could the authors describe this a bit further?\n-- Response: Indeed, we only consider the fully-observed matrix completion problem. The matrix com- pletion part has been de-emphasized in the revised manuscript.", "We would like to thank the reviewer for the careful reading of the manuscript. We significantly revised our submission considering the reviewer's comments. In our revision, we relaxed almost any assumption possible. 
For example, we relaxed the full rankness of X and Y in the two-layer linear neural networks, and provide an “almost complete” characterization of the local/global optima equivalence of multi-layer linear neural networks. We also included multiple counterexamples to show the necessity of the remaining set of assumptions. To clarify the contributions of the paper, we re-wrote the abstract. In short, our contributions are summarized as follows:\n\n• Formally state the local openness property and its use in studying local/global equivalence of optimization problems arising from training non-convex deep models.\n• Provide a complete characterization of the local openness of the matrix multiplication mapping.\n• Show that every local optimum of a two layer linear network optimization problem is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y , and input data matrix X\n• Develop \"almost complete\" characterization of the local/global optima equivalence of multi- layer linear neural networks, and provide various counterexamples to show the necessity of each assumption.\n• Show global/local optima equivalence of non-linear deep models having certain pyramidal struc- ture. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions. In this case, we do agree with the reviewer that we do not allow wide intermediate layers. We explicitly mentioned this in our revised manuscript.\n\nIn what follows we list the concerns raised by the reviewer and provide our detailed replies:\n\n--Comment: There are some imprecisions in the writing. For example, Lemma 4 is not correct as written an invertible mapping σ is not necessarily locally open. Take σk(t) = t for t rational and −t for t irrational as an example. This is easy to fix, but not correct as written.\n--Response: Correct. We fixed it in our revision.\n\n-- Comment: Despite mentioning matrix completion in the introduction and comparing to work of Ge et. al., the paper does not seem to have strong implications for matrix completion. It extends results of Ge and collaborators for the fully observed symmetric case to non-symmetric problems. But the main interest in matrix completion is in the under-sampled case in the full observed case, there is nothing to complete.\n-- Response: The matrix completion part has been de-emphasized in the revised manuscript.", "We significantly revised the manuscript considering your comments. In what follows we list the concerns raised by the reviewer and provide our detailed responses:\n\n-- Comment: Paper need significant revisions in terms of comparison with existing results.\n-- Response: We believe the comment was addressed in the revised manuscript. However, we appreciate any new feedback.\n\n-- Comment: Nguyen and Hein (2017) assume the link function is differentiable. This paper assumes the link function is invertible. Both papers can handle sigmoid/tanh, but cannot handle ReLU.\n-- Response: Notice that in the paper by Nguyen and Hein, they also assume strict monotonicity activation (which implies invertibility). Also note that, while our result cannot handle ReLU functions, leaky ReLU activation functions satisfy our assumptions. This has been clarified in the revised manuscript.\n\n-- Comment: Results for linear networks are not an improvement over existing works.\n-- Response: We significantly revised the manuscript to clarify our contributions for linear networks. 
In short, our contributions for linear networks are the followings:\n• Show that every local optimum of a two layer linear networks is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y , and input data matrix X.\n• Develop \"almost complete characterization\" of the local/global optima equivalence of multi-layer linear neural networks, and provide various counterexamples to show the necessity of each assumption.\n\n-- Comment: Proof of Lemma 7 is not clear.\n-- Response: We agree with the reviewer that some parts in the original proof was not clear. Enjoy our revised detailed proof.\n\n-- Comment: The problem considered in the manuscript is the fully observed matrix completion prob- lem and thus the results in the manuscript do not extend the results for matrix completion from Ge et al. (2016). Moreover, the results in Ge et al. (2016) do not assume any non-degeneracy conditions on W.\n-- Response: The matrix completion part has been de-emphasized, and the non-degeneracy condition was relaxed." ]
[ 6, 5, 6, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_ByxLBMZCb", "iclr_2018_ByxLBMZCb", "iclr_2018_ByxLBMZCb", "BkL0g3a1f", "SJtc2C4bz", "rkimHPzbz" ]
iclr_2018_rJGY8GbR-
Deep Mean Field Theory: Layerwise Variance and Width Variation as Methods to Control Gradient Explosion
A recent line of work has studied the statistical properties of neural networks to great success from a {\it mean field theory} perspective, making and verifying very precise predictions of neural network behavior and test time performance. In this paper, we build upon these works to explore two methods for taming the behaviors of random residual networks (with only fully connected layers and no batchnorm). The first method is {\it width variation (WV)}, i.e. varying the widths of layers as a function of depth. We show that width decay reduces gradient explosion without affecting the mean forward dynamics of the random network. The second method is {\it variance variation (VV)}, i.e. changing the initialization variances of weights and biases over depth. We show VV, used appropriately, can reduce gradient explosion of tanh and ReLU resnets from exp⁡(Θ(L)) and exp⁡(Θ(L)) respectively to constant Θ(1). A complete phase-diagram is derived for how variance decay affects different dynamics, such as those of gradient and activation norms. In particular, we show the existence of many phase transitions where these dynamics switch between exponential, polynomial, logarithmic, and even constant behaviors. Using the obtained mean field theory, we are able to track surprisingly well how VV at initialization time affects training and test time performance on MNIST after a set number of epochs: the level sets of test/train set accuracies coincide with the level sets of the expectations of certain gradient norms or of metric expressivity (as defined in \cite{yang_meanfield_2017}), a measure of expansion in a random neural network. Based on insights from past works in deep mean field theory and information geometry, we also provide a new perspective on the gradient explosion/vanishing problems: they lead to ill-conditioning of the Fisher information matrix, causing optimization troubles.
workshop-papers
All the reviewers agree that this is an interesting paper but have concerns about readability and presentation. There is also concern that many results are speculative and not concretely tested. I recommend that the authors carefully investigate their claims with stronger experiments and submit the paper to another venue. I recommend presenting at the ICLR workshop to obtain further feedback.
train
[ "rkDLp95lG", "SJTc3MAgf", "Bk8iCb0Wz", "B17PSW0Qz", "S1iqVbRmG", "BkbKHuamf", "HJtXHu6mM", "H1sObu67z", "S1pyeO67G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper further develops the research program using mean field theory to predict generalization performance of deep neural networks. As with all recent mean-field papers, the main query here is to what extent the assumptions (Axioms 1+2, which basically define the asymptotic parameters of interest to be the quantities defined in Sec. 2.; and also the fully connected residual structure of the network) apply in practice. This is answered using the same empirical standard as in [Yang and Schoenholz, Schoenholz et al.], i.e. showing that the dynamics of initialization predict generalization behavior on MNIST according to theory.\n\nAs with the earlier papers in this recent program, the paper is notation-heavy but generally written well, though there is some overreliance on the readers' knowledge of previous work, for instance in presenting the evidence as above. Try as I might, I cannot find a detailed explanation of the color scale for the important Fig. 4. A small notation issue: the current Hebrew letter for the gradient quantity does not go with the other Greek letters and is typographically poor choice because of underlining, etc.). Also, several of the citations should be fixed to reflect peer-reviewed publication of Arxiv papers. I was not able to review all the proofs, but what I checked was sound. Finally, the techniques of WV and VV would be more applicable if it were not for the very tenuous relationship between gradient explosion and performance, which should be mentioned more than the one time it appears in the paper.", "The authors study mean field theory for deep neural nets. \n\nTo the best of my knowledge we do not have a good understanding of mean field theory for neural networks and this paper and some references therein are starting to address some of it. \n\nHowever, my concern about the paper is in readability. I am very familiar with the literature on mean field theory but less so on deep nets. I found it difficult to follow many parts because the authors assume that the reader will have the knowledge of all the terminology in the paper, which there is a lot of. ", "Mean field theory is an approach to analysing complex systems where correlations between highly dependent random variables are ignored, thus making the problem analytically tractable. It is hoped that analytical insights gained in this idealised setting might translate back to the original (and far messier) problem. The authors use a mean field theory approach to study how varying certain network hyperparameters with depth can effect gradient and activation statistics. A correlation between the behaviour of these statistics and training performance on MNIST is noted.\n\nAs someone asked to conduct an 'emergency' review of this paper, I would have greatly appreciated the authors making more of an effort to present their results clearly. Some general comments in this regard:\n\nClarity issues:\n- the authors appear to have ignored the ICLR style guidelines\n- the references are all written in green, making them difficult to read\n- figures are either missing color maps or make poor choice of colors\n- the figure captions are difficult to understand in isolation from the main text\n- the authors themselves appear to muddle their 'zigs' and 'zags' (first line of discussion)\n\nNow to get to the actual content of the paper. The authors do not properly place their work in context. Mean field theory has been studied in the context of neural networks at least since the 80's. 
Entire books have been written on the statistical mechanics of neural networks. It seems wrong that the authors only cite papers on this matter going back to 2016.\n\nWith that said, the main thrust of the paper is very interesting. The authors derive recurrence relations for mean activations and gradients. They show how scaling layer width and initialisation variance with depth can better control the propagation of these means. The results of their calculations appear to match their random network simulations, and this part of the work seems strong.\n\nWhat is not clear is what effect we should expect these quantities to have on learning? The authors claim there is a tradeoff between expressivity and exploding gradients. This seems quite speculative since it is not clear to me what effect either of these things will have on training. For one, how expressive does a model need to be to correctly classify MNIST? And are exploding gradients necessarily a bad thing? Provided they do not reach infinity, can we not just choose a smaller learning rate?\n\nI'm open to reevaluating the review if the issues of clarity and missing literature review are fixed.", "\n> And are exploding gradients necessarily a bad thing? \n\nThis is a great question. “Conventional wisdom” (starting from Bengio et al. (1994)) posits that they are always bad for training a deep net, and Pascanu et al. hypothesized that the reason is the ill-conditioning of the Hessian.\n\n\nIn the updated version of our paper, we show this hypothesis is true if we replace “Hessian” with “Fisher information matrix” (which is the Hessian for KL divergence). See our new section 2 for details. Thus we do expect concrete optimization obstacles when there is gradient explosion/vanishing.\n\n\nIn the context of random networks, this is supported experimentally by recent works by Schoenholz et al. (2017) and Yang and Schoenholz (2017), where optimal initializations are those that avoid gradient explosion (without losing too much expressivity). This is also supported by our new experiments on applying VV to tanh resnets, where imposing stronger variance decay improves performance (until the point where metric expressivity drops too much).\n\n\nBut our ReLU experiments also show that mysteriously, in the zag regime of VV for ReLU resnets, larger weight gradients correlate with better performance, and we do not know how to explain it any other way. Thus your question reflects exactly one point raised by our work: are there in fact scenarios where greater gradient explosion can actually cause better performance? We hope to answer this in the future.\n\n\n> Provided they do not reach infinity, can we not just choose a smaller learning rate?\n\n\nIn fact a “smaller learning rate” was essentially what Pascanu et al. proposed --- gradient clipping --- and remains one of the most popular ways to deal with gradient explosion when they occur. However, as discussed in our new section 2, gradient explosion causes optimization difficulties in the way of ill-conditioned Fisher information. In the case when we are actually minimizing the KL divergence so that Fisher information is in fact its Hessian, this ill-conditioning presents an obstruction to first order optimization methods, regardless of learning rate. Please see our text for details. 
We want to stress that gradient explosion is not simply a matter of gradient magnitude too big, but rather an issue where the first few layers of a deep network gets \"more error signals\" in the form of gradients than the last few. Multiplying every gradient term by the same learning rate does not change this circumstance. This \"information propagation\" perspective is in fact the theme of Schoenholz et al. (2017).\n\nWe do agree however that more research is needed to decipher the cross effect of learning rate and initialization. Work is currently underway.", "> The authors claim there is a tradeoff between expressivity and exploding gradients. This seems quite speculative since it is not clear to me what effect either of these things will have on training. For one, how expressive does a model need to be to correctly classify MNIST?\n\nWe want to first make the following clarification: We are only claiming there is an effect on relative performance, i.e. we can say that one initialization achieves weakly better results (in particular, weakly better learning curves) than another initialization. We are NOT saying that that by initializing a certain way, you can solve MNIST or imagenet. We admit that we have not been sufficiently clear in the paper, and have stressed this point from the get-go in the updated version.\n\nGradient explosion/vanishing is one of the most famous obstacles to training deep neural networks; see Bengio et al. (1994) and Pascanu et al. (2013), for example. The former noted that much of the difficulty of training RNNs arise from such gradient problems. In fact, in that paper already, the notion of expressivity vs trainability has arised: it is easy for an RNN to suffer from gradient explosion/vanishing problems when it tries to learn long time dependencies (striving to be expressive).\n\nThe form of the claim specific to our case originates in Yang and Schoenholz (2017). There the authors made the observation that the optimal initialization scheme for tanh resnets makes an optimal tradeoff between expressivity and trainability: if the initialization variances are too big, then the random network will suffer from gradient explosion with high probability; if they are too small, then the random network will be approximately constant (i.e. has low metric expressivity) with high probability. Metric expressivity of a random network is the expectation of ||f(x) - f(x’)||^2, where f is the random net and x and x’ are two different input vectors. It measures how much the network expands the input space, on average. Intuitively, a larger metric expressivity means that it is easier to tell apart two vectors from their neural network embeddings via a linear separator.\nThis claim is strongly corroborated by their experiments with tanh and ReLU resnets.\n\nIn our paper, we see this tradeoff determining the outcome of experiments in all but one case (ReLU resnet in the zag phase). 
We discuss this tradeoff at length in our revised paper, but we provide a summary below in case the reviewer does not have time to look at it.\n\nWe confirm this behavior in tanh resnets when decaying their initialization variances with depth: When there is no decay, gradient explosion bottlenecks the test set accuracy after training; when we impose strong decay, gradient dynamics is mollified but then metric expressivity (essentially the average distance between the images of two different input vectors), being strongly constrained, caps the performance.\nIndeed, we can predict test set accuracy by level curves of the magnitude of gradient explosion in the region of small variance decay, while we can do the same with level curves of metric expressivity when in the region of large decay. The performance peaks at the intersection of these two regions. Please see our experimental section in VV for more details.\n\nWith ReLU resnets, there are two phases of behavior when we apply VV. In one (the zig phase), we start applying variance decay to some parameters (w and b). We see what is very similar to Yang and Schoenholz's observation, that decaying the variance prevents training failure from numerical overflow, but decaying it further reduces test time accuracy by reducing metric expressivity. This is consistent with the tradeoff: Our ReLU resnets in this zig phase have fairly tame gradient explosion (polynomial with low degree) while the metric expressivity is growing superpolynomially with depth, so the latter naturally dominates the effect on performance. \n\nIn the other (zag) phase, which continues from the zig phase, we start decaying variances of other parameters. Here we observe a seeming counterexample to this tradeoff: weight gradient explosion worsens and expressivity decreases but the test set accuracy increases! In this phase, both metric expressivity and gradient explosion have polynomial dynamics with low degrees. So plausibly, a new factor begins to dominate the effect on performance that we do not know about yet.\n", "We have revamped the presentation of the paper, improving its presentation and addressing your concerns in readability. We hope you can give it another read.", "We appreciate you answering the emergency call to review our paper.\nOur responses are as follows.\n\n> Clarity issues:\n> - the authors appear to have ignored the ICLR style guidelines\nIn the new version, we have done the following:\nAbstract merged into 1 paragraph.\nChanged table title to be lower case except first word and pronoun.\nWe have put parentheses around tail citations.\nPlease let us know if you found more violations of the style guideline.\n\n> - the references are all written in green, making them difficult to read\nWe thought that they actually improve readability, but based on your suggestion we have turned off colored links.\n\n> - figures are either missing color maps or make poor choice of colors\nThank you for pointing this out. We have added color bars and improved color choices, especially in the heatmaps and their contour overlays.\n\n> - the figure captions are difficult to understand in isolation from the main text\nIn response to your feedback, we have made figure captions much more self-contained.\n\n> - the authors themselves appear to muddle their 'zigs' and 'zags' (first line of discussion)\nThanks for pointing out this error. It has been fixed.\n\n> Now to get to the actual content of the paper. The authors do not properly place their work in context. 
Mean field theory has been studied in the context of neural networks at least since the 80's. Entire books have been written on the statistical mechanics of neural networks. It seems wrong that the authors only cite papers on this matter going back to 2016.\n\nWe apologize for this omission. In the new version, a significant chunk of the introduction is used for surveying previous works on mean field theory of neural networks.\n\n\n", "We respond to your comments as follows.\n\n> As with the earlier papers in this recent program, the paper is notation-heavy but generally written well, though there is some overreliance on the readers' knowledge of previous work, for instance in presenting the evidence as above. \n\nThank you for your kind review. We agree that this overreliance has lead to poor presentation of our results. We have significantly rewritten our main text, devoting much space to summarizing the previous work and context, while toning down the heaviness of notation and technicality in favor of more intuitive discussion. See the changelog for a full list of changesl\n\n> Try as I might, I cannot find a detailed explanation of the color scale for the important Fig. 4. \nThank you for pointing this out. We have added color bars to our heatmaps.\n\n> A small notation issue: the current Hebrew letter for the gradient quantity does not go with the other Greek letters and is typographically poor choice because of underlining, etc.). \nWe have changed the Hebrew daleth to the Greek letter Chi, and bolded all mean field quantities to make them more readable. We have also compiled a symbol glossary to ameliorate the notation heaviness of our paper.\n\n> Also, several of the citations should be fixed to reflect peer-reviewed publication of Arxiv papers.\nThank you for pointing out the error. We have updated the citations accordingly.\n\n> I was not able to review all the proofs, but what I checked was sound. \n\n> Finally, the techniques of WV and VV would be more applicable if it were not for the very tenuous relationship between gradient explosion and performance, which should be mentioned more than the one time it appears in the paper.\n\nIt is true that, as Yang and Schoenholz observed in their NIPS 2017 paper, ReLU resnets are not bottlenecked by trainability but rather by (metric) expressivity. This is what we find in the zig phase of ReLU resnet VV, where metric expressivity predicts performance. However, VV does indeed decrease the activation explosion of ReLU resnets to prevent forward computation from overflowing.\n\nIn the updated version of our paper, we have included our experiments on applying VV to tanh resnets, and there variance decay does improve performance by reducing gradient explosion. This is apparent in our figure 3 (in the new version), which shows that the optimal variance decay is larger for larger depth L. Again, this is expected based on Yang and Schoenholz's observation that tanh resnets are bottlenecked by trainability when variances are too large.\n\nLet us know if you are satisfied with our responses.", "We have updated our paper as follows:\n1.\tWe added a new section that elucidates the gradient explosion/vanishing problem from an information geometry perspective. We reason that this problem manifests in the exponential ill-conditioning of the Fisher information matrix, so that (stochastic) gradient descent approximates the natural gradient poorly.\n2.\tWe added experiments on applying VV to tanh resnets. 
We find that variance decay improves performance of tanh resnets. In particular, the optimal decay cannot be too small nor too large, but rather must balance trainability and expressivity.\n3.\tWe added a background section summarizing the recent line of work that we are building on and discuss how our work relates to them.\n4.\tWe added a section overviewing our techniques and main results in intuitive terms. In particular, we devote a significant chunk to discussing the trainability vs expressivity tradeoff.\n5.\tWe devoted significant space in the introduction to discuss prior works in mean field theory and recent trends.\n6.\tWe swapped out the Hebrew letters for better alternatives; for example, Hebrew daleth is now chi. We also bolded all mean field quantities to improve readability.\n7.\tWe added a notation glossary to improve readability.\n8.\tWe improved colors and presentations of the plots, especially the heatmaps and overlaid contours. We also added color bars.\n9.\tWe moved the detailed discussion on the VV dynamics in the original manuscript to the appendix, and only sketch the key points in enough detail in the main text for the experiments to make sense to the reader.\n10.\tWe moved discussion of mean field assumption to the appendix, as they might be confusing to the first time reader.\n11.\tSimilarly we moved definition of the integral operators V and W, along with the table of dynamical equations we derive in this paper, to the appendix, to decrease notation baggage. Most of the main text can be understood without examining these details.\n12.\tWe rewrote figure captions to be self-contained.\n13.\tWe fixed various ICLR style guideline issues.\n14.\tWe turned off colored links.\n15.\tWe fixed various typos and grammatical mistakes.\n\n" ]
[ 7, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ 3, 1, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJGY8GbR-", "iclr_2018_rJGY8GbR-", "iclr_2018_rJGY8GbR-", "S1iqVbRmG", "HJtXHu6mM", "SJTc3MAgf", "Bk8iCb0Wz", "rkDLp95lG", "iclr_2018_rJGY8GbR-" ]
iclr_2018_BygpQlbA-
Towards Provable Control for Unknown Linear Dynamical Systems
We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state. Using a recent spectral filtering technique for concisely representing such systems in a linear basis, we formulate optimal control in this setting as a convex program. This approach eliminates the need to solve the non-convex problem of explicit identification of the system and its latent state, and allows for provable optimality guarantees for the control signal. We give the first efficient algorithm for finding the optimal control signal with an arbitrary time horizon T, with sample complexity (number of training rollouts) polynomial only in log(T) and other relevant parameters.
workshop-papers
This paper studies the control of symmetric linear dynamical systems with unknown dynamics. While the reviewers agree that this is an interesting topic, there are concerns that the assumptions are not realistic. The lack of experiments also stands out. I recommend the paper for the workshop track in the hope that it will foster more discussion and lead to more realistic assumptions.
train
[ "HynVA_vxG", "SydMCJ9gz", "ryr6tuv-G", "ByvMH7zMz", "HJGD4Qffz", "BJE1VXfGz", "H1hFmXMGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a new algorithm to generate the optimal control inputs for unknown linear dynamical systems (LDS) with known system dimensions.\n\nThe idea is exciting LDS by wave filter inputs and record the output and directly estimate the operator that maps the input to the output instead of estimating the hidden states. After obtaining this operator, this paper substitutes this operator to the optimal control problem and solve the optimal control problem to estimate the optimal control input, and show that the gap between the true optimal cost and the cost from applying estimated optimal control input is small with high probability.\nI think estimating the operator from the input to the output is interesting, instead of constructing (A, B, C, D) matrices, but this idea and all the techniques are from Hazan et. el., 2017. After estimating this operator, it is straightforward to use this to generate the estimated optimal control input. So I think the idea is OK, but not a breakthrough.\n\nAlso I found the symmetric matrix assumption on A is quite limited. This limitation is from Hazan et. el., 2017, where the authors wants to predict the output. For prediction purposes, this restriction might be OK, but for control purposes, many interesting plants does not satisfy this assumption, even simple RL circuit. I agree with authors that this is an attempt to combine system identification with generating control inputs together, but I am not sure how to remove the restriction on A.\nDean et. el., 2017 also pursued this direction by combining system identification with robust controller synthesis to handle estimation errors in the system matrices (A, B) in the state-feedback case (LQR), and I can see that Dean et. el. could be extended to handle observer-feedback case (LQG) without any restriction.\n\nDespite of this limitation I think the paper's idea is OK and the result is worth to be published but not in the current form. The paper is not clearly written and there are several areas need to be improved.\n\n1. System identification.\nSubspace identification (N4SID) won't take exponential time. I recommend the authors to perform either proper literature review or cite one or two papers on the time complexity and their weakness. Also note that subspace identification can estimate (A, B, C, D) matrices which is great for control purposes especially for the infinite horizon LQR.\n\n2. Clarification on the unit ball constraints.\nOptimal control inputs are restricted to be inside the unit ball and overall norm is bounded by L. Where is this restriction coming from? The standard LQG setup does not have this restriction.\n\n3. Clarification on the assumption (3).\nWhere is this assumption coming from? I can see that this makes the analysis go through but is this a reasonable assumption? Does most of system satisfy this constraint? Is there any? It's ok not to provide the answer if it's hard to analyze, but if that's the case the paper should provide some numerical case studies to show this bound either holds or the gap is negligible in the toy example.\n\n4. Proof of theorem 3.3.\nTheorem 3.3 is one of the key results in this paper, yet its proof is just \"noted\". The setup is slightly different from the original theorem in Hazan et. el., 2017 including the noise model, so I strongly recommend to include the original theorem in the appendix, and include the full proof in the appendix.\n\n5. Proof of lemma 3.1.\nI found it's hard to keep track of which one is inside the expectation. 
I recommend to follow the notation E[variable] the authors been using throughout the paper in the proof instead of dropping these brackets.\n \n6. Minor typos\nIn theorem 2.4, ||Q||_op is used for defining rho, but in the text ||Q||_F is used. I think ||Q||_op is right.", "This paper studies the control of symmetric linear dynamical systems with unknown dynamics. Typically this problem is split into a (non-convex) system ID step followed by a derivation of an optimal controller, but there are few guarantees about this combined process. This manuscript formulates a convex program of optimal control without the separate system ID step, resulting in provably optimality guarantees and efficient algorithms (in terms of the sample complexity). The paper is generally pretty well written.\n\nThis paper leans heavily on Hazan 2017 paper (https://arxiv.org/pdf/1711.00946.pdf). Where the Hazan paper concerns itself with the system id portion of the control problem, this paper seems to be the controls extension of that same approach. From what I can tell, Hazan's paper introduces the idea of wave filtering (convolution of the input with eigenvectors of the Hankel matrix); the filtered output is then passed through another matrix that is being learned online (M). That matrix is then mapped back to system id (A,B,C,D). The most novel contribution of this ICLR paper seems to be equation (4), where the authors set up an optimization problem to solve for optimal inputs; much of that optimization set-up relies on Hazan's work, though. However, the authors do prove their work, which increases the novelty. The novelty would be improved with clearer differentiation from the Hazan 2017 paper.\n\nMy biggest concerns that dampen my enthusiasm are some assumptions that may not be realistic in most controls settings:\n\n- First, the most concerning assumption is that of a symmetric LDS matrix A (and Lyapunov stability). As far as I know, symmetric LDS models are not common in the controls community. From a couple of quick searches it seems like there are a few physics / chemistry applications where a symmetric A makes sense, but the authors don't do a good enough job setting up the context here to make the results compelling. Without that context it's hard to tell how broadly useful these results are. In Hazan's paper they mention that the system id portion, at least, seems to work with non-symmetric, and even non-linear dynamical systems (bottom of page 3, Hazan 2017). Is there any way to extend the current results to non-symmetric systems?\n\n- Second, it appears that the proposed methods may rely on running the dynamical system several times before attempting to control it. Am I misunderstanding something? If so this seems like it may be a significant constraint that would shrink the application space and impact even further.\n", "The paper presents a provable algorithm for controlling an unknown linear dynamical system (LDS). Given the recent interest in (deep) reinforcement learning (combined with the lack of theoretical guarantees in this space), this is a very timely problem to study. The authors provide a rigorous end-to-end analysis for the LDS setting, which is a mathematically clean yet highly non-trivial setup that has a long history in the controls field.\n\nThe proposed approach leverages recent work that gives a novel parametrization of control problems in the LDS setting. 
After estimating the values of this parametrization, the authors formulate the problem of finding optimal control inputs as a large convex problem. The time and sample complexities of this approach are polynomial in all relevant parameters. The authors also highlight that their sample complexity depends only logarithmically on the time horizon T. The paper focuses on the theoretical results and does not present experiments (the polynomials are also not elaborated further).\n\nOverall, I think it is important to study control problems from a statistical perspective, and the LDS setting is a very natural target. Moreover, I find the proposed algorithmic approach interesting. However, I am not sure if the paper is a good fit for ICLR since it is purely theoretical in nature and has no experiments. I also have the following questions regarding the theoretical contributions:\n\n(A) The authors emphasize the logarithmic dependence on T. However, the bounds also depend polynomially on L, and as far as I can tell, L can be polynomial in T for certain systems if we want to achieve a good overall cost. It would be helpful if the authors could comment on the dependence between T and L.\n\n(B) Why does the bound in Theorem 2.4 become worse when there are some directions that do not contribute to the cost (the lambda dependence)?\n\n(C) Do the authors expect that it will be straightforward to remove the assumption that A is symmetric, or is this an inherent limitation of the approach?\n\nMoreover, I have the following comments:\n\n(1) Theorem 3.3 is currently not self-contained. It would enhance readability of the paper if the results were more self-contained. (It is obviously good to cite results from prior work, but then it would be more clear if the results are invoked as is without modifications.)\n\n(2) In Theorem 1.1, the notation is slightly unclear because B^T is only defined later.\n\n(3) In Section 1.2 (Tracking a known system): \"given\" instead of \"give\"\n\n(4) In Section 1.2 (Optimal control): \"symmetric\" instead of \"symmetrics\"\n\n(5) In Section 1.2 (Optimal control): the paper says \"rather than solving a recursive system of equations, we provide a formulation of control as a one-shot convex program\". Is this meant as a contrast to the work of Dean et al. (2017)? Their abstract also claims to utilize a convex programming formulation.\n\n(6) Below Definition 2.3: What is capital X?\n\n(7) In Definition 2.3: What does the parenthesis in \\phi_j(1) denote?\n\n(8) Below Theorem 2.4: Why is Phi now nk x T instead of nk x nT as in Definition 2.3?\n\n(9) Lemma 3.2: Is \\hat{D} defined in the paper? I assume that it involves \\hat{M}, but it would be good to formally define this notation.", "1. We thank the review for pointing this out. However, we did not find clear provable guarantees for N4SID (in terms of sample complexity, etc.) in our setting. If the reviewer were to give a clear reference or explanation, we would be happy to include it.\nOur claim on exponential time is based on the fact that system identification using any kind of local search (ex. gradient descent) converges to a local optimum. It’s not clear how to ensure that the search will reach the actual parameters, beyond a method that takes exponential time such as grid search.\n3. This condition is now rewritten to be clearer. The assumption $Q>\\lambda I$ is reasonable because it says that all directions of the output incur cost - a common case is just $Q=I$. 
Inequality (3) says that we can incur not much more loss than just the background noise. This is true as long as the system can be driven to 0 in a reasonable amount of time.\n4. See Main Point 3.\n5. Done.\n6. Done.\n", "Re: innovation compared to HSZ’17: The reviewer asked whether LDS control is a simple consequence of the ability to predict the next reward, as shown in HSZ17. This issue confused us too originally. But prediction in the sense of HSZ17 is a lot easier because the guarantee is in terms of mean-squared error for a single input-output sequence, over a large number of steps. Such MSE error permits predictions to be off for long stretches of time. To do control on the other hand one needs to look ahead at results of all control choices up to the horizon L and pick the best. Since the HSZ17 predictions for different lookahead paths may have arbitrary error in any time interval, the estimate for the max reward over all paths can be arbitrarily off. The bulk of the paper is showing that it is nevertheless possible with small sample complexity, and the proof is novel over HSZ17.\n\n1. The assumption that the LDS uses a *symmetric* matrix is indeed crucial for our result. However, note that solving the symmetric case is still significant progress on the problem of provably efficient control of LDS, which has been open for decades.\n\n2. The reviewer is correct that our proposed methods will rely on running the dynamical system several times. The need for multiple restarts is inherent to the problem of learning the system, at least under the assumptions in our setting. Notice that one cannot simply wait for the state to decay, since the transition matrix can have an eigenvalue of 1. A basic example shows this: suppose A is a tridiagonal matrix, B controls the first dimension of h, C observes the last dimension of h. Then, multiple restarts are needed to find the optimal control, since there is a delay before C can be determined. We will update the appendix with the full construction, to clarify this point. See also Main Point 1.", "A. For reasonable systems L is a constant. See Main Point 3.\nB. If there are directions that do not contribute to the cost, then under the optimal control, the output may be large in that direction. Our bounds for the error depend on the size of the outputs y (Lemma 3.4) because the error in estimating the quadratic form depends linearly on the size of y.\nC. This requires further work. We have ongoing work on extending the work of Hazan, Singh, and Zhang to the nonsymmetric case, which will then also allow control.\n2. Fixed.\n3. Fixed.\n4. Fixed.\n5. See Main Point 2.\n6. Should be x. Fixed.\n7. phi_j(k) denotes the kth entry of \\phi_j.\n8. Typo, fixed.\n9. \\hat{D} is exactly the analogue of D for the predicted dynamics.\n", "We thank the reviewers for their comments, and note the following main points.\n\n1. The difference between this paper and [HSZ17] is as follows. The results of [HSZ17] together with random exploration requires sample complexity that scales with poly(T). We show how to explore better with the filters than with random exploration, significantly reducing sample complexity to polylog(T), This is an important point, since poly(T) bounds can be obtained by straightforward regression and can be considered folklore. \n\n2. Our work is distinguished from Dean et al’s work as follows: \nThe Dean et al. work considers a case with no hidden state - this is known to be efficiently solvable by convex optimization. 
\nIn contrast, our setting is more general and has an evolving hidden state. The natural formulation is thus via *non-convex* optimization, for which no efficient algorithm was known prior to our work. \n\n3. Clarification on the unit ball constraints (Optimal control inputs are restricted to be inside the unit ball and overall norm is bounded by L):\nThe constraint comes from the fact that the error from the learned dynamics scales with the input.\nUnit ball constraint: This is a reasonable setting because often there is a maximum input that one can put into the system. It is without loss of generality because, in the unrestricted setting, for a reasonable system starting at a bounded hidden state, the optimal control input will be bounded by some norm, which can be rescaled to 1. (Just scale down by an upper bound on the norm.)\nOverall norm constraint: This is reasonable because when the system is controllable, the optimal control decays the state geometrically, and the total sum of inputs is bounded." ]
[ 4, 7, 5, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BygpQlbA-", "iclr_2018_BygpQlbA-", "iclr_2018_BygpQlbA-", "HynVA_vxG", "SydMCJ9gz", "ryr6tuv-G", "iclr_2018_BygpQlbA-" ]
iclr_2018_HJYQLb-RW
On the limitations of first order approximation in GAN dynamics
Generative Adversarial Networks (GANs) have been proposed as an approach to learning generative models. While GANs have demonstrated promising performance on multiple vision tasks, their learning dynamics are not yet well understood, either in theory or in practice. In particular, the work in this domain has so far focused only on understanding the properties of the stationary solutions that these dynamics might converge to, and on the behavior of the dynamics in those solutions' immediate neighborhood. To address this issue, in this work we take a first step towards a principled study of the GAN dynamics itself. To this end, we propose a model that, on one hand, exhibits several of the common problematic convergence behaviors (e.g., vanishing gradients, mode collapse, diverging or oscillatory behavior), but, on the other hand, is sufficiently simple to enable rigorous convergence analysis. This methodology enables us to exhibit an interesting phenomenon: a GAN with an optimal discriminator provably converges, while guiding the GAN training using only a first order approximation of the discriminator leads to unstable GAN dynamics and mode collapse. This suggests that such use of the first order approximation of the discriminator, which is a de facto standard in all existing GAN dynamics, might be one of the factors that make GAN training so challenging in practice. Additionally, our convergence result constitutes the first rigorous analysis of the dynamics of a concrete parametric GAN.
workshop-papers
All the reviewers agree that the paper studies an important problem and makes a good first step towards understanding learning in GANs. However, the reviewers are concerned that the setup is too simplistic and not relevant to practical settings. I recommend that the authors carefully go through the reviews and present the work at the workshop track. This will hopefully foster further discussion and lead to results in more practically relevant settings.
train
[ "HkxWKlkgM", "H1FyxrBgz", "SyLbm9eWM", "Hk-tVNhmM", "rJTPEN2Qz", "S1SS4437M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Although GAN recently has attracted so many attentions, the theory of GAN is very poor. This paper tried to make a new insight of GAN from theories and I think their approach is a good first step to build theories for GAN. \n\nHowever, I believe this paper is not enough to be accepted. The main reason is that the main theorem (Theorem 4.1) is too restrictive.\n\n1.\tThere is no theoretical result for failed conditions. \n2.\tTo obtain the theorem, they assume the optimal discriminator. However, most of failed scenarios come from the discriminator dynamics as in Figure 2. \n3.\tThe authors could make more interesting results using the current ingredients. For instance, I would like to check the conditions on eta and T to guarantee d_TV(G_mu*, G_hat{mu})<= delta_1 when |mu*_1 – mu*_2| >= delta_2 and |hat{mu}_1 – hat{mu}_2| >= delta_3. In Theorem 4.1, the authors use the same delta for delta_1, delta_2, delta_3. So, it is not clear which initial condition or target performance makes the eta and T.\n", "The authors proposes to study the impact of GANS in two different settings:\n1. at each iteration, train the discriminator to convergence and do a (or a few) gradient steps for updating the generator\n2. just do a few gradient steps for the discriminator and the generator\nThis is done in a very toy example: a one dimensional equally weighted mixture of two Gaussian distributions.\n\nClarity: the text is reasonably well written, but with some redundancy (e.g. see section 2.1) , and quite a few grammatical and mathematical typos here and there. (e.g. Lemma 4.2., $f$ should be $g$, p7 Rect(0) is actually the empty set, etc..)\n\nGaining insights into the mechanics of training GANs is indeed important. The authors main finding is that, in this very particular setting, it seems that training the discriminator to convergence leads to convergence. Indeed, in real settings, people have tried such strategies for WGAN for examples. For standard GANs, if one adds a little bit of noise to the labels for example, people have also reported good result for such a strategy (although, without label smoothing, this will indeed leads to problems).\n\nAlthough I have not checked all the mathematical fine details, the approach/proof looks sound (although it is not at all clear too me why the choice of gradient step-sizes does not play a more important roles the the stated results). My biggest complain is that the situation analyzed is so simple (although the convergence proof is far from trivial) that I am not at all convinced that this sheds much light on more realistic examples. Since this is the main meat of the paper (i.e. no methodological innovations), I feel that this is too little an innovation for deserving publication in ICLR2018.", "Summary:\n\nThis paper studies the dynamics of adversarial training of GANs for Gaussian mixture model. The generator is a mixture of two Gaussians in one dimension. Discriminator is union of two intervals. Synthetic data is generated from a mixture of two Gaussians in one dimension. 
On this data, adversarial training is considered under three different settings depending on the discriminator updates: 1) optimal discriminator updates, 2) standard single step gradient updates, 3) Unrolled gradient updates with 5 unrolling steps.\n\nThe paper notices through simulations that in a grid search over the initial parameters of generator optimal discriminator training always succeeds in recovering the true generator parameters, whereas the other two methods fail and exhibit mode collapse. The paper also provides theoretical results showing global convergence for the optimal discriminator updates method.\n\n\n\nComments:\n1) This is an interesting paper studying the dynamics of GANs on a simpler model (but rich enough to display mode collapse). The results establish the standard issues noticed in training GANs. However no intuition is given as to why the mode collapse happens or why the single discriminator updates fail (see for ex. https://arxiv.org/abs/1705.10461)?\n\n2) The proposed method of doing optimal discriminator updates cannot be extended when the discriminator is a neural network. Does doing more unrolling steps simulate this behavior? What happens in your experiments as you increase the number of unrolling steps?\n\n3) Can you write the exact dynamics used for Theorem 4.1 ? Is the noise added in each step? \n\n4) What is the size of the initial discriminator intervals used for experiments in figure 2?\n", "We thank the reviewer for appreciating our approach for building a theory for GANs. We now address the concerns:\n\n1) We do not provide theoretical results for the failed conditions because the experiments already demonstrate convincingly that the first-order methods for training discriminators have serious deficiencies in our model. Moreover, in the supplementary material, we discuss specific ways in which the first-order approach fails. For instance, Figure 2 shows that for most initial generator states, less than 20% of the discriminator configurations give rise to first-order dynamics that successfully learn the unknown distribution.\n\n2) Our convergence results are indeed only for the optimal discriminator. However, this is a necessity, because (as the reviewer points out) the first order dynamics often do not converge. Therefore, it is impossible to even hope for a general convergence result in this setting. In fact, we view demonstrating that the first-order dynamics can fail in such a systematic way an important contribution of our paper. In particular, this hints towards a fundamental separation between optimal and first order dynamics for training GANs, and the need to understand what we can and cannot achieve when we rely on first-order methods for training\n\n3) We agree with the reviewer that it would be interesting to understand the relationships between the parameters at a more fine-grained level. However, since it did not seem to change the qualitative message of the results, we chose not to optimize parameters in favor of simplicity of exposition. In the updated version, we will flesh out said relationships more explicitly.\n", "The main concern of the reviewer is that our model is simplistic, and that our insights might not transfer to more realistic settings. While we agree that our model is simple, we argue that it is a necessary first step. In fact, we believe that this simplicity is an advantage of our paper.\n\nCurrently, our understanding of GAN training is in its infancy. 
While a large number of GAN variants has been proposed, basic questions about the convergence of even simple GANs are still unanswered. Hence it is crucial to begin a principled and rigorous investigation of GAN dynamics to demystify GAN training. From this point of view, studying common methods in simple settings is an important first step: if we do not understand basic principles (such as the impact using first-order approximations when training discriminators) even in such simple settings, there is no hope for gaining such understanding in more complex setups. Following this viewpoint, the absence of methodological novelty in our paper is intentional so we can highlight fundamental aspects of standard GAN training in a rigorous fashion. \n\nIndeed, we have shown that the convergence analysis for optimal discriminator dynamics is already highly non-trivial, even in a simple model. Moreover, we have empirically demonstrated that the natural first order GAN dynamics fail to converge for this model. Any future theory for more sophisticated GANs will have to handle these phenomena as a special case (or exclude our setup via stringent assumptions). Hence we believe that rigorously investigating our simple model is an important contribution.\n\nWe thank the reviewer for finding the typos. We will correct them in the final version of the paper.\n", "We thank the reviewer for the positive feedback.\n\n1) Regarding intuition: In the supplementary material, we highlight a specific failure case that we have observed in our model (the so-called “ discriminator collapse”). At a high level, the discriminator is often incentivized to decrease its representational power in order to increase its current accuracy when using first-order updates. This causes the discriminator to fail to adapt later on (when the generator changes) and can lead to training failure.\n\n2) The reviewer asks if unrolling steps can simulate the optimal dynamics. As reported in the paper, we also experimented with unrolling steps. We found that these unrolling steps did not avoid the pathological behaviors of using single gradient step updates. Our results suggest, in fact, that no dynamics solely based on first order updates can avoid these pathologies.\n\n3) The dynamics in Theorem 4.1 are exactly as stated in equation (7) except we add (very small) Gaussian noise at each step. While we believe that this is unnecessary (at least when we randomly initialize the parameters), our current proof requires this modification. In our experiments, the dynamics as written in Theorem 4.1 (i.e., without noise) always converged.\n\n4) In Figure 2, the initial intervals have endpoints which are drawn iid from the interval [-4, 4] and then sorted. We remark that we did not find any qualitative change in our experiments when we used different choices of initial intervals.\n" ]
[ 4, 5, 7, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_HJYQLb-RW", "iclr_2018_HJYQLb-RW", "iclr_2018_HJYQLb-RW", "HkxWKlkgM", "H1FyxrBgz", "SyLbm9eWM" ]
iclr_2018_H1YynweCb
Kronecker Recurrent Units
Our work addresses two important issues with recurrent neural networks: (1) they are over-parameterized, and (2) the recurrent weight matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and exploding gradient problem. We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU). KRU achieves parameter efficiency in RNNs through a Kronecker-factored recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors. Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient. Our experimental results on seven standard datasets reveal that KRU can reduce the number of parameters in the recurrent weight matrix by three orders of magnitude compared to existing recurrent models, without trading off statistical performance. These results in particular show that while there are advantages in having a high dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced.
workshop-papers
I tend to agree with the most positive reviewer who characterizes the work with the following statements: "Kronecker factorization was introduced for Convolutional networks (citation is in the paper). Soft unitary constraints also have been introduced in earlier work (citations are also in the paper). Nevertheless, showing that these two ideas work also for RNNs in combination (and seeing, e.g. the nice relationship between Kronecker factors and unitary) is a relevant contribution." The most negative reviewer feels that the experimental work could have evaluated the different components explored here more clearly. For this reason the AC recommends an invitation to the workshop track.
train
[ "ByhgguzeM", "Hy28Xy9lM", "HkZmXGcxf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a method to parametrize unitary matrices in an RNN as a Kronecker product of smaller matrices. Given N inputs and output, this method allows one to specify a linear transformation with O(log(N)) parameters, and perform a forward and backward pass in O(Nlog(N)) time. \nIn addition a relaxation is performed allowing each constituent to deviate a bit from unitarity (“soft unitary constraint”).\nThe paper shows nice results on a number of small tasks. \n\nThe idea is original to the best of my knowledge and is presented clearly.\nI especially like the idea of “soft unitary constraint” which can be applied very efficiently in this factorized setup. I think this is the main contribution of this work.\n\nHowever the paper in its current form has a number of problems:\n\n- The authors state that a major shortcoming of previous (efficient) unitary RNN methods is the lack of ability to span the entire space of unitary matrices. This method presents a family that can span the entire space, but the efficient parts of this family (which give the promised speedup) only span a tiny fraction of it, as they require only O(log(N)) params to specify an O(N^2) unitary matrix. Indeed in the experimental section only those members are tested.\n\n- Another claim that is made is that complex numbers are key, and again the argument is the need to span the entire space of unitary matrices, but the same comment still hold - that is not the space this work is really dealing with, and no experimental evidence is provided that using complex numbers was really needed.\n\n- In the experimental section an emphasis is made as to how small the number of recurrent params are, but at the same time the input/output projections are very large, leaving the reader wondering if the workload simply shifted from the RNN to the projections. This needs to be addressed.\n\n- Another aspect of the previous points is that it’s not clear if stacking KRU layers will work well. This is important as stacking LSTMs is a common practice. Efficient KRU span a restricted subspace whose elements might not compose into structures that are expressive enough. One way to overcome this potential problem is to add projection matrices between layers that will do some mixing, but this will blow the number of parameters. This needs to be explored.\n\n- The authors claim that the soft unitary constraint was key for the success of the network, yet no details are provided as to how this constraint was applied, and no analysis was made for its significance. \n", "\nSummary of the paper\n-------------------------------\n\nThis paper proposes to factorize the hidden-to-hidden matrix of RNNs into a Kronecker product of small matrices, thus reducing the number of parameters, without reducing the size of the hidden vector. They also propose to use a soft unitary constraint on those small matrices (which is equivalent to a soft unitary constraint on the Kronecker product of those matrices), that is fast to compute. They evaluate their model on 6 small scale RNN experiments.\n\nClarity, Significance and Correctness\n--------------------------------------------------\n\nClarity: The main idea is clearly motivated and presented, but the experiment section failed to convince me (see details below).\n\nSignificance: The idea of using factorization for RNNs is not particularly novel. However, it is really nice to be able to decouple the hidden size and the number of recurrent parameters in a simple way. 
Also, the combination of Kronecker product and soft unitary constraint is really interesting.\n\nCorrectness: There are minor flaws. Some of the baselines seems to perform poorly, and some comparisons with the baselines seems unfair (see the questions below).\n\nQuestions\n--------------\n\n1. Section 3: You say that you can vary 'pf' and 'qf' to set the trade-off between computational budget and performances. Have you run some experiments where you vary those parameters?\n2. Section 4: Are you using the soft unitary constraint in your experiments? Do you have an hyper-parameter that sets the amplitude of the constraint? If yes, what is its value? Are you using it also on the vanilla RNN or the LSTM?\n3. Section 4.1: You say that you don't train the recurrent matrix in the KRU version. Do you also not train the recurrent matrix in the other models (RNN, LSTM,...)? If yes, how do you explain the differences? If no, I don't see how those curves compare.\n4. Section 4.3: Why does your LSTM in pMNIST performs so poorly? There are way better curves reported in the literature (eg in \"Unitary Evolution Recurrent Neural Netwkrs\" or \"Recurrent Batch Normalization\").\n5. General: How does your method compares with other factorization approaches, such as in \"Factorization Tricks for LSTM Networks\"?\n6. Section 4: How does the KRU compares to the other parametrizations, in term of wall-clock time?\n\nRemarks\n------------\n\nThe main claim of the paper is that RNN are over-parametrized and take a long time to train (which I both agree with), but you didn't convinced me that your parametrization solve any of those problems. I would suggest to:\n1. Compare more clearly setups where you fix the hidden size.\n2. Compare more clearly setups where you fix the number of parameters.\nWith systematic comparisons like that, it would be easier to understand where the gains in performances are coming from.\n3. Add an experiment where you vary 'pf' and 'qf' (and keep the hidden size fixed) to show how the optimization/generalization performances can be tweaked.\n4. Add computation time (wall-clock) for all the experiments, to see how it compares in practice (this could definitively weight in your favor, since you seems to have a nice CUDA implementation).\n5. Present results on larger-scale applications (Text8, Teaching Machines to Read and Comprehend, 3 layers LSTM speech recognition setup on TIMIT, DRAW, Machine Translation, ...), especially because your method is really easy to plug in any existing code available online.\n\nTypos / Form\n------------------\n\n1. sct 1, par 3: \"using Householder reflection vectors, it allows a fine-grained\" -> \"using Householder reflection vectors, which allows a fine-grained\"\n2. sct 1, par 3: \"This work called as Efficient\" -> \"This work, called Efficient\"\n5. sct 1, par 5: \"At the heart of KRU is the use of Kronecker\" -> \"At the heart of KRU, we use Kronecker\"\n6. sct 1, par 5: \"Thanks to the properties of Kronecker matrices\" -> \"Thanks to the properties of the Kronecker product\"\n7. sct 1, par 5: \"vanilla real space RNN\" -> \"vanilla RNN\"\n8. sct 2, par 1: \"Consider a standard recurrent\" -> \"Consider a standard vanilla recurrent\"\n9. sct 2, par 1: \"step t RNN\" -> \"step t, a vanilla RNN\"\n11. sct 2.1, par 1: \"U and V, this is efficient using modern BLAS\" -> \"U and V, which can be efficiently computed using modern BLAS\"\n12. 
sct 2.3, par 2: \"matrices have a determinant of 1 or −1, i.e., the set of all rotations and reflections respectively\" -> \"matrices, i.e., the set of all rotations and reflections, have a determinant of 1 or −1.\"\n13. sct 3, par 1: \"are called as Kronecker\" -> \"are called Kronecker\"\n14. sct 3, par 3: \"used it's spectral\" -> \"used their spectral\"\n15. sct 3, par 3: \"Kronecker matrices\" -> \"Kronecker products\"\n18. sct 4.4, par 3: \"parameters are increased\" -> \"parameters increases\"\n19. sct 5: There is some more typos in the conclusion (\"it's\" -> \"its\")\n20. Some plots are hard to read / interpret, mostly because of the round \"ticks\" you use on the curves. I suggest you remove them everywhere. Also, in the adding problem, it would be cleaner if you down-sampled a bit the curves (as they are super noisy). In pixel by pixel MNIST, some of the legends might have some typos (FC uRNN), and you should use \"N\" instead of \"n\" to be consistent with the notation of the paper.\n21. Appendix A to E are not necessary, since they are from the literature.\n22. sct 3.1, par 2: \"is approximately unitary.\" -> \"is approximately unitary (cf Appendix F).\"\n23. sct 4, par 1: \"and backward operations.\" -> \"and backward operations (cf Appendix G and H).\"\n\nPros\n------\n\n1. Nice Idea that allows to decouple the hidden size with the number of hidden-to-hidden parameters.\n2. Cheap soft unitary constraint\n3. Efficient CUDA implementation (not experimentally verified)\n\nCons\n-------\n\n1. Some experimental setups are unfair, and some other could be clearer\n2. Only small scale experiments (although this factorization has huge potential on larger scale experiments)\n3. No wall-clock time that show the speed of the proposed parametrization.", "Typical recurrent neural networks suffer from over-paramterization. Additionally, standard RNNs (non-gated versions) have an ill-conditioned recurrent weight matrix, leading to vanishing/exploding gradients during training. This paper suggests to factorize the recurrent weight matrix as a Kronecker product of matrices. Additionally, in order to avoid vanishing/exploding gradients in standard RNNs, a soft unitary constraint is used. The regularizer is specifically nice in this setting, as it suffices to have the Kronecker factors be unitary. In the empirical section, several RNNs are trained using this approach, using only ~ 100 recurrent parameters, and still achieve comparable results to state-of-the-art approaches. The paper argues that the recurrent state should be high-dimensional (in order to be able to encode the input and extract predictive information) but the recurrent dynamic should be realized by a low-capacity model.\n\nQuality: The paper is well written.\n\nClarity: Main ideas are clearly presented.\n\nOriginality/Significance: Kronecker factorization was introduced for Convolutional networks (citation is in the paper). Soft unitary constraints also have been introduced in earlier work (citations are also in the paper). Nevertheless, showing that these two ideas work also for RNNs in combination (and seeing, e.g. the nice relationship between Kronecker factors and unitary) is a relevant contribution. Additionally, this approach allows a significant reduction of training time it seems.\n\n" ]
[ 6, 5, 7 ]
[ 5, 4, 3 ]
[ "iclr_2018_H1YynweCb", "iclr_2018_H1YynweCb", "iclr_2018_H1YynweCb" ]
iclr_2018_rk4Fz2e0b
Graph Partition Neural Networks for Semi-Supervised Classification
We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.
workshop-papers
This paper was perceived as being well written, but the technical contribution was seen as incremental and somewhat heuristic in nature. Some important prior work was not discussed, and more extensive experimentation was recommended. However, the proposed approach of partitioning the graph into subgraphs and using a schedule that alternates between intra-partition and inter-partition operations has some merit. The AC recommends inviting this paper to the Workshop Track.
train
[ "HkaZrhuez", "r1Q8qCdgf", "Hkk48Xg-f", "ByBxb1Imf", "rytTg1I7z", "BJ2TNUQXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Graph Neural Networks are methods using NNs to deal with graph data (each data point has some features, and there is some known connectivity structure among nodes) for problems such as semi-supervised classification. They can also be viewed as an abstraction and generalizations of RNNs to arbitrary graphs. As such they assume each unit has inputs from other nodes, as well as from some stored representation of a state and upon receiving all its information and executing a computation on the values of these inputs and its internal state, it can update the state as well as propagate information to neighbouring nodes. \n\nThis paper deals with the question of computing over very large input graphs where learning becomes computationally problematic (eg hard to use GPUs, optimization gets difficult due to gradient issues, etc). The proposed solution is to partition the graph into sub graphs, and use a schedule alternating between performing intra and inter graph partitions operations. To achieve that two things need to be determined - how to partition the graph, and which schedules to choose. The authors experiment with existing and somewhat modified solutions for each of these problems and present results that show that for large graphs, these methods are indeed effective and achieve state-of-the-art/improved results over existing methos. \n\nThe main critique is that this feels more of an engineering solution to running such GNNs on large graphs than a research innovations. The proposed algorithms are straight forward and/or utilize existing algorithms, and introduce many hyper parameters and ad-hoc decisions (the scheduling to choose for instance). In addition, they do not satisfy any theoretical framework, or proposed in the context of a theoretical framework than has guarantees of mathematical properties that are desirable. As such it is likely of use for practitioners but not a major research contribution. ", "The authors investigate different message passing schedules for GNN learning. Their proposed approach is to partition the graph into disjoint subregions, pass many messages on the sub regions and pass fewer messages between regions (an approach that is already considered in related literature, e.g., the BP literature), with the goal of minimizing the number of messages that need to be passed to convey information between all pairs of nodes in the network. Experimentally, the proposed approach seems to perform comparably to existing methods (or slightly worse on average in some settings). The paper is well-written and easy to read. My primary concern is with novelty. Many similar ideas have been floating around in a variety of different message-passing communities. With no theoretical reason to prefer the proposed approach, it seems like it may be of limited interest to the community if speed is its only benefit (see detailed comments below).\n\nSpecific comments:\n\n1) \"When information from any one node has reached all other nodes in the graph for the first time, this problem is considered as solved.\"\n\nPerhaps it is my misunderstanding of the way in which GNNs work, but isn't the objective actually to reach a set of fixed point equations. If so, then simply propagating information from one side of the graph may not be sufficient.\n\n2) The experimental results in Section 4.4 are almost impossible to interpret. Perhaps it is better to plot number of edges updated versus accuracy? This at least would put them on equal footing. 
In addition, the experiments that use randomness should be repeated and plotted on average (just in case you happened to pick a bad schedule).\n\n3) More generally, why not consider random schedules (i.e., just pick a random edge, update, repeat) or random partitions? I'm not certain that a fixed set will perform best independent of the types of updates being considered, and random schedules, like the fully synchronous case, form an important baseline (especially if update speed is all you care about).\n\nTypos:\n\n-pg. 6, \"Thm. 2\" -> \"Table 2\"", "Since existing GNNs are not computationally efficient when dealing with large graphs, the key engineering contributions of the proposed method, GPNN, are a partitioning and the associated scheduling components. \n\nThe paper is well written and easy to follow. However, the related literature on the message-passing part is inadequate. \n\nI have two concerns. The primary one is that the method is incremental and rather heuristic. For example, in Section 2.2, Graph Partition part, the authors propose to \"first randomly sample the initial seed nodes biased towards nodes which are labeled and have a large out-degree\", but they do not give any reasons for the preference for that kind of node. \n\nThe second one concerns the experimental evaluation. GPNN performs comparably to other methods on small graphs such as citation networks, and only clearly outperforms them on the distantly-supervised entity extraction dataset. Thus, it is not clear if GPNN is more effective than others in general. As for the experiments on the DIEL dataset, the authors didn't compare to GCN due to the simple reason that GCN ran out of memory. However, vanilla GCN could be trivially partitioned and propagated just as shown in this paper. I think such an experiment is crucial, without which I cannot assess this method properly.\n", "We thank the reviewer for the valuable comments. Given the increasing popularity of graph neural networks, e.g., see recent references in A1 of Anonymous Reviewer 4, we believe it is still valuable to share our studies of graph partitioning and message-passing schedules with the ICLR community. As our results show, these approaches will be very important as the community starts considering larger graphs than those currently being investigated in the graph network literature.", "We thank the reviewer for the valuable comments. We did large-scale experiments with GCN as suggested. \n\nQ1: The method is incremental.\nA1: We agree that our contribution is an extension of earlier work. However, given the rapidly increasing interest in graph neural networks and their variants (cf. some recent references below), we believe studying methods to make them computationally effective is very valuable for the community. As GNNs operate on graphs that are often very different from common probabilistic graphical models (PGMs), the impact of different schedules in the two areas may be very different. For example, spanning tree based schedules are known to be very effective for PGMs. However, many graphs require a very large number of spanning trees to achieve satisfactory performance, which in turn seems to cause optimization problems (cf. the experimental results with minimal spanning trees in Sect. 4.5).\n\nLi, Y., Tarlow, D., Brockschmidt, M. and Zemel, R., 2016. Gated graph sequence neural networks. ICLR. \n\nQi, X., Liao, R., Jia, J., Fidler, S. and Urtasun, R., 2017. 3d graph neural networks for rgbd semantic segmentation. 
ICCV.\n\nLi, R., Tapaswi, M., Liao, R., Jia, J., Urtasun, R. and Fidler, S., 2017. Situation Recognition with Graph Neural Networks. ICCV.\n\nGarcia, V. and Bruna, J., 2017. Few-Shot Learning with Graph Neural Networks. arXiv preprint arXiv:1711.04043.\n\nBruna, J. and Li, X., 2017. Community Detection with Graph Neural Networks. arXiv preprint arXiv:1705.08415.\n\nNowak, A., Villar, S., Bandeira, A. and Bruna, J. A Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks. arXiv preprint arXiv:1706.07450.\n\nQ2: Add literature of message passing.\nA2: We will add relevant work in PGMs. We plan to discuss the two papers below in an updated submission, but would be happy to incorporate more papers.\n\nSontag, D. and Jaakkola, T., 2009, April. Tree block coordinate descent for MAP in graphical models. AISTATS.\n\nKomodakis, N., Paragios, N. and Tziritas, G., 2011. MRF energy minimization and beyond via dual decomposition. IEEE PAMI.\n\nQ3: Preference of nodes with high-degree.\nA3: We prefer the high-degree nodes as the seeds because in other graphs tasks (e.g., influence maximization in social networks), high-degree heuristics are shown to be a simple and yet strong baseline (cf. paper below). We will update the paper to make this reasoning clearer.\n\nKempe, D., Kleinberg, J. and Tardos, É., 2003, August. Maximizing the spread of influence through a social network. In ACM SIGKDD.\n\nQ4: GCN on DIEL.\nA4: We ran a set of experiments of GCN on the DIEL dataset. This required a significant amount of engineering effort in the implementation. First, we use sparse operations in many places and reduce the feature dimension by introducing a learnable linear layer such that the model to fit into 128GB CPU memory. We then implemented a partition based schedule for GCN. In particular, we first get the partition using the proposed multi-seed flood fill method. Then we construct two graph laplacian matrices for the disconnected clusters and the cut, denoting as L_cluster and L_cut. In original GCN, a layer is expressed as ReLU( L * X * W ) where L, X and W are graph laplacian, node states and weight parameters respectively. In the partition based GCN, it is ReLU( L_cut * L_cluster * X * W ). We tuned hyperparameters and the results are summarized as below.\n---------------------------------------------------------------------------\nMethod | GCN | GCN + Partition | GGNN | GPNN |\n---------------------------------------------------------------------------\nAvg. Recall | 48.14 | 48.47 | 51.15 | 52.11 | \n---------------------------------------------------------------------------\n\nWe observe that (1) both GCN and its partition variant are worse than that of GGNN and our GPNN; (2) partition based GCN has a marginal improvement over the vanilla one.\n\nOne reason why GCN performs poorly is that it requires more layers to reach similar performance of GNN since a k-layer GCN will propagate messages k-hops away whereas GNN has the advantage of propagating more even within one layer. Directly adding more layers is infeasible here as it can not fit into memory. We tried to reduce the feature dimension in order to add more layers which leads to a new issue that the features may not be discriminative enough. We hypothesize that if we could add more layers to GCN without reducing the feature dimension too much, GCN will perform similarly. However, it requires more memory and/or intensive optimization of the code which we left as future work. 
\n\nThe marginal gain of partition based GCN is understandable as the model just splits one linear transform L into L_cut and L_cluster without enhancing the model capacity significantly. Note that the sparsities of L * X and L_cut * L_cluster * X are different.\n\nFinally, our code based on Tensorflow will be released soon.", "We thank the reviewer for bringing up random schedules. We added the experiment as per suggestion.\n\nQ1: Reach a set of fixed point equations.\nA1: The original GNN paper (Scarselli et al. 2009) indeed requires that the state update function is a contraction map (and by Banach’s theorem thus has a fixed point). However, recent gated GNN adaptations (e.g. Li et al. 2015) drop this requirement and instead just fix a number of propagation steps as a hyperparameter; training and testing is then very similar to the RNN setting. We also follow the latter setting since (1) for a general nonlinear dynamic system, no guarantee can be made regarding whether fixed points can be reached; and (2) the learning algorithm, i.e., back-propagation through time (BPTT) would be significantly more time consuming as fixed-point convergence typically requires very many propagation steps, which is impractical for very large graphs. In the paper, we use a synthetic broadcasting problem to study the difference in efficiency of various message passing schedules in an idealized setting. As you observe, propagation across the whole graph may often not suffice to solve all tasks, but is a simple way to study if long-range dependencies between different vertices can be modeled at all.\n\nQ2: Experimental results in section 4.4.\nA2: We assume the reviewer has a typo here in an sense that you actually refer to section 4.5. \nThanks for your suggestion of plotting number of edges updated versus accuracy. We will replot in the final version. To clarify, in Fig. 2 (c), assuming graph G(V, E) is singly connected, then the “# edges per propagation step” of MST, Sequential, Synchronous and Partition are |V|-1, |E|, |E| and |E|. We also attach the average results of 10 runs with different random seeds on Cora as below. \n-----------------------------------------------------------------------------------\n| Prop Step | 1 | 3 | 5 | \n-----------------------------------------------------------------------------------\n| MST | 59.94 +- 0.89 | 71.83 +- 0.96 | 77.1 +- 0.72 |\n-----------------------------------------------------------------------------------\n| Sequential | 73.04 +- 1.93 | 77.55 +- 0.65 | 74.89 +- 1.26 |\n-----------------------------------------------------------------------------------\n| Synchronous | 67.36 +- 1.44 | 80.15 +- 0.80 | 80.06 +- 0.98 |\n-----------------------------------------------------------------------------------\n| Partition | 68.1 +- 1.98 | 80.27 +- 0.78 | 80.12 +- 0.93 |\n-----------------------------------------------------------------------------------\nWe will plot the mean curve with error bar and improve the writing in the final version.\n\nQ3: Random and Synchronous Schedules\nA3: To clarify, we did compare with a fully synchronous schedule which is the one adopted by the GGNN model. Also, speed is not the only benefit, as with partition based schedules, memory is saved which enables us to apply the model to large-scale graph problems. \n\nDeveloping schedules that depend on the type of updates is a very interesting and promising direction. We will explore it in the future. 
On the other hand, our schedule is not fixed, in the sense that the partition depends on the structure of the input graph. \n\nWe did an experiment on random schedules. In particular, for k-step propagation, we randomly sample a 1/k proportion of edges from the whole edge set without replacement and use them for propagation. We summarize the results (10 runs) on the Cora dataset in the table below:\n--------------------------------------------------------\n| K       | 2     | 3     | 5     | 10    | \n--------------------------------------------------------\n| Avg Acc | 76.03 | 74.71 | 72.09 | 69.99 | \n--------------------------------------------------------\n| Std Acc | 1.55  | 1.31  | 1.81  | 2.26  | \n--------------------------------------------------------\nFrom the results, we can see that the best average accuracy (K = 2) is 76.03, which is still lower than both the synchronous and our partition-based schedule. Note that this result roughly matches the one with spanning trees. The reason might be that random schedules typically need more propagation steps to spread information throughout the graph. However, more propagation steps of GNNs may lead to issues in learning with BPTT. Additional results on other datasets will be included in the final version. \n" ]
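The GCN-on-DIEL answer above writes the vanilla GCN layer as ReLU(L * X * W) and the partition-based variant as ReLU(L_cut * L_cluster * X * W). The following is a minimal NumPy sketch of those two layer forms; the toy graph, the two-way partition, the dense matrices, and the row normalization used to build the propagation operators are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(L, X, W):
    # Vanilla GCN layer as quoted in the response above: ReLU(L @ X @ W).
    return relu(L @ X @ W)

def partitioned_gcn_layer(L_cluster, L_cut, X, W):
    # Partition-based variant: propagate inside clusters first (L_cluster),
    # then along the cut edges between clusters (L_cut).
    return relu(L_cut @ (L_cluster @ X) @ W)

def row_normalize(A):
    # Simple propagation operator: add self-loops, then row-normalize.
    A = A + np.eye(len(A))
    return A / A.sum(axis=1, keepdims=True)

# Toy graph with 6 nodes split into clusters {0,1,2} and {3,4,5} (assumed).
n_nodes, d_in, d_out = 6, 4, 3
intra_edges = [(0, 1), (1, 2), (3, 4), (4, 5)]   # edges inside clusters
cut_edges = [(2, 3)]                              # edges crossing the partition
A_intra = np.zeros((n_nodes, n_nodes))
A_cut = np.zeros((n_nodes, n_nodes))
for i, j in intra_edges:
    A_intra[i, j] = A_intra[j, i] = 1.0
for i, j in cut_edges:
    A_cut[i, j] = A_cut[j, i] = 1.0

L = row_normalize(A_intra + A_cut)
L_cluster = row_normalize(A_intra)
L_cut = row_normalize(A_cut)

rng = np.random.default_rng(0)
X = rng.normal(size=(n_nodes, d_in))   # node states
W = rng.normal(size=(d_in, d_out))     # layer weights

print(gcn_layer(L, X, W).shape)                             # (6, 3)
print(partitioned_gcn_layer(L_cluster, L_cut, X, W).shape)  # (6, 3)
```

In this reading, the partitioned variant simply factors one propagation step into an intra-cluster step followed by a step along the cut edges, which is consistent with the remark above that it does not significantly enhance model capacity.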
[ 6, 5, 6, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1 ]
[ "iclr_2018_rk4Fz2e0b", "iclr_2018_rk4Fz2e0b", "iclr_2018_rk4Fz2e0b", "HkaZrhuez", "Hkk48Xg-f", "r1Q8qCdgf" ]
iclr_2018_HymuJz-A-
Not-So-CLEVR: Visual Relations Strain Feedforward Neural Networks
The robust and efficient recognition of visual relations in images is a hallmark of biological vision. Here, we argue that, despite recent progress in visual recognition, modern machine vision algorithms are severely limited in their ability to learn visual relations. Through controlled experiments, we demonstrate that visual-relation problems strain convolutional neural networks (CNNs). The networks eventually break altogether when rote memorization becomes impossible such as when the intra-class variability exceeds their capacity. We further show that another type of feedforward network, called a relational network (RN), which was shown to successfully solve seemingly difficult visual question answering (VQA) problems on the CLEVR datasets, suffers similar limitations. Motivated by the comparable success of biological vision, we argue that feedback mechanisms including working memory and attention are the key computational components underlying abstract visual reasoning.
workshop-papers
This paper studies an important problem (visual relationship detection and generalization capabilities existing networks for this task). Unfortunately, all reviewers raise concerns (e.g. limited relations studied) and are largely on the fence about this paper. While this paper does not propose solutions, it does present interesting "negative results" that should get some visibility in the workshop track.
train
[ "B1pcOYBlG", "rkFUZ2uxf", "r1AAFH5xG", "rJptTYrMG", "HkDmAFBMG", "Hyay0YrMz", "HyRn6trMM", "H1VUptHzz", "ryfIDWSff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Quality\n\nThis paper demonstrates that convolutional and relational neural networks fail to solve visual relation problems by training networks on artificially generated visual relation data. This points at important limitations of current neural network architectures where architectures depend mainly on rote memorization.\n\nClarity\n\nThe rationale in the paper is straightforward. I do think that breakdown of networks by testing on increasing image variability is expected given that there is no reason that networks should generalize well to parts of input space that were never encountered before.\n\nOriginality\n\nWhile others have pointed out limitations before, this paper considers relational networks for the first time.\n\nSignificance \n\nThis work demonstrates failures of relational networks on relational tasks, which is an important message. At the same time, no new architectures are presented to address these limitations.\n\nPros\n\nImportant message about network limitations.\n\nCons\n\nStraightforward testing of network performance on specific visual relation tasks. No new theory development. Conclusions drawn by testing on out of sample data may not be completely valid.", "The authors introduce a set of very simple tasks that are meant to illustrate the challenges of learning visual relations. They then evaluate several existing network architectures on these tasks, and show that results are not as impressive as others might have assumed they would be. They show that while recent approaches (e.g. relational networks) can generalize reasonably well on some tasks, these results do not generalize as well to held-out-object scenarios as might have been assumed. \n\nClarity: The paper is fairly clearly written. I think I mostly followed it. \n\nQuality: I'm intrigued by but a little uncomfortable with the generalization metrics that the authors use. The authors estimate the performance of algorithms by how well they generalize to new image scenarios when trained on other image conditions. The authors state that \". . . the effectiveness of an architecture to learn visual-relation problems should be measured in terms of generalization over multiple variants of the same problem, not over multiple splits of the same dataset.\" Taken literally, this would rule out a lot of modern machine learning, even obviously very good work. On the other hand, it's clear that at some point, generalization needs to occur in testing ability to understand relationships. I'm a little worried that it's \"in the eye of the beholder\" whether a given generalization should be expected to work or not. \n\nThere are essentially three scenarios of generalization discussed in the paper:\n (a) various generalizations of image parameters in the PSVRT dataset\n (b) various hold-outs of the image parameters in the sort-of-CLEVR dataset\n (c) from sort-of-CLEVR \"objects\" to PSVRT bit patterns\n\nThe result that existing architectures didn't do very well at these generalizations (especially b and c) *may* be important -- or it may not. Perhaps if CNN+RN were trained on a quite rich real-world training set with a variety of real-world three-D objects beyond those shown in sort-of-CLEVR, it would generalize to most other situations that might be encountered. After all, when we humans generalize to understanding relationships, exactly what variability is present in our \"training sets\" as compared to our \"testing\" situations? 
How do the authors know that humans are effectively generalizing rather than just \"interpolating\" within their (very rich) training set? It's not totally clear to me that if totally naive humans (who had never seen spatial relationships before) were evaluated on exactly the training/testing scenarios described above, that they would generalize particularly well either. I don't think it can just be assumed a priori that humans would be super good this form of generalization. \n\nSo how should authors handle this criticism? What would be useful would either be some form of positive control. Either human training data showing very effective generalization (if one could somehow make \"novel\" relationships unfamiliar to humans), or a different network architecture that was obviously superior in generalization to CNN+RN. If such were present, I'd rate this paper significantly higher. \n\nAlso, I can't tell if I really fully believe the results of this paper. I don't doubt that the authors saw the results they report. However, I think there's some chance that if the same tasks were in the hands of people who *wanted* CNNs or CNN+RN to work well, the results might have been different. I can't point to exactly what would have to be different to make things \"work\", because it's really hard to do that ahead of actually trying to do the work. However, this suspicion on my part is actually a reason I think it might be *good* for this paper to be published at ICLR. This will give the people working on (e.g.) CNN+RN somewhat more incentive to try out the current paper's benchmarks and either improve their architecture or show that the the existing one would have totally worked if only tried correctly. I myself am very curious about what would happen and would love to see this exchange catalyzed. \n\nOriginality and Significance: The area of relation extraction seems to me to be very important and probably a bit less intensively worked on that it should be. However, as the authors here note, there's been some recent work (e.g. Santoro 2017) in the area. I think that the introduction of baselines benchmark challenge datasets such as the ones the authors describe here is very useful, and is a somewhat novel contribution. ", "Strengths:\n\n-\tThere is an interesting analysis on how CNN’s perform better Spatial-Relation problems in contrast to Same-Different problems, and how Spatial-Relation problems are less sensitive to hyper parameters.\n\n-\tThe authors bring a good point on the limitations of the SVRT dataset – mainly being the difficulty to compare visual relations due to the difference of image structures on the different relational tasks and the use of simple closed curves to characterize the relations, which make it difficult to quantify the effect of image variability on the task. And propose a challenge that addresses these issues and allows controlling different aspects of image variability.\n\n-\tThe paper shows how state of the art relational networks, performing well on multiple relational tasks, fail to generalize to same-ness relationships.\n\nWeaknesses:\n\n-\tWhile the proposed PSVRT dataset addresses the 2 noted problems in SVRT, using only 2 relations in the study is very limited.\n\n-\tThe paper describes two sets of relationships, but it soon suggests that current approaches actually struggle in Same-Different relationships. However, they only explore this relationship under identical objects. 
It would have been interesting to study more kinds of such relationships, such as equality up to translation or rotation, to understand the limitation of such networks. Would that allow improving generalization to varying item or image sizes?\n\nComments:\n\n-\tIn page 2, authors suggest that from that Gülçehre, Bengio (2013) that for visual relations “failure of feed-forward networks […] reflects a poor choice of hyper parameters. This seems to contradict the later discussion, where they suggest that probably current architectures cannot handle such visual relationships. \n\n-\tThe point brought about CNN’s failing to generalize on same-ness relationships on sort-of-CLEVR is interesting, but it would be good to know why PSVRT provides better generalization. What would happen if shapes different than random squared patterns were used at test time?\n\n-\tAuthors reason about biological inspired approaches, using Attention and Memory, based on existing literature. While they provide some good references to support this statement it would have been interesting to show whether they actually improve TTA under image parameter variations\n", "First, we would like to thank the reviewers for their thoughtful comments, which we believe strengthen the paper greatly. \n\nSecond, we would like to make a general clarification regarding all three experiments we ran. Except in our third experiment, we do not test for network generalization to new regions of the input space. We believe that the original description of our experiments was unclear, and our use of the word “generalization” to describe the behavior of CNNs on PSVRT was erroneous. The manuscript has been revised accordingly. \n\nIn the PSVRT experiment, for example, the TTA obtained in each condition denotes the number of training examples required for a CNN to obtain 95% validation accuracy on images sampled from the same image distribution as the training images. This procedure was replicated over multiple image parameter configurations, resulting in TTAs as shown in Figure 4. There was no holdout set with a different image distribution than the training set. The purpose of the experiment is to measure how TTA was affected by image variability. If a CNN could learn the “rule”, then TTA should not have increased with image variability, since all images obey the rule, regardless of the image parameters. But in our experiment, TTA increased.\n\nOnly in the third experiment do we test a network (the CNN+RN) on images with combinations of attributes not in the training set. However, we now emphasize in the paper that exactly this kind of generalization is indeed found in biological organisms, essentially from birth (see revised manuscript, Discussion, paragraph 3). \n\nThe following is the exhaustive list of revisions we made to the manuscript and where readers can find them:\n1. In Results in Experiment 1 (SVRT) we added a reference to Stabinger et al. (2016).\n2. We changed the Method and architectural details in Experiment 2 (PSVRT) to make it clearer that a CNN is not tested for generalization but instead is trained from scratch for each image parameter to obtain the TTA curves.\n3. In Experiment 3 (RN on Sort-of-CLEVR) and Discussion we emphasized with additional citations the fact that animals are capable of the kinds of generalization we tested using Sort-of-CLEVR.\n4. In Results in Experiment 2 (PSVRT) we added the result from another task, same-different up to rotation, as a reviewer requested.\n5. 
Minor edits for clarity and to correct typos.", "\"The rationale in the paper is straightforward. I do think that breakdown of networks by testing on increasing image variability is expected given that there is no reason that networks should generalize well to parts of input space that were never encountered before.\"\n\n>>Please see the comment entitled \"Thank You and Important Clarification\". To repeat, in Experiment 2 we *do not* test the CNNs on new image scenarios after training on other image conditions. For each setting of image parameters, a network is trained and tested from scratch on the same data set to obtain a TTA. Each dot in Figure 4 represents a repetition of this procedure for a new data distribution defined by different item size, image size and item number parameters. The purpose of the experiment is to measure how TTA is affected by image variability. Roughly, if the CNN learned the “rule”, then TTA should not have increased with image variability, since all images obey the rule, regardless of the image parameters. If it can only ‘seem to solve it’ by fitting to a particular image distribution, then we would expect TTA to increase with increasing image variability. The confusion may have arisen because of our erroneous use of the term ‘generalization’ in Experiment 2 which, in machine learning literature, refers to the ability to explain new data given a fixed training dataset. We have revised the manuscript to reduce confusion.\n\n\"Straightforward testing of network performance on specific visual relation tasks. No new theory development. Conclusions drawn by testing on out of sample data may not be completely valid.\"\n\n>>Regarding the reviewer’s criticism about out-of-sample data, we would like to clarify once again that all testing data was in-sample except in Experiment 3. When we actually do use out of sample data, in the CNN+RN experiment, we do so in a way that animals are known to solve. For example, we cite a study by Martinho and Kacelnik (2016) in Science showing that ducklings, via imprinting, can learn as well as generalize same-different visual relations immediately after birth. During a training phase, ducklings were exposed to a single pair of simple 3D objects that were either the same or different. Later, they demonstrated a preference for novel objects obeying the relationship observed in the training phase. The conclusion of the authors is that these animals can either rapidly learn the abstract concepts of same and different from a single example or they simply possess these concepts innately. For a recent review of similar literature (including additional evidence for abstract relational reasoning ability in pigeons and nutcrackers), see Wright and Kelly (2017) in Learning and Behavior. Our main theoretical contribution was the first systematic analysis of CNNs on visual-relation problems, varying network hyperparameters and image parameters to show that some visual relations are qualitatively harder than others (Experiment 1,2). We showed that this difference is due neither to a particular architectural choice (Experiment 1) nor to factors unrelated to the visual relations themselves such as image distribution (Experiment 2). In Experiments 2 and 3, we demonstrate that CNNs are limited in their ability to learn and represent abstract rules underlying same-different relations, and instead only solve it by rote memorization. 
We contrast these results with biological vision, where mechanisms other than template matching play a critical role in learning and detecting visual relations.", "\"I'm intrigued by but a little uncomfortable with the generalization metrics that the authors use... \" \n\n>> Please see the comment entitled \"Thank You and Important Clarification\". In short, we *do not* test generalization in Experiments 1 or 2. \n\n\"... I don't think it can just be assumed a priori that humans would be super good this form of generalization.\"\n \n>> Although we are not aware of an experiment done on human infants learning same-different, we do cite a study by Martinho and Kacelnik (2016) showing that ducklings can learn same-different visual relations immediately after birth. During a training phase, newly-hatched ducklings were exposed to a single pair of 3D objects that were either the same or different. Later, they preferred novel objects obeying the relationship observed in the training phase. The conclusion is that these animals can either rapidly learn the abstract concepts of same and different from a single example or they simply possess these concepts innately. For a recent review of similar literature, see Wright and Kelly (2017). Our experiment 3 is essentially analogous to this. Taken in conjunction with the results from Experiment 2, we conclude that state-of-the-art feedforward architectures only learn same-different relation via memorization of examples. We have expanded the discussion to include these points.\n\n\"What would be useful would either be some form of positive control. Either human training data showing very effective generalization...or a different network architecture that was obviously superior in generalization to CNN+RN.\"\n\n>>While there is a substantial literature specifically on same-different detection in humans going back to Donderi & Zelnicker (1969), the only experiment known to us in which humans are tested on many relation problems is Fleuret et al., 2011. The authors found that humans can learn rather complicated visual rules and generalize them to new instances from just a few examples. Their subjects could learn the rule underlying SVRT problem 20 (the hardest problem for CNNs in our Experiment 1) from about 6 examples. Problem 20 was a complicated problem, involving two shapes such that “one shape can be obtained from the other by reflection around the perpendicular bisector of the line joining their centers.” (See revised manuscript, Discussion, paragraph 2). While there is currently no model with superior generalization compared to a CNN+RN, Ellis et al. (2015) found program synthesis could vastly outperform two different CNN architectures on SVRT. Still, the best visual reasoning machine we know of is the human brain, which is why we suggest attention and memory as the solution to our visual-relation challenges.\n\n\"However, I think there's some chance that if the same tasks were in the hands of people who *wanted* CNNs or CNN+RN to work well, the results might have been different.\"\n\n>> We agree with the reviewer that special care must be taken in criticizing any model. But, note that we do not argue that it is absolutely impossible for *some* CNN to solve a given visual-relation problem. This absolute claim must be false, since feedforward networks are universal function approximators. Rather, the final argument we make in this paper is a relative one: some visual relations are harder than others for CNNs. 
To support this, we relied on properties diagnostic of rote memorization (e.g. sensitivity to network size in Experiment 1 and sensitivity to image variability in Experiment 2) that are present in same-different results and not in spatial relations results. We varied the CNN architecture (Experiment 1) and image parameters (Experiment 2) to ensure that the qualitative differences between the results obtained from spatial relations and same-different relations are neither due to a particular image distribution nor to a particular CNN hyperparameter choice. We would also like to reassure the reviewers that the experiments were designed with little room for manipulation. The hyperparameters we chose for the CNNs were well within the ‘standard’ range in the CNN literature. Additionally, in Experiment 2 we first chose baseline image parameters and CNN hyperparameters that ensure a very low TTA for both SR and SD problems. Then we simply repeated the training while varying each parameter. We are confident that the trend we observed here will hold outside the range of hyperparameters we have considered. Further, we believe that the limitations of feedforward networks on visual-relation problems have already been recognized by the machine learning community, who have begun to use models based on program induction and memory (e.g., “Inferring and Executing Programs for Visual Reasoning,” Johnson et al., 2017, ICCV). We hope that the challenges we pose in this paper can be used as a benchmark for these new models. ", "\"While the proposed PSVRT dataset addresses the 2 noted problems in SVRT, using only 2 relations in the study is very limited.\"\n\n>> We agree. Although it would certainly be interesting to extend this investigation to a larger set of relations, we limited our focus to these two relations because 1) we wanted to ensure that the relations are defined on the same image distributions, and it is not easy to satisfy this requirement if we include other visual relations, 2) we believe that relative position and sameness are the key factors underlying the dichotomy of CNN accuracies in Experiment 1, and 3) In human and animal psychology, the detection of horizontal/vertical relations and of sameness/difference is a well-established protocol, so using these two PSVRT problems will make it easy to eventually collect human data. \n\n\"The paper describes two sets of relationships, but it soon suggests that current approaches actually struggle in Same-Different relationships. However, they only explore this relationship under identical objects. It would have been interesting to study more kinds of such relationships, such as equality up to translation or rotation, to understand the limitation of such networks. Would that allow improving generalization to varying item or image sizes?\"\n\n>> First of all, please see our above note about generalization. To repeat, our PSVRT task does not measure generalization to left-out regions of the input space . Figure 4 reports the number of samples required to achieve 95% accuracy on the training set. There was no holdout set. Second, we were very intrigued by your suggestion to include rotated items in our same-different experiment. We hypothesized that, as including rotations simply increases the number of ways that items can be “the same,” sample complexity would actually be worse than the PSVRT same-different without rotations. The paper is now updated to show the results of this test. 
Indeed, the baseline CNN architecture never learned for any parameter configuration on this new task. \n\n\"In page 2, authors suggest that from that Gülçehre, Bengio (2013) that for visual relations “failure of feed-forward networks […] reflects a poor choice of hyper parameters. This seems to contradict the later discussion, where they suggest that probably current architectures cannot handle such visual relationships. \"\n\n>> The citation was made in order to acknowledge the possibility that such previous demonstrations as Gülçehre and Bengio (2013) could have simply reflected poor hyperparameter choices. From this, we motivate our experimental paradigm (Experiment 1) where we used 9 different architectures with varying filter sizes and depth. We found that hyperparameters made little difference on the ‘difficult’ problems, with less than 10% difference in final accuracy between the worst-case and the best-case (Page 3). We have added a sentence at the end of Experiment 1 to make that point clearer.\n\n\"The point brought about CNN’s failing to generalize on same-ness relationships on sort-of-CLEVR is interesting, but it would be good to know why PSVRT provides better generalization. What would happen if shapes different than random squared patterns were used at test time?\"\n\n>> Again, please see our opening remarks. Only training accuracy was measured for PSVRT and there was no holdout test set. Testing accuracy on a left-out condition was only measured in Experiment 3, with CNN+RN on sort-of-CLEVR dataset. However, the referee’s question inspired us to measure generalization in earnest on PSVRT by training a network to high accuracy on one problem with one parameter configuration and then testing it on all other parameter settings. Just as the referee suggests, test accuracy monotonically decreases as the image parameters begin to deviate from their training settings. This decrease was always sharper in same-different problems than in spatial problems.\n\n\"Authors reason about biological inspired approaches, using Attention and Memory, based on existing literature. While they provide some good references to support this statement it would have been interesting to show whether they actually improve TTA under image parameter variations\"\n\n>> Our goal with this paper was to systematically probe the limits of feedforward networks on visual relations problems. We believe our analysis is thorough and fits nicely within the space constraints of a conference paper.", "Thank you for bringing this reference to our attention. An updated submission will include this citation. ", "The general idea and specially the first experiment (using Fleuret's stimuli) is quite similar to a work published last year at ICANN: https://arxiv.org/pdf/1607.08366.pdf\nI think that paper should at least be cited." ]
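The PSVRT discussion in the record above describes images containing small random binary patches ("random squared patterns") whose item size, image size, and item number are varied, with a same/different label attached to each image. As a concrete illustration of such a stimulus, here is a minimal NumPy sketch that places two such patches in a blank image; the parameter names, the two-item restriction, and the rejection-sampling placement are assumptions, not the authors' generator.

```python
import numpy as np

def sample_same_different_image(image_size=20, item_size=4, same=True, rng=None):
    """Generate one PSVRT-style image with two binary patches and its label.

    If `same` is True the second patch is an exact copy of the first; otherwise
    it is re-sampled until it differs. Placement avoids overlap via rejection.
    """
    rng = rng if rng is not None else np.random.default_rng()
    img = np.zeros((image_size, image_size), dtype=np.float32)
    item_a = rng.integers(0, 2, size=(item_size, item_size))
    if same:
        item_b = item_a.copy()
    else:
        item_b = rng.integers(0, 2, size=(item_size, item_size))
        while np.array_equal(item_a, item_b):   # make sure the patches really differ
            item_b = rng.integers(0, 2, size=(item_size, item_size))
    placed = []
    for item in (item_a, item_b):
        while True:
            r = int(rng.integers(0, image_size - item_size + 1))
            c = int(rng.integers(0, image_size - item_size + 1))
            if all(abs(r - pr) >= item_size or abs(c - pc) >= item_size
                   for pr, pc in placed):
                break
        img[r:r + item_size, c:c + item_size] = item
        placed.append((r, c))
    return img, int(same)

# Toy usage: one "same" and one "different" example.
rng = np.random.default_rng(0)
x_pos, y_pos = sample_same_different_image(same=True, rng=rng)
x_neg, y_neg = sample_same_different_image(same=False, rng=rng)
print(x_pos.shape, y_pos, y_neg)   # (20, 20) 1 0
```

Varying image_size, item_size, and the number of placed items corresponds to what the responses above call increasing image variability.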
[ 6, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HymuJz-A-", "iclr_2018_HymuJz-A-", "iclr_2018_HymuJz-A-", "iclr_2018_HymuJz-A-", "B1pcOYBlG", "rkFUZ2uxf", "r1AAFH5xG", "ryfIDWSff", "iclr_2018_HymuJz-A-" ]
iclr_2018_SyunbfbAb
FigureQA: An Annotated Figure Dataset for Visual Reasoning
We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images. The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts. We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection. Resolving such questions often requires reference to multiple plot elements and synthesis of information distributed spatially throughout a figure. To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives. In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements. We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as a strong baseline. Preliminary results indicate that the task poses a significant machine learning challenge. We envision FigureQA as a first step towards developing models that can intuitively recognize patterns from visual representations of data.
workshop-papers
This paper was reviewed by 3 expert reviewers. While they all see value in the new task and dataset, they raise concerns (templated language, unclear what exactly the new challenges posed by this task and dataset are, etc.) that this AC agrees with. To be clear, the lack of a fundamentally new model is not a problem (or a requirement for every paper introducing a new task/dataset), but making a clear, compelling case for why people should work on the task is a reasonable bar. We encourage the authors to incorporate reviewer feedback and invite them to the workshop track.
train
[ "BJskuZ9ez", "r1y8KxiEf", "ByXaZpqEG", "r1PPfGuNM", "HJ2y6-5gz", "Hk2Kgd3gM", "SytTB_TXG", "BkI24OTQz", "SyJIEO6XG", "BkjM4uamG", "BkNKQ_amG", "B1Lr7dp7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper introduces a new dataset called FigureQA. The images are synthetic scientific style figures and questions are generated from 15 templates which concern various relationships between plot elements. The authors experiment with the proposed dataset with 3 baselines. Text-only baseline, which only considers the questions; CNN+LSTM baseline, which does a late fusion between image and question representation; Relation Network, which follows Santoro et al. (2017). Experiment results show that the proposed task poses a difficult challenge where CNN+LSTM baseline only 2% better than guessing (50%) and relation network which takes spatial reasoning into consideration, performs better on this task (61.54%). \n\n[Strenghts]\n\nThe proposed Figure QA dataset is a first step towards developing models that can recognize the visual representation of data. This is definitely a novel area that requires the machine not only understand the corpus, but also the scientific figure associated with the figure. Traditional VQA methods not working well on the proposed dataset, only 2% better than guessing (50%). The proposed dataset requires the machine understand the spatial arrangement of the object and reason the relations between them. \n\n[Weaknesses]\n\n1: There are no novel algorithms associated with this dataset. CVPR seems a better place to publish this paper, but I'm open to ICLR accept dataset paper.\n\n2: The generated templated questions of proposed FigureQA dataset is very constraint. All the question is the binary question, and there is no variation of the template with respect to the same question type. Most of the question type can be represented as a triplet. In this sense, the proposed dataset requires less language understanding compare to previous synthesis dataset such as CLEVER. \n\n3: Since the generated question is very templated and less variational, a traditional hand-crafted approach may perform much better compared to the end-to-end approach. The paper didn't have any baseline for the hand-crafted approach, thus we don't know how it performs on the proposed dataset, and whether more advanced neural approaches are needed for this dataset. \n\n[Summary]\n\nThis paper introduces a new dataset called FigureQA, which answering the synthetic scientific style figures given the 15 type of questions. The authors experiment the proposed dataset with 3 neural baselines: Question only, LSTM + CNN and relational network. The proposed dataset is the first step towards developing models that can recognize the visual representation of data. However, my major concern about this dataset is the synthesized question is very templated and less variational. From the example, it seems it does not require natural language understanding of the dataset. Triplet representation seems enough to represent the most type of the question. The authors also didn't conduct hand-crafted baselines, which can provide a better intuition about the difficulty lies in the proposed dataset. Taking all these into account, I suggest accepting this paper if the authors could provide more justification on the question side of the proposed dataset. ", "After reading the rebuttal, I'm satisfied with the response in terms of val and test split performance. However, the question part of the proposed dataset is automatically generated without any variations. This is my major concerns about this paper. Considering this paper is the first step towards research in the visual representation of data. 
I'll improve my rating from 4 to 6. \n\n", "After reading the rebuttal, I am still concerned about the contributions of the paper. The proposed dataset is automatically generated with limited complexity and no necessity of language understanding in its current form, as pointed by AR2. It is not reasonable to review the work based on what its state might be in the future, when language understanding might play a role. Therefore, it is important to show real-life applications of the dataset in the paper. I also fail to clearly see any new challenges being posed by this dataset which are currently not being studied; models such as RN and FiLM were already introduced before this dataset. I recognize that this paper is a first step towards encouraging research in pattern recognition in visual representation of data such as figures, hence I will keep my original rating.\n", "After reading the authors' responses to the concerns raised in the review by me and my fellow reviewers, I would like to recommend acceptance of this paper because it proposes a novel task which seems useful for building intelligent AI agents and the dataset proposed in the paper is a good starting point.", "Summary:\nThe paper introduces a new dataset (FigureQA) of question answering on figures (line plots, bar graphs, pie charts). The questions involve reasoning about figure elements (e.g., “Is X minimum?”, “Does X have maximum area under curve?”, where X refers to an element in a figure). The images are synthetic and the questions are templated. The dataset consists of over 100,000 images and the questions are from 15 templates. The authors also provide the numerical data associated with each figure and bounding box annotations for all plot elements. The paper trains and evaluates three baselines – question only LSTM, CNN + LSTM and Relation Networks on the proposed FigureQA dataset. The experimental results show that Relation Networks outperform the other two baselines, but still ~30% behind human performance.\n\nStrengths:\n1.\tThe proposed task is a useful task for building intelligent AI agents.\n2.\tThe writing of the paper is clear with enough details about the data generation process.\n3.\tThe idea of using different compositions of color and figure during train and test is interesting.\n4.\tThe baselines experimented with in the paper make sense.\n\nWeaknesses:\n1.\tThe motivation behind the proposed task needs to be better elaborated. As of now, the paper mentions in one line that automatic understanding of figures could help human analysists. But, it would be good if this can be supported with real life examples.\n2.\tThe dataset proposed in the paper comes with bounding box annotations, however, the usage of bounding boxes isn’t clear. The paper briefly mentions supervising attention models using such boxes, but it isn’t clear how bounding boxes for data points could be used.\n3.\tIt would have been good if the paper had the experiments on reconstructing quantitave data from plots and using bounding boxes for providing attention supervision, in order to concretize the usage of these annotations?\n4.\tThe paper mentions the that analyzing the performance of the models trained on FigureQA on real datasets would help extend the FigureQA corupus. 
So, why didn’t authors try the baselines models on FigureSeer dataset?\n5.\tIt is not clear why did the authors devise a new metric for smoothness and not use existing metrics?\n6.\tThe paper should clarify which CNN is used for CNN + LSTM and Relation Networks models?\n7.\tThe paper should clarify the loss function used to train the models? Is it binary cross entropy loss?\n8.\tThe paper does not mention the accuracy using the quantitative data associated with the plot? Is it 100%? What if two quantities being questions about are equal and the question is about finding the max/min? How much is the error due to such situations?", "Summary:\nThe paper introduces a new visual reasoning dataset called Figure-QA which consists of 140K figure images and 1.55M QA pairs. The images are generated synthetically by plotting perturbed sampled data using a visualization tool. The questions are also generated synthetically using 15 templates. Performance of baseline models and humans show that it is a challenging task and more advanced models are required to solve this task.\n\nStrengths:\n— FigureQA can help in developing models that can extract useful information from visual representations of data.\n— Since performance on CLEVR dataset is already close to 100%, more challenging visual reasoning datasets would encourage the community to develop more advanced reasoning models. One of such datasets can be FigureQA.\n— The paper is well written and easy to follow.\n\n\nWeaknesses:\n— Since the dataset is created synthetically, it is not clear if it is actually visual reasoning which is needed to solve this task, or the models can exploit biases (not necessarily language biases) to perform well on this dataset. In short, how do we know if the models trained on this dataset are actually learning something useful? One way to ensure this would be to show that models trained on this dataset can perform well on some other task. The first thing to try to show the usefulness of FigureQA is to show that the models trained on FigureQA dataset perform well on a real (figure, QA) dataset.\n— The only advantages mentioned in the paper of using a synthetic dataset for this task are having greater control over task’s complexity and enabling auxiliary supervision signals, but none of them are shown in this paper, so it’s not clear if they are needed or useful.\n— The paper should discuss what type of abilities are required in the models to perform well on this task, and how these abilities are currently not studied in the research community. Or in short, what new challenges are being introduced by FigureQA and how should researchers go about solving them on a high level?\n— With what goal were these 15 types of questions chosen? Are these the most useful questions analysts want to extract out of plots? I am especially concerned about finding the roughest/smoothest and low/high median. Even humans are relatively bad at these tasks. Why do we expect models to do well on them?\n— Why only binary questions? It is probably more difficult for analysts to ask a binary question than to ask non-binary ones such as “What is the highest in this plot?”
— Why these 5 types of plots? Can the authors justify that these 5 types of plots are the most frequent ones dealt by analysts?\n— Are the model accuracies in Table 3 on the same subset as humans or on the complete test set? Can the authors please report both separately?\n\n\nOverall: \nThe proposed dataset seems reasonable but neither the dataset seems properly motivated (something where analysts actually struggle and models can help) nor it is clear if it will actually be useful for the research community (models performing well on this dataset will need to focus on specific abilities which have not been studied in the research community).", "Thank your to all reviewers for your constructive criticism.\nWe have revised our manuscript and addressed the issues raised by each reviewer below the respective reviews.\n\nA few general remarks:\n- We added updated multi-GPU implementation of the RN and CNN+LSTM baselines and added the improved performances to the revised manuscript.\n- We added a baseline using VGG features pretrained on ImageNet, described in the revised manuscript.\n- We also now provide the validation accuracies in the performance table.\n- As mentioned in the new version, we will make the source code for all models available.", "Thank you very much for your review and the constructive criticism.\nPlease find below our responses (each below a brief summary of the corresponding issue):\n\n1) No novel algorithms associated with data set. CVPR would be better\n--> The manuscript is a dataset paper, that focuses on motivating and introducing a novel task and providing baseline models for benchmarking. We are of the opinion that dataset papers fit well into a conference on representation learning, as long as they target weaknesses of current representation learning algorithms.\nRegarding the choice of venue: We were split between submitting to ICLR or CVPR and decided to submit to ICLR, due to potential restrictions for travelling to the US.\n\n2) Very constraint question templates, all are binary questions, no variation of language w.r.t. the same question type. Could use triplet representation instead of LSTM.\n--> In early experiments we did just that as we thought the same, but the LSTM still had a slight edge over those experiments. As there is not too much overhead, due to relatively short questions and we plan to extend the corpus, we just kept the LSTM. In future versions we plan to add more natural language variation, either by significantly increasing the number of templates based on feedback from the community or via crowdsourcing. We are collecting candidate templates for the next version, but as the experiments show, the current task already poses a challenge.\nWe would also like to mention that the additional datapoint annotations allows the dataset to be extended to any type of question or answer.\n\n3) A handcrafted approach might perform better\n--> Thank you for the suggestion. One could surely engineer an approach that might perform better, but the pipelines would probably differ between plot types. Also one would probably have to encode stronger prior knowledge in the features beyond the priors introduced by using a convolutional network or a relational network. 
Representation learning allows the model to learn to deal with all plot types in the training set.\nWe have added a baseline using VGG (pretrained on ImageNet) features as input, which does not perform well, suggesting that either pretraining representations on a more related data set or end-to-end training might be necessary.\nWe are not aware of any existing specific handcrafted approaches, that we could have evaluated on FigureQA without major changes to the code base.\n\nConcerning your question earlier about validation scores: We have added them to the revised manuscript. We also updated the implementation of the RN and CNN+LSTM baselines to use multiple GPUs and added the new improved results to the manuscript.\nWe hope that our revisions and the rebuttal address all of your concerns.", "5) Why a new metric for smoothness?\n--> We felt that second-order derivatives across the curve would be sufficient, as the magnitude of the second-order derivative correlates to “bumpiness” as perceived by humans. We approximate the second-order derivative by finite differences in the curve itself. Our roughness measure is similar to the second derivative calculation for a quadratic interpolant with Lagrangian basis polynomials (see Equation 15.44 in https://www.rsmas.miami.edu/users/miskandarani/Courses/MSC321/lectfiniteDifference.pdf\nWe decided against using a surface roughness measure or variance because these measures do not work well for globally “rough” curves, like quadratics, due to their high deviation from the mean of the line plot.\nOne alternative roughness measure would have been lag-X autocorrelation, though experimentation is necessary to find the right lag parameter and we felt it would be better to have an objective, parameter-free model.\n\n6) The paper should clarify which CNN is used for CNN + LSTM and Relation Network\n--> We completely specified the CNNs both for the CNN + LSTM and the RN in the initial draft of the manuscript. All hyperparameters required for implementation can be found in the respective paragraphs in Section 4 (“Models”).\nThe paragraph “CNN+LSTM” says:\n“The visual representation comes from a CNN with five convolutional layers, each with 64 kernels of size $3\\times3$, stride 2, zero padding of 1 on each side and batch normalization (Ioffe et al. 2015), followed by a fully connected layer of size 512. All layers use the \\gls{relu} activation function.”\nAnd the paragraph of the RN mentions that the same CNN architecture without the fully-connected layer is used.\n\n7) The paper should clarify which loss function is used. Is it the binary cross-entropy?\n--> Yes, we used the standard binary cross-entropy loss. Thanks for pointing this out. We added this information in the revised manuscript.\n\n8) The paper does not mention the accuracies using quantitative data. How much error is due to min/max questions with two equal quantities?\n--> Quantitative data is not used in any of our models, as we are aiming to achieve intuitive visual understanding of figures.\nIf two quantities are the same and greater than all others, then both are maxima. Because of potential misunderstandings (is the maximum meant to be unique?) we avoided this special case in data generation.", "Thank you very much for your review and the constructive criticism.\nPlease find below our responses (each below a brief summary of the corresponding issue):\n\n1) Elaborate motivation of the proposed task. 
The manuscript only mentions that automatic understanding of figures could help human analysts. Real-life examples would be good.\n--> a) One example in real life is that people who work in finance have to interpret large amounts of simple plots everyday. A computer vision algorithm that can assist here, could save time.\nApart from financial data and plots, the skill of interpreting figures and understanding visual information is very useful and plays a role in education. Similar questions to those we have in FigureQA (among more complicated ones) are usually part of Graduate Record Examinations (GRE). Our choice of plot types and question types were in part motivated by such exams (see for example here: https://www.ets.org/s/gre/accessible/gre_practice_test_3_quant_18_point.pdf pages: 29, 44, 45, 86). We have added/emphasized this point in the revised manuscript.\n\n2) The manuscript mentions that bounding boxes could be used for supervising attention models, but doesn’t clearly describe how.\n--> One example would be to use an attention window to extract image patches containing one or multiple objects of interest (e.g. line segments in a line plot). We initially thought this might be necessary to get models to train at all, but found that training without auxiliary objectives achieved a performance significantly above chance. Because a significant amount of work went into extracting bounding boxes using a modified version of Bokeh, we decided to include them in the release, in case someone wants to use the data for a different task, such as bounding box prediction or reconstruction of data coordinates. The revised manuscript now clarifies this and clearly states that experiments using attention models or extraction of coordinates are outside of the scope of this work.\n\n3) Why no experiments on reconstruction of quantitative data or attention?\n--> This is partially answered under 2). But in addition, the focus of our work is on intuitive understanding of figures based on visual cues, rather than inversion of the visualization pipeline. Humans usually don’t build a table of coordinates to reach the conclusion that one of the curves seems smoother or that a bar is greater than another.\nThe revised manuscript puts more emphasis on our focus on intuitive figure understanding.\n\n4) The manuscript mentions that analyzing performance of models trained on FigureQA on real data could help to extend the FigureQA dataset. Why didn’t the authors add such experiments on FigureSeer?\n-->This is a good point, and we can elaborate more on our decision to not use the FigureSeer dataset (Siegel et al., 2016).\nThe FigureSeer paper claims to have 3,500 QA pairs for a subset of the figures, but these types of questions and answer types are not found in our dataset. The questions are templated, concerning a dataset (for figure retrieval) and a metric (for figure analysis), for which the answer is the numerical value of the metric in that dataset figure. The metrics are not consistent with the question types in FigureQA and the answers are non-binary, so we could not train our baselines on that data. The FigureSeer questions were also not publicly available and were not provided when we requested the full version of the dataset.\nData point annotations included for a subset of 1,000 FigureSeer images were crowdsourced and aren’t reliable for generating figure images or question-answer pairs. Figures with many points are often missing points (e.g. 
01951-10.1.1.20.2491-Figure-1 from the dataset sample available here: http://ai2-website.s3.amazonaws.com/data/FigureSeerDataset.zip). These 1,000 annotated images are all line plots, which only covers one fifth of the figure types available in FigureQA, so they are only a starting point.\nFinally, annotating a portion of the FigureSeer dataset as a real-world test set was infeasible given the limited time we had to prepare this rebuttal, though we intend to complete this for the next version of the dataset. \n[we had to split the rebuttal into two parts, this is part 1]", "5) Why only binary questions? It is probably unnatural for analysts to frame their problems in binary questions.\n--> The choice of binary questions allowed us to balance the dataset to avoid problems with language biases as described in Goyal et al. (2016). NLVR (Suhr et al., 2017), another visual reasoning dataset, also poses a binary classification (is the provided statement true or false). We assume that a representation learned on a balanced binary dataset would at least be useful as a strong initialization for the non-binary setting and plan to investigate this in future work.\n\n6) Why these 5 types of plots. Are they the most frequently used ones?\n--> As mentioned in our response to 4), the dataset is in part inspired by math questions, such as those found in GRE exams. Besides scatter plots, these are the standard plot types in Matplotlib, the plotting library we used in the initial phase of development.\nAs the research community gets closer to a solution of the FiguraQA task, we plan to extend the dataset. Examples of interesting plot types would be scatter plots, Venn diagrams, area charts and radar plots, or compound types, such as line/area-bar charts or pareto charts.\n\n7) Are accuracies of the models in Table 3 on the same subset that humans were tested on? If not, they should be reported separately.\n--> Thank you for pointing this out. We agree that this should have been considered and revised the manuscript. It now separately reports the performance on the full test set in one table and compares CNN vs RN vs the human performance on a test subset in another table.\n\nReferences:\n- Goyal, Yash, et al. \"Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering.\" arXiv preprint arXiv:1612.00837 (2016).\n- Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Zitnick, C. L., & Girshick, R. (2017, July). CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1988-1997). IEEE.\n- Perez, Ethan, et al. \"FiLM: Visual Reasoning with a General Conditioning Layer.\" arXiv preprint arXiv:1709.07871 (2017).\n- Santoro, Adam, et al. \"A simple neural network module for relational reasoning.\" arXiv preprint arXiv:1706.01427 (2017).\n- Siegel, N., Horvitz, Z., Levin, R., Divvala, S., & Farhadi, A. (2016, October). FigureSeer: Parsing result-figures in research papers. In European Conference on Computer Vision (pp. 664-680). Springer International Publishing.\n- Suhr, A., Lewis, M., Yeh, J., & Artzi, Y. (2017). A corpus of natural language for visual reasoning. 
In 55th Annual Meeting of the Association for Computational Linguistics, ACL.\n", "Thank you very much for your review and the constructive criticism.\nPlease find below our responses (each below a brief summary of the corresponding issue):\n\n1) Synthetic dataset, not clear if visual reasoning actually required or biases are exploited. A good test would be to evaluate the model on another task, e.g. on real figures.\n--> We recognize this and plan to address this issue by annotating images from the FigureSeer dataset (Siegel et al., 2016) and other sources in the future.\nWe tried to address the bias concern in the original manuscript by balancing the data set. The text-only model does not perform better than chance. Since each image has exactly the same number of yes and no answers, a vision-only model can also not reliably achieve a higher performance than chance.\nThe significant amount of annotation work required to produce such a test set with real images has prevented us from completing it for the original submission or our updated paper. Regardless of the source of these real figure images, the colors of the plot elements must be extracted or adjusted, which requires manual effort. We would need to crowdsource questions and answers for these images as well. Both of these aspects have been beyond our time and monetary budgets.\n\n2) The only advantages of synthetic dataset mentioned in paper are greater control over task complexity and availability of auxiliary supervision signals, but this is not explored in the manuscript.\n--> We take inspiration from other visual reasoning datasets like CLEVR (Johnson et al., 2017) and NLVR (Suhr et al., 2017) for our task. Ultimately the effort to collect and annotate real figure images was prohibitively costly. Crowdsourcing efforts would be needed to extract colors, plot element names, and data points as well as to generate questions and answers - all of which have no accuracy guarantees.\nAccording to your suggestion, we have revised the manuscript to clearly state that experiments exploring additional annotations are outside of the scope of this paper and that the annotations are provided to encourage researchers to define other tasks. It also emphasizes that having reliable ground-truth targets is maybe the most important benefit of using a synthetic dataset and that weaknesses can be addressed by iteratively updating the corpus.\n\n3) Need to discuss new challenges posed by FigureQA and high-level description of how to approach a solution to them.\n--> To perform well on the FigureQA task, models need to detect and reason about spatial properties and relationships between them. By our task formulation we want to encourage approaches for intuitive understanding of figures, that do not invert the visualization pipeline, i.e. do not revert to reconstructing the coordinates of data points. We think that end-to-end training of models on raw images and text on this task instead of training multiple disjoint components is more likely to adapt well to future extensions of this data set.\nSince our data includes questions aiming both at small visual details (e.g. dot-line plots) as well as larger patterns (e.g. area under the curve or pie slices), partially scale-invariant approaches may be useful. Standard CNNs are good at detecting patterns, but not great at detecting relations between them. 
Specialized architectures such as the Relation Network (Santoro et al., 2017) or FiLM (Perez et al., 2017) are more suited to visual reasoning tasks.\n\n4) Why these 15 questions, are they the most useful questions for analysts? Why expect models to be good if humans already struggle with smoothness or median questions?\n--> Questions about maximum, minimum, median, area under the curve and the chosen plot types are often found in maths questions of GRE exams (example: https://www.ets.org/s/gre/accessible/gre_practice_test_3_quant_18_point.pdf, question 17). The goal in AI is to achieve human-level understanding and intelligence. Humans in GRE tests usually don’t create a table containing coordinates of data points before answering such questions, which is part of why we decided against including questions that require the reconstruction of quantities, such as coordinates of data points. As the research community gets closer to human performance in the FigureQA task, we plan to extend the data set with more question templates or crowd-sourced questions.\nThe revised manuscript contains more detail on our motivation.\n[we had to split the rebuttal into two parts due to the character limit, this is the first half]" ]
[ 6, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyunbfbAb", "BkI24OTQz", "BkNKQ_amG", "SyJIEO6XG", "iclr_2018_SyunbfbAb", "iclr_2018_SyunbfbAb", "iclr_2018_SyunbfbAb", "BJskuZ9ez", "BkjM4uamG", "HJ2y6-5gz", "B1Lr7dp7M", "Hk2Kgd3gM" ]
iclr_2018_SylJ1D1C-
PDE-Net: Learning PDEs from Data
Partial differential equations (PDEs) play a prominent role in many disciplines such as applied mathematics, physics, chemistry, materials science, computer science, etc. PDEs are commonly derived based on physical laws or empirical observations. However, the governing equations for many complex systems in modern applications are still not fully known. With the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored. Such a vast quantity of data offers new opportunities for data-driven discovery of hidden physical laws. Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDE-Net, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models. The basic idea of the proposed PDE-Net is to learn differential operators by learning convolution kernels (filters), and to apply neural networks or other machine learning methods to approximate the unknown nonlinear responses. Compared with existing approaches, which either assume the form of the nonlinear response is known or fix certain finite difference approximations of differential operators, our approach has the most flexibility by learning both the differential operators and the nonlinear responses. A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constraints are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originating from wavelet theory). We also discuss relations of the PDE-Net with some existing networks in computer vision such as Network-In-Network (NIN) and Residual Neural Network (ResNet). Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and to predict the dynamical behavior for a relatively long time, even in a noisy environment.
workshop-papers
This paper studies the approximation and integration of partial differential equations using convolutional neural networks. By constraining CNN filters to have prescribed vanishing moments, the authors interpret CNN-based temporal prediction in terms of 'PDE discovery'. The method is demonstrated on simple convection-diffusion simulations. Reviewers were mixed in assessing the quality, novelty and significance of this work. While they all acknowledged the importance of future research in this area, they raised concerns about clarity of exposition (which has been improved during the rebuttal period), as well as the novelty and motivation. The AC shares these concerns; in particular, he misses a more thorough analysis of stability (under what conditions would one use this method to estimate an actual PDE and obtain some certificate of approximation?) and a discussion of pitfalls (in real situations one may not know in advance the family of differential operators involved in the physical process nor the nature of the non-linearity; does the method produce a faithful approximation? why?). Overall, the AC thinks this is an interesting submission that is still at a preliminary stage, and therefore recommends resubmitting to the workshop track at this time.
train
[ "B1JeHrDlf", "rJVcvUYlz", "SJt5gvplM", "Sy9QBDw7z", "HyYNMZ7Zz", "H18xu-XZf", "rky1xeQ-G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper explores the use of deep learning machinery for the purpose of identifying dynamical systems specified by PDEs.\n\nThe paper advocates the following approach:\nOne assumes a dynamic PDE system involving differential operators up to a given order. Each differential operator term is approximated by a filter whose values are properly constrained so as to correspond to finite difference approximations of the corresponding term. The paper discusses the underlying wavelet-related theory in detail. A certain form of the dynamic function and/or source terms is also assumed. An explicit Euler scheme is adopted for time discretization. The parameters of the system are learned by minimizing the approximation error at each timestep. In the experiments reported in the paper the reference signal is provided by numerical simulation of a ground truth system and the authors compare the prediction quality of different versions of their system (eg, for different kernel size).\n\nOverall I find the paper good, well written and motivated. The advocated approach should be appealing for scientific applications of deep learning where not only the quality of approximation but also the interpretability of the identified model is important.\n\nSome suggestions for improvement:\n* The paper doesn't discuss the spatial boundary conditions. Please clarify this.\n* The paper adopts a hybrid approach that lies in between the classic fully analytical PDE approach and the data driven machine learning. I would like to see a couple more experiments comparing the proposed approach with those extremes. (1) in the first experiment, the underlying model order is 2 but the experiment allows filters up to order 4. Can you please report if generalization quality improves if the correct order 2 is specified? (2) On the other side, what happens if no sum (vanishing order) constraints are enforced during model training? This abandons the interpretability of the model as approximating a PDE of given order but I am curious to see what is the generalization error of this less constrained system.\n\nNit: bibtex error in Weinan (2017) makes the paper appear as E (2017).", "Authors propose a neural network based algorithm for learning from data that arises from dynamical systems with governing equations that can be written as partial differential equations. The network architecture is constrained such that regardless of the parameters, it always implements discretization of an arbitrary PDE. Through learning, the network adapts itself to solve a specific PDE. Discretization is finite difference in space and forward Euler in time. \n\nThe article is quite novel in my opinion. To the best of my knowledge, it is the first article that implements a generic method for learning arbitrary PDE models from data. In using networks, the method differs from previously proposed approaches for learning PDEs. Experiments are only presented with synthetic data but given the potential for the method and its novelty, I believe this can be accepted. However, it would have been a stronger article if authors have applied to a real life model with real initial and boundary conditions, and real observations. \n\nI have three main criticism about the article: \n\n 1. Authors do not cite Chen and Pock’s article on learning diffusion filters with networks that first published in CVPR 2015 and then authors published a PAMI article this year. 
To the best of my knowledge, they are the first to show the connection between res-net type architecture and numerical solutions of PDEs. I think proper credit should be given. [I need to emphasize that I am not an author in that article.] \n 2. Authors emphasize the importance of interpretability, however, the constraint on the moment matrices might cripple this aspect. The frozen filters have clear interpretations. They are in the end finite difference approximations with some level of accuracy depending on the size and the number of zeros. When M(q) matrix is free to change, it is unclear what the effect will be on the filters. Are the numbers that replace stars in Equation 6 for instance, will be absorbed in the O(\\epsilon) term? Can one really interpret the final c_{ij} for filters whose M(q) have many non-zeros? \n 3. The introduction and results sections are well written. The method section on the other hand, needs improvement. The notation is not easy to follow due to missing definitions. I believe with proper definitions — amounting to small modifications — readability of the article can substantially improve. \n\n\nIn addition to the main criticisms, I have some other questions and concerns: \n\n 1. How sensitive is the model? In real life, one cannot expect to get observations every delta t. Data is most often very sparse. Can the model learn in that regime? Can it differentiate between different PDEs and find the correct one with sparse data? \n 2. The average operations decreases the interpretability of the proposed model as a PDE. Depending on the filter size, D_{0}u can deviate from u, which should be the term that should be used in the residual block. Why do authors need this? How does the model behave without it? \n 3. The statement “Thus, the PDE-Net with bigger n owns a longer time stability.” is a very vague statement. I understand with larger n, training would be easier since more data would be used to estimate parameters. However, it is not clear how this relates to “time stability”, which is also not defined in the article. \n 4. How is the relative error computed? Values in relative error plots goes as high as 10^2. That would be a huge error if it is relative. ", "This paper addresses complex dynamical systems modelling through nonparametric Partial Differential Equations using neural architectures. It falls down within the context of a recent and growing literature on the subject.\n\nThe most important idea of the papier (PDE-net) is to learn both differential operators and the function that governs the PDE. To achieve this goal, the approach relies on the approximation of differential operators by convolution of filters of appropriate order. This is really the strongest point of the paper.\n\nMoreover, a basic system called delta t block implements one level of full approximation and is stoked several times.\nA short section relates this work to recent existing work and numerical results are deployed on simulated data.\nIn particular, the interest of learning the filters involved in the approximation of the differential operators is tested against a frozen variant of the PDE-net.\n\nComments:\nThe paper is badly structured and is sometimes hard to read because it does not present in a linear way the classic ingredients of Machine Learning, expression of the full function to be estimated, equations of each layer, description of the set of parameters to be learned and the loss function. 
Parts of the puzzle have to be found in the core of the paper as well as in simulations.\n\nAbout the loss function, I was surprised not to see a sparsity constraint on the different filters in order to select the order of the differential operators themselves. If one want to achieve interpretability of the resulting PDE, this is very important.\n\nI also found difficult to measure the degree of novelty of the approach considering the recent works and the related work section should have been much more precise in terms of comparison. \n\nFor the simulations, it is perfectly fine to rely on simulated datasets. However the approach is not compared to the closest works (Sonoda et al., for instance).\n\nFinally, I’ve found the paper very interesting and promising but regarding the standard of scientific publication, it requires additional attention to provide a better description the model and discuss the learning scheme to get a strongest and reproducible approach.", "Dear Area Chair and Reviewers,\n\nWe have revised our manuscript according to the reviewers suggestions. In particular, we have\n\n1) added description to some of the notation. In particular, we added some examples after Proposition 2.1 to help the readers understand the concept of sum rules and its relation to differential operators.\n\n2) added some experiments in Section 3 (starting from page 11, \"Further Experiments\"). We compared the original PDE-Net with PDE-Net assuming we know the highest order of the linear PDE is 2. We also compared the PDE-Net with the Freed-PDE-Net (the network without any moment constraints). In a nutshell, with more prior knowledge on the unknown PDE, we are able to obtain a more accurate estimation on the model. Also, having no moment constraints, we cannot identify the PDE model, though we are be able to improve the prediction accuracy over the original PDE-Net.", "We would like to thank the reviewer for his/her constructive suggestions. Our responses are as follows.\n\n1, Thanks for pointing this out! We are aware of Chen and Pock’s CVPR article. We did cite their work since they used numerical PDE (discretization of Perona-Malik) to inspire network architecture which is more related to the idea of unrolling dynamics originally proposed by Gregor and LeCun, ICML 2010. They did not require the underlying denoising process be governed by a PDE, nor did they attempt to recover the PDE (should there exist one). However, we do agree on the importance of Chen and Pock’s contribution, and will properly cite the paper in the revised version. \n\n2, In PDE-Net, we only partly free M(q). On one hand, we impose zero-moment constrains on lower order moments so that we know which differential operator the corresponding convolution is approximating. On the other hand, we free higher order moments so that the filters can adjust themselves to achieve better stability and approximation accuracy according to the data. In this way, we are able to preserve some expressive power of setting all moments free (having full degree of freedom), while still maintain transparency of the network (i.e. knowing which filter is in correspondence to which differential operator so that we can identify the response function correctly). We have done experiments using the diffusion equation with a nonlinear source without any moment constraints. We got great prediction, whereas we cannot identify the equation at all.\n\n3, Thanks for the suggestion! 
In the revision, we will improve clarity of the method section by including more definitions and explanations.\n\nFor the reviewer’s other questions and concerns:\n\n1, Since it’s hard for us to find suitable real 2D physical data, we have to test the idea on simulated data sets as a proof of concept. For sparse observations in time, if there are many replicated experiments with different initial values, we believe the PDE-Net is still effective though the depth may be limited by the number of temporal observations. If there is only one single experiment with few temporal observations, PDE-Net may fail due to lack of data. At this point, we cannot predict how much less data the PDE-Net can tolerate. But what we know for sure is we will need much less data than normal deep learning regime since PDE-Net has relatively fewer trainable parameters than heavy-duty networks in deep learning.\n\n2, The idea of using D_0 u instead u in PDE-Net comes from stability of numerical PDEs. For a difference scheme of a PDE, $u_m^{n+1}=u_m^n+L_h(u)$, sometimes we use $1/2(u_{m+1}^n+u_{m-1}^n)$ instead of $u_m^n$ to get a modified scheme, which usually has a larger stable region, or it can even make an unstable scheme stable. Inspired by this, we introduce the average operator in PDE-Net. However, this treatment may or may not be vital depending on the data, but it has the potential to boost stability whenever it is needed.\n\n3, When we apply the \\delta t-block to a given data, the output has an error. The errors will accumulate when we repeatedly apply the \\delta t-block. In general the error grows exponentially as shown by the blue error curves in Figure 3. By long time stability of a learned network (more precisely, the learned \\delta t-block), we mean that the errors are well controlled after multiple \\delta t blocks are applied. This is not exactly the same “stability” as in numerical PDEs, but it shares some similarities. To enable a longer time prediction, we demonstrated that if we train an n layer PDE-Net, rather than merely one or just a few \\delta t-blocks, the filters will be learned to at least ensure the stability of applying \\delta t-block n times. Choosing bigger n for the PDE-Net indeed helps to slow down the growth of the prediction error, which was demonstrated in for instance Figure 3&8.\n\n4, The relative error in our paper is defined by $\\epsilon_r=\\frac{\\sum(\\tilde{u}_m-u_m)^2}{\\sum(u_m-\\bar{u})^2}$, where u is the true data, \\bar{u} is the average of u, and \\tilde{u} is the predicted data. Maybe calling it “normalized error” is less misleading, and we will correct it in the revised version.\n\nP.S. we are working on the revision. Once we are done with the requested additional experiments, we will summarize the new results along with other suggested modifications in the revised manuscript. ", "We would like to thank the reviewer for carefully evaluating the paper. The summary captures the main points of the paper. Our responses to the reviewers suggestions are as follows.\n\n1, Boundary conditions are indeed very important for PDEs. In our PDE-Net, we did not emphasize much on dealing with complicated boundary conditions. To some extent, we can think that PDE-Net focuses on initial-value problems. However, for some simple initial-boundary problems, we can still apply PDE-Net by using padding strategies to deal with boundary conditions. 
For instance, in the first PDE example, we adopted the periodic boundary conditions and used periodic padding for the convolutions in the PDE-Net; in the second PDE example, we adopted the Dirichlet boundary condition and used zero padding.\n\n2, We are working on the requested numerical experiments. We will upload a revised manuscript once they are done.\n\n3, Weinan is the first name of the author. His last name is E and therefore the paper is cited as E (2017).", "First, we would like to thank the reviewer for carefully evaluating our paper. The reviewer’s summary captures most points of our paper. However, we think there are still some details and innovations of the paper that are not noticed, and we will respond to the reviewer’s comments in detail.\n\n1, We understand that different community organizes papers in rather different ways. The subject we study crosses the field of machine learning and applied mathematics. Thus, we had to take the conventions of both fields into account when organizing the manuscript. In the current form of the paper, we first clearly described the PDE to be estimated (Eq.(1)). Then we link convolution with differential operators and state the necessary notions needed before we can introduce our entire network (in Section 2.1). Since the notions are the key to grant transparency to the PDE-Net while preserving its expressive power, it deserves a separate subsection. Then we introduced the network architecture and the loss function in Section 2.2, followed by the discussions on parameters and initialization in Section 2.3. In order to illuminate our thoughts more clearly, we had to state the framework in a general setting first, and show detailed and intricate implementations for each special numerical experiment. \n\n2, Sparsity is really important for selecting the simplest form from dictionaries, just as in symbolic regression (Bongard & Lipson,2007) and sparse regression (Brunton et al. ,2016). For the neural networks, sparsity is also a popular choice of regularization. However, for PDE-Net, our numerical experiments show that it already performs well and has a good generalization even without a sparsity based regularization. We also note that the total number of trainable parameters is smaller than most deep networks. Therefore, we do not need sparsity to further reduce the space of parameters to prevent overfitting.\n\n3, As clearly stated in the introduction, the existing work on learning PDEs from data requires either a fixed (non-trainable) numerical approximation of derivatives (Rudy et al. ,2017), or knowing the exact form of the nonlinear response function (Raissi &Karniadakis , 2017). For PDE-Net, however, unlike the existing work, the proposed network only requires minor knowledge on the form of the nonlinear response function, and requires no knowledge on the involved differential operators (except for their maximum possible order) and their associated discrete approximations. We think it is a breakthrough comparing to the existing work. To the best of our knowledge, it’s also the first time to introduce the relation between sum rule of filters and the order of differential operators into the design of neural networks. \n\n4, As for the “relation to some existing networks”, we think they do not have much to do with learning PDEs from data, since most of those work do not assume (or it’s simply untrue) that the underlying process is governed by some PDE. We have also noticed Sonoda’s contributions in this area. 
They interpreted continuous denoising autoencoder as an approximation to backward heat equation applied to the distribution of the data set (Sonoda& Murata, 2016). Overall, these networks and their analysis focus on rather different aspects from our task.\n\n5, Learning PDEs from data is a rather challenging task if you want both predictive power and transparency. Therefore, it is worthwhile to first investigate the viability of the proposed approach on simulated data sets so that we can have precise evaluations of the performance. Furthermore, it’s very hard for us to find suitable real 2D physical data, we have to test the idea first on simulated data sets. Application to real-world datasets is definitely one of our future directions. \n\nAt last, thanks again for the reviewer’s comments and suggestions. But we still strongly believe that this manuscript is meaningful and deserves publishing." ]
[ 7, 8, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SylJ1D1C-", "iclr_2018_SylJ1D1C-", "iclr_2018_SylJ1D1C-", "iclr_2018_SylJ1D1C-", "rJVcvUYlz", "B1JeHrDlf", "SJt5gvplM" ]
iclr_2018_S1TgE7WR-
Covariant Compositional Networks For Learning Graphs
Most existing neural networks for learning graphs deal with the issue of permutation invariance by conceiving of the network as a message passing scheme, where each node sums the feature vectors coming from its neighbors. We argue that this imposes a limitation on their representation power, and instead propose a new general architecture for representing objects consisting of a hierarchy of parts, which we call Covariant Compositional Networks (CCNs). Here covariance means that the activation of each neuron must transform in a specific way under permutations, similarly to steerability in CNNs. We achieve covariance by making each activation transform according to a tensor representation of the permutation group, and derive the corresponding tensor aggregation rules that each neuron must implement. Experiments show that CCNs can outperform competing methods on some standard graph learning benchmarks.
workshop-papers
This is a good contribution, with the potential to become extremely good and significant if the presentation is substantially improved. All reviewers comment on the lack of clarity of the paper, especially concerning its central contributions (Sections 4 and 5), as also illustrated by the relatively low confidence scores. Reviewers also mention the current imbalance between the generality of high-order compositional networks and the motivation and empirical evaluation of these models. Generalizations of graph neural representations based on higher-order local interactions are particularly interesting in contexts such as combinatorial optimization, where heuristics typically exploit high-order interactions. In summary, we believe this work deserves a further iteration before it can appear in the proceedings, in order to improve the exposition and the motivation of compositional networks, which will greatly improve its exposure to the community. That said, the idea it puts forward is of potential interest, and thus the AC recommends resubmission to the workshop track.
train
[ "S1MHyoFgf", "H14mK9iNG", "HkfwgoYef", "BJxhdf9lf", "ryrCjA5mz", "HywtoC97f", "HJmIyK5QG", "SkSEy4q7z", "HkiEKl5Xz", "HyqaFy9XG", "SyP9nCt7z", "rkNRStmzG", "Hyx1wVQzf", "Hk0MpZ2WG", "r1wHO8EWM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "public", "public", "public" ]
[ "Thank you for your contribution to ICLR. The paper covers a very interesting topic and presents some though-provoking ideas. \n\nThe paper introduces \"covariant compositional networks\" with the purpose of learning graph representations. An example application also covered in the experimental section is graph classification. \nGiven a finite set S, a compositional network is simply a partially ordered set P where each element of P is a subset of S and where P contains all sets of cardinality 1 and the set S itself. Unfortunately, the presentation of the approach is extremely verbose and introduces old concepts (e.g., partially ordered set) under new names. The basic idea (which is not new) of this work is that we need to impose some sort of hierarchical order on the nodes of the graph so as to learn hierarchical feature representations. Moreover, the hierarchical order of the nodes should be invariant to valid permutations of the nodes, that is, two isomorphic graphs should have the same hierarchical order on their nodes and the same feature representations. Since this is the case for graph embedding methods that collect feature representations from their neighbors in the graph (and where the feature aggregation functions are symmetric) it makes sense that \"compositional networks\" generalize graph convolutional networks (and the more general message passing neural networks framework). \n\nThe most challenging problem, however, namely the problem of finding a concrete and suitable permutation invariant hierarchical decomposition of the nodes plus some aggregation/pooling functions to compute the feature representations is not addressed in sufficient detail. The paper spends a lot of time on some theoretical definitions and (trivial) proofs but then fails to make the connection to an approach that works in practice. The description of the experiments and which compositional network is chosen and how it is chosen seems to be missing. The only part hinting at the model that was actually used in the experiments is the second paragraph of the section 'Experimental Setup', consisting of one long sentence that is incomprehensible to me. \n\nInstead of spending a lot of effort on the definitions and (somewhat trivial) propositions in the first half of the paper, the authors should spend much more time on detailing the experiments and the actual model that they used. In an effort to make the framework as general as possible, you ended up making the paper highly verbose and difficult to follow. \n\nPlease address the following points or clarify in your rebuttal if I misunderstood something:\n\n- what precisely is the novel contribution of your work (it cannot be \"compositional networks\" and the propositions concerning those because these are just old concepts under new names)?\n- explain precisely (and/or more directly/less convoluted) how your model used in the experiments looks like; why do you think it is better than the other methods?\n- given that compositional network is a very general concept (partially ordered set imposed on subsets of the graph vertices), what is the principled set of steps one has to follow to arrive at such a compositional network tailored to a particular graph collection? isn't (or shouldn't) that be the contribution of this work? Am I missing something?\n\nIn general, you should write the paper much more to the point and leave out unnecessary math (or move to an appendix). 
The paper is currently highly inaccessible.", "Dear authors,\n\nThanks for the additional information. \n\nWhen I write that a partial order is imposed on the nodes of the graph, I mean a partial order where the elements that are ordered are subsets of the set of nodes of the input graph G. This is indeed the case, because in the graph M the nodes are associated with subsets of nodes of the graph G (each node in M is associated with such a subset = the receptive field). I do understand now, however, that my words could be understood as me saying that your approach imposes an order on individual nodes of G. I know this is not the case.\n\nLet me take a step back and let me try to explain again why I think that some parts of the paper are unnecessarily verbose and make it much less accessible than it could be. \n\nFirst, let me reformulate the computational problem you are addressing here (as far as I understand it).\n\nThe input is a graph G (let's call this the data graph) and the output is a computation graph M. The computation graph M has several properties:\n\n(1) The nodes are associated with computations that are determined by the aggregation functions. Incoming edges determine the input to the performed computation (the variables). \n(2) The nodes are associated with a receptive field P. This shows which basic variables (features at individual nodes of G) are involved in the operation that's performed at the node of M.\n(3) The nodes in the first layer are associated with the nodes of G. This makes sense because we want to always start with the nodes in G and their features. The function with these features as variables is determined by the computation graph M.\n(4) We want M to be invariant to (certain types of) symmetries of G. For instance, some permutation of G's labeling that is an isomorphism should also be an isomorphism in M. (Wrt to the graph structure but also the aggregation functions that are computed.)\n\nAgain, this is the computational problem as I understand it. And it is a problem that everyone who tries to learn NNs for graphs is (implicitly) working on. Everything you write in section 3 is a (in my opinion) verbose way of setting up this problem. Algorithm 1 is one way of creating M from G commonly used in the literature. It simply uses the neighborhood information to determine the edge structure of M. \n\nMy worry about your paper is that the reader is already lost at a point where you have essentially just reformulated a problem that several papers have addressed before. \n\nThe interesting new parts of the paper are in section 4 and 5. Unfortunately, I think that these sections are also in parts unnecessarily technical and lack intuitive examples. I generally appreciate works that provide a unifying framework capturing previous work as specific instances. I think that the paper succeeds here and that this is the major contribution. But I also think that it is the authors job to make the paper accessible. That's the reason why I would urge the authors to rework the presentation. Add more examples (the ones given are not very helpful), describe the intuitions behind the definitions earlier etc.\n\nFinally, while the paper is exhaustive on the technical parts, it provides very little details on the experimental set up. There is one short paragraph explaining your model. It would be very helpful to the reader if you would explain the links between the model used in your experiments and the theory introduced earlier in more detail. 
\n\nin summary, I appreciate the effort of the authors to engage in a conversation that did help me to better understand the paper. In light of this discussion, I'll increase my score by one point. I am still tending more towards a rejection because of the papers presentation that could be improved substantially and also due to the lack of details in the experimental section. \n\n", "The paper presents a generalized architecture for representing generic compositional objects, such as graphs, which is called covariant compositional networks (CCN). Although, the paper is well structured and quite well written, its dense information and its long size made it hard to follow in depth. Some parts of the paper should have been moved to appendix. As far as the evaluation, the proposed method seems to outperform in a number of tasks/datasets compare to SoA methods, but it is not really clear whether the out-performance is of statistical significance. Moreover, in Table 1, training performances shouldn't be shown, while in Table 3, RMSE it would be nice to be shown in order to gain a complete image of the actual performance.", "The paper introduces a formalism to perform graph classification and regression, so-called \"covariant compositional networks\", which can be seen as a generalization of the recently proposed neural message passing algorithms.\n\nThe authors argue that neural message passing algorithms are not able to sufficiently capture the structure of the graph since their neighborhood aggregation function is permutation invariant. They argue that relying on permutation invariance will led to some loss of structural information.\n\nIn order to address this issue they introduce covariant comp-nets, which are a hierarchical decompositon of the set of vertices, and propose corresponding aggregation rules based on tensor arithmetic.\n\nTheir new method is evaluated on several graph regression and classification benchmark data sets, showing that it improves the state-of-the-art on a subset of them.\n\nStrong points:\n+ New method that generalizes existing methods\n\nWeak Points:\n- Paper should be made more accessible, especially pages 10-11\n- Should include more data sets for graph classification experiments, e.g., larger data sets such as REDDIT-*\n- Paper does not include proofs, should be included in the appendix\n- Review of literature could be extended\n\nSome Remarks:\n* Section 1: The reference Feragen et al., 2013 is not adequate for kernels based on walks.\n* Section 3 is rather lengthy. I wonder if its contents are really needed in the following.\n* Section 6.5, 2nd paragraph: The sentence is difficult to understand. Moreover, the reference (Kriege et al., 2016) appears not to be adequate: The vectors obtained from one-hot encodings are summed and concatenated, which is different from the approach cited. This step should be clarified.\n* unify first names in references (Marion Neumann vs. R. Kondor)\n* P. 5 (bottom) broken reference", "[CONTINUE FROM PART 1]\n\n(5) To address the question from (4) in designing a perfectly permutation-invariant neural network, here is the algorithm. \n\nFirst, consider a level/layer/iteration L-th of the message passing. We compute the receptive field of v at level L as the union of the receptive fields at level L - 1 of vertices w in the neighborhood of v. Take the \"first-order covariant compositional network\" as an example where the vertex representation is a matrix. 
The number of rows of vertex representation of w at level L - 1 is smaller or equal to the number of rows of vertex representation of v at level L, because the receptive field of w at level L - 1 is a subset of the receptive field of v at level L. We need to have the sizes of these two matrices equal. We can do so by multiplying the vertex representation of w at level L - 1 with a permutation matrix P on the left side. This permutation matrix P is simply defined as follows: P_ij = 1 if the i-th vertex in the receptive field of v at level L is the j-th vertex in the receptive field of w at level L - 1. In the case of \"second-order covariant compositional network\", we have to broadcast-multiply P and P-transpose on the left and right hand side.\n\nSecond, now all the vertex representations of vertices in the receptive field of v have the same size. We concatenate or formally saying \"tensor stack\" all these vertex representations. We obtain a higher-order tensor. From here, as in the \"second-order covariant compositional network\", we can \"tensor product\" this higher-order tensor with the reduced adjacency matrix of v at level L (subject to the receptive field of v at level L) and obtain an even higher-order tensor. For example, in the second-order CCN, after the \"tensor product\" step, we obtain a 6-order tensor.\n\nThird, given a high-order tensor as the representation of vertex v, we have to \"tensor contract\" or \"tensor reduce\" or in normal word \"shrink\" back it into a lower-order tensor for feasible computation. These tensor contraction operations as we define are perfectly permutation-invariant. Thus, we contract from high-order tensor into a matrix (in first-order CCN) or in a 3-order tensor (in second-order CCN).\n\nForth, we apply a learnable weight for all the channels in all vertex representations after the tensor contraction step. These learnable weights are leanred via back-propagation.\n\nFifth, on top of the network, we again \"shrink\" the vertex representations (matrices in first-order CCN, or 3-order tensors in second-order CCN) into vectors of channels. We sum up all these \"shrinked representations\" into a single vector, this is the vector for further regression/classification task. In addition, we can concatenate all \"shrinked representations\" of all levels, this can be a richer graph representation.\n\n(6) You may ask a question: The high-order tensors are huge, how can we deal with them? The answer is: we do NOT do the \"tensor product\" operation explicitly, because it cannot be hold in the computer memory. In the \"tensor contraction\" step, for example with GPUs, we introduce a \"virtual indexing system\" for a \"virtual tensor\" that computes the element of tensor only when needed given the index.\n\nThank you so much for your consideration. Please let us know your further questions. We look forward to hear from you soon.\n\nBest regards,\nRepresentative of the paper authors ", "Dear Reviewer 2,\n\nThank you very much for your effort in understanding our paper. I think it is necessary for us to clarify our algorithm here again without too much mathematics to avoid further misunderstanding. 
Please check my following words carefully.\n\n(1) Based on our definition, all other graph neural networks in the current literature, in particular Neural Graph Fingerprint [Duvenaud et al, 2015], Graph Convolution Neural Network [Kipf et al, 2016], Gated Graph Sequence Neural Network [Li et al, 2016], Learning Convolution Neural Network for Graphs [Niepert et al, 2016], are classified as the \"zero-order message passing\" in which at any level/layer/iteration the vertex representation is a vector, each element of the vector is for a channel. In another words, every channel is represented by a scalar.\n\n(2) From (1), we realize that to empower the representation we need more than a scalar per channel. We introduce the \"first-order message passing\" or in another name the \"first-order covariant compositional network\" in which the vertex representation of vertex v is a matrix, each row of the matrix corresponds to a vertex w in the receptive field of the vertex v. Now we can see that each channel (column of the matrix) is represented by a vector with the length as the size of the receptive field of v. At level/layer/iteration 0, the receptive field of v is a set containing only v. The receptive field of v grows gradually in the following level/layer/iteration.\n\n(3) Furthermore, we introduce the \"second-order message passing\" or in another name the \"second-order covariant compositional network\" such that the vertex representation of vertex v is now a 3-order tensor in which each channel is represented by a matrix of size N x N where N is the size of the receptive field of v.\n\n(4) You may ask a question: The receptive field of v or the set of vertices in the extended neighborhood of v can appear in any order, we have to use some algorithm to find the \"partial ordering\" of that receptive field? The answer is we do NOT need such algorithm. If we use some algorithm like Weisfeiler-Lehman algorithm to rank the vertices then it becomes Learning Convolution Neural Network for Graphs [Niepert et al, 2016]. In addition, finding an optimal ordering is an NP-hard problem, the Weisfeiler-Lehman isomorphism test still fails with a very small probability. You can check [Babai 2015] https://arxiv.org/pdf/1512.03547.pdf for details. What we want is a neural network model that is perfectly permutation-invariant.\n\n[TO BE CONTINUED]", "You fundamentally misunderstood our paper because you think we impose a partial order on the nodes of the input graph. We do not. I cannot stress this enough.\n\nThere are two different graphs in the paper: the input graph G and the corresponding composition scheme M. G is an an undirected graph and there is no partial order on it. M is a directed acyclic graph (by construction) and therefore it defines a partial order. \n\nI can see where you are coming from because if there was a partial order on the nodes of G in the first place then it would be easier to turn it into a neural network. That would be a \"cheap\" way of creating a representation for G. However, as you rightfully point out, imposing a partial order on the nodes of undirected graph so as to reflect multiscale structure is a thorny problem that we explicitly want to avoid (there is a literature on graph reductions which gets tangled up in exactly this issue).\n\nA lot of the paper (specifically, Section 3) is concerned with the relationship between G and M. 
Some of what you deem \"unnecessary math\" is about how to construct M in such a way that it behaves appropriately under transformations of G (Figure 3).\n\nWe do appreciate your quick response because it is essential that we clarify this point. I would also like to understand what it is that made you think that we impose a partial order on the nodes of G, because if you were mislead by this, then other readers will be mislead too. Turning your question around, can you point out the specific location in the paper that made you think that we impose a partial order on G?\n\nOnce again, G and M are different graphs. Every node of M corresponds to a set of nodes of G. Typically, M has many more nodes than G. There is no partial order on the nodes of G. ", "Before responding to your rebuttal, could you please point out precisely where I \"fundamentally misunderstood\" your paper. You do impose a partial order on the nodes of the input graph. That's not what I could have misunderstood. Thanks!", "The reviewer's comments would be valid if our method really was based on imposing a partial order on the vertices of the input graph (i.e., if we were turning it into a DAG). However this is not what we do.\n\nOur algorithm operates with two separate graphs: the original graph G, and the composition scheme M constructed from G. M is indeed a DAG (i.e., it defines a partial order), but G is whatever the input is, so there is no need to impose \"some sort of hierarchical order\" on its nodes that the reviewer is missing from the presentation.\n\nThe nodes of M correspond to *subsets* of the nodes of G, specifically neighborhoods of increasing radii, which naturally form a partial order by inclusion. This is explicitly stated in the paper in multiple places, eg., \"In composition networks for graphs, the atoms will usually be the vertices, and the P_i parts will correspond to clusters of nodes or neighborhoods of different radii\" (p.4). The construction of M is given explicitly on in points M1-M3 on page 5. Furthermore, Figure 5 shows how the neighborhoods are nested in the case of a simple input graph.\n\nThe reason for all the definitions in Sections 3 and 4 is that we need to define how M is constructed and what covariance conditions the corresponding activations must satisfy. Incidentally, this is also the main novelty of the paper. If the reviewer didn't understand these sections, it is not surprising that the \"Experiments\" section seems seems hazy and the tensor contractions in Section 5 just seem like unnecessary fluff.\n\nGiven the above misunderstanding it is also not surprising that the reviewer thinks that the paper is not very novel. The word \"compositional network\" may have been used before in different contexts, but the general way to construct a DAG from a structured input object, and the invariance properties discussed in Section 4 have not been discussed before in the literature. As we explain, some existing networks are special cases, but the general framework is entirely novel.\n\nBy breaking down our presentation into a sequence of fairly precise definitions and propositions, and drawing figures such as Figure 2, which depicts the composition scheme, and Figure 5, which depicts the neighborhoods in G that correspond to each of the nodes of the composition scheme, we tried to make the notion of composition scheme as clear as possible. 
We are disappointed that the message still didn't get through, and we would welcome suggestions on how to better convey what is really the main message and main novelty in the paper. (Maybe by a figure with the original graph and the composition scheme side by side?) \n\nWe would appreciate it if the reviewer revised his evaluation in light of the above clarification and would very much welcome suggestions on how to change the presentation so as to avoid other readers falling prey to the same misunderstanding. However, we can vouch that the math is not unnecessary: without it, this construction would simply not work. \n\n \n\n", "Thank you for your review. We tried to fix everything that you mention. In particular: we have added an appendix that contains all the proofs; added one more reference for kernels based on counting walks; rewrote the paragraph you mention in the \"Experiments\" section; generally extended and cleaned up the experimental results.\n\nWe appreciate your comment about Section 3. The reason that we structured the paper as we did was to emphasize that we have a new general architecture for learning from structured (multiscale) objects and that graphs are just a special case. In fact, this work started out significantly more abstract and general, revolving around ideas from representation theory, and we were pretty happy that by reformulating it in the compositional framework we could condense it to something that can be described in 15 pages and ultimately reduces to just tensor products and contractions. It seems like maybe we are not doing a very good job of conveying the generality and power of the approach, though, because all three reviews complain about \"why don't you just get down to describe your graph algorithm?\". Pages 10-11 may be dense, but the truth is that, at the end of the day, the tensor operations that they describe are mercifully straightforward, almost trivial.\n\nHaving said this, we could not find any existing deep learning software that implements the type of contractions that we need, so we had to write our own library. The library has now been released, but we cannot include a link here because it would break the anonymity of the submission. At the time of the submission the library could only use CPUs, which is why the datasets are relatively small. Since then, we have extended the library to be able to use GPUs, so we now have the capability of running larger experiments and we will definitely try REDDIT. Thanks for the suggestion.\n\n\n ", "Thanks for your comments. The paper is longer than usual because rather than just proposing a tweak on some existing algorithm, it derives a general framework for dealing with the issue of covariance in compositional-type neural architectures. We couldn't figure out how to present all this in less than 15 pages without making the paper unreadably dense.\n\nOur results on HCEP are quite spectacular. While we didn't perform formal tests of statistical significance, it is quite clear that CCN far outperforms the other methods. For the MUTAG, etc. benchmarks it is harder to show statistical significance, simply because these datasets are small and multiple algorithms are neck and neck. For QM9, although PSCN is admittedly close, we consistently beat it (as well as the other two competitors) on all 13 regression tasks in both MAE and RMSE, which again gives us confidence that the results are not just a statistical fluke. 
\n\nThank you for your suggestion to include RMSE; we added a separate table to show that. Incidentally, the experiments required writing our own custom deep learning library in C++ (which now can also use GPUs). The code has been released on GitHub; unfortunately we cannot provide a link at this point because it would break the anonymity of the submission.\n\nNotwithstanding the above, we feel that state-of-the-art experimental results are just one part of the paper's contribution. The new conceptual framework of compositional networks and our mathematical results on how to make them covariant are even more important. We would appreciate it if the review was revised to reflect on the main body of the paper as well, not just the \"Experiments\" section.\n\n\n", "Dear Justin, \n\nThanks for reading and your comment! \n\nIn Section 4, we elaborate on this a bit more. The output of an internal node in an MPNN depends on its receptive field as a set (and not an ordered set), so it is invariant to permutations of the nodes, not covariant. \n\nI hope that answers your question?\n", "Hello, nice paper! I have a question about the following claim - \"While MPNNs have been very successful in various applications and are an active field of research, they differ from classical CNNs in a fundamental way: the internal feature representations in CNNs are equivariant to transformations of the input such as translation and rotations (Cohen & Welling, 2016a;b), contrasted with those in MPNNs, which are merely invariant.\"\n\nThe MPNN message passing phase is equivariant to permutations of the nodes; it is only the readout phase that is invariant. Am I missing something here?", "Social media data-sets, e.g. REDDIT-* and IMDB-BINARY, should also be interesting. Have a look at data sets used in the OA-Kernel-Paper or graphkernels.cs.tu-dortmund.de.", "The graph kernel benchmark has two more datasets (Proteins and Enzymes); it would be interesting if the authors could report results on them. Also, there are some other papers like Deep Graph Kernels (KDD 2015) and Optimal Assignment WL-graph kernel (NIPS 2016), whose results are not mentioned in Table 2." ]
[ 5, -1, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, 2, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1TgE7WR-", "HJmIyK5QG", "iclr_2018_S1TgE7WR-", "iclr_2018_S1TgE7WR-", "SkSEy4q7z", "SkSEy4q7z", "SkSEy4q7z", "HkiEKl5Xz", "S1MHyoFgf", "BJxhdf9lf", "HkfwgoYef", "Hyx1wVQzf", "iclr_2018_S1TgE7WR-", "r1wHO8EWM", "iclr_2018_S1TgE7WR-" ]
iclr_2018_SyUkxxZ0b
Adversarial Spheres
State of the art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input. In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image. Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved. We hypothesize that this counter intuitive behavior is a naturally occurring result of the high dimensional geometry of the data manifold. As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres. For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error. In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size O(1/d). Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound. As a result of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.
workshop-papers
This paper studies the interplay between adversarial examples and generalization in the uniform setting (not specific assumptions on the architecture) in a toy high-dimensional setting. In particular, the authors show a fundamental tradeoff between generalization error and the average distance of adversarial examples. Reviewers were skeptical about the possible significance of this work, but the paper underwent a major revision that greatly improved the quality of presentation. That said, the results are still preliminary since they only consider a toy dataset (concentric spheres). The AC recommends re-submitting this work to the workshop track.
val
[ "r1LAwb9xz", "rJOiq-clf", "HkKWeUCef", "By64zFzVG", "rJFzMBpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The idea of analyzing a simple synthetic data set to get insights into open issues about adversarial examples has merit. However, the results reported here are not sufficiently significant for ICLR.\n\nThe authors make a big deal throughout the paper about how close to training data the adversarial examples they can find on the data manifold are. E.g.: “Despite being extremely rare, these misclassifications appear close to randomly sampled points on the sphere.” They report mean distance to nearest errors on the data manifold is 0.18 whereas mean distance between two random points on inner sphere is 1.41. However, distance between two random points on the sphere is not the right comparison. The mean distance between random nearest neighbors from the training samples would be much more appropriate.\n\nThey also stress in the Conclusions their Conjecture 5.1 that under some assumptions “the average distance to nearest error may decrease on the order of O(1 / d) as the input dimension grows large.” However, earlier they admitted that “Whether or not a similar conjecture holds for image manifolds is unclear and should be investigated in future work.” So, the practical significance of this conjecture is unclear. Furthermore, it is well known that in high dimensions, the distances between pairs of training samples tends towards a large constant (e.g. making nearest neighbor search using triangular inequality pruning infeasible), so extreme care much be taken to not over generalize any results from these sorts of synthetic high dimensional experiments.\n\nAuthors note that for higher dimensional spheres, adversarial examples on the manifold (sphere shell) could found, but not smaller d: “In our experiments the highest dimension we were able to train the ReLU net without adversarial examples seems to be around d = 60.” Yet,in their later statement in that same paragraph “We did not investigate if larger networks will work for larger d.”, it is unclear what is meant by “will work”; because, presumably, larger networks (with more weights) would be HARDER to avoid adversarial examples being found on the data manifold, so larger networks should be less likely “to work”, if “work” means avoid adversarial examples. In any case, their apparent use of only h=1000 unit networks (for both ReLU and quadratic cases) is disappointing, because it is not clear whether the phenomena observed would be qualitatively similar for different fully-separable discriminants (e.g. different h values with different regularization costs even if all such networks had zero classification errors).\n\nThe authors repeat the following exact same phrase in both the Introduction and the Conclusion:\n“Our results highlight the fact that the epsilon norm ball adversarial examples often studied in defence papers are not the real problem but are rather a tractable research problem. “\nBut it is not clear exactly what the authors meant by this. 
Also, the term “epsilon norm ball” is not commonly used in adversarial literature, and the only reference to such papers is Madry et al. (2017), which is only on ArXiv and not widely known — if these types of adversarial examples are “often studied” as claimed, there should be other / more established references to cite here.\n\nIn short, this work addresses the important problem of better understanding adversarial examples, but the simple setup has a higher burden to establish significance, which this paper as written has not met.\n\n", "The paper considers the synthetic problem setting of classifying two concentric high dimensional spheres and the worst case behavior of neural networks on this task, in the hope to gain insights about the vulnerability of deep networks to adversarial examples. The problem dimension is varied along with the class separation in order to control the difficulty of the problem.\n\nConsidering representative synthetic problems is a good idea, but it is not clear to me why this particular choice is useful for the purpose.\n\nTwo kinds of \"attacks\" are generated for this purpose, and the ReLU network is simplified to a single layer network with quadratic nonlinearity. This gives an ellipsoid decision boundary around the origin. It is observed that worst case and average case empirical error estimates diverge when the input is high dimensional. A Gaussian tail bound is then used to estimate error rates analytically for this special case. It is conjectured that the observed behaviour has to do with high dimensional geometry.\n\nThis is a very interesting conjecture, however unfortunately it is not studied further. Some empirical observations are made, but it is not discussed whether what is observed is surprising in any way, or just as expected? For instance that there is nearly no error when trying to categorise the two concentric spheres without adversarial examples seems to me expected, since there is a considerable margin between the classes. The results are presented in a rather descriptive rather than quantitative way.\n\nOverall, this work seems somewhat too preliminary at this stage.\n\n", "Adversarial examples are studied on one synthetic dataset.\nA neural network classifier is trained on this synthetic data. \nAverage distances and norms of erroneous perturbations are computed. \nIt is observed that a small perturbation (chosen in the right direction) is sufficient to cause misclassification. \n\nCONS:\nThe writing is bad and hard to follow, with typos: for example what is a period just before section 3.1 for? Another example is \"Red lines indicate the range of needed for perfect classification\", which does not make sense. Yet another example is the period at the end of Proposition 4.1. Another example is \"One counter-intuitive property of adversarial examples is it that nearly \". \n\nIt looks as if the paper was written in a hurry, and it shows in the writing. \n\nAt the beginning of Section 3, Figure 1 is discussed. It points out that there exist adversarial directions that are very bad. But I don't see how it is relevant to adversarial examples. If one was interested in studying adversarial examples, then one would have done the following. Under the setting of Figure 1, pick a test data randomly from the distribution (and one of the classes), and find an adversarial direction.\n\nI do not see how Section 3.1 fits in with other parts of the paper. Is it related to any experiment?
Why it defining a manifold attack?\n\nPutting a \"conjecture\" on a paper has to be accompanied by the depth of the insight that brought the conjecture. Having an unjustified conjecture 5.1 would poison the field of adversarial examples, and it must be removed.\n\nThis paper is a list of experiments and observations, that are not coherent and does not give much insight into the topics of \"adversarial examples\". The only main messages are that on ONE synthetic dataset, random perturbation does not cause misclassification and targeted classification can cause misclassification. And, expected loss is good while worst-case loss is bad. This, in my opinion, is not enough to be published at a conference. \n", "[2] Anonymous. The Manifold Assumption and Defenses Against Adversarial Perturbations. https://openreview.net/forum?id=Hk-FlMbAZ\n[3] Anonymous. Defense-gan: Protecting classifiers against adversarial attacks using generative models. https://openreview.net/forum?id=BkJ3ibb0-\n[4] Anonymous. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. https://openreview.net/forum?id=rJUYGxbCW.\n", "\tThank you to the reviewers for their comments. We have made significant changes to the abstract, and sections 5 and 6 of the paper. We hope the reviewers can reread these sections as it should answer many of their questions as to what insights and takeaways our experiments give. We also clarify in this rebuttal what this paper achieves, what insights can be gained, and why it is significant. \n\tThe goal of this paper was to understand why machine learning models trained on high dimensional spaces are vulnerable to small perturbations (as in the case of image models). Past work has treated the existence of adversarial examples as a problem due to model architecture, loss function, or training data. We consider instead the possibility that even a small amount of classification error may sometimes logically force the existence of many adversarial examples. We use this synthetic dataset to illustrate and explore this phenomenon. Obtaining a complete picture of the geometry of machine learning decision boundaries in high dimensional spaces is extremely difficult and necessitates simplifying the problem where it can be easily understood. The motivation to study concentric spheres is to isolate the effect of high dimensionality of a data manifold on the nature of adversarial examples in a simple setting that can be analyzed theoretically.\n\tWe have updated the paper with a proof of Conjecture 5.1 (now Theorem 5.1). The theorem follows from the special case we proved in the Appendix and an isoperimetric inequality of [1]. Intuitively, what [1] showed is that the subset E of the sphere of a given measure which maximizes the average distance to E is a “pole” (for example in d=3 a local region near the north pole of area 1% is the region which maximizes the average distance to E). What we calculated in the Appendix is that an error set like the north pole of fixed measure will extend to within O(1/sqrt{d}) of the equator, this calculation combined with the result in [1] implies Theorem 5.1. This gives a theoretically optimal tradeoff between the amount of classification error and the average distance to nearest error, and illustrates the counterintuitive nature of high dimensional spaces.\n\tAmazingly, when we compare several trained neural networks to this optimal bound we see that they naturally are within a small factor of it. 
This occurs for 3 different architectures, the ReLU and quadratic networks studied in the first version of the paper, and also a large h=2000 depth 8 ReLU network. This behavior can be seen in the updated paper (Figure 5). This answers reviewer 3’s question about larger networks being more vulnerable to small adversarial perturbations. We see that this is not the case at all in Figure 5 - across all 3 architectures, what determines the average distance to nearest error is the accuracy of the network, not the size or complexity of the architecture. Figure 5 demonstrates quantitatively that the model’s decision boundaries are remarkably well behaved, the geometry of the error sets are close to optimal given the amount of test error each network has. Thus the reason random points are “close” to an error is a fundamental consequence of the high dimensional space, and not the result of some other issue of neural networks.\n\tThis paper raises a lot of questions as to the nature of adversarial examples, in particular what is the difference between an adversarial example and a test error? This paper shows that, at least for this dataset, there is no difference between a test error and local adversarial error. Thus there isn’t something “special” to do in order to significantly increase adversarial robustness other than reduce the amount of test error. One type of defense strategy assumes that adversarial examples are off the data manifold, this is what motivated the manifold attack. Several recent submissions to ICLR [2], [3], [4] propose defenses based on this assumption. The manifold attack searches for local errors in the data distribution, and is used to construct all adversarial examples found in this paper. At least in this simple setting we can say confidently that there are local errors both on and off the data manifold, and which ones the attacker finds depends on the attack algorithm.\n\tOf course this is a toy problem, and we should be cautious before arriving at similar conclusions for image datasets. However, we believe this is an important direction to pursue which completely rethinks the nature of adversarial examples. Perhaps local adversarial errors are a naturally occurring phenomenon on high dimensional datasets? We hope that a complete understanding of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.\n\n[1] Tadeusz Figiel, Joram Lindenstrauss, and Vitali D Milman. The dimension of almost spherical\nsections of convex bodies. Acta Mathematica, 139(1):53–94, 1977.\n" ]
[ 4, 5, 3, -1, -1 ]
[ 4, 3, 3, -1, -1 ]
[ "iclr_2018_SyUkxxZ0b", "iclr_2018_SyUkxxZ0b", "iclr_2018_SyUkxxZ0b", "rJFzMBpmz", "iclr_2018_SyUkxxZ0b" ]
iclr_2018_ryZ8sz-Ab
Fast and Accurate Text Classification: Skimming, Rereading and Early Stopping
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.
workshop-papers
this is an interesting approach that applies the idea of dynamically controlling the amount of information from the input fed into the classifier (some of the earlier approaches have used this idea for, e.g., parsing, real-time translation, online speech recognition, and so on...). this is also related to some of the recent work on hierarchical recurrent nets [Chung et al.]. unfortunately, two of the reviewers and other commenters found that this manuscript needs more work to clarify motivation, implication and relationship to other existing works, with which i don't necessarily disagree.
train
[ "rkWJ_hbrM", "SkDsWQqgz", "HkN4OZr4G", "HyAwBpKeG", "rk_3IZjlM", "BkTK6oNmG", "Bk7JAsNmM", "HkrNaoE7z", "Hkq1piVQM", "BkOOxwFxz", "ry89PEKgM", "Hkc-XfHA-" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public" ]
[ "Thanks for the insightful reviews, here are our answers to these questions:\n1- Concerning the sequence2sequence and sequence2scalar problem: We have a typo here that writing the PoS tagging as \"seq2scalar\", this task should be classification problem. We are only addressing the classification problem in this work since it is complicated to directly apply skimming and early-stopping mechanisms into seq2seq task since the model should at least visit each unit in the seq2seq tasks. We will leave this research question as interesting further work. \n2- Concerning the use of actor-critic: Thanks for pointing out this statement. We have addressed the issue in our revision. The new statement is that we argue the advanced performance is brought by a better reward design which incorporates the negative energy cost explicitly. \n3- Concerning the chunk size: Yes, the reference should be figure 7 rather than figure 8 here. Also, we notice the chunk size here is as same as the `frame-skip` hyper-parameter in conventional reinforcement learning. So automatically choosing the best frame-skip remains an interesting future work, both for RL and its NLP application. \n4- The reason we are using these two datasets is that Yu et al. only performs experiments on those two. We then conducted experiments on the whole four datasets and updated our result in the revision. On word-level DBpedia dataset, we selected the number of tokens before a jump as five as it is the same as the chunk size of our experiment. Then we conducted a grid search on hyper-parameters (N, K) and finally chose the one with better accuracy than full reading baseline. Here N=8 and K = 3. Yu et al.'s relative FLOPs is 76.36% while ours is 44.34% under same accuracy. Thus, our model outperforms Yu et al.'s with a large margin. Similarly, we obtained the optimal hyper-parameters N = 15 and K = 3 on sentence-level Yelp dataset. Here Yu et al.'s relative FLOPs is 82.34%, which is higher than our FLOPs 70.02% under same accuracy. So our model outperforms Yu's paper on all four datasets, making the experimental result more convincing. ", "This paper proposes to augment RNNs for text classification with a mechanism that decides whether the RNN should re-read a token, skip a number of tokens, or stop and output the prediction. The motivation is that one can stop reading before the end of the text and/or skip some words and still arrive to the same answer but faster.\n\nThe idea is intriguing, even though not entirely novel. Apart from the Yu et al. (2017) cited, there is older work trying to save computational time in NLP, e.g.: \nDynamic Feature Selection for Dependency Parsing.\nHe He, Hal Daumé III and Jason Eisner.\nEmpirical Methods in Natural Language Processing (EMNLP), 2013\nthat decides whether to extract a feature or not.\nHowever, what is not clear to me what is achieved here. In the example shown in Figure 5 it seems like what happens is that by not reading the whole text the model avoids passages that might be confusing it. This could improve predictive accuracy (as it seems to do), as long as the model can handle better the earlier part of the text. But this is just an assumption, which is not guaranteed in any way. It could be that the earlier parts of the text are hard for the model. In a way, it feels more like we are addressing a limitation of RNN models in understanding text. 
\n\nPros:\n- The idea is interesting and if evaluated thoroughly it could be quite influential.\n\nCons:\n- the introduction states that there are two kinds of NLP problems, sequence2sequence and sequence2scalar. I found this rather confusing since text classification falls in the latter presumably, but the output is a label. Similarly, PoS tagging has a linear chain as its output, can't see why it is sequence2scalar. I think there is a confusion between the methods used for a task, and the task itself. Being able to apply a sequence-based model to a task, doesn't make it sequential necessarily.\n\n- the comparison in terms of FLOPs is a good idea. But wouldn't the relative savings depend on the architecture used for the RNN and the RL agent? E.g. it could be that the RL network is more complicated and using it costs more than what it saves in the RNN operations.\n\n- While table 2 reports the savings vs the full reading model, we don't know how much worse the model got for these savings. \n\n- Having a trade-off between savings and accuracy is a good idea too. I would have liked to see an experiment showing how many FLOPs we can save for the same performance, which should be achievable by adjusting the alpha parameter.\n\n- The experiments are conducted on previously published datasets. It would be good to have some previously published results on them to get a sense of how good the RNN model used is.\n\n- Why not use smaller chunks? 20 words or one sentence at a time is rather coarse. If anything, it should help the model proposed achieve greater savings. How much does the choice of chunk matter?\n\n- it is stated in the conclusion that the advantage actor-critic used is beneficial, however no experimental comparison is shown. Was it used for the Yu et al. baseline too?\n\n- It is stated that the model hardly relies on any hyperparameters; in comparison to what? It is better to quantify such statements.", "The authors addressed most of my comments. However they didn't address the following:\n- \"the introduction states that there are two kinds of NLP problems, sequence2sequence and sequence2scalar....\"\n- \"Actor-critic use\": it is still stated that this paper illustrated the advantage of this approach, but it turns out that this is the same as Yu et al. thus it is not novel as implied in the last paragraph before conclusions.\nAlso, it seems like the choice of chunk size is an important hyperparameter of the model, since it affects its efficiency in terms of savings. This needs to be explored fully. Also Figure 8 is referenced as also doing component analysis in section 3.3, and each curve uses different components of the model. Is it meant to be Figure 7 there?\n- There should be more comparisons with the very closely related method of Yu et al. (2017), but almost all of them are against the partial reading baseline. It is odd to consider Yu et al. only in Table 2 and only 2 datasets, and not throughout the paper, in order to convince of the novelty and impact of the proposed approach.\n\nI have improved my score, but I still think the paper is not at the required level for publication in ICLR.", "The authors propose a sequential algorithm that tackles text classification while introducing the ability to stop the reading when the decision to make is confident enough. This sequential framework -reinforcement learning with budget constraint- will be applied to document classification tasks.
The authors propose a unified framework enabling the recurrent network to reread or skip some parts of a document. Then, the authors describe the process to ensure both a good classification ability & a reduced budget.\nExperiments are conducted on the IMDB dataset and the authors demonstrate the interest to select the relevant part of the document to make their decision. They improve both the accuracy & decision budget.\n\n\nIn the architecture, fig 1, it is strange to see that the decision to stop is taken before considering the label probability distribution. This choice is probably made to fit with classical sequential decision algorithms, assuming that the confidence level can be extracted from the latent representation... However, it should be discussed.\n\nThe interest of rereading a word/sentence is not clear for me: we simply choose to overweight the recent past wrt the further. Can it be seen as a way to overcome a weakness in the information extraction?\n\nAt the end of page 4, the difference between the early stopping model & partial reading model is not clear for me. How can the partial reading model be overcome by the early-stopping approach? They operate on the same data, with the same computational cost (ie with the same algorithm?)\n\nAt the end of page 6, authors claim that their advantage over Yu et al. 2017 comes from their rereading & early stopping abilities:\n- given the length of the reviews may the K-skip ability of Yu et al. 2017 be seen as an early stopping approach?\n- are the authors confident about the implementation of the Yu et al. 2017' strategy?\n- Regarding the re-reading ability: the experimental section is very poor and we wonder:\n -- how the performance is impacted by rereading?\n -- how many time does the algorithm choose to reread?\n -- experiments on early-stopping VS early-stopping + skipping + rereading are interesting... We want to see the impact of the other aspects of the contribution.\n\nOn the sentiment analysis task, how does the model behave wrt the state of the art?\n\nGiven the chosen tasks, this work should be compared to the beermind system:\nhttp://deepx.ucsd.edu/#/home/beermind\nand the associated publication\nhttp://arxiv.org/pdf/1511.03683.pdf\nBut the authors should also refer to previous work on their topic:\nhttps://arxiv.org/pdf/1107.1322\nThe above mentioned reference is really close to their work.\n\n\nThis article describes an interesting approach but its main weakness resides in the lack of positioning wrt the literature and the lack of comparison with state-of-the-art models on the considered tasks.\n", "The paper present a model for fast reading for text classification with mechanisms that allow the model to reread, skip words, or classify early before reading the entire review. The model contains a policy module that makes decisions on whether to reread, skim or stop, which is rewarded for both classification accuracy and computation cost. The entire architecture is trained end-to-end with backpropagation, Monte Carlo rollouts and a baseline for variance reduction. \n\nThe results show that the architecture is able to classify accurately on all syntactic levels, faster than a baseline that reads the entire text. The approach is simple and seems to work well and could be applied to other tasks where inference time is important. ", "1- Regarding the FLOP counts: The FLOP counts, already provided in the first version, demonstrate that RL nets with significantly fewer operations are sufficient. 
To provide additional information, take the IMDB dataset as an example: the average number of read words is about 100, so the average number of decisions is 100 / 20 = 5 per sentence (chunk size is 20). The computational cost for each decision is 0.2 million FLOPs. The FLOP count for all decisions is hence 0.2 * 5 = 1 million FLOPs per sentence. Note that this cost is much smaller than the cost of the classifier (at least 25 million FLOPs as shown in Figure 2).\n\n2- Comparison between our model and full-reading: In Table 2, we compared our proposed model to the one of [1] by presenting the energy cost necessary to achieve identical accuracy. Our model only needs 29.33% FLOPs, while 57.80% are needed for the model of [1]. By adjusting the computational budget, our optimal model can achieve a 0.5-1% accuracy improvement compared to the full-reading baseline. \n\n3- FLOPs saved for the same performance: In addition to Table 2, please see our results in Table 1. We achieve a 4.11x, 1.85x, 2.42x, 1.58x speedup compared to the full-reading baseline on four datasets. Here, 4.11x means that our proposed model only needs 1/4.11 times the energy. \n\n4- Regarding the baselines: To the best of our knowledge, our RNN performance is comparable to recent work on four standard datasets with similar data processing and RNN architecture:\nIMDB: 89.1 ([1]), AG_news: 88.1 ([1]), Yelp_polarity: 94.74 ([2]), DBpedia: 86.36 ([3])\n\n5- Concerning the chunk size: To show the performance of different chunk sizes (8, 20, 40), we conducted experiments on the IMDB dataset. The experimental result has been added to the Appendix (Figure 8). We observe our proposed method to outperform the partial reading baseline with a significant margin. Notice that a smaller chunk size leads to a larger number of decision steps for each sentence, resulting in a complicated problem for policy optimization. Thus, prediction accuracy is slightly worse compared to a large chunk size. However, we believe this could be overcome by applying more advanced policy optimization algorithms like proximal policy optimization, left for future work. On the other hand, if the chunk size is too large (40), few decision steps inside the sentence hardly capture differences. \n\n6- Advantage actor-critic use: Advantage actor-critic builds upon REINFORCE. For fair comparison, our training algorithm is the same as the algorithm used by Yu et al [1].\n\n7- Hyper-parameters: We only have a single hyper-parameter alpha to control the budget of the model. In contrast, Yu et al. use three hyper-parameters (N, K, R) to control the budget limitation.\n\n[1]. Learn to skim Text\n[2]. Character-level convolutional network for text classification\n[3]. Semi-supervised sequence learning \n", "1- Concerning the model architecture: To save computation, our policy module estimates an action based upon the latent representation rather than the prediction. We assumed the label probability and confidence level are encoded in this representation. We addressed this in our revision. \n\n2- Concerning the intuition for rereading: Our rereading mechanism was inspired by the success of bi-directional RNNs and human reading. For example, when reading an article, we may need to reread the previous paragraph to obtain a better understanding. Different from overweighting the previous data, we point out that weighting is dependent on the current context. 
\n\n3- Regarding the comparison between early-stopping and partial reading: In our experiments, we firstly trained an early-stopping model and obtained the truncated sentence for both training and test set. Based on the truncated dataset, we trained a partial reading model. Thus, although these two models have the same computational cost, they are trained with different datasets. We discuss this setting in our revision.\n\n4- Concerning the advantage over Yu et al. 2017: Firstly, the K-skip ability of Yu et al. can be viewed as a combination of skimming and early-stopping. However, their approach adopted classification accuracy as a reward function, while we utilized both accuracy and computation cost together, leading to a more comprehensive understanding of the accuracy-computation trade-off. Secondly, based on communication with Yu et al., we are confident in our implementation. \nThirdly, we conducted an ablation study to show the effectiveness of each component (see Subsection 3.3 and Figure 7). We observe skimming, rereading, and the combination of both to improve the performance.\n\n5- Regarding the comparison to other state-of-the-art methods: To the best of our knowledge, our RNN performance is comparable to recent work on four standard datasets with similar data processing and RNN architecture:\nIMDB: 89.1 ([1]), AG_news: 88.1 ([1]), Yelp: 94.74 ([2]), DBpedia: 86.36 ([3])\n\n[1]. Learn to skim Text\n[2]. Character-level convolutional network for text classification\n[3]. Semi-supervised sequence learning \n", "We added more details to illustrate the effectiveness of our proposed model. We envision this policy mechanism to be helpful for more general tasks beyond language understanding. We leave exploration of those to future work. ", "We thank the reviewers for their feedback and address their comments below. All modifications are highlighted (blue) in the newly uploaded version. The main modifications:\n1. Added more details for our experimental setup\n2. Added an ablation study to demonstrate the effectiveness of each component in our proposed model\n3. Added more details for the comparison between our proposed model and Yu et al. 2017\n", "''Text Classification: A Sequential Reading Approach.'' published in 2011 is also clearly related. ", "The early stopping idea is already implemented in the related work \"Learning to Skim Text\", i.e., the reading will stop if 0 is sampled from the jumping softmax. This can be seen in the two examples of their last experiment.", "The paper referenced in Related Work, \"Rationalizing Neural Predictions\" seems a bit similar to the current work. Also, the comment that they use attention for generating rationales is a bit confusing as the encoder which generates the label only sees the rationales(subset of the original text unless I am mistaken)." ]
[ -1, 5, -1, 5, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "HkN4OZr4G", "iclr_2018_ryZ8sz-Ab", "BkTK6oNmG", "iclr_2018_ryZ8sz-Ab", "iclr_2018_ryZ8sz-Ab", "SkDsWQqgz", "HyAwBpKeG", "rk_3IZjlM", "iclr_2018_ryZ8sz-Ab", "iclr_2018_ryZ8sz-Ab", "iclr_2018_ryZ8sz-Ab", "iclr_2018_ryZ8sz-Ab" ]
iclr_2018_B1KJJf-R-
Neural Program Search: Solving Data Processing Tasks from Description and Examples
We present a Neural Program Search, an algorithm to generate programs from natural language description and a small number of input / output examples. The algorithm combines methods from Deep Learning and Program Synthesis fields by designing rich domain-specific language (DSL) and defining efficient search algorithm guided by a Seq2Tree model on it. To evaluate the quality of the approach we also present a semi-synthetic dataset of descriptions with test examples and corresponding programs. We show that our algorithm significantly outperforms sequence-to-sequence model with attention baseline.
workshop-papers
the reviewers all found the problem to be important, the proposed approach to be interesting, but the manuscript to be preliminary. i agree with them.
train
[ "SkJQ6oUgG", "rJmyxltgz", "BJNUHb5lz", "SJdDNQvGf", "SyyJBfwGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper presents a seq2Tree model to translate a problem statement in natural \nlanguage to the corresponding functional program in a DSL. The model uses\nan RNN encoder to encode the problem statement and uses an attention-based\ndoubly recurrent network for generating tree-structured output. The learnt model is \nthen used to perform Tree-beam search using a search algorithm that searches \nfor different completion of trees based on node types. The evaluation is performed\non a synthetic dataset and shows improvements over seq2seq baseline approach.\n\nOverall, this paper tackles an important problem of learning programs from \nnatural language and input-output example specifications. Unlike previous\nneural program synthesis approaches that consider only one of the specification \nmechanisms (examples or natural language), this paper considers both of them \nsimultaneously. However, there are several issues both in the approach and the \ncurrent preliminary evaluation, which unfortunately leads me to a reject score,\nbut the general idea of combining different specifications is quite promising.\n\nFirst, the paper does not compare against a very similar approach of Parisotto et al.\nNeuro-symbolic Program Synthesis (ICLR 2017) that uses a similar R3NN network\nfor generating the program tree incrementally by decoding one node at a time.\nCan the authors comment on the similarity/differences between the approaches?\nWould it be possible to empirically evaluate how the R3NN performs on this dataset?\n\nSecond, it seems that the current model does not use the input-output examples at \nall for training the model. The examples are only used during the search algorithm.\nSeveral previous neural program synthesis approaches (DeepCoder (ICLR 2017), \nRobustFill (ICML 2017)) have shown that encoding the examples can help guide \nthe decoder to perform efficient search. It would be good to possibly add another \nencoder network to see if encoding the examples as well help improve the accuracy.\n\nSimilar to the previous point, it would also be good to evaluate the usefulness of\nencoding the problem statement by comparing the final model against a model in which\nthe encoder only encodes the input-output examples.\n\nFinally, there is also an issue with the synthetic evaluation dataset. Since the \nproblem descriptions are generated syntactically using a template based approach, \nthe improvements in accuracy might come directly from learning the training templates\ninstead of learning the desired semantics. The paper mentions that it is prohibitively \nexpensive to obtain human-annotated set, but can it be possible to at least obtain a \nhandful of real tasks to evaluate the learnt model? There are also some recent \ndatasets such as WikiSQL (https://github.com/salesforce/WikiSQL) that the authors\nmight consider in future.\n\nQuestions for the authors:\n\nWhy was MAX_VISITED only limited to 100? What happens when it is set to 10^4 or 10^6?\n\nThe Search algorithm only shows an accuracy of 0.6% with MAX_VISITED=100. What would\nthe performance be for a simple brute-force algorithm with a timeout of say 10 mins?\n\nTable 3 reports an accuracy of 85.8% whereas the text mentions that the best result\nis 90.1% (page 8)?\n\nWhat all function names are allowed in the DSL (Figure 1)? 
\n\nCan you clarify the contributions of the paper in comparison to the R3NN?\n\nMinor typos:\n\npage 2: allows to add constrains --> allows to add constraints\npage 5: over MAX_VISITED programs has been --> over MAX_VISITED programs have been\n\n", "This paper tackles the problem of doing program synthesis when given a problem description and a small number of input-output examples. The approach is to use a sequence-to-tree model along with an adaptation of beam search for generating tree-structured outputs. In addition, the paper assembles a template-based synthetic dataset of task descriptions and programs. Results show that a Seq2Tree model outperforms a Seq2Seq model, that adding search to Seq2Tree improves results, and that search without any training performs worse, although the experiments assume that only a fixed number of programs are explored at test time regardless of the wall time that it takes a technique.\n\nStrengths:\n\n- Reasonable approach, quality is good\n\n- The DSL is richer than that of previous related work like Balog et al. (2016).\n\n- Results show a reasonable improvement in using a Seq2Tree model over a Seq2Seq model, which is interesting.\n\nWeaknesses:\n\n- There are now several papers on using a trained neural network to guide search, and this approach doesn't add too much on top of previous work. Using beam search on tree outputs is a bit of a minor contribution.\n\n- The baselines are just minor variants of the proposed method. It would be stronger to compare against a range of different approaches to the problem, particularly given that the paper is working with a new dataset.\n\n- Data is synthetic, and it's hard to get a sense for how difficult the presented problem is, as there are just four example problems given.\n\nQuestions:\n\n- Why not compare against Seq2Seq + Search?\n\n- How about comparing wall time against a traditional program synthesis technique (i.e., no machine learning), ignoring the descriptions. I would guess that an efficiently-implemented enumerative search technique could quickly explore all programs of depth 3, which makes me skeptical that Figure 4 is a fair representation of how well a non neural network-based search could do.\n\n- Are there plans to release the dataset? Could you provide a large sample of the data at an anonymized link? I'd re-evaluate my rating after looking at the data in more detail.\n", "This paper introduces a technique for program synthesis involving a restricted grammar of problems that is beam-searched using an attentional encoder-decoder network. This work to my knowledge is the first to use a DSL closer to a full language.\n\nThe paper is very clear and easy to follow. One way it could be improved is if it were compared with another system. The results showing that guided search is a potent combination whose contribution would be made only stronger if compared with existing work.", "Thanks for the detailed review and pointing out typos.\n\nResponse to questions:\n\nMAX_VISITED is limited to 100 mostly due to inefficient implementation of search and DSL interpreter in Python and striving to run experiments and evaluation relatively fast. For Seq2Tree + Search it currently takes 1-4s per example or ~4 hours for full dev set. 
We have run model on smaller subset of dev set with MAX_VISITED 10^4 and accuracy was within the noise margin of reported, which most probably due to fact that if Seq2Tree model doesn’t know right subspace of programs, search won’t be able to recover even with large number of checks.\n\nIf we let search run for 10 minutes per task, with current size of dev set it would take ~2 months to evaluate on single machine. And given there are roughly 10^25 programs with depth = 3, even 10 minutes of very optimized search would not be enough to actually find correct program of depth = 3 with brute-force search. \n(note, we consider depth = 0 just constant / function, depth = 1 a call to function with arguments, and so on. Depth = 2 already has 17,518,345,206 various programs in our DSL)\n\nFull implementation of DSL is here: https://paste.ofcode.org/SNzgEQzFAL8sVSQrrhBtRA\nWe are going also add an Appendix with details on DSL. You can also find dataset here:\nhttps://www.dropbox.com/s/wep81pcrar5fttl/metaset3.train.jsonl.gz:\nhttps://www.dropbox.com/s/h3mn0abeiqy6foz/metaset3.dev.jsonl.gz\n\nComparing our work with “Neuro-Symbolic Program Synthesis” paper:\n * Our DSL is way more expressive: conditions, map/reduce/filter array operations, lambda functions and recursion.\n * R3NN paper due to origin of their DSL were limited to string->string transformation. We have inputs and outputs of next types: integers, arrays of integers, strings, booleans. And we don’t have any limitations in our model to expand to any other type of inputs/outputs (for R3NN it will require to adapt IO encoder, which right now only consumes characters via LSTM). \n * For non trivial transformations, like integer array -> boolean transformation, even 100s of examples can be not enough to understand what transformation is done, and natural language will work better. Though we agree that we should compare with just IO and IO/text combined in the encoder. Preliminary results show ~12% accuracy from just IO on our dataset for IO2SEQ.\n * R3NN decoder iteratively adds a node to tree, thus claiming to not require an explicit search. They have also shown improving results via backtracking search. From our observations, backtracking search can get stuck in wrong space as some much earlier decision was wrong (i.e. choosing \"+\" with probability of 0.51 instead of \"-\" with 0.49). Our breadth first search in program space guided by neural model is more principled approach to do search that allows to evaluate globally most probable programs.\n\nWe are working on evaluating R3NN approach on our dataset and will update here with results. Additionally, we can run our search on top of R3NN decoding strategy. \n\nPreliminary results applying IO2Seq (from Parisotto et al 2017):\nBEAM_SIZE = 1 0.02\nBEAM_SIZE = 100 0.12\n", "Thanks for the comments!\n\n1. Good point, here are Seq2Seq + Beam Search results. Will update paper accordingly.\n\nBEAM_SIZE = 10 0.719\nBEAM_SIZE = 100 0.728\n\n2. On your comment to make a more efficient enumerative search, it’s indeed limitation of our setup that our executor is in Python and limits how many programs we can find and execute in reasonable time (also reason why MAX_VISITED for beam search is relatively small to be able to evaluate on our dev set in reasonable time).\n\nFor example in our DSL, given just one array as input, total number of syntactically programs of depth = 2 is 17,518,345,206. 
Which in our current implementation in Python took about ~4.5 hour to enumerate 1B of programs (without evaluating). It’s indeed true that more efficient implementation will be able to iterate over it may be 10-100 times faster. But for depth = 3 the total number of programs is roughly 10^25 which makes it impossible to apply regular enumerative search.\n\nIdeally, would be great to compare with traditional program synthesis techniques. But to run state-of-the-art traditional techniques (like PROSE) requires a large amount of work to build out heuristics and given our DSL is almost full LISP (with lambda functions and recursion) some things would be extremely hard to make work. I may be wrong, but my understanding is that functions like “reduce” or recursive calls can not be implemented in the PROSE’s backpropagation setup (https://microsoft.github.io/prose/documentation/prose/backpropagation/).\n\n3. Please find train/dev data here:\nhttps://www.dropbox.com/s/wep81pcrar5fttl/metaset3.train.jsonl.gz:\nhttps://www.dropbox.com/s/h3mn0abeiqy6foz/metaset3.dev.jsonl.gz\n\nAlso our DSL full implementation can be found here: https://paste.ofcode.org/SNzgEQzFAL8sVSQrrhBtRA" ]
[ 4, 5, 7, -1, -1 ]
[ 4, 4, 4, -1, -1 ]
[ "iclr_2018_B1KJJf-R-", "iclr_2018_B1KJJf-R-", "iclr_2018_B1KJJf-R-", "SkJQ6oUgG", "rJmyxltgz" ]
iclr_2018_Sy3XxCx0Z
Natural Language Inference with External Knowledge
Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.
workshop-papers
the reviewers seem to agree that this submission could be much more strengthened if more investigation is done in two directions: (1) the effect of different, available resources (e.g., in the comment, the authors mentioned WikiData didn't improve, and this raises a question of what kind of properties of external resources are necessary to help) and (2) alternatives to incorporating external knowledge (e.g., as pointed out by one of the reviewers, this is certainly not the only way to do so, and external knowledge has been used by other approaches for RTE earlier. how does this specific way fare against those or other alternatives?) addressing these two points more carefully and thoroughly would make this paper much more appreciated.
train
[ "SkGeeKygf", "rkumgRdxM", "SkkxKWJbM", "Sy3Z4By-z", "S1uNWq3QM", "r1VZWqhmf", "rknhgcn7G", "BytbgcnQG", "HJCviIs1G", "rJWNCT9kz", "SJ-i-Fy1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "Update:\n\nThe response addressed all my major concerns, and I think the paper is sound. (I'm updating my confidence to a 5.) So, the paper makes an empirically *very* small step in an interesting line of language understanding research. This paper should be published in some form, but my low-ish score is due simply to my worry that ICLR is not the venue. I think this would be a clear 'accept' as a *ACL short paper, and would probably be viable as a *ACL long paper, but it will definitely have less impact on the overall field of representation learning than will the typical ICLR paper, so I can recommend it only with reservations.\n\n--\n\nThis paper presents a method to use external lexical knowledge (word–word relations from WordNet) as an auxiliary input when solving the problem of textual entailment (aka NLI). The idea of accessing outside commonsense knowledge within an end-to-end trained model is one that I expect to be increasingly important in work on language understanding. This paper does not make that much progress on the problem in general—the methods here are quite specific to words and to NLI—and the proposed methods yields only yield large empirical gains in a reduced-data setting, but the paper serves as a well-executed proof of concept. In short, the paper is likely to be of low technical impact, but it's interesting and thorough enough that I lean slightly toward acceptance.\n\nMy only concern is on fair comparison: Numbers from this model are compared with numbers from the published ESIM model in several places (Table 3, Figure 2, etc.) as a way to provide evidence for the paper's core claim (that the added knowledge in the proposed model helps). This only constitutes clear evidence if the proposed model is identical to ESIM in all of its unimportant details—word representations, hyperparameter tuning methods, etc. Can the authors comment on this?\n\nFor what it's worth, the existence of another paper submission on roughly the same topic with roughly the same results (https://openreview.net/pdf?id=B1twdMCab) makes more confident that the main results in this paper are sound, since they've already been replicated, at least to a coarse approximation.\n\nMinor points:\n\nFor TransE, what does this mean:\"However, these kind of approaches usually need to train a knowledge-graph embedding beforehand.\"\n\nYou should say more about why you chose the constant 8 in Table 1 (both why you chose to hard code a value, and why that value).\n\nThere's a mysterious box above the text 'Figure 1'. Possibly a figure rendering error?\n\nThe LSTM equations are quite widely known. I'd encourage you to cite a relevant source and remove them.\n\nSay more about why you choose equation (9). This notably treats all five relation types equally, which seems like a somewhat extreme simplifying assumption.\n\nEquation (15) is confusing. Is a^m a matrix, since it doesn't have an index on it?\n\nWhat is \"early stopping with patience of 7\"? Is that meant to mean 7 epochs?\n\nThe opening paragraph of 5.1 seems entirely irrelevant, as do the associated results in the results table. I suspect that this might be an opportunity for a gratuitous self-citation.\n\nThere are plenty of typos: \"... to make it replicatibility purposes.\"; \"Specifically, we use WordNet to measure the semantic relatedness of the *word* in a pair\"; etc.", "This is a very interesting paper!\nWe are finally back to what has been already proven valid for NLI also know as RTE. 
External knowledge is important to reduce the amount of training material for NLI. When datasets were much smaller yet more complex, this fact had already been noticed and reported in many systems. Now, it is extremely important that this has started to be true also in NN-based models for NLI/RTE.\n\nHowever, the paper fails in describing the model with respect to the vast body of research in RTE. In fact, alignment is one of the bases for building RTE systems. Attention models are in fact extremely related to alignment and \"KNOWLEDGE-ENRICHED CO-ATTENTION\" is somehow a model that incorporates what has been already extensively used to align word pairs. \nHence, models such as those described in the book \"Recognizing textual entailment\" can be extremely useful in modeling the same features in NN-based models, for example, \"A Phrase-Based Alignment Model for Natural Language Inference\", EMNLP 2008, or \"Measuring the semantic similarity of texts\", 2005 or \"Learning Shallow Semantic Rules for Textual Entailment\", RANLP, 2007.\n\nThe final issue is the validity of the initial claim. Is it really the case that external knowledge is useful? It appears that external knowledge is useful only in the case of restricted data (see Figure 3). Hence, it is unclear whether this is useful for the overall set. One of the important questions here is then whether the knowledge of all the data is in fact replicating the knowledge of WordNet. If this is the case, this may be a major result. \n\nMinor issues\n=====\nThe paper would be easier to read if Fig. 1 were completed with all the mathematical symbols.\nFor example, where are a^c_i and b^c_i ? Are they in the attention box?", "This paper adds WordNet word pair relations to an existing natural language inference model. Synonyms, antonyms, and non-synonymous sister terms in the ontology are represented using indicator features. Hyponymy and hypernymy are represented using path length features. These features are used to modify inter sentence attention, the final post-attention word representations, and the pooling operation used to aggregate the final sentence representations prior to inference. All of these three additions help, especially in the low data learning scenario. When all of the SNLI training data is used this approach adds 0.6% accuracy on the SNLI 3-way classification task. \n\nI think that the integration of structured knowledge representations into neural models is a great avenue of investigation. And I'm glad to see that WordNet helps. But very little was done to investigate different ways in which these data can be integrated. The authors mention work on knowledge base embeddings and there has been plenty of work on learning WordNet embeddings. An obvious avenue of exploration would compare the use of these to the use of the indicator features in this paper. Another avenue of exploration is the integration of more resources such as VerbNet, propbank, WikiData etc. An approach that works with all of these would be much more impressive as it would need to handle a much more diverse feature space than the 4 inter-dependent features introduced here.\n\nQuestions for authors:\n\nIs the WordNet hierarchy bounded at a depth of 8? If so please state this and if not, what is the motivation of your hypernymy and hyponymy features?", "This work is interesting and fairly thorough. The ablation studies at the end of the paper are the most compelling part of the argument, more so than achieving SoTa.
Having said that, since their studies on performance with a low dataset size are the most interesting part of the paper, I would have liked to see results on smaller datasets like RTE. Additionally, it would be useful to see results on MultiNLI which is more challenging and spans more domains; using world knowledge with MultiNLI would be a good test of their claims and methods. \nI'm also glad that the authors establish statistical significance! I would have liked to see some additional analysis on the kinds of sentences the KIM models succeeds at where their baseline ESIM fails. I think this would be a compelling addition.\n\nPros:-\n- Thorough experimentation with ablation studies; show success of method when using limited training data.\n- Establish statistical significance.\n- Acheive SoTa on SNLI.\n\nCons:-\n- Authors make the broad claim of world knowledge being helpful for textual entailment, and show usefulness in a limited datasize setting, but don't test their method on other datasets RTE (which has a lot less data). If this helps performance on RTE then this could be a technique for low resource settings.\n- No results for MultiNLI shown. MultiNLI has many more domains of text and may benefit more from world knowledge.\n- Authors don't provide a list of examples where KIM succeeds and ESIM fails.", ">>> Reviewer’s comment 1: ”You should say more about why you chose the constant 8 in Table 1 (both why you chose to hard code a value, and why that value).”\n \nAuthor response: Thanks. The original WordNet hierarchy is bounded by the depth of 20. In our paper, we take the setup specified in MacCartney (2009) to bound the depth at 8 (i.e., ignoring pairs in the hierarchy which have more than 8 edges in between). Similarly, we follow MacCartney (2009) for hypernym and hyponym feature design, which we are shown in our experiments to help improve neural-network-based inference model. We will make this clearer in our revision. \n \n>>> Reviewer’s comment 2:”This only constitutes clear evidence if the proposed model is identical to ESIM in all of its unimportant details—word representations, hyperparameter tuning methods, etc. Can the authors comment on this?”\n \nAuthor response: We are sure that all of its unimportant details, such as hyperparameter tuning methods, are identical to ESIM. We modified the ESIM code from GitHub (https://github.com/lukecq1231/nli) and we only modified NLI components to explore external knowledge. In addition, ESIM is a well-tuned model already and tuning ESIM by itself does not yield further gain.\n \n>>> Reviewer’s comment 3: ”Say more about why you choose equation (9). This notably treats all five relation types equally, which seems like a somewhat extreme simplifying assumption.”\n \nAuthor response: Good comment. We actually added a MLP layer in Theano to learn the underlying combination function but did not actually observe further improvement over our best performance. We will add the results and some discussion in our revision.\n \n>>> Reviewer comments 4: What is \"early stopping with patience of 7\"? Is that meant to mean 7 epochs?\n \nAuthor response: Yes, early stopping with patience of 7 means 7 epochs. We will make this clearer. \n\nIn general, as informal reasoning is a core problem, our SoTa results on one of the widely used benchmarks (SNLI), our investigation of missing knowledge in all three major inference components, as well as the analysis, in our opinion, are nice contributions. 
If space allows, we will also add and discuss some typical, failed investigations/models we have performed.\n \n[References]\nBill MacCartney. Natural Language Inference. PhD thesis, Stanford University, 2009.\n", ">>>Reviewer’s comment 1: ”However, the paper fails in describing the model with respect to the vast body of research in RTE”\n\nAuthor response: The manuscript has focused mainly on neural network based models, but we totally agree with the reviewer--we will add the citations to previous research as kindly suggested by the reviewer. Thank you for the constructive comment.\n \n>>>Reviewer’s comment 2: “...It appears that external knowledge is useful only in the case of restricted data (see Figure 3). ...\"\n\nAuthor response: In the experiments, we found external knowledge constantly improves the performance. As the reviewer pointed out, the improvement is more significant when the size of train data is restricted. It is also significant on when using the entire training set, the proposed model KIM achieved the new state-of-the-art performance on SNLI (88.6% accuracy with a single model and 89.1% with ensembling) over ESIM and the improvement is significant (one-tailed paired t-test at the 99% significance level). We restricted the training data size in order to show the trend and benefit of using external knowledge under different coverage rate. Thank you! We will make these points clearer in revision.\n\n >>>Minor errors \n\nThank you so much for pointing out minor errors; we will follow the suggestions to address them in revision.\n", "Thanks for the constructive comments. We have clarified the questions as follows:\n \n>>> Reviewer’s comment 1: “Is the WordNet hierarchy bounded at a depth of 8? If so please state this and if not, what is the motivation of your hypernymy and hyponymy features?”\n \nAuthor response: The original WordNet hierarchy is bounded by a depth of 20. In our paper, we take the setup specified in MacCartney (2009) to bound the depth to 8 (i.e., ignoring pairs in the hierarchy which have more than 8 edges in between). Similarly, we follow MacCartney (2009) for hypernym and hyponym feature designing, which we show in our experiments to improve neural-network-based inference model to achieve a new state-of-the-art performance (88.6% accuracy with a single model and 89.1% with ensembling).\n \n>>>Reviewer’s comment 2: \"An obvious avenue of exploration would compare the use of these to the use of the indicator features in this paper. \"\n \nThank you. Incorporating WordNet embedding achieved an accuracy of 88.2% on SNLI, compared with 88.6% with KIM and 88.0% with ESIM. WordNet embedding is trained to be sensitive to some semantic relation (e.g., is-a relation which could help detect entailment) but not on others (semantic relations) that would further help NLI (e.g., word pairs with common parents often help identify contradiction). We will add some discussion along this line.\n \n>>>Reviewer’s comment 3: \"Another avenue of exploration is the integration of more resources ...\"\n \nAuthor response: WordNet is a lexical and common sense resource that naturally encodes entailment and contradiction information as discussed in the paper and above (e.g., “is-a” and “sibling” relation between word pairs can help resolve entailment and contradiction, respectively). 
Particularly, considering how NLI data is constructed (e.g., SNLI relies on annotators’ common sense to write entailment and contradiction sentences), we think WordNet is a good resource to demonstrate our algorithms which enhances NN-based NLI models on all three typical NLI submodules/subcomponents. Furthermore, we indeed incorporated WikiData (Freebase) and it did not improve model performance (it is not surprising as most of WikiData is about entities and relations (e.g. Bill Gates and Microsoft) which do not correspond to common entailment/contradiction relation (e.g., red/yellow is contradicting). Thank you for the comment, which we think is very constructive. We will add discussion on this in our revision. \n \n[References]\nBill MacCartney. Natural Language Inference. PhD thesis, Stanford University, 2009.\n", "Thank you for the comments. We additionally ran the suggested experiments on the MultiNLI dataset with both the model KIM and ESIM. With the same setting as used in SNLI, KIM achieves a 77.4% accuracy on MultiNLI’s “in-domain” test set (vs. 77.0% of ESIM), and 75.8% on the “cross-domain” test set (vs. 75.5% of ESIM), showing similar and consistent improvement. We will discuss the results in our revision. Thank you. Furthermore, we will include a confusion matrix to detail where KIM corrects the mistakes made by ESIM and include some examples, as suggested by the review. \n", "It looks great. Thanks!", "We really appreciate the comments. We ran the proposed models on MultiNLI. With the same setting, on the \"in-domain\" test set, the proposed model, KIM (knowledge-based inference model), achieves an accuracy of 77.4% (vs. 77.0% of the ESIM model (Chen et al. ACL '17)). In addition, on the \"cross-domain\" test set, KIM achieves a 75.8% accuracy (vs. 75.5% of ESIM). We will add these results in our revision. Thanks for the constructive comments, which make the results more comprehensive.\n[References]\nQ. Chen, X. Zhu, Z. Ling, S. Wei, H. Jiang, and D. Inkpen. (2017). Enhanced LSTM for Natural Language Inference. In: Proc. of ACL, Vancouver, Canada.", "Good work. I'm wondering whether you have tried your model on MultiNLI corpus? The pipeline should be same. MultiNLI corpus requires higher level understanding of the text. With external knowledge, the performance on MultiNLI should be better than the systems without external knowledge." ]
[ 6, 5, 3, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 5, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sy3XxCx0Z", "iclr_2018_Sy3XxCx0Z", "iclr_2018_Sy3XxCx0Z", "iclr_2018_Sy3XxCx0Z", "SkGeeKygf", "rkumgRdxM", "SkkxKWJbM", "Sy3Z4By-z", "rJWNCT9kz", "SJ-i-Fy1z", "iclr_2018_Sy3XxCx0Z" ]
iclr_2018_Sk4w0A0Tb
Rotational Unit of Memory
The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. However, RNN still has a limited capacity to manipulate long-term memory. To bypass this weakness, the most successful applications of RNN use external techniques such as attention mechanisms. In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. Moreover, the rotational unit also serves as associative memory. We evaluate our model on synthetic memorization, question answering and language modeling tasks. RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. RUM’s performance in the bAbI Question Answering task is comparable to that of models with attention mechanisms. We also improve the state-of-the-art result to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which signifies the applicability of RUM to real-world sequential data. The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation.
workshop-papers
Although the authors argue that their experiments were selected to match the earlier work from which the major competing approaches were taken, the reviewers found the empirical results to be weak. Why not some real tasks (I do not believe bAbI or PTB could be considered real) that could clearly reveal the superiority of the proposed unit over existing ones?
test
[ "Bkq3EZcxM", "B1yLVH5lM", "Byf4Vs5gM", "Byjpfc2Qz", "r1CbMch7G", "BJepZqh7G", "SJdf-qn7f", "rJDVFq8mM", "HJsj51iMz", "HJvHqzeGM", "HJUg-BVWz", "H1MfgBVWM", "SJLOkB4-G", "H1qQJHEZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "author", "author", "author", "author" ]
[ "The authors of this paper propose a new type of RNN architecture that modifies the reset gate of GRU with a rotational operator, where this rotational operator serves as an associative memory of their RNN model. The idea is sound, and the way they conduct experiments also make sense. The motivation and the details of the rotational memory are explained clearly. However, the experimental results reported in the paper seem to be a bit weak to support the claims made by the authors. \n\nThe performance improvements are not so clear to me. Especially, in the character level language modeling, the BPC improvement is only 0.001 when choosing the SOTA model of this dataset as the base architecture. The test BPC score is obtained as a single-run experiment on the PTB dataset, and the improvement seems to be too small. In the copying memory task shown in Section 4.1, how did GORU performed when T=200? \n\nOn the Q&A task, using the bAbI set (Section 4.3), RUM is said to be *significantly outperforming* GORU when the performance gap is 13.2%, and then, it is also said that RUM’s performance *is close to* the MeMN2N when the performance gap is 12.8%. Both performance gaps seem to be very close to each other, but the way they are interpreted in the paper is not.\n\nOverall, the writing is clear, and the idea sounds interesting, but the experimental results are not strongly correlated with the claims made in the paper. In the visual analysis, the authors assume that RUM architecture might be the architecture that utilizes the full representational power of models like RNNs. If this is the case, I would expect to see more impressive improvements in the performance, assuming that all the other conditions are properly controlled.\n\nI would suggest evaluating the model on more datasets. \n\nMinor comments: \nIn Section 2.2: Hopflied -> Hopfield\nIn Section 3.2: I believe the dimension of b_t should be 2*N_h", "The paper proposes a RNN memory cell updating using an orthogonal rotation operator. This approach falls into the phase-encoding architectures. Overall the author's idea of generating a rotation operator using the embedded input and the transformed hidden state at the previous step is clever. Modelling this way makes the 'generating matrix' W_hh learn to couple the input to the hidden state (which contain information in the past) via the Rotation operator.\n\nI have several concerns:\n\n- The author should discuss the intuition why the rotation has to be from the generated memory target τ to the embeded input ε but not the other way around or other direction in this 2D subspace.\n- The description of parameter meter τ is not clear. Perhaps the author meant τ is the generated parameter via the parameter matrix W_hh acting upon the hidden state h_{t-1}\n- The idea of evolving the hidden state by an orthogonal matrix, of which the rotation is a special case, is similar to the GORU paper, which directly parametrizes the 'rotation' matrix. Therefore I am wondering if the better performance of this work than the GORU is because of the difference in parameterization or by limiting the orthogonal transform to only rotations (hence modelling only the phase of the hidden state). Perhaps an additional experiment is needed to verify this.\n", "Summary:\nThis paper proposes a way to incorporate rotation memories into gated RNNs. They use a specific parametrization of the rotation matrices. 
They run experiments on several toy tasks and on language modelling with PTB character-level language modeling (which I would still consider to be toyish.)\n\n\nQuestion:\nCan the rotation proposed here cause unintentional forgetting by interleaving the memories? Because in some sense rotations are glorified summation in high-dimensions, if you do a full-rotation of a vector (360 degrees) you can end up in the same location. Thus the model might overwrite into its past memories.\n\nPros:\nProposes an interesting way to incorporate the rotation operations into the gated architectures.\n\nCons:\nThe specific choice of rotation operation is not very well justified.\nThis paper more or less uses the same architecture from Jing et al 2017 from EU-RNNs with a different parametrization for the rotation matrices.\nThe experiments are still limited to simple small-scale tasks.\n\n\nGeneral Comments:\n\nThe idea and the premise of this paper is interesting. In general the paper seems to be well-written. However the most important part of the paper section 3.1 is not very well justified. Why this particular parameterization of the rotation matrices is used and where does actually that come from? Can you point out to some citation? I think the RUM architecture section also requires better explanation on for instance why why R_t is parameterized that way (as a multiplicative function of R_{t-1}). A detailed ablation study would help too.\n\nThe model seems to perform really close to the GORU on Copying Task. I would be interested in seeing comparisons to GORU on “Associative Recall” as well. On QA task, which subset of bAbI dataset have you used? 1k or 10k training sets? \n\nOn language modelling there is only insignificant difference between the FS-LSTM-2 with FS-RUM model. This does not tell us much.\n", "Dear Reviewer, \n\nWe are currently evaluating RUM on larger data sets. For example, right now we work on the enwik8 data set. So far, we can run models up to 21M parameters. FS-RUM-2 with 21M params. achieves 1.37 BPC for 35 epochs on enwik8. This is not the state-of-the-art, but note that the state-of-the-art is a much larger model that has 94M params. (Mujika et al. (2017)). We are currently working on fitting larger RUM model to our hardware, so that we can test the RUM model of size about 100M params. \n\nMoreover, the state-of-the-art models on enwik8 (Mujika et al. (2017)) are simply larger versions of the previous state-of-the-art models on the PTB task (Mujika et al. (2017)). We show that a RUM version of those models slightly improves the test by 0.001 and validation by 0.002; therefore we think that PTB is a particularly important data set to experiment with. Moreover, we give a positive answer to the suggestion by Mujika et al. (2017) in their conclusion that the slow LSTM cell may be replaced with a cell that has a better long term memory capacity. As we demonstrate, RUM is suitable for such replacements--a feat, which we think gives RUM scaling power. We hope that our additional experiments provide a clearer picture of the performance of the RUM model. \n\nIn our analysis we do not exactly assume that RUM gives full representational power. We simply conduct a visual analysis of the kernels and offer our interpretation of the results. 
Whether or not RUM gives full representational power is very difficult to be evaluated, as it is mostly task specific; moreover, such a statement is not a necessary assumption for any part of the paper, thus we agree to rephrase that part of our visual analysis, so that our interpretation is clearer. \n\nOn that note, we also think that currently it is difficult to judge whether an improvement on a particular task, such as the PTB character level, is weak or not. There has been much research on PTB already, and we may not know where the performance limit of deep learning models stands. For this reason, to incorporate your concerns, we have rephrased some of our statements in the paper. \n\nBest wishes!\n\nReferences \n\nAsier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. NIPS arXiv preprint arXiv:1705.08639, 2017.\n", "Dear Reviewer, \n\nWe agree that the GORU model directly parameterizes the rotation matrix. For easier tasks like the copying task, fixed parametrization may have advantages. However, in most tasks, the “learnware” structure that RUM provides (Balduzzi et al. (2016)), performs better thanks to its flexibility of adjusting weight matrices in RNN, which require different abilities in different time steps. In addition, the learnware structure of RUM has the same representative ability as GORU in terms of the “rotation matrix”. Both of them are able to occupy full orthogonal space (more accurately, orthogonal matrix manifolds with all positive eigenvalues). This assures that RUM will perform no worse than GORU even though it might learn slower in some tasks. In other tasks, such as language modeling, RUM is better because of its learnware structure, as demonstrated by experiments in the original GORU paper (Jing et al. (2017)) vs. experiments in our paper. \n\nBest wishes! \n\nReferences \n\nDavid Balduzzi and Muhammad Ghifary. Strongly-Typed Recurrent Neural Networks. Proceeding ICML'16 Proceedings of the 33rd International Conference on Machine Learning. 48. 1292-1300, 2016\n\nLi Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, and Yoshua Bengio. Gated orthogonal recurrent units: On learning to forget. arXiv preprint arXiv:1706.02761, 2017a.\n", "Dear Reviewer, \n\nInspired by your comments, we worked on clarifying the motivation for RUM and demonstrating signs of its potential applications to more complicated models and tasks. \n\nFirst of all, we believe that using rotations within neural networks is a natural choice, which is backed up by the mathematical properties of those operations. To us, rotations stand out, because through a simple construction, they combine a variety of concepts, important to deep learning, such as: gradient explosion/vanishing, associative memory, and also relate to novel propositions of deep learning models, such as capsules. We think that this inherent universality of the rotation operations helps RUM perform well on a diverse choice of tasks, each requiring a special learning capacity. Thus, we tested RUM on the copying task, question answering, associative recall and language modeling--4 conceptually diverse tasks. \n\nSecond of all, because of the initial success of RUM on those 4 tasks, we are currently evaluating RUM on larger data sets. For example, right now we work on the enwik8 data set. So far, we can run models up to 21M parameters. FS-RUM-2 with 21M params. achieves 1.37 BPC for 35 epochs on enwik8. 
This is not the state-of-the-art, but note that the state-of-the-art is a much larger model that has 94M params. (Mujika et al. (2017)). We are currently working on fitting larger RUM model to our hardware, so that we can test the RUM model of size about 100M params. \n\nFinally, the state-of-the-art models on enwik8 (Mujika et al. (2017)) are simply larger versions of the previous state-of-the-art models on the PTB task (Mujika et al. (2017)). We show that a RUM version of those models slightly improves the test by 0.001 and validation by 0.002; therefore we think that PTB is a particularly important data set to experiment with. Moreover, we give a positive answer to the suggestion by Mujika et al. (2017) in their conclusion that the slow LSTM cell may be replaced with a cell that has a better long term memory capacity. As we demonstrate, RUM is suitable for such replacements--a feat, which we think gives RUM scaling power. \n\nBest wishes!\n\nReferences \n\nAsier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. NIPS arXiv preprint arXiv:1705.08639, 2017.\n", "Dear Readers, \n\nWe took the comments of the reviewers into serious consideration, and, along with the minor corrections, we implemented the following major changes in the paper: \n\n1. We added the discussion about RUM and Capsules (sections 2.3 and 5.3). \n2. We clarified the motivation behind using rotations for more efficient models, and some of the steps of the RUM construction in section 3. \n3. We updated the experimental sections for all 4 of the considered tasks to include more comparisons with the GORU and EUNN baseline models. \n4. We now present a more comprehensive language modeling experiments for the PTB data set in section 4.4. and also appendix D. We managed to get to a 0.001 BPC test improvement and a 0.002 BPC validation improvement. \n\nTo sum up, with this paper we want to demonstrate a new phase-encoding learning representation, give intuition about its efficiency by analysing it through the lens of well-established deep learning models, and then demonstrate that the proposed model achieved satisfactory performance, which improves the state-of-the-art slightly, in a diverse range of tasks.\n\nThank you for the constructive discussion!\n", "\nDear Reviewer, \n\nWe finished our experiments on trying different linear combinations within the 2D subspace and reversing the orientation of the rotation, and concluded that the performance stays the same as the current model, which aligned with our original intuition. Here we will motivate the outcome of our experiment through a simple observation.\n\nWe want the rotation to depend on the input and the hidden state. The simplest way for this is to encode the rotation from the embedded input to a linear combination of the input and the hidden state. This linear combination can be written as alpha * x_emb + beta * target_memory. The magnitude of the vectors forming the rotation does not matter. Thus, we can divide by beta to get alpha/beta * x_emb + target_memory, i.e. only alpha/beta matters. Since the magnitude of x_emb is determined by weights/bias, the degree of freedom of alpha/beta is absorbed into the weights/bias. In practice, rotating to the target_memory solely or to x_emb + target_memory gives the same performance. \n\nHopefully this helps to answer your question. \n\nThank you! \n\n", "Dear Reader, \n\nWe thank you for your interest in the topic and questions! 
First we describe the spiritual similarity between our contributions and the concept of Capsules, presented in Sabour et al. (2017): \n\n\na. *A parallel between RUM’s state and Capsule’s representation*. Think about the hidden state in our model as a vector in the Euclidean space R^n -- it has an orientation and a magnitude. In a similar fashion, a capsule is a vector that has an orientation and a magnitude. Both RUM and Capsule Net learn to manipulate the orientation and magnitude of their respective components. \n\nb. *The Rotation operation and the Routing mechanism*. Both mechanisms are ways of manipulating orientations and magnitudes. In the routing mechanism we start from priors (linearly generated from the input to the given layer of capsules), then generate outputs, and finally measure the dot product between the priors and the output. This dot product essentially measures the similarity between the two vectors through the cosine of the angle between them. This relative position between the two vectors is used for effective routing, so that the orientations of the capsules can be manipulated iteratively. Now, compare this with the Rotation mechanism. We start with the embedded input vector (think about this as an alternative of the priors) and then generate the target memory (think about this as an alternative of the outputs). Then we measure (encode) the rotation between the embedded input and the target memory (think about this as an alternative of taking the dot product). And finally we use that encoded rotation to change the orientation of the hidden state (think about this as the iterative process of the routing mechanism). \n\nc. Some additional remarks: of course, RUM and Capsule Net are not equivalent models in terms of their learning representations, but they share notable spiritual similarities, as noted in a. and b. Note that the hidden state usually has a much larger dimensionality than the capsules that are used in Sabour et al. (2017). Hence, effectively, we demonstrate how to manipulate orientations and magnitudes of a much higher dimensionality (for example, we have experimented with hidden sizes of 1000 and 2000 for language modeling).\n\nTo answer your question 1. Our main idea is to utilize the orientation of the hidden state, viewed as a vector in the Euclidean space R^n. Since we use RNNs, the way for this update to happen is to take an input and a previous hidden state, and then produce a new hidden state. We want the change of the orientation of the hidden state to be guided by the inputs. This is the reason why we compute the rotation between the embedded input and the target memory, because note that the target memory is essentially generated by the previous hidden state. So after we compute this “coupling” between the input and the previous hidden state, we can create a new hidden state. The way for this to happen is by rotating the old hidden state to get a new hidden state (with a different orientation). \n\nFrom here, to answer your question 2. We got the natural idea of phase accumulation. By providing a phase accumulation property for the rotation operation we invoke an associative memory within our RUM cell. This multiplicative structure, with no previously explored analogues to the best of our knowledge, allows for a more flexible rotational manipulation of the hidden state (similar to the “routing mechanism” for the RUM cell). 
We were positively surprised to see that we can achieve the state-of-the-art on the Associative Recall task with significantly less parameters. The potential for exploring these accumulations of rotations are vast, as we note in the Conclusion section of our paper. \n\nBest wishes! \n\nReferences \n\nSara Sabour, Nicholas Frosst, Geoffrey Hinton. Dynamic Routing Between Capsules.. NIPS 2017 arXiv preprint arXiv:1710.09829, 2016. \n\n", "I am interested in the rotation operation and the rum architecture and also the capsule net.\nBut I do not find the spiritual similarity between these two works, could you help specifying?\n\nBesides, there are some questions.\n1. I am not sure why the rotation operator in RUM is to rotate the previous hidden state from embedded input direction to target vector direction and than plus the embedded input?\n2. What is the purpose of the accumulation of rotation?\n\nThank you!\n\n", "Dear Reviewers, \n\nWe thank you for your comments and suggestions! Our impression is that you acknowledge the novelty and potential of our rotational memory constructions. Hopefully, through the upcoming discussions we will reach to a clear understanding of the theoretical importance and the experimental promise of the Rotation operation and the RUM model. For example, recently we discovered that our paper and Sabour et al. (2017) have a similar conceptual background. \n\nBest wishes!\n\nSara Sabour, Nicholas Frosst, Geoffrey Hinton. Dynamic Routing Between Capsules. NIPS 2017 (to appear) arXiv preprint arXiv:1710.09829, 2016. ", "Dear Reviewer, \n\nWe thank you for the constructive review! For evaluation of RUM we wanted to test the model on diverse benchmark tasks, ranging from the Copying Memory Task and Character-level Language Modeling on PTB, which require a varied set of skills, including long-term memory capacity, associative skill, short-term forgetting mechanisms, etc. Our confidence in RUM is motivated by the state-of-the-art-like performance of the model in all those tasks. \n\nThank you for suggesting to expand the experimental section. As far as the current results are concerned, we are working on a finer grid search, which can yield more impressive improvements. We are also evaluating RUM on a larger data set--enwik8: it is possible these simulations will not finish in time (before the deadline), but we’ll try.\n\nWe agree with your comment on the Q&A task, and we will rephrase this part of the experimental discussion. However, we want to explain why our result in this task is strong. Attention mechanism models hold the record for all Q&A tasks nowadays. Nevertheless, RNN models are still more responsible for long-term memory which should improve the SOTA when combined with attention mechanisms. Frankly, there is still a lack of studies on combining novel RNNs with attention mechanisms to achieve SOTA. Thus, this should not prevent studies on better fundamental RNN models, e.g. Cooijmans et al (2016). For future work, we plan to apply RUM to other Q&A data sets.\n\nFinally, GORU learns the Copying Memory Task for T=200; we will update our figure. We will also implement your minor comments and update the paper accordingly. \n\nThank you! \n\nReferences: \n\nTim Cooijmans, Nicolas Ballas, César Laurent, Çaglar Gülçehre & Aaron Courville. Recurrent Batch Normalization. ICLR 2017 arXiv preprint arXiv:1603.09025, 2016. ", "Dear Reviewer, \n\nWe thank you for the thoughtful review! 
We believe that your question about the choice of the initial and final vectors, encoding the rotation, within the 2D subspace can lead to new interesting results. We might introduce two new parameters—alpha and beta—that define the rotation from the embedded input to a linear combination alpha*u+beta*v, where u and v form an orthonormal basis of the 2D subspace. The coefficients alpha and beta can be learned by backpropagation. \n\nCurrently, we rotate from the embedded input to the target vector; if we decide to flip the encoding (from target to embedded input) we expect to obtain comparable results to the current ones since we only reverse the orientation of the rotation. \n\nWe thank you for your comments on the description of tau and will update the discussion accordingly. We will also conduct an additional experiment that will answer your questions about the comparison between RUM and GORU. \n\nThank you!\n", "Dear Reviewer, \n\nWe thank you for the insightful review! We believe that our concept of using rotation memories can be used in a large set of deep learning models, including RNNs. Our paper serves to introduce a particular construction (Rotation) that realizes the concept of rotation memories, and then to illustrate advantages of that construction by modifying gated models (RUM).\n\nWe agree that testing RUM on tasks with larger data sets would bolster the case for RUM. Currently we are running our model on enwik8: it is possible our simulations will not finish in time (before the deadline), but we’ll try.\n\nWe believe that your comment about “unintentional forgetting” is interesting and will investigate it further. The RUM model utilizes rotations defined by projections into different (in general) 2d planes (defined by the embedded input vector and the target vector) under which going back to the same point unintentionally (after making a cycle of 360 degrees) is unlikely. Another way to think about this is by viewing the rotations as rotating a unit vector on an (N_h-1)-sphere, where N_h is the hidden size. Since N_h is typically not small, the probability of ending at the same point after a full cycle is negligible. \n\nWhile RUM is only partially motivated by GORU, the RUM model introduces two crucial new concepts, which, we believe, substantially bolster its performance compared to GORU, and many other approaches: 1. The rotation operation is not parameterized directly, as in GORU, but instead it is extracted from the new input and the previous hidden state. In this sense, to parallel our model with the literature, RUM is a “firmware” structure instead of a “learnware” structure as discussed in Balduzzi et al. (2016): our rotation does not require additional parameters to be defined. 2. RUM has an associative memory structure, which is not present in GORU, and more importantly, it is vital for the learning of the Associative Recall task (soon we will report on the inability of GORU to learn the task for T=30 and 50; note that RUM succeeds for T=30 and 50). Moreover, the multiplicative recursive definition of R_t is required to maintain an orthogonal matrix and have an interpretation of phase accumulation because of the multiplicative nature of rotations. We believe that this is the first example of a multiplicative function used for associative memory, contrasting the recursions in Ba et al (2016) and Zhang et al (2017). \n\nAs far as rotations are concerned, they are key objects in a variety of fields such as quantum physics and the theory of Lie groups. 
If one wants to find inspirations for constructions, similar to ours in section 3.1., they could consult standard books on those subjects (Sakurai et. al (2010), Artin (2011)). This particular parameterization for the rotation is a natural way to define a differentiable orthogonal operation within the RNN cell. Other ways to extract an orthogonal operation from elements in the RNN cell are still possible. Some approaches are as follows: 1. Use a skew-symmetric matrix A to define the orthogonal operator e^A; 2. Use a permutation operator. However, those constructions are difficult to implement and do not offer a natural intuition about encoding memory. We recognize that other constructions are also feasible and potentially interesting for research; however, we believe that our construction of the Rotation is simple and offers enough intuition (and results) to spur more research in constructing successful models other than RUM. We will update the discussion about the motivation of the rotational memory accordingly, but we will leave other constructions as a topic for further work (i.e. for another conference). \n\nFinally, we used the 10k training set for the QA task.\n\nThank you! \n\nReferences: \n\nJimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z. Leibo, Catalin Ionescu. Using Fast Weights to Attend to the Recent Past. arXiv preprint arXiv:1610.06258, 2016. \n\nWei Zhang and Bowen Zhou. Learning to update auto-associative memory in recurrent neural networks for improving sequence memorization. arXiv preprint arXiv:1709.06493, 2017. \n\nDavid Balduzzi and Muhammad Ghifary. Strongly-Typed Recurrent Neural Networks. Proceeding ICML'16 Proceedings of the 33rd International Conference on International Conference on Machine Learning. 48. 1292-1300, 2016. \n\nJ. J. Sakurai and Jim J. Napolitano. Modern Quantum Mechanics (2nd edition). Pearson, 2010.\n\nMichael Artin. Algebra (2nd edition). Pearson, 2011. " ]
[ 4, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sk4w0A0Tb", "iclr_2018_Sk4w0A0Tb", "iclr_2018_Sk4w0A0Tb", "H1MfgBVWM", "rJDVFq8mM", "H1qQJHEZz", "iclr_2018_Sk4w0A0Tb", "SJLOkB4-G", "HJvHqzeGM", "HJUg-BVWz", "iclr_2018_Sk4w0A0Tb", "Bkq3EZcxM", "B1yLVH5lM", "Byf4Vs5gM" ]
iclr_2018_Byd-EfWCb
Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks
Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.
workshop-papers
This submission has two results: (1) it defines what it means for a representation to be optimal, although this is rather uninteresting in that it simply says that if the representation from a model is going to be used under some given metric, the cost function should directly reflect that metric; and (2) it shows that different choices of encoder and decoder have different implications. As with most of the reviewers, I found these to be a rather weak contribution.
train
[ "BJFJeNPHf", "H1QnkNwSM", "S1GVQk5gG", "HJFbzhFeM", "SkOj779lM", "rkjz4_a7z", "rkwqhGkzz", "Sk9jGf1fG", "B1Fzd-yzM" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Dear Reviewer,\n\nAgain thank you for your the time you took to initially assess our work. \n\nWe found the comments you made informative, and hope that the newer manuscript manages to address the issues you had with the paper. The reviewer that has currently re-reviewed the manuscript has concluded that the paper is much improved, and has increased their score accordingly. We would very much appreciate it if you could take a look at the updated version with respect to your issues with the initial version.\n\nSpecifically:\n\nThe theory section is completely rewritten to put it on a more formal, and less superficial ground.\n\nWe have now evaluated the models in the dot product space (which is the distance measure we should use according to our analysis), giving expanded experimental results, and find that concat consistently outperforms mean. We discuss the concat-mean situation in an Appendix.\n\nWe have added an appendix specifically comparing our models to the original SkipThought implementations for reasonable comparison. \n\nOn further investigation, our initial hypothesis regarding the additional conditioning of the decoders at each time step on the encoder output step turned out not to be true, true as the LayerNorm models available for download in fact don't contain this conditioning mentioned in the paper. We have now attributed this difference to differences in experimental setup.\n\nA new diagram demonstrating how to use the unrolling procedure in practice for STS (and other) tasks has been added.\n\nMany thanks.", "Dear Reviewer,\n\nAgain thank you for your the time you took to initially assess our work. \n\nWe found the comments you made informative, and hope that the newer manuscript manages to address the issues you had with the paper. The reviewer that has currently re-reviewed the manuscript has concluded that the paper is much improved, and has increased their score accordingly. We would very much appreciate it if you could take a look at the updated version with respect to your issues with the initial version.\n\nSpecifically:\n\nWe now introduce the formalism of the optimal representation space (which you pointed out was missing)\n\nWe have more clearly highlighted the reason FastSent works so well in the space it is evaluated in by making a stronger connection to the encoder.\n\nWe have analysed the model in both the derived and canonical representation space, yielding an expanded set of results that better empirically supports our theoretical analysis.\n\nWe have emphasised that it is a sentence-level version of the distributional hypothesis (although the theory section now takes a different, more formal approach in order to arrive at the semantically similar conclusion - the distributional hypothesis is only required for choosing appropriate outputs).\n\nWe have removed the line you pointed out that doesn't make sense.\n\nMany thanks.", "------ updates to review: ---------\n\nI think the paper is much improved. It is much more clear and the experiments are more focused and more closely connected to the earlier content in the paper. Thanks to the authors for trying to address all of my concerns. \n\nI now better understand in what sense the representation space is optimal. I had been thinking (or perhaps \"hoping\" is a better word) that the term \"optimal\" implied maximal in terms of some quantifiable measure, but it's more of an empirical \"optimality\". 
This makes the paper an empirical paper based on a reasonable and sensible intuition, rather than a theoretical result. This was a little disappointing to me, but I do still think the paper is marginally above the acceptance threshold and have increased my score accordingly. \n\n------ original review below: --------\nThis paper is about rethinking how to use encoder-decoder architectures for representation learning when the training objective contains a similarity between the decoder output and the encoding of something else. For example, for the skip-thought RNN encoder-decoder that encodes a sentence and decodes neighboring sentences: rather than use the final encoder hidden state as the representation of the sentence, the paper uses some function of the decoder, since the training objective is to maximize each dot product between a decoder hidden state and the embedding of a context word. If dot product (or cosine similarity) is going to be used as the similarity function for the representation, then it makes more sense, the paper argues, to use the decoder hidden state(s) as the representation of the input sentence. The paper considers both averaging and concatenating hidden states. One difficulty here is that the neighboring sentences are typically not available in downstream tasks, so the paper runs the decoder to produce a predicted sentence one word-at-a-time, using the predicted words as inputs to the decoder RNNs. Then those decoder RNN hidden states are used via averaging or concatenation as the representation of a sentence in downstream tasks. \n\nThis paper is a source of contributions, but I think in its current form it is not yet ready for publication. \n\nPros:\n\nI think it makes sense to pay attention to the training objective when deciding how to use the model for downstream tasks. \n\nI like the empirical investigation of combining RNN and BOW encoders and decoders. \n\nThe experimental results show that a single encoder-decoder model can be trained and then two different functions of it can be used at test time for different kinds of tasks (RNN-RNN for supervised transfer and RNN-RNN-mean for unsupervised transfer). I think this is an interesting result. \n\nCons: \n\nI have several concerns. The first relate to the theoretical arguments and their empirical support. \n\nRegarding the theoretical arguments: \n\nFirst, the paper discusses the notion of an \"optimal representation space\" and describes the argument as theoretical, but I don't see much of a theoretical argument here. \n\nAs far as I can tell, the paper does not formally define its terms or define in what sense the representation space is \"optimal\". I can only find heuristic statements like those in the paragraph in Sec 3.2 that begins \"These observations...\". What exactly is meant formally by statements like \"any model where the decoder is log-linear with respect to the encoder\" or \"that distance is optimal with respect to the model’s objective\"? It seems like the paper may want to start with formal definitions of an encoder and a decoder, then define what is meant by a \"decoder that is log-linear with respect to the encoder\", and define what it means for a distance to be optimal with respect to a training objective. That seems necessary in order to provide the foundation to make any theoretical statement about choices for encoders, decoders, and training objectives. 
I am still not exactly sure what that theoretical statement might look like, but maybe defining the terms would help the authors get started in heading toward the goal of defining a statement to prove. \n\nSecond, the paper's theoretical story seems to diverge almost immediately from the choices used in the model and experimental procedure. \n\nFor example, in Sec. 3.2, it is stated that cosine similarity \"is the appropriate similarity measure in the case of log-linear decoders.\" But the associated footnote (footnote 2) seems to admit a contradiction here by noting that actually the appropriate similarity measure is dot product: \"Evidently, the correct measure is actually the dot product.\" This is a bit confusing. \nIt also raises a question: If cosine similarity will be used later for computing similarity, then why not try using cosine similarity in place of dot product in the model? That is, replace \"u_w \\cdot h_i\" in Eq. (2) with \"cos(u_w, h_i)\". If the paper's story is correct (and if I understand the ideas correctly), training with cosine similarity should work better than training with dot product, because the similarity function used during training is more similar to that used in testing. This seems like a natural experiment to try. Other natural experiments would be to vary both the similarity function used in the model during training and the similarity function used at test time. The authors' claims could be validated if the optimal choices always use the same choice for the training and test-time similarity functions. That is, if Euclidean distance is used during training, then will Euclidean distance be the best choice at test time?\n\nAnother example of the divergence lies in the use of the skip-thought decoder on downstream tasks. Since the decoder hidden states depend on neighboring sentences and these are considered to be unavailable at test time, the paper \"unrolls\" the decoder for several steps by using it to predict words which are then used as inputs on the next time step. To me, this is a potentially very significant difference between training and testing. Since much of the paper is about reconciling training and testing conditions in terms of the representation space and similarity function, this difference feels like a divergence from the theoretical story. It is only briefly mentioned at the end of Sec. 3.3 and then discussed again later in the experiments section. I think this should be described in more detail in Section 3.3 because it is an important note about how the model will be used in practice. \n\nIt would be nice to be able to quantify the impact (of unrolling the decoder with predicted words) by, for example, using the decoder on a downstream evaluation dataset that has neighboring sentences in it. Then the actual neighboring sentences can be used as inputs to the decoder when it is unrolled, which would be closer to the training conditions and we could empirically see the difference. Perhaps there is an evaluation dataset with ordered sentences so that the authors could empirically compare using real vs predicted inputs to the decoder on a downstream task?\n\nThe above experiments might help to better connect the experiments section with the theoretical arguments. \n\nOther concerns, including more specific points, are below:\n\nSec. 
2: \nWhen describing inferior performance of RNN-based models on unsupervised sentence similarity tasks, the paper states: \"While this shortcoming of SkipThought and RNN-based models in general has been pointed out, to the best of our knowledge, it has never been systematically addressed in the literature before.\" \nThe authors may want to check Wieting & Gimpel (2017) (and its related work) which investigates the inferiority of LSTMs compared to word averaging for unsupervised sentence similarity tasks. They found that averaging the encoder hidden states can work better than using the final encoder hidden state; the authors may want to try that as well. \n\nSec. 3.2:\nWhen describing FastSent, the paper includes \"Due to the model's simplicity, it is particularly fast to train and evaluate, yet has shown state-of-the-art performance in unsupervised similarity tasks (Hill et al., 2015).\"\nI don't think it makes much sense to cite the SimLex-999 paper in this context, as that is a word similarity task and that paper does not include any results of FastSent. Maybe the Hill et al (2016) FastSent citation was meant instead? But in that case, I don't think it is quite accurate to make the claim that FastSent is SOTA on unsupervised similarity tasks. In the original FastSent paper (Hill et al., 2016), FastSent is not as good as CPHRASE or \"DictRep BOW+embs\" on average across the unsupervised sentence similarity evaluations. FastSent is also not as good as sent2vec from Pagliardini et al (2017) or charagram-phrase from Wieting et al. (2016).\n\nSec. 3.3:\nIn describing skip-thought, the paper states: \"While computationally complex, it is currently the state-of-the-art model for supervised transfer tasks (Hill et al., 2016).\"\nI don't think it is accurate to state that skip-thought is still state-of-the-art for supervised transfer tasks, in light of recent work (Conneau et al., 2017; Gan et al., 2017). \n\nSec. 3.3:\nWhen discussing averaging the decoder hidden states, the paper states: \"Intuitively, this corresponds to destroying the word order information the decoder has learned.\" I'm not sure this strong language can be justified here. Is there any evidence to suggest that averaging the decoder hidden states will destroy word order information? The hidden states may be representing word order information in a way that is robust to averaging, i.e., in a way such that the average of the hidden states can still lead to the reconstruction of the word order.\n\nSec. 4:\nWhat does it mean to use an RNN encoder and a BOW decoder? This seems to be a strongly-performing setting and competitive with RNN-mean, but I don't know exactly what this means. \n\n\nMinor things:\n\nSec. 3.1:\nWhen defining v_w, it would be helpful to make explicit that it's in \\mathbb{R}^d.\n\nSec. 4: \nFor TREC question type classification, I think the correct citation should be Li & Roth (2002) instead of Vorhees (2002).\n\nSec. 5:\nI think there's a typo in the following sentence: \"Our results show that, for example, the raw encoder output for SkipThought (RNN-RNN) achieves strong performance on supervised transfer, whilst its mean decoder output (RNN-mean) achieves strong performance on supervised transfer.\" I think \"unsupervised\" was meant in the latter mention.\n\nReferences:\n\nConneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. EMNLP.\nGan, Z., Pu, Y., Henao, R., Li, C., He, X., & Carin, L. 
(2017). Learning generic sentence representations using convolutional neural networks. EMNLP.\nLi, X., & Roth, D. (2002). Learning question classifiers. COLING.\nPagliardini, M., Gupta, P., & Jaggi, M. (2017). Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. arXiv preprint arXiv:1703.02507.\nWieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2016). Charagram: Embedding words and sentences via character n-grams. EMNLP.\nWieting, J., & Gimpel, K. (2017). Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings. ACL.\n", "The authors provide some theoretical justification for why simple log-linear decoders perform better than RNN decoders for various unsupervised sentence similarity tasks. They also provide a simple method for improving the performance of RNN based models. Please find below my comments/questions/suggestions:\n\n1) I found the theory to be a bit superficial and there is a clearly a gap between what is proven theoretically and demonstrated empirically. For example, as per the theoretical arguments presented in the paper, RNN-concat should do better than RNN-mean. However, the experiments suggest that RNN-mean always does better and in some cases significantly better (referring to Table 1). How does this empirical observation reconcile with the theory ?\n\n2) The authors mention that the results for SkipThought represented in their paper are lower than those presented in the original SkipThought paper. They say that they elaborate on this in Appendix C but there isn't much information provided. In particular, it would be good to mention the original numbers also (of course I can check the original paper but if the authors provide those numbers then it would ensure that there is no misunderstanding)\n\n3) As mentioned in the previous point, the original SkipThought decoder seems to do better than the modified decoder used by the authors. It is not clear, how this can be justified under the theoretical framework presented by the author. I agree that this could be because in the original formulation (referring to equations in Appendix C) the encoder contributes more directly to the decoder. However, it is not clear how this causes it to be \"closer\" to the optimal space. Can this be proven ?\n\n4) Can you elaborate a bit more on how the model is used at test time? Consider the STS17 benchmark and the following sentence pair from it? \n\n- The bird is bathing in the sink.\n- Birdie is washing itself in the water basin.\n\nHow will you use RNN-mean and RNN-concat to find the similarity between these two sentences.\n\n5) In continuation of the above question, how does the computation of similarity differ when you unroll for 3 steps v/s when you unroll for 5 steps\n\n6) Based on the answers to (4) and (5) above I would like to seek further clarifications on Figure 1.", "This paper proposes the concept of optimal representation space and suggests that a model should be evaluated in its optimal representation space to get good performance. It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases. It is disappointing, because this paper is named as \"finding optimal representation spaces ...\".\n\nIn addition, one of the contributions claimed in this paper is about introducing the \"formalism\" of an optimal representation space. 
However, I didn't see any formal definition of this concept or theoretical justification.\n\nAbout FastSent or any other log-linear model, the reason that dot product (or cosine similarity) is a good metric is because the model is trained to optimize the dot product, as shown in equation 5 --- I think this simple fact is missed in this paper.\n\nThe experimental results are not convincing, because I didn't find any consistent pattern that shows the performance is getting better once we evaluated the model in its optimal representation space.\n\nThere are statements in this paper that I didn't agree with\n\n1) Distributional hypothesis from Harris (1954) is about words not sentences.\n2) Not sure the following line makes sense: \"However, these unsupervised tasks are more interesting from a general AI point of view, as they test whether the machine truly understands the human notion of similarity, without being explicitly told what is similar\"", "Many thanks to the reviewers for their extensive and valuable feedback.\n\nWhile the original scope remains the same, the paper itself has changed significantly:\n- Greatly expanded theory section, including a thorough definition of an optimal representation space and details how such an optimal space can be discovered for a given model. \n- Revisited analysis of BOW and RNN decoders to clarify our argumentation.\n- As suggested by the reviewers we now report results using dot-product as similarity metric in the main body of the paper and have moved the results using cosine similarity into the appendix.\n- Expanded presentation and discussion of the performance of unrolled RNN decoders.\n- Results for mean of unrolled decoder states, and a comparison with (variants of) SkipThought have been added to the appendix.\n- Upon further investigation, the SkipThought LayerNorm model, whose results we were comparing against does not directly condition its decoder output on the encoder output at every time step as initially thought. We attribute differences in performance to experimental setup and have thus removed related comments.\n- Literature is now reviewed in the introduction (including references suggested by the reviewers).\n- An additional figure clarifies the proposed unrolling procedure for RNN decoders.\n- Minor changes to the text\n\nWe believe we have thoroughly addressed the bulk of the issues highlighted by the reviewers. In particular we now provide a solid theoretical framework for optimal representation spaces and how they can be obtained for a given model. Further we provide results that are more consistent with our theoretical argument and have revisited the majority of the text in the paper.\n\nAgain we would like to thank the reviewers for the time and care they have put into their reviews, and would like to invite them to reconsider their original ratings.", "Thank you very much for the time and consideration you have taken with your review. We sincerely appreciate your detailed feedback and would like to address some of your questions and concerns below.\n\n> I don't see much of a theoretical argument here.\n\nThe lack of formalism was mentioned by all 3 reviewers. We will reformulate this section, give some clearer definitions as you suggested, and tone down the “theory” aspect in general.\n\n\n> … theoretical story seems to diverge almost immediately… if cosine similarity will be used later, then why not try using cosine similarity in place of dot product in the model?\n\nYou are absolutely correct. 
While cosine similarity is clearly related to dot product, it is not a drop-in replacement because it is not always consistent with the dot product. Thank you for pointing out this error in our analysis.\nWe will re-run all our experiments with the dot product and will publish the results in the updated manuscript.\nYou are also right that in principle one can use cosine similarity, Euclidean distance or indeed any chosen measure in the model. Due to time and computational restrictions, we are unable to run these additional experiments by the rebuttal deadline but we feel these ideas are definitely worth exploring.\n\n\n> if Euclidean distance is used during training, then will Euclidean distance be the best choice at test time?\n\nOur analysis suggests that trying Euclidean distance is a sensible thing to do in this case. Of course, the downstream task might differ so much that the distance and the model itself are not useful at all.\n\n\n> Another example of the divergence lies in the use of the skip-thought decoder on downstream tasks. Since the decoder hidden states depend on neighboring sentences and these are considered to be unavailable at test time, the paper \"unrolls\" the decoder for several steps by using it to predict words which are then used as inputs on the next time step. To me, this is a potentially very significant difference between training and testing. Since much of the paper is about reconciling training and testing conditions in terms of the representation space and similarity function, this difference feels like a divergence from the theoretical story.\n\nUnfortunately, when the test sentences have no context, the adjacent sentences need to be approximated (e.g. by using the softmax word embeddings or beam search). In those cases the optimal representation is not really attainable but can be “approximated”. However, in other models such as RNN-RNN autoencoders the optimal space is perfectly attainable.\nWe feel our work is more about paying attention to the objective as you mentioned. If the model maximises some similarity between representations, it is sensible to try that similarity on the downstream tasks.\n\n\n> It would be nice to be able to quantify the impact (of unrolling the decoder with predicted words) by, for example, using the decoder on a downstream evaluation dataset that has neighboring sentences in it. \n\nWe absolutely agree and would love to do this but are unaware of such tasks / datasets. Would the Reviewer be able to help us by suggesting one?\n\n\n> ...The hidden states may be representing word order information in a way that is robust to averaging … \n\nThis is a really good point and we totally agree. We will soften or retract our statement.\n\n\n> What does it mean to use an RNN encoder and a BOW decoder? This seems to be a strongly-performing setting and competitive with RNN-mean, but I don't know exactly what this means.\n\nWe use the RNN to encode a sentence into a vector h, and then use the bag-of-words decoder, i.e. softmax(U*h), where U is the logit matrix. 
Please do let us know if you would like a more detailed description of this or any other model.\n\n\nWe agree with all other points not mentioned here and will fix accordingly.\n\n\nFinally, we will notify you when all of the above are addressed in the new version.\n\nBest wishes,\n\nICLR 2018 Conference Paper816 Authors\n", "We would like to thank you for your time and a detailed assessment of our work.\nWe hope to address some of your questions and concerns below.\n\n\n> I found the theory to be a bit superficial.\n\nThis has been pointed out by all 3 reviewers. We will make our statements more formal and tone down the “theory” aspect in the updated manuscript.\n\n\n> RNN-concat should do better than RNN-mean (but oftentimes the opposite happens)\n\nWe have re-run our experiments with dot product instead of cosine similarity as pointed out by Reviewer 2. We found that RNN-concat works much better than previously reported. We will update the manuscript to reflect the changes.\n\n\n> it would be good to mention the original SkipThought numbers\n\nWe agree and will do so in the updated manuscript.\n\n\n> The original SkipThought decoder seems to do better than the modified decoder used by the authors\n\nThis was definitely a source of concern for us as well. We hypothesise this is because the encoder contributes more directly to the decoder but we cannot say for sure. It is entirely possible the discrepancy is due to variations in experimental setups. E.g. to the best of our knowledge the original paper uses a combination of bidirectional and unidirectional encoders; we use only the latter.\nIn this work we did not aim to compete with existing models, we were only interested in testing our assumptions in a fair setting. To this end, we built our experiments upon a well-known TensorFlow Skip-Thought implementation and will make our codebase public.\n\n\n> Can you elaborate a bit more on how the model is used at test time?\n\nAbsolutely. The encoder RNN encodes a sentence into some vector f and the decoder RNNs produce sequences of hidden states h(1), h(2), …, h(n) for the previous sentence and g(1), g(2), … g(n) for the next sentence. We simply average or concatenate all those hidden states to form the final sentence representation.\nImportantly, since there are no adjacent sentences during test time, the decoder input for the current step is just the softmax of the previous step multiplied by the word embedding matrix, i.e. w(n) = W*p(n-1).\nPlease let us know if that answers your question. 
We are always happy to elaborate further.\n\n\n> how does the computation of similarity differ when you unroll for 3 steps v/s when you unroll for 5 steps\n\nWe would unroll N hidden states of each decoder, so in total we have 2*N vectors of dimension d.\nIn RNN-mean, we just average all of them and get 1 vector of dimension d.\nIn RNN-concat, we concatenate all of them and get 1 vector of dimension 2*N*d.\nIn either case, we use dot product to compare the resulting representations.\n\n\n> I would like to seek further clarifications on Figure 1.\n\nThis illustrates the performance on the STS14 task as a function of N, where N is the number of unrolled hidden states of the decoder.\n\n\nFinally, we will notify you when all of the above are addressed in the new version.\n\nBest wishes,\n\nICLR 2018 Conference Paper816 Authors\n", "Thank you very much for your time and assessment of our work.\nWe hope to address some of your concerns below.\n\n> It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases.\n\nWe agree and will describe the general procedure in the updated manuscript.\n\n\n> I didn't see any formal definition of this concept ...\n\nThe lack of formalism was mentioned by all 3 reviewers. We will give some clearer definitions and tone down the “theory” aspect in general.\n\n\n> … the reason that dot product is a good metric is because the log-linear model is trained to optimize the dot product\n\nThis is exactly what we were trying to say. We also argue that dot product is not necessarily appropriate for comparing RNN encoder vectors. Intuitively, due to non-linearities, small changes in the encoder hidden state might lead to big changes in the decoder outputs and vice versa. We are sorry if our core idea was not clear enough, we will present our analysis better in the updated manuscript.\n\n\n> The experimental results are not convincing\n\nCould you perhaps help us by pointing towards the source of your concerns?\nAs our analysis indicates, RNN encoder - RNN decoder is the worst model for similarity because dot product is not appropriate for comparing encoder states when RNN decoder is used. However, dot product is appropriate for BOW encoder - BOW decoder and RNN - RNN-concat. Our experiments show significant and consistent improvements over RNN-RNN.\n\n\n> Distributional hypothesis from Harris (1954) is about words not sentences.\n\nWe had no intention to say otherwise. By “sentence-level version of the distributional hypothesis Harris (1954)” we meant that one can think of a “sentence-level version” of the original word-level hypothesis due to Harris (1954). We will fix our sloppy wording in the updated manuscript.\n\n\n> Not sure the following line makes sense: \"However, these unsupervised tasks are more interesting from a general AI point of view...\n\nWe completely agree with you. In fact, humans are being told what is similar all the time. We will rephrase or retract the statement.\n\n\nFinally, we will notify you when all of the above are addressed in the new version.\n\nBest wishes,\n\nICLR 2018 Conference Paper816 Authors" ]
[ -1, -1, 6, 5, 4, -1, -1, -1, -1 ]
[ -1, -1, 4, 5, 4, -1, -1, -1, -1 ]
[ "HJFbzhFeM", "SkOj779lM", "iclr_2018_Byd-EfWCb", "iclr_2018_Byd-EfWCb", "iclr_2018_Byd-EfWCb", "iclr_2018_Byd-EfWCb", "S1GVQk5gG", "HJFbzhFeM", "SkOj779lM" ]
iclr_2018_rkxY-sl0W
Tree-to-tree Neural Networks for Program Translation
Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to consider employing deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network as an encoder-decoder architecture to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects.
workshop-papers
the problem is interesting, and the approach is also interesting. however, the reviewers have found that this manuscript would benefit from more experiments, potentially involving some real data (at least for evaluation) in addition to the largely synthetic data sets used in the submission. i also agree with them and encourage the authors to consider this option.
val
[ "HJsu29V4G", "BkLxPNweG", "r1-4-eYlf", "Bk_WgJqgM", "r1Kak22Xz", "BJ8e0oh7M", "rkGWRHCMM", "BkIEnSAff", "B1LYcrRMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "- Linearization\nMake sense, and I recommend to add at least 1 conversion example to the paper to guarantee reproducibility, because some accidental style errors in linearized texts may affect the results.\n\n- \"Meaningless\" test sets\nThe point of my concern is the problem which possibly not-few auto-generated codes have (not \"all\" of them). CFG rules basically represent a wider language than actual specification of \"executable\" codes, and auto-generating codes based on only CFG rules may include actually incorrect ones (e.g., the use-before-define error can occur using the BNFs in Appendix A, but this is one of the critical problems in many programming languages).\nConsidering about the case that there are non-ignorable amount of incorrect codes in the evaluation data, it becomes hard to declare the expressiveness of the proposed model in real problems too.\n\n- CoffeeScript and Javascript\nComplexity class is not the point. CoffeeScript has an explicit map (compiler) to Javascript which is guaranteed by the concept of itself, so converting CoffeeScript to Javascript is too trivial to generalize the effectiveness of the program translation models for other languages, which do not have explicit maps between each other (even when they share some programming paradigms, such as Java and Python). I think that, the translation task in opposite side, i.e., translating \"raw\" Javascript codes (which are gathered or generated from scratch, not generated from CoffeeScript) to CoffeeScript, was more effective to discuss about this point.", "This paper presents a tree-to-tree neural network for translating programs\nwritten in one Programming language to another. The model uses soft attention\nmechanism to locate relevant sub-trees in the source program tree when \ndecoding to generate the desired target program tree. The model is evaluated\non two sets of datasets and the tree-to-tree model outperforms seq2tree and\nseq2seq models significantly for the program translation problem.\n\nThis paper is the first to suggest the tree-to-tree network and an interesting\napplication of the network for the program translation problem. The evaluation\nresults demonstrate the benefits of having both tree-based encoder and decoder. \nThe tree encoder, however, is based on the standard Tree-LSTM and the application\nin this case is synthetic as the datasets are generated using a manual rule-based \ntranslation. \n\nQuestions/Comments for authors:\n\nThe current examples are generated using a manually developed rule-based system. \nAs the authors also mention it might be challenging to obtain the aligned examples\nfor training the model in practice. What is the intended use case then of \ntraining the model when the perfect rule-based system is already available?\n\nHow complex are the rules for translating the programs for the two datasets and what\ntype of logic is needed to write such rules? It would be great if the authors can \nprovide the rules used to generate the dataset to better understand the complexity\nof the translation task.\n\nThere are several important details missing regarding the baselines. For the \nseq2seq and seq2tree baseline models, are bidirectional LSTMs used for the encoder?\nWhat type of attention mechanisms are used? Are the hyper-parameters presented in\nTable 1 based on best training performance?\n\nIn section 4.3, it is mentioned that the current models are trained and tested on \nprograms of length 20 and 50. 
Does the dataset contain programs of length up to \n20/50 or exactly of length 20/50? How is program length defined -- in terms of \ntree nodes or the number of lines in the program?\n\nWhat happens if the models trained with programs up to length 20 are evaluated on \nprograms of larger length, say 40? It would be interesting to observe the \ngeneralization capabilities of all the different models.\n\nThere are two benefits of using the tree2tree model: i) using the grammar of the\nlanguage, and ii) using the structure of the tree for locating relevant sub-trees\n(using attention). From the current evaluation results, the empirical benefit\nof using the attention is not clear. How would the accuracies look when using \nthe tree2tree model without attention or when the attention vector e_t is set to the\nhidden state h of the expanding node?\n", "This paper aims to translate source code from one programming language to another using\na neural network architecture that maps trees to trees. The encoder uses an upward pass of\na Tree LSTM to compute embeddings for each subtree of the input, and then the decoder \nconstructs a tree top-down. As nodes are created in the decoder, a hidden state is passed\nfrom parents to children via an LSTM (one for left children, one for right children), and\nan attention mechanism allows nodes in the decoder to attend to subtrees in the encoder.\n\nExperimentally, the model is applied to two synthetic datasets, where programs in the \nsource domain are sampled from a PCFG and then translated to the target domain with a\nhand-coded translator. The model is then trained on these pairs. Results show that the\nproposed approach outperforms sequence representations or serialized tree representations\nof inputs and outputs.\n\nPros:\n\n- Nice model which seems to perform well.\n\n- Reasonably clear explanation.\n\nA couple questions about the model:\n\n- the encoder uses only bottom-up information to determine embeddings of subtrees. I wonder \nif top-down information would create embeddings with more useful information for the attention\nin the decoder to pick up on.\n\n- I would be interested to know more details about how the hand-coded translator works. Does\nit work in a context-free, bottom-up fashion? That is, recursively translate two children nodes\nand then compute the translation of the parent as a function of the parent node and\ntranslations of the two children? If so, I wonder what is missing from the proposed model\nthat makes it unable to perfectly solve the first task?\n\nCons:\n\n- Only evaluated on synthetic programs, and PCFGs are known to generate unrealistic programs, \nso we can only draw limited conclusions from the results.\n\n- The paper overstates its novelty and doesn't properly deal with related work (see below).\n\nThe paper overstates its novelty and has done a poor job researching related work. \nStatements like \"We are the first to consider employing neural network approaches \ntowards tackling the problem [of translating between programming languages]\" are\nobviously not true (surely many people have *considered* it), and they're particularly\ngrating when the treatment of related work is poor, as it is in this paper. For example, \nthere are several papers that frame the code migration problem as one of statistical \nmachine translation (see Sec 4.4 of [1] for a review and citations), but this paper \nmakes no reference to them. 
Further, [2] uses distributed representations for the purpose \nof code migration, which I would call a \"neural network approach,\" so there's not any \nsense that I can see in which this statement is true. The paper further says, \"To the best \nof our knowledge, this is the first tree-to-tree neural network architecture in the \nliterature.\" This is worded better, but it's definitely not the first tree-to-tree \nneural network. See, e.g., [3, 4, 5], one of which is cited, so I'm confused about \nthis claim.\n\nIn total, the model seems clean and somewhat novel, but it has only been tested on \nunrealistic synthetic data, the framing with respect to related work is poor, and the\ncontributions are overstated.\n\n\n[1] https://arxiv.org/abs/1709.06182\n[2] Trong Duc Nguyen, Anh Tuan Nguyen, and Tien N Nguyen. 2016b. Mapping API elements for code migration with\nvector representations. In Proceedings of the International Conference on Software Engineering (ICSE).\n[3] Socher, Richard, et al. \"Semi-supervised recursive autoencoders for predicting sentiment distributions.\" Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, 2011.\n[4] https://arxiv.org/abs/1703.01925\n[5] Parisotto, Emilio, et al. \"Neuro-symbolic program synthesis.\" arXiv preprint arXiv:1611.01855 (2016).\n", "Authors proposed a neural network based machine translation method between two programming languages. The model is based on both source/target syntax trees and performs an attentional encoder-decoder style network over the tree structure.\n\nThe new things in the paper are the task definition and using the tree-style network in both encoder and decoder. Although each structure of encoder/decoder/attention network is based on the application of some well-known components, unfortunately, the paper pays much space to describe them. On the other hand, the whole model structure looks to be easily generalized to other tree-to-tree tasks and might have some potential to contribute this kind of problems.\n\nIn experimental settings, there are many shortages of the description. First, it is unclear that what the linearization method of the syntax tree is, which could affect the final model accuracy. Second, it is also unclear what the method to generate train/dev/test data is. Are those generated completely randomly? If so, there could be many meaningless (e.g., inexecutable) programs in each dataset. What is the reasonableness of training such kind of data, or are they already avoided from the data? Third, the evaluation metrics \"token/program accuracy\" looks insufficient about measuring the correctness of the program because it has sensitivity about meaningless differences between identifier names and some local coding styles.\n\nAuthors also said that CoffeeScript has a succinct syntax and Javascript has a verbose one without any agreement about what the syntax complexity is. Since any CoffeeScript programs can be compiled into the corresponding Javascript programs, we should assume that CoffeeScript is the only subset of Javascript (without physical difference of syntax), and this translation task may never capture the whole tendency of Javascript. 
In addition, authors had generated the source CoffeeScript codes, which seems that this task is only one of \"synthetic\" task and no longer capture any real world's programs.\nIf authors were interested in the tendency of real program translation task, they should arrange the experiment by collecting parallel corpora between some unrelated programming languages using resources in the real world.\n\nGlobal attention mechanism looks somewhat not suitable for this task. Probably we can suppress the range of each attention by introducing some prior knowledge about syntax trees (e.g., only paying attention to the descendants in a specific subtree).\n\nSuggestion:\nAfter capturing the motivation of the task, I suspect that the traditional tree-to-tree (also X-to-tree) \"statistical\" machine translation methods still can also work correctly in this task. The traditional methods are basically based on the rule matching, which constructs a target tree by selecting source/target subtree pairs and arranging them according to the actual connections between each subtree in the source tree. This behavior might be suitable to transform syntax trees while keeping their whole structure, and also be able to treat the OOV (e.g., identifier names) problem by a trivial modification. Although it is not necessary, it would like to apply those methods to this task as another baseline if authors are interested in.", "We have updated the revision to include results of our tree2tree model without the attention mechanism, and observe that the performance decreases significantly. In particular, the program accuracy drops to nearly 0%. More details can be found in the paper.", "We have updated the paper with the following changes:\n\n(1) We include more discussion of related work in Section 5, especially addressing the relationship between our work with previous program translation work using statistical machine translation methods, and with tree-structured autoencoder work.\n(2) We provide more details about our experimental setup in Section 4, and include the implementation of the translator between two synthetic languages in the appendix.\n(3) We have included results of our tree2tree model without the attention mechanism, and observe that the performance degrades dramatically.\n", "Thank you for your valuable comments! We clarify some confusions below, and we would greatly appreciate it if the reviewer could provide more feedbacks based on our response.\n\nWe have updated our paper to provide more details about our experimental setup. We employ the S-expression to serialize the tree. For example, the parse tree of source program in Figure 1 (i.e., the parse tree of x = 1 if y == 0)\nis represented as\n\n(Block(If(Op===(Value(Identifier Literal(y))Value(Number Literal(0)))Block(Assign(Value(Identifier Literal(x))Value(Number Literal(1))))))\n\nThis is the de facto standard approach used in the literature such as [1] and [2]. To the best of our knowledge, we are not aware of more effective ways to encode a tree. We would greatly appreciate it if the reviewer could provide alternatives that have been examined in the literature, and we would be happy to try them out.\n\nAs described in Section 4.1, we use a pCFG to generate train/dev/test programs, while guaranteeing their lengths are equal to the value we specify. We think testing on randomly generated cases can effectively examine the correctness of the learned translator. 
Different from natural language translation, program translation task requires to handle all corner cases that may not be frequently seen in practice. Thus, using random test cases can effectively reach to all such corner cases, and we cannot agree with the reviewer that doing so is meaningless.\n\nOur program accuracy is an under-approximation of semantic equivalence, while token accuracy can provide a detailed measurement to understand an approach when the program accuracy is low. In this sense, these two metrics can capture some meaningful information about approaches’ effectiveness. Note that verifying if two programs are semantic-equivalent definition is a turing-complete problem, thus all metrics have to be an approximation to some degree. We consider proposing a better metric as future work.\n\nThe reviewer comments on the syntax of CoffeeScript and JavaScript, and argues that CoffeeScript is a subset of JavaScript, on which we do not agree. The syntactical grammars of two languages do not imply their complexity class. Both of these two languages are Turing-complete, meaning any program in one language has a correspondence in another. Also, for both comments on the syntax and the complexity class, we do not see the direct implication on the later comments on that our synthetic task does not capture the real world programs. For the later comment, as we have explained above, our task is designed to capture different corner cases of a program translator, and we consider handling longer real-world examples as an important future direction.\n\nWe are happy to try some traditional statistical machine translation baselines. Thanks for the suggestion!\n\n[1] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, Geoffrey Hinton. Grammar as a foreign language. NIPS 2015.\n[2] Li Dong and Mirella Lapata. Language to logical form with neural attention. ACL 2016.\n", "Thank you for the valuable comments!\n\nThank you for pointing out these related work! We have revised our paper to carefully compare with more prior work. From a high level, the papers cited in Section 4.4 of [1] are not neural network models; [2] uses word2vec, which is simply to learn a lookup table. Therefore, we also do not consider [2] as a deep neural network approach; [3, 4] propose tree-structured autoencoder, which is a generative model, rather than a translation model. The key difference is that a translation model has access to the source tree, while a generative model does not. Therefore, we think it is fair to claim our work as “the first deep neural network approach for the tree-to-tree translation problem.”\n\nIn addition, the reviewer mentions [5] as a tree-to-tree model, which is definitely not true. In fact, [5] is a sequence-to-tree model: the input of the model proposed in [5] is a sequence rather than a tree.\n\nWe clarify the questions below.\n\nWe use the bottom-up fashion to aggregate the information so that each tree node contains all information of its descendants. Propagating information from top to bottom does not match our intuition that the attention is allocated based on the sub-trees of the source tree.\n\nThe hand-coded translator is in a bottom-up fashion, but not context-free. To construct some parents, the translator may need to manipulate its two children. We have added the code of our translator for the synthetic task in the appendix.\n", "Thank you for your valuable comments! 
We respond to the questions below.\n\nThe reviewer asks about the meaning of the two program translation tasks studied in our work. We have explained that this is a first step to understand the problem of using a deep neural network approach to solve the program translation problem, and we consider the more challenging task without aligned input-output pairs as an important future direction. Also, although we do not evaluate it in our work, we believe the study of tree-to-tree translation model may have applications to other tree-to-tree translation tasks.\n\nThe CoffeeScript-to-JavaScript compiler is available online, which is too complex to explain in the paper. We have added the code of our translator between two synthetic languages in the appendix.\n\nTo clarify some implementation details, our seq2seq model and seq2tree model faithfully implement [1] and [2]. In particular, we only use uni-directional LSTMs for the encoder, and the attention mechanism is the same as described in the original papers as well. We did grid search for hyper-parameters, and chose the best one based on their performance on the validation set.\n\nA program’s length is defined to be the total number of tokens in the program, and is guaranteed to be equal to 20/50.\n\nWhen we train the model on shorter programs (e.g., programs of length 20), then evaluate on longer programs (e.g., programs of length 50), the test accuracy is 0 for all models, including our proposed tree-to-tree model and the baseline models. We consider solving the generalization issue as the next important problem that we want to address in the future. \n\nWe have clarified all above details in our revised version as well.\n\nWe are running more experiments to provide a complete ablation study to understand the effectiveness of attention. In some preliminary results, we observe that the performance decreases dramatically when attention is not used. We will update the results once we finish the experiments.\n\n[1] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, Geoffrey Hinton. Grammar as a foreign language. NIPS 2015.\n[2] Li Dong and Mirella Lapata. Language to logical form with neural attention. ACL 2016.\n" ]
[ -1, 6, 4, 4, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "rkGWRHCMM", "iclr_2018_rkxY-sl0W", "iclr_2018_rkxY-sl0W", "iclr_2018_rkxY-sl0W", "B1LYcrRMz", "iclr_2018_rkxY-sl0W", "Bk_WgJqgM", "r1-4-eYlf", "BkLxPNweG" ]
iclr_2018_BkoXnkWAb
Shifting Mean Activation Towards Zero with Bipolar Activation Functions
We propose a simple extension to the ReLU-family of activation functions that allows them to shift the mean activation across a layer towards zero. Combined with proper weight initialization, this alleviates the need for normalization layers. We explore the training of deep vanilla recurrent neural networks (RNNs) with up to 144 layers, and show that bipolar activation functions help learning in this setting. On the Penn Treebank and Text8 language modeling tasks, we obtain competitive results, improving on the best reported results for non-gated networks. In experiments with convolutional neural networks without batch normalization, we find that bipolar activations produce a faster drop in training error and result in a lower test error on the CIFAR-10 classification task.
workshop-papers
the reviewers were not fully convinced of the setting under which the proposed bipolar activation function was found by the authors to be preferable, and neither am i.
train
[ "SkGpJUL4G", "SkBvfy5lz", "rJC1hZqxf", "B1qbgHcxz", "SJiFf4jZG", "SJNVMViZG", "Sy3ZW4sZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for your answer,\n\n I like the general idea of bipolar activation, but I think the empirical evaluation still needs to be improved. Although the authors show that bipolar activations improve the trainability of deep stacked RNNs and simple convolutional networks, their approach tends to underperform other methods that also focus on network trainability (gating, batch-norm). ", "The paper proposes a new activation function that tries to alleviate the need for other normalization methods for RNNs. The activation function keeps the activation roughly zero-centered. \n\nIn general, this is an interesting direction to explore and the idea is interesting; however, I would like to see more experiments:\n\n1. The authors tested out this new activation function on RNNs. It would be interesting to see the results of the new activation function on LSTMs.\n\n2. The experimental results are fairly weak compared to the other methods that also use many layers. For PTB and Text8, the results are comparable to recurrent batchnorm with a similar number of parameters; however, the recurrent batchnorm model has only 1 layer, whereas the proposed architecture has 36 layers. \n\n3. It would also be nice to show results on tasks that involve long-term dependencies, such as speech modeling.\n\n4. If the authors could test out the new activation function on LSTMs, it would be interesting to perform a comparison between an LSTM baseline, LSTM + new activation function, and LSTM + recurrent batch norm.\n\n5. It would be nice to see the gradient flow with the new activation function compared to the ones without.\n\n6. The theorems and proofs are rather preliminary; they may not necessarily have to be presented as theorems.", "This paper proposes a self-normalizing bipolar extension for the ReLU activation family. For one neuron out of every two, the authors propose to preserve the negative inputs. Such an activation function allows shifting the mean of i.i.d. variables to zero in the case of ReLU or to a given saturation value in the case of ELU.\n\nCombined with a variance-preserving initialization scheme, the authors empirically observe that the bipolar ReLU allows the network to better preserve the mean and variance of the activations through training compared to regular ReLU for a deep stacked RNN.\n\nThe authors evaluate their bipolar activation on PTB and Text8 using a deep stacked RNN. They show that bipolar activations allow training deeper RNNs (up to some limit) and lead to better generalization performance compared to the ReLU/ELU activation functions. They also show that they can train a deep residual network architecture on CIFAR without the use of BN.\n\nQuestions:\n- Which layer mean and variance are reported in Figure 2? What is the difference between the left and right plots?\n- In Table 1, we observe that ReLU-RNN (and BELU-RNN for very deep stacked RNN) leads to worst validation performances. It would be nice to report the training loss to see if this is an optimization or a generalization problem.\n- How does bipolar activation compare to model train with BN on CIFAR10?\n- Did you try bipolar activation function for gated recurrent neural networks for LSTM or GRU?\n- As stated in the text, BELU-RNN outperforms BN-LSTM for PTB. However, BN-LSTM outperforms BELU-RNN on Text8. 
Do you know why the trend is not consistent across datasets?\n\n- Clarity/Quality:\nThe paper is well written and pleasant to read.\n\n\n- Originality:\nSelf-normalizing functions have also been explored in scaled ELU; however, the application of self-normalizing functions to RNNs seems novel.\n\n- Significance:\nActivation functions are still a very active research topic, and self-normalizing functions could potentially be impactful for RNNs given that the normalization approaches (batch norm, layer norm) add a significant computational cost. In this paper, bipolar activations are used to train very deep stacked RNNs. However, the stacked RNNs with bipolar activations are not competitive with other recurrent architectures. It is not clear what the advantages of deep stacked RNNs are in that context.", "Summary:\nThis paper proposes a simple recipe to preserve proximity to zero mean for activations in deep neural networks. The proposal is to replace the non-linearity in half of the units in each layer with its \"bipolar\" version -- one that is obtained by flipping the function on both axes.\nThe technique is tested on deep stacks of recurrent layers, and on convolutional networks with a depth of 28, showing that improved results over the baseline networks are obtained. \n\nClarity:\nThe paper is easy to read. The plots in Fig. 2 and the appendix are quite helpful in improving presentation. The experimental setups are explained in detail. \n\nQuality and significance:\nThe main idea from this paper is simple and intuitive. However, the experiments to support the idea do not seem to match the motivation of the paper. As stated in the beginning of the paper, the motivation behind having close to zero mean activations is that this is expected to speed up training using gradient descent. However, the presented results focus on the performance on held-out data instead of improvements in training speed. This is especially the case for the RNN experiments.\n\nFor the CIFAR-10 experiment, the training loss curves do show faster initial progress in learning. However, it is unclear whether overall training time can be reduced with the help of this technique. To evaluate this speed-up effect, the dependence on the choice of learning rate and other hyperparameters should also be considered.\n\nNevertheless, it is interesting to note the result that the proposed approach converts a deep network that does not train into one which does in many cases. The method appears to improve the training for moderately deep convolutional networks without batch normalization (although this is tested on a single dataset), but is not practically useful yet since the regularization benefits of Batch Normalization are also taken away.\n", "Thank you for your review.\n\nWe agree that it would be nice to show results with LSTMs or GRUs. However, it is not obvious to us how best to do so, since LSTM and GRU do not use ReLU-family activation functions, but instead use the tanh and sigmoid functions. Properly introducing bipolar activations to gated networks seems to raise enough questions to warrant a paper on its own. It certainly seems like a fruitful direction for future research.\n\nYour review raises some valid concerns about our stacked RNN architecture. The many layers make it computationally expensive, and it is outperformed by other architectures. However, our paper is fundamentally about bipolar activation functions, not about the RNN architecture. \n\nOur intent is to argue in favor of BReLU over ReLU and BELU over ELU. 
It is not to argue in favor of stacked Elman-RNNs over LSTMs. \n\nMost successful RNNs use gates and bounded activation functions (tanh and sigmoid). RNNs with unbounded activation functions have a potential for exploding activations. Stacking such models in depth compounds this problem, as exploding dynamics can happen depthwise as well as across time. \n\nIn other words, the architecture we chose is one that makes learning hard, not one that makes it easy. The propensity for exploding dynamics makes it a good testbed for a self-centering activation function. Indeed, in more than half of our experiments on RNNs, we find that bipolar activation functions are required for the training to work at all. ", "Thank you for your review.\n\nWe address your questions and comments below:\n\n* Which layer mean and variance are reported in Figure 2? What is the difference between the left and right plots?\n- These graphs show the development of a repeated application of matrix-multiplication + non-linearity on a random vector. This is like a single layer RNN without any input and without any learning. It is an idealized case which serves to isolate the effect of the recurrent dynamics. The left graph show ReLU vs BReLU while the right graph show ELU vs BELU. In every case, the bipolar variants lead to more stable dynamics.\n\n* In Table 1, we observe that ReLU-RNN (and BELU-RNN for very deep stacked RNN) leads to worst validation performances. It would be nice to report the training loss to see if this is an optimization or a generalization problem.\n- Agreed. As noted in our reply to reviewer 3 above, we have updated the paper with a training error curve for ReLU-RNN vs BReLU-RNN, which shows lower training error with the bipolar variants.\n\n* How does bipolar activation compare to model train with BN on CIFAR10?\n- We note the original results with BN in the last sentence in the section on CIFAR-10 (a test error of 2.98% for ORN and 4.17% for WRN). These results come from an extensive hyperparameter search over networks with BN. We simply copied the hyperparameters for their best results, i.e. we have not attempted to compensate for the loss of regularization due to removing BN.\n\n* Did you try bipolar activation function for gated recurrent neural networks for LSTM or GRU?\n- It is not clear how to introduce bipolar activations to such networks, since both GRU and LSTM use only bounded activation functions like tanh and sigmoid. Most RNNs use bounded activation functions, which avoid exploding dynamics. This explosion risk makes deeply stacked RNNs with unbounded activations a good testing ground for a self-centering activation function.\n\n* As stated in the text, BELU-RNN outperforms BN-LSTM for PTB. However, BN-LSTM outperforms BELU-RNN on Text8. Do you know why the trend is not consistent across datasets?\n- We have not specifically investigated this question. A possible explanation is that we did not do any hyperparameter tuning on the Text8 dataset.\n\n* Performance of the stacked RNN\n\nIt is true that other architectures outperform our stacked RNN. However, our intent was not to introduce a new RNN architecture, but to introduce bipolar activation functions. Indeed, our RNN architecture is simply a stacked Elman-RNN with skip connections.\n\nWe argue that bipolar activations help learning over non-bipolar ReLU-family activations functions. To show this, we compare ReLU vs BReLU and ELU vs BELU in various architectures, and find bipolarity to be helpful in both RNNs and ConvNets. 
This argument does not rely on stacked RNNs beng superior to LSTMs or other recurrent architectures.\n", "Thanks for your review. It is a fair criticism that we have not included enough evidence of faster training in the RNN setting. \n\nWhat follows is a summary of the evidence we do present that bipolar activations help learning in RNNs. There are two cases to consider: ELU vs BELU and ReLU vs BReLU.\n\n- For the ELU vs BELU case, we find that in every experiment the ELU-RNN diverges, while the BELU-RNN does not.\n- For the ReLU vs BReLU case, we find that the ReLU-RNN diverges in the Text8 experiment, while BReLU-RNN does not.\n\nIn most of our experiments, the non-bipolar RNNs do not converge at all, while the bipolar variant does. \n\nHowever on PennTreebank, both ReLU-RNN and BReLU-RNN do converge. Here the bipolar version achieves higher generalization accuracy on deeper models. This higher accuracy may be because bipolarity helps optimization, and it may be because of better generalization when using the bipolar versions. It is right to point out that the paper does not adequately establish which of these two are happening.\n\nThe way to establish that the PennTreebank results are due to ease of optimization would be to present the training error curve for the two variants. While we don't present it in the paper, we do have this curve. For the 36-layer network we focus on, what it shows is that the bipolar variant has lower training error for the first 88 epochs, until the learning rate is cut in the ReLU-RNN. In other words, at every point where the curves are comparable, the BReLU-RNN error is lower than the ReLU-RNN error, and the BReLU-RNN also ends up with a lower training error in the end.\n\nThat these curves are not in the paper is an omission, and we have updated the paper to include it. Even without this curve, we believe that the remaining evidence makes a strong case that bipolar activations help learning:\n\n- With ConvNets on CIFAR-10, the bipolar version achieved substantially lower training error than the non-bipolar versions.\n- On Text8, both non-bipolar version diverge, while the bipolar versions do not.\n- On PennTreebank, the ELU diverges, while the BELU does not.\n\nFor the remaining case, BReLU vs ReLU on PennTreebank we have updated the paper with the learning curve that shows faster learning in the bipolar case." ]
[ -1, 4, 5, 5, -1, -1, -1 ]
[ -1, 4, 5, 3, -1, -1, -1 ]
[ "SJNVMViZG", "iclr_2018_BkoXnkWAb", "iclr_2018_BkoXnkWAb", "iclr_2018_BkoXnkWAb", "SkBvfy5lz", "rJC1hZqxf", "B1qbgHcxz" ]
iclr_2018_By3v9k-RZ
LEARNING TO ORGANIZE KNOWLEDGE WITH N-GRAM MACHINES
Deep neural networks (DNNs) have had great success on NLP tasks such as language modeling, machine translation and certain question answering (QA) tasks. However, this success is limited for more knowledge-intensive tasks such as QA from a big corpus. Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in responding to a question is linear in the text size. This is prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web. We propose to solve this scalability issue by using symbolic meaning representations, which can be indexed and retrieved efficiently with complexity that is independent of the text size. More specifically, we use sequence-to-sequence models to encode knowledge symbolically and generate programs to answer questions from the encoded knowledge. We apply our approach, called the N-Gram Machine (NGM), to the bAbI tasks (Weston et al., 2015) and a special version of them (“life-long bAbI”), which has stories of up to 10 million sentences. Our experiments show that NGM can successfully solve both of these tasks accurately and efficiently. Unlike fully differentiable memory models, NGM’s time complexity and answering quality are not affected by the story length. The whole system of NGM is trained end-to-end with REINFORCE (Williams, 1992). To avoid high variance in gradient estimation, which is typical in discrete latent variable models, we use beam search instead of sampling. To tackle the exponentially large search space, we use a stabilized auto-encoding objective and a structure tweak procedure to iteratively reduce and refine the search space.
workshop-papers
i am a big fan of this idea, but i agree with the reviewers that evaluating this idea on bAbI (which was originally created from a small set of rules and primitives) discounts quite a bit of what is being claimed here. one of the future directions mentioned by the authors ("investigating whether the proposed n-gram representation is sufficient for natural languages") should have been included even with a negative result, which would've increased the significance significantly.
train
[ "rJg3uzqlG", "BkonMcoef", "HyywCCnef", "B11RkcSGz", "B1trkcBfM", "B1VMJcHMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors propose the N-Gram machine to answer questions over long documents. The model first encodes the document via tuple extraction. An autoencoder objective is used to produce meaningful tuples. Then, the model generates a program, based on the extracted tuple collection and the question, to find an answer.\n\nI am very disappointed in the authors' choice of evaluation, namely bAbI - a toy, synthetic task long abandoned by the NLP community because of its lack of practicality. If the authors would like to demonstrate question answering on long documents, they have the luxury of choosing amongst several large-scale, realistic question answering datasets such as the Stanford Question Answering Dataset or TriviaQA.\nBeyond the problem of evaluation, the model the authors propose does not provide new ideas, and rather merges existing ones. This, in itself, is not a problem. However, the authors decline to cite many, many important prior works. For example, the tuple extraction described by the authors has significant prior work in the information retrieval community (e.g. knowledge base population, relation extraction). The idea of generating programs to query over populated knowledge bases, again, has significant related work in semantic parsing and program synthesis. Question answering over (much more complex) probabilistic knowledge graphs has been proposed before as well (in fact I believe Matt Gardner wrote his entire thesis on this topic). Finally, textual question answering (on realistic datasets) has seen significant breakthroughs in the last few years. None of these areas, with the exception of semantic parsing, are addressed by the authors. With sufficient knowledge of related works from these areas, I find that the authors' proposed method lacks proper evaluation and sufficient novelty.", "This paper presents the n-gram machine, a model that encodes sentences into simple symbolic representations (\"n-grams\") which can be queried efficiently. The authors propose a variety of tricks (stabilized autoencoding, structured tweaking) to deal with the huge search space, and they evaluate NGMs on five of the 20 bAbI tasks. I am overall a fan of the general idea of this paper; scaling up to huge inputs is definitely a necessary research direction for QA. However, I have some concerns about the specific implementation and model discussed here. How much of the proposed approach is specific to getting good results on bAbI (e.g., conditioning the knowledge encoder on only the previous sentence, time stamps in the knowledge tuple, super small RNNs, four simple functions in the n-gram machine, structure tweaking) versus having a general-purpose QA model for natural language? Addressing some of these issues would likely prevent scaling to millions of (real) sentences, as the scalability is reliant on programs being efficiently executed (by simple string matching) against a knowledge storage. The paper is missing a clear analysis of NGM's limitations... the examples of knowledge storage from bAbI in the supplementary material are also underwhelming as the model essentially just has to learn to ignore stopwords since the sentences are so simple. In its current form, I am borderline but leaning towards rejecting this paper.\n\nOther questions:\n- is \"n-gram\" really the most appropriate term to use for the symbolic representation? N-grams are by definition contiguous sequences... The authors may want to consider alternatives.\n- why focus only on extractive QA? 
The evaluations are only conducted on 5 of the 20 bAbI tasks, so it is hard to draw any conclusions from the results as to the validity of this approach. Can the authors comment on how difficult it will be to add functions to the list in Table 2 to handle the other 15 tasks? Or is NGM strictly for extractive QA?\n- beam search is performed on each sentence in the input story to obtain knowledge tuples... while the answering time may not change (as shown in Figure 4) as the input story grows, the time to encode the story into knowledge tuples certainly grows, which likely necessitates the tiny RNN sizes used in the paper. How long does the encoding time take with 10 million sentences?\n- Need more detail on the programmer architecture, is it identical to the one used in Liang et al., 2017?\n", "The paper presents an interesting framework for bAbI QA. Essentially, the argument is that when given a very long paragraph, the existing approaches for end-to-end learning becomes very inefficient (linear to the number of the sentences). The proposed alternative is to encode the knowledge of each sentence symbolically as n-grams, which is thus easy to index. While the argument makes sense, it is not clear to me why one cannot simply index the original text. The additional encode/decode mechanism seems to introduce unnecessary noise. The framework does include several components and techniques from latest recent work, which look pretty sophisticated. However, as the dataset is generated by simulation, with a very small set of vocabulary, the value of the proposed framework in practice remains largely unproven.\n\nPros:\n 1. An interesting framework for bAbI QA by encoding sentence to n-grams\n\nCons:\n 1. The overall justification is somewhat unclear\n 2. The approach could be over-engineered for a special, lengthy version of bAbI and it lacks evaluation using real-world data\n", "We thank the reviewer for the insightful feedback.\n\n[lack of sufficient novelty and missing citations] \nWe disagree with the reviewer and would like to clarify the novelty of our proposed framework. The novelty in our framework is the end-to-end objective function (Equation 2), which learns to construct knowledge storage using down-stream QA tasks as weak supervision. This objective function is different from the ones in the related work mentioned by the reviewer. More specifically, 1) comparing to relation extraction, our method does not use expert-defined schema as supervision; 2) comparing to QA over knowledge graph, our method does not assume knowledge graph is given and instead constructs knowledge storage from text. \nAbout the reading comprehension tasks (e.g., SQUAD), they are not comparable to our work since they do not need to solve the search (from a big corpus) problem.\n\n[lack of evaluation] \nWe understand the disappointment about evaluation. At this point, we can only defend that this is a theoretic work, which proposes a novel framework, and points to a new direction of how a long lasting problem in search might be solved.", "We thank the reviewer for the insightful feedback.\n\n[“N-gram” might be a misleading term] \nWe agree that “N-gram” could be misleading, since it commonly means sequences of contiguous words. We are considering other names to use in the future, such as \"skip n-gram\", or “engram” https://en.wikipedia.org/wiki/Engram_(neuropsychology).\n\n[why only extractive QA?] \nExtractive QA is a family of representative tasks in text understanding. 
To handle non-extractive QA tasks, we will need to add other functions, which operate on infinite domains (e.g., mathematical operations). The overall model structure should not change, but is beyond the scope of the current result.\n\n[How long does the encoding take with 10 million sentences?]\nWith our current implementation, scoring 10M sentences would take more than two hours on a single machine without parallelization. A typical commercial search engine uses thousands of machines to encode the meaning of pages (indexing). Even with more complex LSTM structures, scalability is not likely to be an issue for encoding.\n\n[model design overfit the bAbI dataset?]\nWe agree that the n-gram design and function design have limited expressiveness. We are currently working on more datasets to further understand the balance between model expressiveness and learning difficulty.", "We thank the reviewer for the insightful feedback. \n\n[why not index the text directly?] \nThe proposed knowledge encoder is indeed learning to index the text. From an information retrieval perspective, we expect the proposed approach to be a goal-dependent index mechanism, and produces better quality index than traditional indexing approaches. We are not aware of any existing work in this domain.\n\n[sophisticated model but simulated data] \nThe model architecture is not more sophisticated than a directed probabilistic graphical model with two discrete latent variables, as shown in Figure 1. One might say that the inference procedure is complex, but this is a common challenge shared by many graphical models. The code assist and structure tweak techniques are very similar to conditional sampling (e.g. Gibbs sampling). Therefore, the proposed learning method is principled, and the choice of dataset does not affect this. We will clarify this in the final version.\n" ]
[ 4, 5, 4, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_By3v9k-RZ", "iclr_2018_By3v9k-RZ", "iclr_2018_By3v9k-RZ", "rJg3uzqlG", "BkonMcoef", "HyywCCnef" ]
iclr_2018_HkcTe-bR-
Exploring Deep Recurrent Models with Reinforcement Learning for Molecule Design
The design of small molecules with bespoke properties is of central importance to drug discovery. However, significant challenges still remain for computational methods, despite recent advances such as deep recurrent networks and reinforcement learning strategies for sequence generation, and it can be difficult to compare results across different works. This work proposes 19 benchmarks selected by subject experts, expands smaller datasets previously used to approximately 1.1 million training molecules, and explores how to apply new reinforcement learning techniques effectively for molecular design. The benchmarks here, built as OpenAI Gym environments, will be open-sourced to encourage innovation in molecular design algorithms and to enable usage by those without a background in chemistry. Finally, this work explores recent developments in reinforcement-learning methods with excellent sample complexity (the A2C and PPO algorithms) and investigates their behavior in molecular generation, demonstrating significant performance gains compared to standard reinforcement learning techniques.
workshop-papers
The paper creates a dataset for exploration of RL for molecular design, and I think this makes it a strong contribution to the community at the intersection of the two fields. For a methods-focussed conference such as ICLR, however, it may not be the best fit. Hence I would recommend submitting to a workshop track or targeting a more focussed venue such as a bioinformatics conference.
val
[ "ryhZvNRNM", "HJlpDGpEG", "rkUfmabyM", "rkejdYtxz", "S1ZlQfqeM", "r19_YHamz", "HyIP4S6QM", "Hkqv7raQz", "H1P3mSpQf" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The first version of our paper was uploaded to openreview on 27th Oct 2017.\n\nThe paper by Popova et al. was put on arXiv on 29th Nov 2017, that is, about a month later than our paper.", "There is a recently published paper on molecular design with deep reinforcement learning which is not addressed in your work:\nPopova, Mariya, Olexandr Isayev, and Alexander Tropsha. \"Deep reinforcement learning for de-novo drug design.\" arXiv preprint arXiv:1711.10907 (2017).\nThis paper discusses an alternative to GAN-based methods for novel compound generation, using a recurrent neural network with the REINFORCE algorithm, which is relevant to your work. ", "The paper proposes a set of benchmarks for molecular design, and compares different deep models against them. The main contributions of the paper are 19 molecular design benchmarks (with the chembl-23 dataset), including two molecular design evaluation criteria and a comparison of some deep models using these benchmarks. The paper does not seem to include any method development.\n\nThe paper suffers from a lack of focus. Several existing models are discussed at some length, while the benchmarks are introduced only briefly. The dataset is not very clearly defined: it seems that there are 1.2 million training instances; does this apply to all benchmarks? The paper's title also does not seem to fit: this feels like a survey paper, which is not reflected in the title. Biologically, many important atoms are excluded from the dataset, for instance natrium, calcium and kalium. I don't see any reason to exclude these. What does \"biological activities on 11538 targets\" mean? \n\nThe paper discusses molecular generation and reinforcement learning, but it is somewhat unclear how this relates to the proposed dataset since a standard training/test setting is used. Are the test molecules somehow generated in a directed or undirected fashion? Shouldn't there also be experiments comparing ways to generate suitable molecules, and how well they match the proposed criterion? There should be benchmarks for predicting molecular properties (standard regression), and for generating molecules with certain properties. Currently it's unclear which type of problems are solved here.\n\nTable 1 lists 5 models, while Fig. 3 contains 7; why the discrepancy? In Table 1 the plotted runs seem to differ a lot from the average results (e.g. -0.43 to 0.15, or 0.32 to 0.83). Variances should be added, and preferably more than 3 initialisations used.\n\nOverall this is an interesting paper, but it does not have any methodological contribution, there are few insightful results about the compared methods, and there is no meaningful analysis of the problem domain of molecules either.\n", "Summary:\nThis work is about model evaluation for molecule generation and design. 19 benchmarks are proposed, small data sets are expanded to a large, standardized data set, and it is explored how to apply new RL techniques effectively for molecular design.\n\non the positive side:\nThe paper is well written, and the quality and clarity of the work are good. The work provides a good overview of how to apply new reinforcement learning techniques for sequence generation. It is investigated how several RL strategies perform on a large, standardized data set. Different RL models like Hillclimb-MLE, PPO, GAN, A2C are investigated and discussed. An implementation of the 19 suggested benchmarks of relevance for de novo design will be provided as open source as an OpenAI Gym. 
\n\n\non the negative side:\nThere is no novel contribution on the methods side. \n\n\n\nminor comments:\n\nSection 2.1. \nsee Fig.2 —> see Fig.1\npage 4, just before equation 8: the the", "Summary: This paper studies a series of reinforcement learning (RL) techniques in combination with recurrent neural networks (RNNs) to model and synthesise molecules. The experiments seem extensive, using many recently proposed RL methods, and show that most sophisticated RL methods are less effective than the simple hill-climbing technique, with PPO perhaps being the only exception. \n\nOriginality and significance: \n\nThe conclusion from the experiments could be valuable to the broader sequence generation/synthesis field, showing that many current RL techniques can fail dramatically. \n\nThe paper does not provide any theoretical contribution but nevertheless is a good application paper combining and comparing different techniques.\n\nClarity: The paper is generally well-written. However, I'm not an expert in molecule design, so I might not have caught any trivial errors in the experimental set-up. 
As an additional benefit, a train/test split permits the benchmarks to be used with rule-based GOFAI systems, supervised algorithms as well as RL.\n\n* Regarding molecular property prediction: indeed, this is an important sub-field of computational chemistry and is explored under the family of QSAR models [3, 4]. However, that is out-of-scope of this paper, as it attempts to address a different concern.\n\n* Regarding data: the table is a bit overwhelming already, so we chose not to exhaustively show all results for all models and instead focus on representative key models. Due to time and computational constraints, we had not run more than three initializations, but can do so for the revision.\n\n\nWe hope this addresses your concerns.\n\n[1] Glaab, Enrico. \"Building a virtual ligand screening pipeline using free software: a survey.\" Briefings in bioinformatics 17.2 (2015): 352-366.\n[2] Lionta, Evanthia, et al. \"Structure-based virtual screening for drug discovery: principles, applications and recent advances.\" Current topics in medicinal chemistry 14.16 (2014): 1923-1938.\n[3] Tropsha, Alexander. \"Best practices for QSAR model development, validation, and exploitation.\" Molecular informatics 29.6‐7 (2010): 476-488.\n[4] Tropsha, Alexander, and Alexander Golbraikh. \"Predictive QSAR modeling workflow, model applicability domains, and virtual screening.\" Current pharmaceutical design 13.34 (2007): 3494-3504.\n", "\nWe are grateful for your comments. We hope your concern about novelty is addressed with our main comment; indeed, the pairing here is in the algorithm to this particular application area. \nWe further hope that a foundational framework proposed will allow the emergence of future, novel algorithms. Your minor comments have been addressed in the manuscript.\n", "\nWe thank the reviewers for their effort and advice towards improving our submission. We are pleased that the reviewers have identified our important contributions of dataset curation and preprocessing steps, proposed benchmarks, and baseline results using recently-developed algorithms. While we introduce no new reinforcement learning algorithms in this work, our primary aim was to substantially lower the barrier towards automated molecular design to allow computer scientists with no prior background in chemistry to develop novel algorithms to improve molecular design. Indeed, here the novelty lies in the pairing of task and algorithm, and this work is foundational to clearly lay out steps and provide code to apply reinforcement learning algorithms to molecule design. \n\nFinally, we are able to demonstrate results in this manuscript that establish a new state-of-the-art in single and multi objective physicochemical property optimization and chemical space exploration tasks. Subsequent work can then build on this set of standardized molecular design benchmarks to introduce new methods. The benchmark framework is general enough to be used with any possible small molecule generation method, whether rule-based or learned, and is not limited to sequence-based generation relying on SMILES. We have therefore amended the manuscript to reflect the importance of our introduced benchmarks.\n", "We thank the referee for their comments and perspective on our work. \n\nWe hope this reviewer’s comments have been addressed in our overall reply and in the responses to the other reviewers." ]
[ -1, -1, 4, 7, 6, -1, -1, -1, -1 ]
[ -1, -1, 2, 4, 3, -1, -1, -1, -1 ]
[ "HJlpDGpEG", "iclr_2018_HkcTe-bR-", "iclr_2018_HkcTe-bR-", "iclr_2018_HkcTe-bR-", "iclr_2018_HkcTe-bR-", "rkUfmabyM", "rkejdYtxz", "iclr_2018_HkcTe-bR-", "S1ZlQfqeM" ]
iclr_2018_SkBYYyZRZ
Searching for Activation Functions
The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x * sigmoid(beta * x), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
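A minimal sketch of the unit defined in the abstract, f(x) = x * sigmoid(beta * x), written as a PyTorch module with beta as a trainable parameter; the class below is illustrative only, is not the authors' released implementation, and fixing beta = 1 recovers the Swish-1 / SiL variant discussed in the reviews below.

    import torch
    import torch.nn as nn

    class Swish(nn.Module):
        # f(x) = x * sigmoid(beta * x); beta is learned by backpropagation when trainable.
        def __init__(self, beta: float = 1.0, trainable: bool = True):
            super().__init__()
            init = torch.tensor(float(beta))
            self.beta = nn.Parameter(init) if trainable else init

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x * torch.sigmoid(self.beta * x)

    # Drop-in replacement for nn.ReLU() in an existing model definition.
    layer = nn.Sequential(nn.Linear(16, 16), Swish())
    out = layer(torch.randn(4, 16))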
workshop-papers
The authors propose to use Swish and show that it performs significantly better than ReLUs on state-of-the-art vision models. Reviewers and anonymous commenters counter that PReLUs should be doing quite well too. Unfortunately, the paper falls in the category where it is hard to prove the utility of the method through one paper alone, and broader consensus relies on reproduction by the community. As a result, I'm going to recommend publishing to a workshop for now.
train
[ "BkIXIiLNG", "Sy-QnQHef", "Hy7GD19gM", "HylYITVZG", "HJ5pEygNM", "Skfsiap7G", "r1a4oTTmz", "rJMj2S57z", "rkQoM7wmM", "rk32mXwXz", "SkVAW7PXM", "HkC-JdjkG", "S1jZrPjyG", "S1UnJvoyz", "B1sGYLokG", "SkQHfvoA-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "official_reviewer", "author", "author", "author", "public", "public", "public", "public", "public" ]
[ "1. Novelty \n\nThe methodology of searching has been used in Genetic Programming for a long time. The RNN controller has been used in many papers from Google Brain. This paper's contribution is using RL to search in a GP flavor. Although it is new in the activation function search field, from a methodological point of view it is not novel.\n\n2. Theoretical depth\n\nActually, the original BatchNorm and ReLU papers provide explanations of why they work, and those explanations have been accepted by the community for a long time. I understand the deep learning community's experimental flavor, but the activation function is a fundamental problem in understanding how neural networks work. Swish performs similarly or slightly better compared to the commonly used activation functions. Without any theoretical explanation, it is hard to acknowledge it as groundbreaking research. What's more, different activation functions may require different initializations and learning rates. I respect that the authors have enough computation power to sweep, but without any theoretical explanation, the paper is more like an experiment report than a good ICLR paper. \n\n\n", "The authors propose a reinforcement learning based approach for finding a non-linearity by searching through combinations from a set of unary and binary operators. The best one found is termed the Swish unit: x * sigmoid(b*x). \n\nThe properties of Swish, like allowing information flow on the negative side and a linear nature on the positive side, have been proven to be important for better optimization in the past by other functions like LReLU, PLReLU etc. As pointed out by the authors themselves, for b=1 Swish is equivalent to SiL proposed in Elfwing et al. (2017).\n\nIn terms of experimental validation, in most cases the increase in performance when using Swish as compared to other models is a very small fraction. Again, the authors do state that \"our results may not be directly comparable to the results in the corresponding works due to differences in our training steps.\" \n\nBased on Figure 6, the authors claim that the non-monotonic bump of Swish on the negative side is a very important aspect. More explanation is required on why it is important and how it helps optimization. The distribution of learned b in Swish for different layers of a network would be interesting to observe.", "This paper uses reinforcement learning to search for new activation functions. The search space is a combination of a set of unary and binary functions. The search result is a new activation function named Swish. The authors also run a number of ImageNet experiments, and one NMT experiment.\n\nComments:\n\n1. The search function set and method are not novel. \n2. There is no theoretical depth in the searched activation about why it is better.\n3. For leaky ReLU, using a larger alpha will lead to better results, e.g., alpha = 0.3 or 0.5. I suggest adding an experiment with leaky ReLU and a larger alpha. This result has been shown in previous work.\n\nOverall, I think this paper does not meet the ICLR novelty standard. I recommend submitting this paper to the ICLR workshop track. \n\n", "The author uses reinforcement learning to find new potential activation functions from a rich set of possible candidates. The search is performed by maximizing the validation performance on CIFAR-10 for a given network architecture. One candidate stood out and is thoroughly analyzed in the rest of the paper. 
The analysis is conducted across images datasets and one translation dataset on different architectures and numerous baselines, including recent ones such as SELU. The improvement is marginal compared to some baselines but systematic. Signed test shows that the improvement is statistically significant.\n\nOverall the paper is well written and the lack of theoretical grounding is compensated by a reliable and thorough benchmark. While a new activation function is not exiting, improving basic building blocks is still important for the community. \n\nSince the paper is fairly experimental, providing code for reproducibility would be appreciated.", "The authors appear to have made a decision to ignore all comments which are not from reviewers. To be clear, if I were a reviewer, I would score this paper as a 4 with confidence of 4. \n\nIn addition to the above issues, I'd point out that ReLU isn't the only baseline here - to claim a worthwhile contribution, they also need to demonstrate improvement over functions such as PReLU, where the empirical evidence is even weaker to non-existent.", "Thank you for the comment.\n\n[[Our activation only beats other nonlinearities by “a small fraction”]] First of all, we question the conventional wisdom that ReLU greatly outperforms tanh or sigmoid units in modern architectures. While AlexNet may benefit from the optimization properties of ReLU, modern architectures use BatchNorm, which eases optimization even for sigmoid and tanh units. The BatchNorm paper [1] reports around a 3% gap between sigmoid and ReLU (it’s unclear if the sigmoid experiment was with tuning and this experiment is done on the older Inception-v1). The PReLU paper [2], cited 1800 times, proposes PReLU and reports a gain of 1.2% (Figure 3), again on a much weaker baseline. We cannot find any evidence in recent work that suggests that gap between sigmoid / tanh units and ReLU is huge. The gains produced by Swish are around 1% on top of much harder baselines, such as Inception-ResNet-v2, is already a third of the gain produced by ReLU and on par with the gains produced by PReLU. \n\n[[Small fraction gained due to hyperparameter tuning]] We want to emphasize how hard it is to get improvements on these state-of-art models. The models we tried (e.g., Inception-ResNet-v2) have been **heavily tuned** using ReLUs. The fact that Swish improves on these heavily tuned models with very minor additional tuning is impressive. This result suggests that models can simply replace the ReLUs with Swish units and enjoy performance gains. We believe the drop-in-replacement property of Swish is extremely powerful because one of the key impediments to the adoption of a new technique is the need to run many additional experiments (e,g,, a lot of hyperparameter tuning). This achievement is impactful because it enables the replacement of ReLUs that are widely used across research and industry.\n\n[[Searching for betas]] The reviewer also misunderstands the betas in Swish. When we use Swish-beta, one does not need to search for the optimal value of beta because it can be learned by backpropagation.\n\n[[Gradient on the negative side]] We do not claim that Swish is the first activation function to utilize gradients in the negative preactivation regime. We simply suggested that Swish may benefit from same properties utilized by LReLU and PReLU.\n\n[1] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In JMLR, 2015. 
(See Figure 3: https://arxiv.org/pdf/1502.03167.pdf )\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In CVPR, 2015 (See Table 2: https://arxiv.org/pdf/1502.01852.pdf )\n", "We thank the reviewers for their comments and feedback. We are extremely surprised by the low scores for the paper that proposes a novel method that finds better activation functions, one of which has a potential to be better than ReLUs. During the discussion with the reviewers, we have found a few major concerns and misunderstandings amongst the reviewers, and we want to bring it up to a general discussion:\n\nThe reviewers are concerned that our activation only beats other nonlinearities by “a small fraction”. First of all, we question the conventional wisdom that ReLU greatly outperforms tanh or sigmoid units in modern architectures. While AlexNet may benefit from the optimization properties of ReLU, modern architectures use BatchNorm, which eases optimization even for sigmoid and tanh units. The BatchNorm paper [1] reports around a 3% gap between sigmoid and ReLU (it’s unclear if the sigmoid experiment was with tuning and this experiment is done on the older Inception-v1). The PReLU paper [2], cited 1800 times, proposes PReLU and reports a gain of 1.2%, again on a much weaker baseline. We cannot find any evidence in recent work that suggests that gap between sigmoid / tanh units and ReLU is huge. The gains produced by Swish are around 1% on top of much harder baselines, such as Inception-ResNet-v2, is already a third of the gain produced by ReLU and on par with the gains produced by PReLU. \n\nThe reviewers are concerned that the small gains are simply due to hyperparameter tuning. We stress here that unlike many prior works, the models we tried (e.g., Inception-ResNet-v2) have been **heavily tuned** using ReLUs. The fact that Swish improves on these heavily tuned models with very minor additional tuning is impressive. This result suggests that models can simply replace the ReLUs with Swish units and enjoy performance gains. We believe the drop-in-replacement property of Swish is extremely powerful because one of the key impediments to the adoption of a new technique is the need to run many additional experiments (e,g,, a lot of hyperparameter tuning). This achievement is impactful because it enables the replacement of ReLUs that are widely used across research and industry.\n\nThe reviewers are also concerned that our activation function is too similar to the work by Elfwing et al. When we conducted our research, we were honestly not aware of the work by Elfwing et al (their paper was first posted fairly recently on arxiv in Feb, 2017 and to the best of our knowledge, not accepted to any mainstream conference). That said, we have happily cited their work and credited their contributions. We are also happy to reuse the name “SiL” proposed by Elfwing et al if the reviewers see fit. In that case, Elfwing et al should be thrilled to know that their proposal is validated through a thorough search procedure. We also want to emphasize a number of key differences between our work and Elfwing et al. First, the focus of our paper is to search for an activation functions. Any researcher can use our recipes to drop in new primitives to search for better activation functions. Furthermore, our work has much more comprehensive empirical validation. Elfwing et al. 
only conducted experiments on relatively shallow reinforcement learning tasks, whereas we evaluated on challenging supervised benchmarks such as ImageNet with extremely tough baselines and equal amounts of tuning for fairness. We believe that we have conducted the most thorough evaluation of activation functions among any published work.\n\nPlease reconsider your rejection decisions.\n\n[1] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015. (See Figure 3: https://arxiv.org/pdf/1502.03167.pdf )\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In CVPR, 2015 (See Table 2: https://arxiv.org/pdf/1502.01852.pdf )\n", "Yes, I do agree that ReLU is one of the major reason for improvement of deep learning models. But, it is not just because ReLU was able to experimentally beat performance of existing non-linearities by a small fraction.\n\nThe fractional increase in performance on benchmarks can be because of various reasons, not just switching non-linearity. For example, in many cases a simple larger batch size can result in small fractional change in performance. The hyper-parameter settings in which other non-linearities might perform better can be different than the ones more suitable for proposed non-linearity. Also, I do not agree that the search factor helps researchers to save time on trying out different non-linearities, still one has to spend time on searching best 'betas' (which will result in small improvement over benchmarks) for every dataset. I would rather use a more well understood non-linearity which gives reasonable results on benchmarks.\n\nThe properties of the non-linearities proposed in the article like \"allowing information flow on the negative side and linear nature on the positive side\"(also mentioned in my review) have been proven to be important for better optimization in the past by other functions like LReLU, PLReLU etc.\n\nThe results from the article show that Swish-1 ( or SiL from Elfwing et al. (2017)) performs same as Swish.", "1. Can the reviewer explain further why our work is not novel? Our activation function and the method to find it have not been explored before, and our work holds the promise of improving representation learning across many models. Furthermore, no previous work has come close to our level of thorough empirical evaluation. This type of contribution is as important as novelty -- it can be argued that the resurgence of CNNs is primarily due to conceptually simple empirical studies demonstrating their effectiveness on new datasets.\n\n2. We respectfully disagree with the reviewer that theoretical depth is necessary to be accepted. Following this argument, we can also argue that many extremely useful techniques in representation / deep learning, such as word2vec, ReLU, BatchNorm, etc, should not be accepted to ICLR because the original papers did not supply theoretical results about why they worked. Our community has typically followed that paradigm of discovering techniques experimentally and further work analyzing the technique. We believe our thorough and fair empirical evaluation provides a solid foundation for further work analyzing the theoretical properties of Swish.\n\n3. 
We experimented with the leaky ReLU using alpha = 0.5 on Inception-ResNet-v2 using the same hyperparameter sweep, and and did not find any improvement over the alpha used in our work (which was suggested by the original paper that proposed leaky ReLUs).\n", "We don’t completely understand the reviewer’s rationale for rejection. Is it because of the lack of novelty, the inconsistent gains, or the work being insignificant? \n\nFirst, in terms of the work being significant, we want to emphasize that ReLU is the cornerstone of deep learning models. Being able to replace ReLU is extremely impactful because it produces a gain across a large number of models. So in terms of impact, we believe that our work is significant.\n\nSecondly, in terms of inconsistent gains, the signed tests already confirm that the gains are statistically significant in our experiments. These results suggest that switching to Swish is an easy and consistent way of getting an improvement regardless of which baseline activation function is used. Unlike previous studies, the baselines in our work are extremely strong: they are state-of-the-art models where the models are built with ReLUs as the default activation. Furthermore, the same amount of tuning was used for every activation function, and in fact, many non-Swish activation functions actually got more tuning. Thus, it is unreasonable to expect a huge improvement. That said, in some cases, Swish on Imagenet makes a more than 1% top-1 improvement. For context, the gap between Inception-v3 and Inception-v4 (a year of work) is only 1.2%.\n\nFinally, in terms of novelty, our work differs from Elfwing et al. (2017) in a number of significant ways. They just propose a single activation function, whereas our work searches over a vast space of activation functions to find the best empirically performing activation function. The search component is important because we save researchers from the painful process of manually trying out a number of individual activation functions in order to find one that outperforms ReLU (i.e., graduate student descent). The activation function found by this search, Swish, is more general than the other proposed by Elfwing et al. (2017). Another key contribution is our thorough empirical study. Their activation function was tested only on relatively shallow reinforcement learning models. We performed a thorough experimental evaluation on many challenging, deep, large-scale supervised models with extremely strong baselines. We believe these differences are significant enough to differentiate us. \n\nThe non-monotonic bump, which is controlled by beta, has gradients for negative preactivations (unlike ReLU). We have plotted the beta distribution over the each layer Swish here: https://imgur.com/a/AIbS2 . Note this is on the Mobile NASNet-A model, which has many layers composed in parallel (similar to Inception and unlike ResNet). The plot suggests that the tuneable beta is flexibly used. Early layers use large values of beta, which corresponds to ReLU-like behavior, whereas later layers tend to stay around the [0, 1.5] range, corresponding to a more linear-like behavior. ", "The reviewer suggested “Since the paper is fairly experimental, providing code for reproducibility would be appreciated”. We agree, and we will open source some of the experiments around the time of acceptance.\n", "Figure 7 shows an interesting feature that the β=1 is the most prevalent single β value after training. 
Since Swish smoothly varies with β, one can only assume that the reason for this inconsistency was that β was initialized to 1 and that during training this parameter was not adjusted in many cases. The text of the paper should clearly state the initialization value of β.\n\nThe more interesting aspect of this distribution is that over 2x more β values were learned to be better in the range of (0.0 to 0.9) than at the (assumed) starting value of β=1. β’s in this range suggests that larger negative values must have some advantage. \n\nIt would be very interesting to see understand if distribution of β values changes in the different layers of the neural network. Are the β in the range (0.0 to 0.9) more important at higher levels or lower levels. It would also be instructive to see the effects of starting with β at another initial starting value.\n\nSwish approaches x/2 as β approaches inf, why is this better than approaching x in the manner that PReLU does?\n\nWhile the paper asserts the non-monotonic feature of Swish as an important aspect of Swish, but there is nothing that explains why this could be an advantage. In fact for Figure 6 show most negative preactivations are between -6 and 0 and given that Figure 7 shows most β between 0 and 1 most negative values will not be effected by non-monotonic behavior. Might the real lesson of the paper be that a smooth activation function with a smooth and continuous derivative function with a \"learnable\" small domain of negative values is more important for learning and generalization than non-montonicity?", "Figure 8 plot should show PReLU not ReLU since given data in Table 6, PReLU is better than ReLU in every case.\n\nin addition, in many of the other results in the paper LReLU is slightly better than PReLU. The two differences are that LReLU has α=0.01 and PReLU at α=.25 and that α in PReLU is learnable. Looking closely at Swish and PReLU plots, a more comparable starting initialization for PReLU would be α=.10 and it would be somewhat closer to the value the you use for LReLU.\n\nWe suggest rerunning PReLU with α=.10 and putting this result in Figure 8 and Table 6.\n", "Given the distribution of actual learned β values for Swish the were presented in Figure 7, it would be more instructive to show β=0, β=0.3, β=0.5, β=1.0 in Figures 4&5. While β=10.0 is interesting to look at in the 1st derivative plot, it doesn’t seem to have been learned as useful value for β.", "You mention this in the body, but it would be helpful in the related work if you pointed out that (Hendrycks & Gimpel, 2016) considered this activation function but found a slightly different version to be better, and that Elfwing et. al already proposed Swish-1 under a different name. \n\nI see you went from sigmoid(x) -> sigmoid(beta * x) to avoid outright duplication, but empirically it looks like Swish-1 is equal or better than Swish? \n\nTable 3 is a little misleading - the magnitude of the differences is what we really care about, and those magnitudes are quite small.\n\nFigure 8 is a little misleading - ReLU's are far and away the worst on that particular dataset+model, I imagine the plot for existing work like PReLU, which gives basically the same performance, would look very different. \n\nIn the original version, you bolded the non-ReLU activations which provide basically the same perf, but you don't in the new version - why not? 
PReLU is often the same as Swish, but without the bolding it's a lot harder to read.\n\nThe differences in perf are small enough to make me think this is just hyperparameter noise. For instance, you try 2 learning rates for the NMT results; why only 2? What 2 did you choose? Why did you choose them? If you had introduced PReLU, would its numbers be higher? Concrete questions aside, I have a very hard time trusting this paper.", "You state: \"In Figure 6, a large percentage of preactivations fall inside the domain of the bump (−5 ≤ x ≤ 0), which indicates that the non-monotonic bump is an important aspect of Swish.\" \n\nIt seems that non-monotonic behavior is an artifact of your function that could have negative consequences by making a \"bumpier\" loss surface for optimizers. What is the value of Swish approaching 0 as x heads to -inf? Why wouldn't small negative values be sufficient for all negative pre-activations (x ≤ -5)? \n\nWouldn't something like CELU with small alpha in the long run be better? CELU paper:\nhttps://arxiv.org/pdf/1704.07483.pdf" ]
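A short worked check of the limiting behavior of f_beta(x) = x * sigmoid(beta * x), relevant to the questions above about what happens at extreme values of beta (a standard calculation, not text from the paper):

    f_\beta(x) = x\,\sigma(\beta x), \qquad \sigma(z) = \frac{1}{1 + e^{-z}},
    \qquad
    \lim_{\beta \to 0} f_\beta(x) = x\,\sigma(0) = \frac{x}{2},
    \qquad
    \lim_{\beta \to \infty} f_\beta(x) = x\,\mathbf{1}[x > 0] = \max(0, x).

So small beta drives Swish toward the scaled-linear function x/2, while large beta recovers ReLU; a learnable beta therefore interpolates between those two regimes.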
[ -1, 4, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rkQoM7wmM", "iclr_2018_SkBYYyZRZ", "iclr_2018_SkBYYyZRZ", "iclr_2018_SkBYYyZRZ", "B1sGYLokG", "rJMj2S57z", "iclr_2018_SkBYYyZRZ", "rk32mXwXz", "Hy7GD19gM", "Sy-QnQHef", "HylYITVZG", "iclr_2018_SkBYYyZRZ", "iclr_2018_SkBYYyZRZ", "iclr_2018_SkBYYyZRZ", "iclr_2018_SkBYYyZRZ", "iclr_2018_SkBYYyZRZ" ]
iclr_2018_ByQZjx-0-
Faster Discovery of Neural Architectures by Searching for Paths in a Large Model
We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected path is trained to minimize the cross-entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture that achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.
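The abstract describes an alternating procedure: a controller trained with policy gradient picks a path (a set of binary architectural decisions) through a large shared model, while the shared weights are trained on the loss of the selected path. The toy sketch below illustrates that alternation under strong simplifications: an independent-Bernoulli controller instead of an LSTM, a synthetic regression problem standing in for the real task, and the negative validation loss as the reward. It is a sketch of the general idea only, not the authors' implementation.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_decisions = 8  # e.g., candidate connections that can be switched on or off
    controller_logits = nn.Parameter(torch.zeros(n_decisions))
    shared = nn.Sequential(nn.Linear(n_decisions, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_w = torch.optim.Adam(shared.parameters(), lr=1e-2)    # updates shared weights W
    opt_pi = torch.optim.Adam([controller_logits], lr=5e-2)   # updates controller policy
    x_tr, x_val = torch.randn(256, n_decisions), torch.randn(64, n_decisions)
    y_tr, y_val = x_tr.sum(1, keepdim=True), x_val.sum(1, keepdim=True)
    baseline = 0.0

    for step in range(200):
        dist = torch.distributions.Bernoulli(logits=controller_logits)
        mask = dist.sample()  # one sampled "path": which inputs/connections are active
        # 1) Train the shared weights on the training loss of the sampled child model.
        loss_w = ((shared(x_tr * mask) - y_tr) ** 2).mean()
        opt_w.zero_grad()
        loss_w.backward()
        opt_w.step()
        # 2) REINFORCE update for the controller: reward = -validation loss of that child.
        with torch.no_grad():
            reward = -((shared(x_val * mask) - y_val) ** 2).mean().item()
        baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline reduces variance
        loss_pi = -(reward - baseline) * dist.log_prob(mask).sum()
        opt_pi.zero_grad()
        loss_pi.backward()
        opt_pi.step()

    print("learned keep-probabilities:", torch.sigmoid(controller_logits).detach())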
workshop-papers
First off, this was a difficult paper to decide on. There was some vigorous discussion on the paper centering around the choices that were available to the conv-nets. The authors strongly emphasized the improvements on the PTB task. For my part, I think the method is very compelling -- sharing weights for all the models we are optimizing on seems like a great idea -- and that we can make it work is even more interesting. So from this point of view, I think it's a novel contribution and worth accepting. On the other hand, I'm likely to agree with some of the motivations behind the questions raised by R3. Are all the choices really necessary? Perhaps the gains came from just a couple of things like the number of skip connections and channels, etc. That exploration is useful. On the flip side, I think it may be an irrelevant question -- the model is able to make the correct decisions from a big set. The authors emphasize the language modelling part, but for me, this was actually less compelling. The authors use some of the tricks from Merity in their model training (perplexity 52.8), and as a result are already using some techniques that produce better results. Further, PTB is a regularization game -- and that's not really the claim of this paper. One could argue, though, that weight sharing between different models can produce an ensembling / regularization effect and those gains may show up on PTB. A much more compelling story would have been to show that this method works on a large dataset where the impact of the architecture cannot be conflated with controlling overfitting better. As a result, this puts the paper on the fence for me, even though I very much like the idea. Polishing the paper and making a more convincing case for both the CNNs and RNNs will make this paper a solid contribution in the future.
train
[ "BkrqNswgf", "rJaD7VugM", "Bkj-7CYef", "BJI7WwsQz", "rJE-zQ9mG", "Sksb-TtQM", "rkMLW6K7z", "H1uV2IG7M", "B1BcnLGXG", "S1KqcLMmz", "BJREcLfXM", "H1wfqUMXf", "HkAyq8MQf", "HkpjFLfQM", "ryc0uUz7M", "H1ThH37-f", "By51thRez", "BJi-csZgM", "rJpVtZbgG", "rJWmCYxyM", "r1SrSDy1f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public" ]
[ "In the paper titled \"Faster Discovery of Neural Architectures by Searching for Paths in a Large Model\", the authors proposed an efficient algorithm which can be used for efficient (less resources and time) and faster architecture design for neural networks. The motivation of the new algorithm is by sharing parameters across child models in the searching of archtecture. The new algorithm is empirically evaluated on two datasets (CIFAR-10 and Penn Treeback) --- the new algorithm is 10 times faster and requires only 1/100 resources, and the performance gets worse only slightly.\n\nOverall, the paper is well-written. Although the methodology within the paper appears to be incremental over previous NAS method, the efficiency got improved quite significantly. \n ", "Summary: \nThe paper presents a method for learning certain aspects of a neural network architecture, specifically the number of output maps in certain connections and the existence of skip connections. The method is relatively efficient, since it searches in a space of similar architectures, and uses weights sharing between the tested models to avoid optimization of each model from scratch. Results are presented for image classification on Cifar 10 and for language modeling.\n\nPage 3: “for each channel, we only predict C/S binary masks” -> this seems to be a mistake. Probably “for each operation, we only predict C/S binary masks” is the right wording\nPage 4: Stabilizing Stochastic Skip Connections: it seems that the suggested configuration does not enable an identity path, which was found very beneficial in (He. et al., 2016). Identity path does not exist since layers are concatenated and go through 1*1 conv, which does not enable plain identity unless learned by the 1*1 conv.\nPage 5: \n-\tThe last paragraph in section 4.2 is not clear to me. What does a compilation failure mean in this context and why does it occur? And: if each layer is connected to all its previous layers by skip connections, what remains to be learned w.r.t the model structure? Isn’t the pattern of skip connection the thing we would like to learn?\n-\tSome details of the policy LSTM network are also not clear to me:\no\tHow is the integer mask (output of the B channel steps) encoded? Using 1-hot encoding over 2^{C/S} output neurons? Or maybe C/S output neurons, used for sampling the C/S bits of the mask? this should be reported in some detail.\no\tHow is the mask converted to an input embedding for the next step? Is it by linear multiplication with a matrix? Something more complicated? And are there different matrices used/trained for each mask embedding (one for 1*1 conv, one for 3*3 conv, etc..)?\no\tWhat is the motivation for using equation 5 for the sampling of skip connection flags? What is the motivation for averaging the winning anchors as the average embedding for the next stage (to let it ‘know’ who is connected to the previous?). Is anchor j also added to the average?\no\tHow is equation 5 normalized? That is: the probability is stated to be proportional to an exponent of an inner product, but it is not clear what the constant is and how sampling is done.\n\nPage 6: \n-\t Section 4.4: what is the fixed policy used for generating models in the stage of training the shared W parameters? 
(this is answered on page 7)\nExperiments:\n-\tThe accuracy figures obtained are impressive, but I’m not convinced the ENAS learning is the important ingredient in obtaining them (rather than a very good baseline)\n-\tSpecifically, in the Cifar-10 example it does not seem that the network chooses the number of maps in a way which is diverse or different from layer to layer. Therefore we do not have any evidence that the LSTM controller has learnt any interesting rule regarding block type, or the relation between block type, width and layer index. All we see is that the model does not choose too many maps, thus avoiding significant overfit. The relevant baseline here is a model with 64 or 96 maps on each block, each layer. Such a model is likely to do as well as the ENAS model, and can be obtained easily with slight tuning of a single parameter.\n-\tSimilarly, I’m not convinced the skip connection pattern found for Cifar-10 is superior to the standard DenseNet or ResNet pattern. The found configuration was not compared to these baselines. So again we do not have evidence showing the merit of keeping and tuning many parameters with REINFORCE.\n-\tThe experiments with Penn Treebank are described in too little detail: for example, what exactly is the task considered (in terms of input-output mapping), what is the dataset size, etc.\n-\tAlso, for the Penn Treebank experiments no baseline is given, so it is not possible to understand if the structure learning here is beneficial. Comparison of the results to an architecture with all skip connections, and with a single skip connection per layer, is required to estimate if useful structure is being learnt.\n\nOverall:\n-\tPro: the method gives high accuracy results \n-\tCons: \no\tIt is not clear if the ENAS search is responsible for the results, or just the strong baseline. The advantage of ENAS over plain hyper-parameter choosing was not sufficiently established.\no\tThe controller was not presented in a clear enough manner. Many of its details stay obscure.\no\tThe method does not seem to be general. It seems to be limited to choosing a specific set of parameters in a very specific scenario (a scenario which enables parameter sharing between models; the conditions for this to happen seem to be rather strict, and were not elaborated).\n\nAfter revision:\nThe controller is now better presented.\nHowever, the main points were not changed:\n - ENAS seems to be limited to a specific architecture and search space, in which probably the search is already exhausted. For example for the image processing network, it is determining the number of skip connections and structure of a single layer as a combination of several function types. We already know the answers to these search problems (denser skip connection pattern works better, more function types in a layer in parallel do better, the number of maps should be adjusted to the complexity and data size to avoid overfit). ENAS does not reveal a new surprising architectures, and it seems that instead of searching in the large space it suggests, one can just tune a 1-2 parameters (for the image network, it is the number of maps in a layer).\n - Results comparing ENAS results to the simple baseline of just tuning 1-2 hyper parameters were not shown. 
I hence believe the strong empirical results of ENAS are a property of the search space (the architecture used) and not of the search algorithm.", "In this paper, the authors look to improve Neural Architecture Search (NAS), which has been successfully applied to discovering successful neural network architectures, albeit requiring many computational resources. The authors propose a new approach they call Efficient Neural Architecture Search (ENAS), whose key insight is parameter sharing. In NAS, the practitioners have to retrain for every new architecture in the search process, but in ENAS this problem is avoided by sharing parameters and using discrete masks. In both approaches, reinforcement learning is used to learn a policy that maximizes the expected reward of some validation set metric. Since we can encode a neural network as a sequence, the policy can be parameterized as an RNN where every step of the sequence corresponds to an architectural choice. In their experiments, ENAS achieves test set metrics that are almost as good as NAS, yet require significantly less computational resources and time.\n\nThe authors present two ENAS models: one for CNNs, and another for RNNs. Initially it seems like the controller can choose any of B operations in a fixed number of layers along with choosing to turn on or off ay pair of skip connections. However, in practice we see that the search space for modeling both skip connections and choosing convolutional sizes is too large, so the authors use only one restriction to reduce the size of the state space. This is a limitation, as the model space is not as flexible as one would desire in a discovery task. Moreover, their best results (and those they choose to report in the abstract) are due to fixing 4 parallel branches at every layer combined with a 1 x 1 convolution, and using ENAS to learn the skip connections. Thus, they are essentially learning the skip connections while using a human-selected model. \n\nENAS for RNNs is similar: while NAS searches for a new architecture, the authors use a recurrent highway network for each cell and use ENAS to find the skip connections. Thus, it seems like the term Efficient Neural Architecture Search promises too much since in both tasks they are essentially only using the controller to find skip connections. Although finding an appropriate architecture for skip connections is an important task, finding an efficient method to structure RNN cells seems like a significantly more important goal.\n\nOverall, the paper is well-written, and it brings up an important idea: that parameter sharing is important for discovery tasks so we can avoid re-training for every new architecture in the search process. Moreover, using binary masks to control network path (essentially corresponding to training different models) is a neat idea. It is also impressive how much faster their model performs on tasks without sacrificing much performance. The main limitation is that the best architectures as currently described are less about discovery and more about human input -- finding a more efficient search path would be an important next step.", "The reviewer has many concerns, but we believe that the reviewer is not impressed by the search space, and the fact that the search space is not interesting. We have a result in the paper where we ran ENAS with a general search space, and search for both skip connection patterns and operations at each layer (Section 4.2, last paragraph of page 7). 
This search space is as general as the search space in the original NAS paper [1], and is 16M larger than the constrained search spaces. ENAS found a model that achieves 4.23% test error. This result is on par with one of the best human-designed architectures in 2016: WideResNet (4.17% test error).\n\nEven within the restricted search space over patterns skip connections, the patterns in the subspace only **look** similar. They have a wide range of accuracy: a randomly chosen pattern of skip connections has test error 5.11% (our previous comment); densenet pattern has test error 4.07%; the best pattern that ENAS finds has test error 3.87%. The relative improvement compared to the random baseline is (ENAS - densenet) / (ENAS - random) = 0.16, which is statistically significant.\n\nWe understand that the reviewer may not understand the significance of the new recurrent cells and 57.8 perplexity. Here we want to explain its significance: Recurrent Highway Networks [2] was accepted at ICML 2017 by making an improvement of 3 perplexity on a strong baseline, going from 68.5 to 65.4. Their paper is cited 72 times within a year. Here, we are making a similar improvement of 4.6 perplexity on a much stronger baseline, going from 62.4 to 57.8, and setting the state-of-the-art among automatic model design methods. \n\n[1] Barret Zoph, Quoc V. Le. Neural Architecture Search with Reinforcement Learning. In ICLR, 2017.\n[2] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks. In ICML, 2017.\n", "I am here briefly referring to the author's points:\n\n1) The claim is not that the search space is small and we have visited every point of it. Instead it is that it is not very 'interesting' in the sense that most of the architectures in it are similar to each other and the accuracy in this subspace is actually determined by a small number of hyper parameters (number of maps, density of skip connections). The search is hence in an over parametrized space which is just seemingly large\n2) This is the same point as 1). It is clear to me that ENAS looks for the pattern of skip connection and not only their number. However, I'm not convinced that the pattern is very important. and it seems that the number and maybe 1 additional parameter (whether connection should be focused on lower or upper layers) are important.\n3,4) the results of standard dense net with varying number of maps per layer are very important, as they provide a real relevant comparison baseline for ENAS. However, 1) these results were not, and are still not presented in the paper, and 2) looking at then, I believe they support my intuition rather than disprove it: it seems that most of the accuracy gained by ENAS (4.07 vs. 3.87) can be gained from searching in a 1-D search space: Dense net with varying number of maps per layer.\nRegarding the LSTM accuracy obtained: I'm less familiar with SOA in text modeling cells, so it is hard for me to judge the cell found by ENAS and its ingenuity.\n\n5) Indeed the success of a NAS algorithm depends on the quality of the search space. ENAS is able to obtain good results because its search space includes state of the art architectures. However, as stated above, as far as I can see, the search space formulated is heavily over parametrized: it tries to tune many parameters with huge number of possibilities, where a much smaller space of 2-3 dimensions to tune would have obtained almost the same results. 
Results checking this view (for example, the fact that tuning a single parameter (the number of maps in a dense net) gives almost all the accuracy obtained by ENAS) are not clearly presented in the paper. As far as I can see, ENAS formulates a too-complicated search space and does not seem to benefit from it a lot.", "We thank the reviewer for reading our rebuttals, recognizing that the controller has been better presented, and updating the score. However, we are still unsatisfied with the current evaluation of our work, especially based on the remaining concerns that the reviewer has raised. We have supporting evidence to completely address the concerns raised by the reviewer.\n\n*** Note that the paper has been updated with a new / much cleaner search space and better results on the PTB dataset. See Section 3.1 and 4.1. We achieved the perplexity of 57.8, much better than NAS’s previous result of 62.4, and the recurrent cell that ENAS found cannot be obtained with hyper-parameters tuning. We suspect that the reviewer did not see these updates. We use these results in the comments below to address some concerns by the reviewer. We encourage the reviewer to take a look at these sections before reading the rebuttal below. \n\n1. The reviewer comments “ENAS seems to be limited to a specific architecture and search space, in which probably the search is already exhausted.”\n\nWe argue that ENAS and NAS are fundamentally equivalent. Searching a path within a model and searching different operation per step are the same.\n\nWe disagree with this statement “the search is already exhausted”. As presented in Section 3.1, the size of the search space for recurrent cells is about 8.03 × 10^15. ENAS has only seen 735,000 architectures, which is very far away from exhausting the search space (1 in 10^10). We are likewise only able to search for 759500 architectures of our convolutional search space, where the search space size is about 1.6 × 10^29.\n\n2. The reviewer comments “For example for the image processing network, it is determining the number of skip connections and structure of a single layer as a combination of several function types.”\n\nThis observation is also not true. ENAS does not only find “the number of skip connections” but also finds what are those skip connections. It is very clear from Figure 5-Right in our revision. There are 26 skip connections. If one is told that a network with 12 layers needs 26 skip connections, there are still (12C2)C26 = 1.65 × 10^18 possible choices.\n\n3. The reviewer comments “We already know the answers to these search problems (denser skip connection pattern works better,”\n\nWe disagree with this intuition and we have the evidence to support our disagreement. Denser skip connection patterns require more parameters (for conv 1x1), which may lead to overfitting. Indeed, we did a controlled experiment, and the DenseNet pattern (connecting every pair of layers) achieves 5.23% test error, which is worse than the pattern of skip connections found by ENAS. (Note: we have mentioned this in our first round of rebuttal comments)\n\n4. The reviewer comments “ENAS does not reveal a new surprising architectures, and it seems that instead of searching in the large space it suggests, one can just tune a 1-2 parameters (for the image network, it is the number of maps in a layer). ... 
Results comparing ENAS results to the simple baseline of just tuning 1-2 hyper parameters were not shown.”\n\nThe reviewer is concerned that ENAS didn’t find any surprising architectures. We disagree and argue that the recurrent cell is surprising and cannot easily designed manually (Figure 4 in our revision). It is subjective but one can also argue that the architectures on CIFAR-10 aren’t obvious. \n\nIn terms of hyperparameter tuning vs architecture search, we have the following pieces of evidence:\n\nFirst, we tuned the number of maps at each layer of the DenseNet pattern (64, 128, and 256). The best test error we could get was 4.07%. Second, we randomly sampled a pattern of skip connections, and then tuned the number of maps at each layer (we tried 48, 64, 128, 256, and 512). The lowest test error we could get by doing so is 5.11%. Both of these test errors are worse than the 3.87% obtained by ENAS. We’re happy to add these results to the paper.\n\nThirdly, Melis et al (2017) [1] has performed extensive tuning of an LSTM network. Zoph and Le (2017) also reported to have done a grid search over hyper-parameters. Both of them used intensive computing resources to tune way more than 1-2 hyper-parameters, and yet neither achieved a performance as good as our ENAS recurrent cell (58.9 and 62.4 perplexity, compared to 57.8 by ENAS), and we haven’t tuned any hyper-parameters! It is thus clear that hyper-parameters tuning will not lead to comparable performance with ENAS, at least not without a good model. \n\n[to be continued]", "5. The reviewer comments “I hence believe the strong empirical results of ENAS are a property of the search space (the architecture used) and not of the search algorithm.”\n\nWe agree with this statement “the strong empirical results of ENAS are a property of the search space”. However, the performance of *any* NAS algorithm depends on the search space. Please see the improvements in results from the original NAS paper [2] to the latest NAS paper [3] (state-of-art in CIFAR10, ImageNet), where the change is mainly in the search space. To quote their abstract in [3] “Our key contribution is the design of a new search space which enables transferability.”\n\nWe disagree with your comment that the strong results of NAS is not due to the search algorithm. Here’s some supporting evidence:\n\n- Section 4.1, just above Table 1. The recurrent cell that ENAS finds does not have identity or sigmoid activations, while they are available in the search space. ENAS learns to ignore them. Furthermore, random perturbations in the ENAS recurrent cell worsen the result.\n- Section 4.3, just above Table 3. Without properly training the search algorithm of ENAS, one cannot find a good network architecture. In fact, an independent researcher has commented below that “the result of Sanity Check with Ablation Study section imply that REINFORCE successfully learned a competitive architecture from all the possible ones”.\n\n[1] Gabor Melis, Chris Dyer, and Phil Blunsom. On the state of the art of evaluation in neural language models. Arxiv, 1707.05589, 2017.\n[2] Barret Zoph, Quoc V. Le. Neural Architecture Search with Reinforcement Learning. ICLR, 2017.\n[3] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le. Learning Transferable Architectures for Scalable Image Recognition. arXiv, 1707.07012, 2017.", "| Did you alternate more frequently to see what happens?\n\nWe haven't try alternating more often. 
We suspect that this will make little change to CIFAR-10, but will perhaps change the results for PTB. This is because in PTB, one carries over the last RNN stage.\n\n| What if you replace the reward with loss function or some combination of loss function and classification accuracy?\n\nWe haven't tried this. However, there is a reason that we chose the reward as in the paper. If you look at the appendix in our original submission, you'd see that those reward functions are chosen in a way similar to Expectation-Maximization updates: they lead to a surrogate for our objective.\n\n| If you apply ENAS longer, say for 3 days, will the resulting performance be better? Did you find the performance to converge around the 300 epochs?\n\nWe suspect there will be no further improvement, as we observed that the entropy of the controller's samples had converged to a small value. However, further improvements are possible if one comes up with an alternative search space. This is the case for the new Penn Treebank results in our revision (57.8 perplexity).\n\n| After the 300 epochs, you sampled many architectures and determined the one with the highest reward to be one trained from scratch. How many architecture samples did you find to be enough (for convergence)?\n\nIn our 300 epochs, about 300 * 2000 = 600,000 architectures were sampled. We didn't keep track of whether these architectures were different, and we suspect that towards the end of the training process, the sampled architectures were very similar to each other.", "We thank the Anonymous poster of this comment.", "We thank the reviewers for their comments.\n\nA general concern shared among the reviewers is that the paper is not clear. To address this concern, we have completely rewritten the paper. Please see our most recent revision for the changes. We did so with two main goals:\n\n1. Make the presentation of ENAS easier to follow. We tried to present examples of how ENAS and its controller work (Section 3 in our revision), and moved the detailed implementations to our Experiments and Appendix. We will eventually release our implementation of ENAS to clear all ambiguities for readers who wish to replicate our method.\n\n2. Present new experimental results. We conducted experiments on other search spaces, as suggested by the comments of AnonReviewer2 and AnonReviewer3. With these experiments, we hope to show that ENAS is a general method, and that ENAS can discover good models with minimal human input (Section 4 in our revision). For details, please see our revision as well as the comments delivered to each reviewer.\n\nWe were dismayed by the low scores that we received. Not only does ENAS speed up NAS by an order of magnitude, but in our revision, ENAS also achieves state-of-the-art performance on Penn Treebank among automatic model design approaches (57.8 perplexity, without extensive hyper-parameter tuning). Other ICLR 2018 submissions, e.g., Paper 1 and Paper 323, also try to address the expensive use of time and computing resources by NAS approaches. They have worse empirical results than ENAS, and yet receive better scores. We thus hope that the reviewers will reconsider their judgement of our work.\n\nReferences:\n[1] Paper 1: SMASH: One-Shot Model Architecture Search through HyperNetworks (https://openreview.net/forum?id=rydeCEhs-)\n[2] Paper 323: A Flexible Approach to Automated RNN Architecture Generation (https://openreview.net/forum?id=SkOb1Fl0Z)", "We thank the reviewer for their review. 
To address reviewers’ concerns, we have completely rewritten the paper to make it easier to follow and we have also included new experimental results (all reported in our revision). We sincerely hope that the reviewer will update positively on our revised paper. We believe that ENAS delivers extremely non-trivial contributions to architecture search approaches. \n\nFirst, while the idea of ENAS is simple - sharing parameters between architectures so that they don’t need to be trained again - we believe that this idea is far from incremental. A lot of details are needed to share the parameters appropriately, e.g. the design choice of representing all child models in a large graph (see Sections 3 and 4 in our revision for more details). ENAS leads to two orders of magnitude reduction of computing resource and an order of magnitude reduction of time consumed, all without sacrificing much performance.\n\nSecond, in our revision, we showed that ENAS actually achieves a better perplexity than NAS on Penn Treebank. We designed a different search space, in which ENAS found a recurrent cell that achieves 57.8 perplexity on Penn Treebank (compared to NAS’s 62.4 perplexity), and establishes the state-of-the-art performance on automatic model design on Penn Treebank. Furthermore, this result is also achieved without extensive hyper-parameters tuning (Melis et al., 2017). Details are in Section 4.1 of our revision.\n\nFinally, we find that among other submissions to ICLR 2018, some papers address the similar problem with ENAS: the computational expense of automatic model designing approaches. For example SMASH [1] is a method that automatically designs networks for image classification tasks. As of now, despite the fact that SMASH performs worse than ENAS and was only applied to images and not texts, their paper has a much better score than ours (SMASH has an averaged score of 6 while our paper got an averaged score of 5). We sincerely hope that you will give ENAS another consideration.\n\n[1] Paper 1: SMASH: One-Shot Model Architecture Search through HyperNetworks (https://openreview.net/forum?id=rydeCEhs-)\n", "While we hope that our revision will improve your overall understanding of the paper, we will answer your specific questions below:\n\nPage 3: “for each channel, we only predict C/S binary masks” -> this seems to be a mistake. Probably “for each operation, we only predict C/S binary masks” is the right wording.\n=> Thank you. This is indeed the incorrect wording. We have presented this search space differently in the revision.\n\nPage 4: Stabilizing Stochastic Skip Connections: it seems that the suggested configuration does not enable an identity path, which was found very beneficial in (He. et al., 2016). Identity path does not exist since layers are concatenated and go through 1*1 conv, which does not enable plain identity unless learned by the 1*1 conv.\n=> Indeed, in all ENAS search spaces presented in the paper, there is no identity path. When we designed ENAS, we thought that since ENAS allows each layer to be sent stochastically to any layer above, each layer should at least go through a different transformation in its skip connections. However, thanks to your comment, we believe that this requirement can be relaxed. We will experiment with identity skip connections in the final revision of the paper.\n\nPage 5: \n-\tThe last paragraph in section 4.2 is not clear to me. What does a compilation failure mean in this context and why does it occur? 
And: if each layer is connected to all its previous layers by skip connections, what remains to be learned w.r.t the model structure? Isn’t the pattern of skip connection the thing we would like to learn?\n=> We present this part differently in the revision. Please refer to the paragraph “Search Spaces” in Section 4.2 of the revision.\n\n-\tSome details of the policy LSTM network are also not clear to me:\no\tHow is the integer mask (output of the B channel steps) encoded? Using 1-hot encoding over 2^{C/S} output neurons? Or maybe C/S output neurons, used for sampling the C/S bits of the mask? this should be reported in some detail.\n=> Each mask is an integer between 1 and 2^(C/S) - 1, and is encoded using one-hot encoding.\no\tHow is the mask converted to an input embedding for the next step? Is it by linear multiplication with a matrix? Something more complicated? And are there different matrices used/trained for each mask embedding (one for 1*1 conv, one for 3*3 conv, etc..)?\n=> Each mask has its own embedding. If there are B x (2^(C/S) - 1) possible masks (one for each operation), then there will be B x (2^(C/S) - 1) embedding vectors.\no\tWhat is the motivation for using equation 5 for the sampling of skip connection flags? What is the motivation for averaging the winning anchors as the average embedding for the next stage (to let it ‘know’ who is connected to the previous?). Is anchor j also added to the average?\n=> The idea of using attention weights to sample skip connections is inspired from Zoph and Le (2017). In their paper, Zoph and Le sampled each connection using a Bernoulli. In ENAS, we sample multiple connections using a multinomial. We average the winning anchors to tell the controller LSTM which previous layers have been sampled. Anchor j is not added to the average.\no\tHow is equation 5 normalized? That is: the probability is stated to be proportional to an exponent of an inner product, but it is not clear what the constant is and how sampling is done.\n=> The probabilities are normalized by the sum of exp(.) for all previous steps.\n", "4) More details on Penn Treebank experiments have been added. We designed a different search space for recurrent cells, in which ENAS finds a novel recurrent cell that achieves 57.8 test perplexity on Penn Treebank. We have reported this new result in our revision (Sections 4.1 and 5.1). Let’s call this the ENASCell. \n\nENASCell is very novel compared to the recurrent highway network (see Figure 4 in our revision). While the search space for ENASCell still uses highway connections, the ENAS controller has discovered several novelties:\n- the use of the ReLU activation, unlike in recurrent highway network where only the tanh activation is used\n- the pattern of connections within the ENASCell\nWe have also mentioned in the revision that ENASCell is, in a sense, a local optimum. If we slightly vary its components, its performance drops. This means that ENASCell is not trivial to find, affirming the role of ENAS.\n\nAdditionally, to our knowledge, ENASCell’s perplexity of 57.8 is the state-of-the-art among automatic model design approaches on Penn Treebank. It outperforms NAS (62.4 perplexity), which uses two orders of magnitude more computations and way more time. 
ENASCell, with almost no hyper-parameter tuning, also outperforms LSTM with extensive hyper-parameters tuning (59.5 perplexity) (Melis et al., 2017).\n\nBased on these results, we believe that it is clear that: 1) ENAS a crucial component to our design of novel architectures that achieves good performances and 2) ENAS is a general method: whenever a search space is specified, ENAS is applicable. ENAS’s performance indeed depends on the search space, but this is also the case with other NAS methods.\n\nFinally, we can compare ENAS to other ICLR 2018 submissions that address the computational expense of automatic model designing approaches. SMASH [1] is a method that automatically designs networks for image classification tasks. As of now, despite the fact that SMASH performs worse than ENAS and was only applied to images and not texts, their paper has an average score of 6 while our paper received an averaged score of 5. \n\nWe believe that ENAS delivers significant contributions to automatic model designing, and that ENAS has compelling advantages compared to hyperparameter tuning, especially in its ability to achieve good performances with a low usage of computing resource and time. We sincerely hope that you will give ENAS a reconsideration.\n\n[1] Paper 1: SMASH: One-Shot Model Architecture Search through HyperNetworks (https://openreview.net/forum?id=rydeCEhs-)\n----------------------------------------\n", "We thank the reviewer for the comments. We were very dismayed by the low score. Subsequently, we have completely rewritten the paper to make it easier to follow .We have also included new experimental results to address the reviewer’s concerns. All the results are reported in our revision. We hope that our revisions of the paper can clear the reviewer’s reservation about ENAS’s ability.\n\nIn the following, we try to address the reviewer’s concerns.\n\nFirst, we have completely rewritten the paper for clarity. We focused on delivering the high level ideas, and we moved a lot of implementation details into our appendix. To address the reviewer’s particular comment about the lack of details on the controller, we refer the reviewer to Section 3 in our revision.\n\nAs evidenced by the anonymous comment above, our previous presentation is sufficient for independent researchers to to implement our method. Therefore, we believe that our revision, which aims to improve the original presentation, does not obscure ENAS’s details. After the reviewing cycle, we will also publish our code, which we hope will clear any ambiguity about ENAS.\n\nSecond, in our revision, we have included new experimental results to address the reviewer’s concerns that ENAS is not general, and whether ENAS is responsible for the good results. We summarize them below:\n\n1) ENAS is indeed responsible for the results. This information was in our original submission. In our revision, it is highlighted in the paragraph “Sanity Check and Ablation Study” at the beginning Section 4.3. In particular, a model randomly sampled from our search space does not perform as well as a model sampled by the ENAS controller. Also, we if train ENAS without training its controller, performance is much worse. Both observations, as presented in the paper, indicate the importance of ENAS.\n\nTo further address the reviewer’s concerns, we have conducted more controlled experiments. Following are their results:\n\n1a) 64 or 96 maps on CIFAR-10 models. 
Sure, a model with randomly chosen 64 or 96 maps on each block, each layer may perform similarly to ENAS. However, in this search space, the controller can take up to 256 maps. Without ENAS, a random model designer would select roughly 128 maps at each block, each layer. If you haven’t seen ENAS’s decisions to pick 64 or 96 maps, would you think of such a baseline? We do agree that a slight tune of hyper-parameters may also lead to this model. However, in other search spaces (e.g. see Section 4.2 in our revision), where one needs to figure out the skip connections, the tuning of hyper-parameters won’t be as “slight”.\n\n1b) The pattern of skip connections found by ENAS is indeed better than the DenseNet pattern and the ResNet pattern. In our settings, the DenseNet pattern (connecting every pair of layers) achieves 5.23% test error, and the ResNet pattern (connecting each layer to the next) achieves 6.01% test error. We also note that the DenseNet and the ResNet patterns in our settings are not the DenseNet and ResNet in their original papers. The reason for the differences lies in the design choice of our search spaces: we make skip connections go through a conv1x1 instead of concatenation as in the original DenseNet (Huang et al., 2016), or identity and addition as in the original ResNet (He et al., 2015). Such design choice may be sub-optimal, and we will try the identity skip connections in our next revision. However, our controlled experiment does show that in our search space, the skip connections that ENAS finds do achieve non-trivial improvements compared to standard patterns.\n\n2) ENAS is a general method. To see this, note that one way to do programming is to search for a path in a bigger program, where all operations are available at every step. In ENAS, the computations in a neural architecture can be viewed as a program, which is represented as directed acyclic graph (DAG) (see Section 1 of our revision). To apply ENAS to any task, e.g. designing a convolutional network, or designing a recurrent cell, one only needs to specify the DAG’s components (examples are now in Section 3 of our revision).\n\n3) We further elaborate point 2) above by applying ENAS to different search spaces. First, we use ENAS to search for both skip connections and layer operations (convolutions with different filter sizes, or average pooling, or max pooling). It turns out that in this search space, ENAS could discover a model with CIFAR-10 test error of 4.23%. This resulting model is comparable to the model found in the restricted search space over convolutions and pooling masks. Therefore, ENAS works in this search space.\n\n[to be continued]", "We thank the reviewer for the comments. We were very dismayed by the low score. Subsequently, we have completely rewritten the paper to make it easier to follow. We have also included new experimental results to address the reviewer’s concerns. All the results are reported in our revision and are summarized below. We hope that our revisions of the paper can clear the reviewer’s reservation about ENAS’s ability.\n\nSummary of New Results: \n\n1) The reviewer is concerned that ENAS can only search small search spaces. This is not the case. We used ENAS to search for both skip connections and layer operations (convolutions with different filter sizes, or average pooling, or max pooling). It turns out that in this large search space, ENAS could also discover a model with CIFAR-10 test error of 4.23%. 
This resulting model is comparable to the model found in the restricted search space over convolutions and pooling masks. Therefore, search space size is not a limitation of ENAS.\n\n2) The reviewer is concerned that the best ENAS model is “less about discovery and more about human input.” We show that ENAS can do well with less human inputs. In particular, we took the pattern of skip connections discovered by ENAS (Figure 5-Right in our revision), and simply increased the number of output channels at each layer from 256 to 512. The resulting model achieves 3.87% test error on CIFAR-10. In the original paper, the best ENAS result on CIFAR-10 is 3.86% test error, achieved by using multiple branches at each layer. Therefore, we showed that the model found by ENAS, with minimal human inputs, can achieve a similar performance to models that are designed with more human inputs.\n\nWe note that ENAS’s principle, i.e. searching for a path in a big model is general. Under this principle, we can do whatever other NAS approaches can do. We also note that the human input of increasing the number of output channels was also performed by the original NAS paper (Zoph and Le, 2017).\n\n3) We designed a different search space for recurrent cells. In this search space, ENAS finds a novel recurrent cell that achieves 57.8 test perplexity on Penn Treebank. We have reported this new result in our revision (Sections 4.1 and 5.1). Let’s call this the ENASCell. ENASCell is very novel compared to recurrent highway network (see Figure 4 in our revision). While the search space for ENASCell still benefits from highway connections, the ENAS controller has discovered several novelties:\n- the use of the ReLU activation, unlike in recurrent highway network where only the tanh activation is used\n- the pattern of connections within the ENASCell\nTo our knowledge, ENASCell’s perplexity of 57.8 is the state-of-the-art among automatic model design approaches on Penn Treebank. ENASCell outperforms NAS (62.4 perplexity), which uses two orders of magnitude more computations and one order of magnitude more time. ENASCell, with almost no hyper-parameters tuning, also outperformed LSTM with extensive hyper-parameters tuning (59.5 perplexity) (Melis et al., 2017).\n\nWe now compare ENAS to other ICLR 2018 submissions which address similar problems, i.e. computationally inexpensive approaches for automatic model designing. In particular, Paper 1 presents SMASH, a method that automatically designs networks for image classification tasks, and Paper 323 presents a method that automatically designs recurrent cells. As of now, despite the fact that their methods perform worse than ENAS, both papers receive at least one 7 in their reviews.\n\nWe believe that ENAS’s contributions are significant, both in the novelty of its idea and in the significance of its results. We sincerely hope that you will give ENAS another consideration.\n\n\n[1] Paper 1: SMASH: One-Shot Model Architecture Search through HyperNetworks (https://openreview.net/forum?id=rydeCEhs-)\n[2] Paper 323: A Flexible Approach to Automated RNN Architecture Generation (https://openreview.net/forum?id=SkOb1Fl0Z)\n", "Doesn't the result of Sanity Check with Ablation Study section imply that REINFORCE successfully learned a competitive architecture from all the possible ones? I think the resulting performance being not much better than that of the baseline is because of the chosen search space. If they adapted the search space of \"Learning Transferable ...\" by Zoph et. 
al., they would be able to achieve a comparable performance given they used PPO instead, since they achieved the performance comparable to that of NAS by Zoph & Le. I think it's bit too harsh to give 4 for a paper that reduced the computation cost of NAS to 1/100. SMASH achieved much higher score from the reviewers, but they achieved a similar performance. It relies on parameter sharing assumption as well, and they demonstrated that the assumption is valid in their case and therefore reasonable to assume for similar cases. Since the author of ENAS cited SMASH paper, I don't think they have to mention the assumption. You claim that many of the details of the controller are obscure, but we, a third party, didn't experience much difficulty in implementing this algorithm for CNN part after asking a few questions below. So, I'd argue that just a few are obscure, which happens among successful papers as well. ", "After an epoch of omega updates, theta updates were performed. Did you alternate more frequently to see what happens?\n\nWhat if you replace the reward with loss function or some combination of loss function and classification accuracy?\n\nIf you apply ENAS longer, say for 3 days, will the resulting performance be better? Did you find the performance to converge around the 300 epochs? \n\nAfter the 300 epochs, you sampled many architectures and determined the one with the highest reward to be one trained from scratch. How many architecture samples did you find to be enough (for convergence)? ", "Thanks for the question.\n\nThe input to each time step of the controller RNN at time step t is the embedding of the decision sampled from time step t-1. As there are 2^(C/S)-1 possible decisions at each time step, the embedding matrix has 2^(C/S)-1 rows, which are shared among all 6 operations (1x1_conv, 3x3_conv, 5x5_conv, 7x7_conv, max_p, and avg_p). So to answer your second question, the mask indices from the same embedding matrix are shared among these 6 operations.\n\nMeanwhile, the skip_anchor is treated differently. The input it provides to the next step (which decides the mask for a 1x1_conv) is the mean of the previous anchor steps that get sampled. We describe this in the paragraph right below Equation (5) in the paper.", "In Figure 2 of ENAS, what exactly is the input (besides the RNN hidden state) to each timestep of the RNN?\n\nDoes the mask index a row of a different input embedding matrix (1 of 256 rows in 1 of 7 possible 256x64 matrices) (256 is from 2^(C/S)-1) (7 is from the 7 possible layer components) depending on whether the mask from previous timestep output was used to mask a 1x1_conv, 3x3_conv, 5x5_conv, 7x7_conv, max_p, avg_p, or skip_anchor? <--Is this row embedding method the only/correct input to the RNN or is it actually something else?", "1. We went with REINFORCE for the ease of implementation. We have not tried other methods, such as PPO, TRPO, etc. but we will look into this soon.\n\n2. We tried, and we needed at least M=10 to training the policy π. We suspect this is because the gradient estimated with REINFORCE has a high variance. M=1 works for training omega because every update for omega has its gradient computed on a batch of training example, leading to a smaller variance.", "1) Was there any empirical result that made you choose REINFORCE rule instead of using PPO loss function (as in \"Learning Transferable Architectures for Scalable Image Recognition\") for updating theta, which was stated to work better in Zoph et. al. 
2017?\n\n2) In the section \"training shared parameter omega,\" it says M=1 is sufficient. Does this apply to \"Training the Policy π\" as well?\n\n\n" ]
[ 6, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByQZjx-0-", "iclr_2018_ByQZjx-0-", "iclr_2018_ByQZjx-0-", "rJE-zQ9mG", "Sksb-TtQM", "rJaD7VugM", "Sksb-TtQM", "By51thRez", "H1ThH37-f", "iclr_2018_ByQZjx-0-", "BkrqNswgf", "HkAyq8MQf", "HkpjFLfQM", "rJaD7VugM", "Bkj-7CYef", "rJaD7VugM", "iclr_2018_ByQZjx-0-", "rJpVtZbgG", "iclr_2018_ByQZjx-0-", "r1SrSDy1f", "iclr_2018_ByQZjx-0-" ]
iclr_2018_r1pW0WZAW
Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM). We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past. We show that MIST RNNs 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and 3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies.
workshop-papers
I think the model itself is not very novel, as pointed out by the reviewers, and the analysis is not very insightful either. However, the results themselves are interesting and quite good (on the copy task and pMNIST, but not so much on the other datasets presented (TIMIT, etc.), where it is not clear that long-term dependencies would lead to better results). Since the method itself is not very novel, the onus is upon the authors to make a strong case for the merits of the paper -- it would be worth exploring these architectures further to see if there are useful elements for real-world tasks -- more so than is demonstrated in the paper -- for example, showing it on tasks such as machine translation or language modelling tasks requiring long-term propagation of information, or even real speech recognition, not just basic TIMIT phone frame classification rate. As a result, while I think the paper could make for an interesting contribution, in its present form, I have settled on recommending the paper for the workshop track. As a side note, the paper is related to paper 874 in that an attention model is used to look at the past. The difference is in how the past is connected to the current model.
train
[ "BJMTxiOlG", "H1OSO2dlz", "rycLSbcgf", "BJMxrXomf", "SyaF_Qomf", "SJkNAGiQM", "Bka_1mi7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The followings are my main critics of the paper: \n1. Analysis does not provide any new insights. \n2. Similar work (recurrent skip coefficient and the corresponding architecture in [1]) has been done, but has not been mentioned. \n3. The experimental results are not convincing. This includes 1. the choices of tasks are limited -- very small in size, 2. the performance in pMNIST is worse than [1], under the same settings.\n\nHence I think the novelty of the paper is very little, and the experiments are not convincing.\n\n[1] Architectural Complexity Measures of Recurrent Neural Networks. Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, Yoshua Bengio. NIPS, 2016. ", "The presented MIST architecture certainly has got its merits, but in my opinion is not very novel, given the fact that NARX RNNs have been described 20 years ago, and Clockwork RNNs (which, as the authors point out in section 2, have a similar structure) have also been in use for several years. Still, the presented results are good, with standard LSTMs being substantially outperformed in three out of five standard RNN/LSTM benchmark tasks. The analysis in section 3 is decent (see however the minor comments below), but does not offer revolutionary new insights - it's perhaps more like a corollary of previous work (Pascanu et al., 2013).\n\nRegarding the concrete results, I would have wished for a more detailed analysis of the more surprising results, in particular, for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it \"difficult to learn long-term behavior that must be detected at high frequency\" [section 2]? How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)? In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units?\n\nIn summary, for me this paper is solid, and although the architecture is not that new, it is worth bringing it again into the focus of attention.\n\n\nMinor comments:\n- In several places, the formulas are rather strange and/or occasionally incorrect. In particular,\n* on the right-hand sind of the inline formula in section 3.1, the symbol v is missing completely, which cannot be right;\n* in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined;\n* the \\theta_l in the beginning of section 3.3 (formula 13) is completely superfluous.\n- The position of the tables and figures is rather weird, making the paper less readable than necessary. The authors should consider moving floating parts around (one could also move figure three to the bottom of a suitable page, for example).\n- It is a matter of taste, but since all experimental results except the ones on the copy task are tabulated, one could think of adding a table with the results now contained in figure 3.\n\nRelation to prior work: the authors are aware of most relevant work. \n\nOn p2 they write: \"Many other approaches have also been proposed to capture long-term dependencies.\" There is one that seems close to what the authors do: \n\nJ. Schmidhuber. Learning complex, extended sequences using the principle of history compression. 
Neural Computation, 4(2):234-242, 1992\n\nIt is related to clockwork RNNs, about which the authors write:\n\n\"A recent architecture that is similar in spirit to our work is that of Clockwork RNNs (Koutnik et al., 2014), which split weights and hidden units into partitions, each with a distinct period. When it’s not a partition’s time to tick, its hidden units are passed through unchanged, thus in some ways mimicking the behavior of NARX RNNs. However Clockwork RNNs differ in two key ways. First, Clockwork RNNs sever high-frequency-to-low-frequency paths, thus making it difficult to learn long-term behavior that must be detected at high frequency (for example, learning to depend on quick motions from the past for activity recognition). Second, Clockwork RNNs require hidden units to be partitioned a priori, which in practice is difficult to do in any meaningful way. NARX RNNs suffer from neither of these drawbacks.\"\n\nThe neural history compressor, however, adapts to the frequency of unexpected events, by ticking only when there is an unpredictable event, thus overcoming some of the issues above. Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks?\n\nGeneral recommendation: Accept, provided the comments are taken into account.\n", "Summary: The authors introduce a variant of NARX RNNs, which has an additional attention mechanism and a reset mechanism. The attention is only applied on subsets of hidden states, referred as delays. The delays are aggregated into a vector using the attention coefficients as weights, and then this vector is multiplied by the reset gates. \n\nThe model sounds a bit incremental, however, the performance improvements over pMNIST, copy and MobiAct tasks are interesting.\n\nA similar kind of architecture has been already proposed:\n[1] Soltani et al. “Higher Order Recurrent Neural Networks”, arXiv 1605.00064\n", "We are pleased that you enjoyed our work. Thank you very much for your detailed review and insightful comments. We have done our best to address every question raised, and we have updated the paper to reflect every response here:\n\n>>>>> for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it \"difficult to learn long-term behavior that must be detected at high frequency\" [section 2]?\n\nFor large delays (D >= 100), this is precisely the reason that Clockwork RNNs fail, but we see no way of providing further empirical evidence of this. We instead describe in detail why Clockwork RNNs must fail:\n\n- Symbol 0 can be 'copied ahead' by all partitions, and so perhaps it is possible to learn to replicate this symbol later in time.\n\n- Symbol 1 can only be seen by the highest-frequency partition (period of T = 1) because 1 % T = 0 for T = 1, but not T = 2, 4, 8, 16, etc. Also, this partition cannot send information to lower-frequency partitions. Hence Clockwork RNNs cannot learn to replicate symbol 1 for the exact same reason that a simple RNN cannot: the shortest past to the loss has at least D matrix multiplies and nonlinearities.\n\n- Symbol 2 can similarly only be seen by the two highest-frequency partitions (T = 1, T = 2), so we have a shortest path with D / 2 nonlinearities and matrix multiplies (a negligible difference for medium-to-large delays).\n\n- Symbol 3 can only be seen by the single highest-frequency partition because again 3 % T = 0 only for T = 1, so the situation is identical to symbol 1.\n\n- And so on. 
Hence Clockwork RNNs must fail to learn to copy most of these symbols for medium-to-large delays.\n\nFor small delays (D = 50), Clockwork RNNs should solve the copy task, because the highest-frequency partition resembles a simple RNN. However, this partition has only 256 / 8 = 32 hidden units. We thus ran additional Clockwork RNN experiments with 1024 hidden units (and 10x as many parameters), with 128 units allocated to the high-frequency partition. We then see that Clockwork RNNs do solve the copy problem with a delay of 50 and continue to fail to solve the problem for higher delays, as expected.\n\n>>>>> In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units?\n\nWe ran additional experiments with 512 units for both LSTM and MIST RNNs. LSTM obtains an improved error rate of 7.6%, and MIST RNNs obtain an improved error rate of 4.5%. However, we verified that capacity does not help with long-term dependencies; please see the next question.\n\n>>>>> How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)?\n\nWe included Figure 2 to show that empirical observations match our expectations for gradient decay. To provide further empirical validation, we ran additional pMNIST experiments for the 512-unit LSTM and MIST RNNs:\n\n- Based on Figure 2, we used only the last 200 pixels (rather than all 784).\n\n- LSTM performance remained the same (within 1 std. dev., 7.4% error), showing that LSTM gained nothing from including the distant past.\n\n- MIST RNN performance degraded by 15 standard deviations (6.0% error), showing that MIST RNNs do benefit from the distant past.\n\n- Finally we note that MIST RNNs still outperform LSTM. This is expected since LSTM has trouble learning even from steps <= 200 from the loss (as shown in Fig. 2).\n\n>>>>> on the right-hand side of the inline formula in section 3.1, the symbol v is missing\n\nThank you. This arose from merging two previous examples. Fixed.\n\n>>>>> in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined\n\nFixed\n\n>>>>> the \\theta_l in the beginning of section 3.3 (formula 13) is completely superfluous.\n\nWe agree but include this to make the connection to practice immediately evident. We added a sentence to clarify this.\n\n>>>>> The position of the tables and figures is rather weird...\n\nFixed.\n\n>>>>> Relation to prior work: the authors are aware of most relevant work... There is one that seems close to what the authors do: J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992 ...\n\nLearning a generative model over inputs to identify surprising inputs for processing is an interesting approach; we added this to the Background section.\n\n>>>>> Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks?\n\nWe would not be surprised at all if this method can improve results for some tasks, especially those with highly-correlated, low-dimensional inputs such as MNIST (or even pMNIST). However, addressing this question fully would be far from trivial, so we leave it as future work.", "Thank you for your review. 
We kindly note that some of the comments in this review are incorrect, and as such we sincerely hope that you are willing to reconsider your evaluation of our work.\n\n>>>>> The experimental results are not convincing. This includes 1. the choices of tasks are limited -- very small in size, 2. the performance in pMNIST is worse than [1], under the same settings.\n\nPoint 2:\n\nPlease note that this is incorrect. In [1], the best reported error rate for pMNIST is 6.0% error, whereas we obtain 5.5 +- 0.2% error. Also, their results (Table 2) correspond to a hyperparameter sweep, with s = 11 achieving 6.0% error. We require no such sweeps: our delays were kept fixed for all 5 tasks in the paper (still outperforming every model proposed in [1]).\n\nPoint 1:\n\nPlease note that we evaluated these methods across\n\n- 2 synthetic tasks that have been widely used for testing long-term dependencies, as was highlighted in Section 5 with references (Hochreiter et al., 1997; Martens et al., 2011; Le et al., 2015; Arjovsky et al., 2016; Henaff et al., 2016; Danihelka et al., 2016)\n\n- 3 real tasks that were chosen because they a) likely require long-term dependencies and b) are of moderate size so that statistically-significant results can be obtained.\n\nWe followed the experimental design of [2], which also includes 3 real tasks of moderate size, preferring random hyperparameter sweeps and statistically-significant results over manual sweeps and statistically-questionable results. Also, please note that this design seems to be reasonable to the community, as [2] has been cited 400+ times since 2014.\n\nRegarding the dataset sizes: TIMIT is standard, with splits identical to [2]. MobiAct contains approximately 3200 sequences of mobile sensor data from 67 users, very similar in size to the datasets in [2]. MISTIC-SL is smaller in size, but we chose this task because long-term dependencies are required and because state of the art is held by LSTM (which we ended up matching with MIST RNNs).\n\n[1] Zhang et al. Architectural complexity measures of recurrent neural networks. Advances in neural information processing systems (NIPS), 2016.\n\n[2] Greff et al. LSTM: A search space odyssey. IEEE Trans. on Neural Networks and Learning Systems, 2016.\n\n>>>>> Similar work (recurrent skip coefficient and the corresponding architecture in [1]) has been done, but has not been mentioned. \n\nBased on this comment, we have added a discussion of [1] to the Background section. However kindly note that\n\n- with regard to the architecture, [1] proposes precisely a simple NARX RNN ([19], discussed extensively in our paper) with non-zero weights for only two delays. This bears little resemblance to our work. Most importantly, MIST RNNs provide exponentially-short paths to the past while maintaining fewer parameters and computations than LSTM. In contrast, [1] does not provide exponentially-short paths, and uses two delays to avoid high parameter/computation counts. In case there is any doubt about this, we quote [1]: \"By using this specific construction, the recurrent skip coefficient increases from 1 (i.e., baseline) to k and the new model with extra connection has 2 hidden matrices (one from t to t + 1 and the other from t to t + k).\"\n\n- with regard to skip coefficients, [1] defines a *measure* of shortest paths called Recurrent Skip Coefficients. 
However in [1] the motivation for this definition is \"it is known that adding skip connections across multiple time steps may help improve the performance on long-term dependency problems [19, 20].\" Again, [19] introduced simple NARX RNNs, as discussed extensively in our paper. Thus the extent to which [1]'s skip coefficients overlap with our work is that we both recognize that short paths are important. A difference between our work and [1] is that we provide a self-contained derivation of this.\n\n[1] Zhang et al. Architectural complexity measures of recurrent neural networks. Advances in neural information processing systems (NIPS), 2016.\n\n[19] Lin et al. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329–1338, 1996.\n\n[20] Sutskever et al. Temporal-kernel recurrent neural networks. Neural Networks, 23(2):239–243, 2010.\n\n>>>>> Analysis does not provide any new insights.\n\nThe connection of gradient components to paths via the chain rule for ordered derivatives is new. However we agree that the analysis portion of the paper is not revolutionary - this was not the goal of the analysis. Our goals were to provide a self-contained justification of our approach and to extend the results from ([1], [2]) to general NARX RNNs.\n\n[1] Bengio et al. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.\n\n[2] Pascanu et al. On the difficulty of training recurrent neural networks. International Conference on Machine Learning (ICML), 28:1310-1318, 2013.", "Changes:\n\n- The last 3 paragraphs of Section 2 (Background) were expanded and edited based on feedback from all 3 reviewers.\n\n- Section 3 (The Vanishing Gradient Problem in the Context of NARX RNNs) was edited for clarity and to fix typos spotted by AnonReviewer2.\n\n- Section 5.1 (Permuted MNIST results) was heavily modified based on AnonReviewer2's feedback. In particular, results were added with additional hidden-unit counts, and results were added to show that LSTM performance does not depend at all on information from the distant past (whereas MIST RNN performance does).\n\n- A paragraph was added to the end of Section 5.2 (Copy Problem results) based on AnonReviewer2's feedback. In particular we discuss additional Clockwork RNN results; the reasons that Clockwork RNNs must fail for large delays; and show that Clockwork RNNs do indeed behave like simple RNNs if enough hidden units are provided.\n\n- Figures and Tables were moved around for clarity, based on AnonReviewer2's feedback.\n\n- Small miscellaneous edits were made throughout to open space for the previous changes.", "Thank you for your review. We also found it interesting that MIST RNNs can capture such long-term dependencies.\n\n>>>>> A similar kind of architecture has been already proposed: [1] Soltani et al. 
“Higher Order Recurrent Neural Networks”, arXiv 1605.00064\n\nBased on this comment, we have added a short discussion of [1] to the Background section.\n\nHowever, we would like to kindly note that [1] defines a \"higher order recurrent neural network (HORNN)\" precisely as a simple NARX RNN, which was introduced 20 years earlier in [2], and which was already discussed extensively in our paper.\n\nImportantly, every HORNN variant in [1] suffers from the same issue that is mentioned in our paper for simple NARX RNNs: the vanishing gradient problem is only mitigated mildly as n_d, the number of delays, increases; and simultaneously parameter and computation counts grow by this same factor n_d. We would like to emphasize that MIST RNNs are the first NARX RNNs that resolve both of these issues, by providing exponentially short connections to the past while maintaining even fewer parameters and computations than LSTM.\n\n[1] Rohollah Soltani and Hui Jiang. Higher order recurrent neural networks. arXiv preprint arXiv:1605.00064, 2016.\n\n[2] Tsungnan Lin, Bill G Horne, Peter Tino, and C Lee Giles. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329–1338, 1996." ]
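As a concrete illustration of the mechanism summarized in the review thread above (attention applied over a small set of delayed hidden states, with the aggregated vector then gated by a reset gate), here is a minimal numpy sketch. It is not the authors' MIST RNN code: the dimensions, the exponentially spaced delays, and all parameter names are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d_h, d_x = 8, 5
delays = [1, 2, 4, 8, 16]                            # assumed exponentially spaced delays

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed parameter shapes for a single illustrative step.
W_att = 0.1 * rng.normal(size=(d_h, len(delays)))    # one attention logit per delay
W_r = 0.1 * rng.normal(size=(d_h, d_h))              # reset gate
W_x = 0.1 * rng.normal(size=(d_x, d_h))
W_c = 0.1 * rng.normal(size=(d_h, d_h))

def attention_over_delays_step(x_t, delayed_states, h_prev):
    """delayed_states[i] stands for h_{t - delays[i]}; this only sketches the
    mechanism described in the review, not the authors' exact equations."""
    att_logits = h_prev @ W_att
    att = np.exp(att_logits - att_logits.max())
    att /= att.sum()                                 # attention coefficients over delays
    context = att @ np.stack(delayed_states)         # weighted aggregation of delayed states
    r = sigmoid(h_prev @ W_r)                        # reset gate
    return np.tanh(x_t @ W_x + (r * context) @ W_c)

past = [rng.normal(size=d_h) for _ in delays]
h_new = attention_over_delays_step(rng.normal(size=d_x), past, rng.normal(size=d_h))
print(h_new.shape)                                   # (8,)
```

With delays spaced exponentially, a state from roughly 2^k steps in the past can reach the loss through only about k such transformations, which is one way to read the "exponentially-short paths" the authors emphasize in their responses.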
[ 3, 7, 6, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1pW0WZAW", "iclr_2018_r1pW0WZAW", "iclr_2018_r1pW0WZAW", "H1OSO2dlz", "BJMTxiOlG", "iclr_2018_r1pW0WZAW", "rycLSbcgf" ]
iclr_2018_S1lN69AT-
To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.
workshop-papers
The authors present a thorough exploration of large-sparse models that are pruned down to a target size and show that these models can perform better than small dense models. Results are shown on a variety of datasets, with both conv models and seq2seq models. The authors even go so far as to release the code. I think the authors are to be thanked for their experimental contributions. However, in terms of accepting the paper at a premier machine learning conference, the method holds little surprise or non-obviousness. I think the paper is a good experimental contribution, and would make a good workshop paper instead, but it offers little contribution by way of machine learning methods.
train
[ "rJPMO6YxG", "BkE3cW5gG", "r1L5FHqeG", "HySffdp7z", "S1kV-OamM", "ryDNsFvAb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "This paper presents a comparison of model sizes and accuracy variation for pruned version of over-parameterized deep networks and smaller but dense models of the same size. It also presents an algorithm for gradual pruning of small magnitude weight to achieve a pre-determined level of sparsity. The paper demonstrates that pruning of large over-parameterized models leads to better classification compared to smaller dense models of relatively same size. This pruning technique is demonstrated as a modification to TensorFlow on MobileNet, LSTM for PTB dataset and NMT for seq2seq modeling.\n\nThe paper seems mainly a comparison of impact of pruning a large model for various tasks. The novelty in the work seems quite limited mainly in terms of tensorflow implementation of the network pruning using a binary mask. The weights which are masked in the forward pass don't get updated in the backward pass. The fact that most deep networks are inherently over-parametrized seems to be known for quite sometime.\n\nThe experiments are missing comparison with the threshold based pruning proposed by Han etal. to ascertain if the gradual method is indeed better. A computational complexity comparison is also important if the proposed pruning method is indeed effective. In Section 1, the paper claims to arrive at \"the most accurate model\". However, the validation of the claim is mostly empirical and shows that there lies a range of values for increase in sparsity and decrease in prediction accuracy is better compared to other values.\n\nOverall, the paper seems to perform experimental validation of some of the known beliefs in deep learning. The novelty in terms of ideas and insights seems quite limited.", "Summary:\nThis paper presents a thorough examination of the effects of pruning on model performance. Importantly, they compare the performance of \"large-sparse\" models (large models that underwent pruning in order to reduce memory footprint of model) and \"small-dense\" models, showing that \"large-sparse\" models typically perform better than the \"small-dense\" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint). They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs). They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains. They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of model architecture), although there does come a point where too much sparsity degrades the model performance considerably; this suggests that pruning a medium size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity.\n\nReview:\nQuality: The quality of the work is high --- the experiments are extensive and thorough. I would have liked to see \"small-dense\" vs. \"large-sparse\" comparisons on Inception (only large-sparse results are reported).\n\nClarity: The paper is clearly written, though there is room for improvement. For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document). 
Also, it is not clear in several cases exactly which training/heldout/test sets are used, and on which partition of the data the accuracies/BLEU scores/perplexities presented correspond to. A small section (before \"Methods\") describing the datasets/features in detail would be helpful. Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in paper, and move the tables to an appendix).\n\nOriginality: Although the experiments are informative, the work as a whole is not very original. The method proposed of using a sparsity schedule to perform pruning is simple and effective, but is a rather incremental contribution. The primary contribution of this paper is its experiments, which for the most part compare known methods.\n\nSignificance: The paper makes a nice contribution, though it is not particularly significant or surprising. The primary observations are:\n(1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint.\n(2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, which suggests that there is a \"sweet-spot\" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint).\n\nResult #1 is not very surprising, given that Han et al (2016) were able to show significant compression without loss in accuracy; thus, because one would expect a smaller dense model to perform worse than the large dense model, it would also perform worse than the large sparse model.\nResult #2 had already been seen in Han et al (2016) (for example, in Figure 6).\n\nPros:\n- Very thorough experiments across a number of domains\n\nCons:\n- Methodological contributions are minor.\n- Results are not surprising, and are in line with previous papers.", "This paper analyzes the effectiveness of model pruning for deployment in resource constrained environments. The contribution is marginal but interesting as a summary\n\n\nThis paper analyzes the effectiveness of model pruning for deployment in resource constrained environments. Contrary to other approaches, this paper assumes there is a computational budget to be meet and the pruning approach should result in a model that fits within that budget.\n\nAccording to the paper there is a contribution of a pruning scheme. To the best of my understanding, the proposal / contribution is minimal or not clearly detailed. My understanding is the approach is equivalent to a L1 pruning where the threshold for pruning is updated over time / training process rather than pushing weights down towards zero (as it is usually done). \nThen, there is a schedule for minimizing the impact of modifying the weights although this has been discussed in related works (see Alvarez and Salzmann 2016). \n\n\nGiven this setup, the paper present a number of comparisons and experimental validations. \n\nThere are several steps that are not clear to me. \n\n1) how does this compare to the low-rank or group sparsity approaches referred in the related work section?\n2) The key here is modifying the thresholds as the training progresses up to a certain point which seems to me quite equivalent to L1 pruning where the regularization term is also affected by the learning rate (therefore having less influence as the training progresses). 
In this paper though there are heuristics to stop pruning when certain constraints are met. Which is interesting (as pruning will affect the quality and capacity of the network) but also applicable to other methods. Also, as suggested in related works, the pruning becomes negligible after certain number of epochs (therefore there is no real need to stop the process). Any discussion here would be interesting.\n\n3) For me, it is interesting the fact that pruning in an initial stage is too aggressive. However, it also limits the capacity of the network by pruning too much at the begining. I think there are contrary messages in the paper that would be nice to clarify: pruning rapidly at the beginning where redundant connections are abundant and then, there is the need to have a large learning rate to recover from the pruning.\n\n4) I missed a discussion on the Inception model in the experimental settings. \n\n5) If this is based on masks for pruning and performing sparse operations I wonder how does this benefit at inference time since many operations will be faster in a dense matrix multiplication manner. That is why I think would be interesting to do at group level as proposed in some related methods.\n\n6) Tables showing comparisons are not complete. I do not understand why measuring the non-zero parameters if, in the baseline, there is no analysis on how many of these parameters can be actually set to 0 by pruning as a postprocessing step. Please, add explanations on why / how non-zeros are measured in the baseline.\n\n7) More importantly, I think the comparison sparsity vs width is not fair. This is comparing the training process of a model with limited capacity vs a model where the capacity is progressively limited (the pruned). Training regimes should be detailed and properly analyzed for this to be fair. Nevertheless, results are consistent with other approaches listed in the state of the art (pruning while training is a good thing).\n", "We thank the reviewers for their time and feedback.\n\nWe want to emphasize what we view as the major contributions of our paper.\n\n1. We demonstrate that for a constant model memory footprint, large-sparse models outperform small-dense models across several state-of-the-art neural network architectures. In hindsight, it may appear that this result is intuitive. However, our work provides extensive empirical proof of this property of deep neural networks across a diverse set of neural network architectures (as opposed to limiting the study to CNNs as in several of the prior works). To our knowledge, the fact that magnitude-based pruning can achieve high compression ratios with minimal loss in accuracy on state-of-the-art models like Inception, MobileNet, and Google NMT has not been previously shown in the literature. In the case of CNNs, Han et al. (2015) present pruning results on ImageNet using older CNNs such as AlexNet (42.8% top-1 error, 244MB) and VGGNet (31.5% top-1 error, 552 MB). In contrast, we present pruning results on ImageNet using Inception v3 (21.9% top-1 error, 108MB) and MobileNet (29.4% top-1 error, 16.8 MB), and to our knowledge, we are the first to do so. Older CNNs such as AlexNet and VGGNet are heavily overparametarized and achieve lower accuracy compared to modern, efficient architectures like Inception and MobileNet, and we demonstrate the efficacy of pruning for model compression on compact, highly accurate state-of-the-art architectures. 
The fact that large, less accurate CNNs such as AlexNet or VGGNet can be pruned with minimal loss in accuracy does not directly imply that compact, highly accurate CNNs such as Inception or MobileNet can also be pruned with minimal loss in accuracy. We view the results obtained in this paper as significant since recent papers that attempt to prune a compact, highly accurate architecture trained on ImageNet achieve significantly worse results compared to us (Alvarez & Salzmann, 2017; and Dong et al., 2017 for the case of ResNet-50).\n\nHan et al. (2015): Learning both Weights and Connections for Efficient Neural Networks. NIPS 2015.\nAlvarez & Salzmann (2017): Compression-aware Training of Deep Networks. NIPS 2017.\nDong et al. (2017): Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon. NIPS 2017.\n\n2. We propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. Many existing pruning methods, like Han et al. (2015), depend on several stages of pruning and fine-tuning and require a lot of hyperparameter tuning, such as the per-layer weight threshold in each stage and the number of fine-tuning iterations. Using these pruning methods often requires a significant amount of ad-hoc tuning. In response to Reviewer 3’s request to see a comparison of our proposed gradual pruning method with the pruning method of Han et al. (2015), we cannot provide an exact comparison with Han et al.’s method because in Han et al.’s method, each layer’s weight threshold is chosen in an ad-hoc manner by specifying a “quality parameter” based on the \"sensitivity of each layer to network pruning” (“how accuracy drops as parameters are pruned on a layer-by-layer basis”) and the sparsities of the weight tensors end up varying considerably between the different layers (16% to 91% for AlexNet and 42% to 96% for VGGNet) as a result of the per-layer \"quality parameter\". In our proposed method, we eliminate a lot of the hyperparameter tuning that is required in other pruning methods by embedding the pruning during the training process of the respective network which allows us to gradually prune and fine-tune according to one sparsity function across all of the layers (without requiring per-layer tuning). We demonstrate that our proposed gradual pruning technique works for several state-of-the-art CNNs and RNNs using the same sparsity function embedded in the training process of the respective network.\n\nWe have open-sourced the TensorFlow pruning library used to generate the results reported in this work. We believe this work makes an important contribution by showing that state-of-the-art neural network models can be pruned with minimal loss in accuracy using a new gradual pruning technique, incorporated as a part of the training procedure, that requires minimal tuning. We present compelling results showing that large-sparse models outperform small-dense models across several state-of-the-art neural network architectures (deep CNNs, stacked LSTMs, seq2seq models).", "Our proposed pruning method sets to zero the smallest magnitude weights in the weight tensor of each layer until a desired sparsity level s(t) is reached. By ensuring that 100*s(t)% of the weights are zero at time t, we avoid the need to tune per-layer weight thresholds or threshold hyperparameters. 
The gradual pruning is embedded during the training process of the respective network by pruning to sparsity s(t) once every Δt training steps (for all other iterations, the training step of the network is not changed). The sparsity function s(t) and the gradual pruning method are described in the Methods section.\n\n1. Based on our literature review, we believe that magnitude-based weight pruning generally achieves better accuracy compared to low-rank and group sparsity approaches given the same sparsity constraints (though we did not test other methods ourselves).\n\n2. See the first paragraph of our response. The key difference is that our pruning method is based on the sparsity function which directly controls the number of zeros in the weight tensors of each layer. We do not need to tune a L1 regularization coefficient or weight threshold hyperparameters.\n\n3. We can begin pruning when the model is partially or fully trained. Pruning rapidly at the beginning means that we reduce the capacity of our network rapidly at the beginning. Since we have zeroed out a large number of weights in our network and perturbed our system by a large amount, we use a large learning rate to allow the network to recover from our large perturbation.\n\n4. We did not compare small-dense and large-sparse Inception models so Inception is not in the experimental section, but we instead used the Inception model to demonstrate how our gradual pruning technique works in the Methods section.\n\n5. We are interested in the potential of custom hardware architectures that support using sparse neural network models for on-device inference. Since on-device neural network inference is often memory bandwidth-bound, using sparse models can potentially lead to big speedups at inference time by reducing the number of parameters that are fetched from memory in each forward pass (we benefit even if the matrix multiplication is subsequently done using dense matrices if inference is memory-bound rather than compute-bound). Furthermore, by reducing the total number of energy-intensive memory accesses, we reduce the power consumption which is the more critical constraint for on-device neural network inference. The potential advantage of sparse models over group sparse models is that higher accuracy might be obtainable by using sparse models compared to group sparse models for the same memory footprint because group sparsity is a more restrictive condition than sparsity.\n\n6. For the baseline (dense) model, the number of nonzero parameters is equal to the total number of parameters in the model. When using our proposed pruning method to train sparse models, we directly set many of the weights to zero, which makes the weight tensors sparse and reduces the number of nonzero parameters, without needing any postprocessing step.\n\n7. In all of our experiments, the number of iterations for pruning a dense model (training a sparse model) is less than or equal to the number of iterations for originally training the respective dense model, so we have to tried to ensure a fair comparison between small-dense and large-sparse models. 
Furthermore, all hyperparameters, other than possibly the number of iterations and the learning rate schedule, are kept the same between the training of dense and sparse models.", "Regarding \" Such techniques perform coarse-grain pruning and depend critically on the structure of the convolutional layers, and may not be directly extensible to other neural network architectures that lack such structural properties (LSTMs for instance)\", the category of learning structured sparsity [1][2] in DNNs is a more general way than we thought. It is more challenging for DNNs with more sophisticated structures, but it might be possible to use it in LSTMs [3] and to even learn to reduce layers in ResNets [2], if we could figure out the kind of structures we want to learn. \n\n[1] https://arxiv.org/abs/1608.08710\n[2] http://papers.nips.cc/paper/6504-learning-structured-sparsity-in-deep-neural-networks.pdf\n[3] https://arxiv.org/abs/1709.05027" ]
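The gradual pruning procedure described in the responses above can be made concrete with a short sketch. The following is a minimal NumPy illustration rather than the authors' released TensorFlow implementation: the cubic form of the sparsity schedule s(t), the hyperparameter values, and all function names are illustrative assumptions, and only the mechanism (zeroing the smallest-magnitude weights of each layer to reach a target sparsity s(t) once every delta_t training steps) is taken from the text.

```python
import numpy as np

def target_sparsity(t, t_start, t_end, s_init=0.0, s_final=0.9):
    """Monotone sparsity schedule s(t); the cubic ramp used here is an assumed
    illustrative choice, not necessarily the authors' exact function."""
    if t < t_start:
        return s_init
    if t >= t_end:
        return s_final
    frac = (t - t_start) / float(t_end - t_start)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

def prune_to_sparsity(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly `sparsity` of the
    tensor is zero; returns the pruned tensor and the binary keep-mask."""
    k = int(round(sparsity * weights.size))
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Toy loop: every delta_t steps, prune the layer to the scheduled sparsity;
# all other steps are ordinary training updates (omitted here).
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))            # stand-in for one layer's weight tensor
delta_t, t_start, t_end = 100, 1000, 10000
for t in range(t_end + 1):
    # ... gradient update on W would happen here, with pruned entries masked ...
    if t >= t_start and t % delta_t == 0:
        W, mask = prune_to_sparsity(W, target_sparsity(t, t_start, t_end))

print("final sparsity:", 1.0 - np.count_nonzero(W) / W.size)
```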
[ 5, 5, 5, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1 ]
[ "iclr_2018_S1lN69AT-", "iclr_2018_S1lN69AT-", "iclr_2018_S1lN69AT-", "iclr_2018_S1lN69AT-", "r1L5FHqeG", "iclr_2018_S1lN69AT-" ]
iclr_2018_SkRsFSRpb
GeoSeq2Seq: Information Geometric Sequence-to-Sequence Networks
The Fisher information metric is an important foundation of information geometry, as it allows us to approximate the local geometry of a probability distribution. Recurrent neural networks such as the Sequence-to-Sequence (Seq2Seq) networks that have lately been used to yield state-of-the-art performance on speech translation or image captioning have so far ignored the geometry of the latent embedding that they iteratively learn. We propose the information geometric Seq2Seq (GeoSeq2Seq) network, which bridges the gap between deep recurrent neural networks and information geometry. Specifically, the latent embedding offered by a recurrent network is encoded as a Fisher kernel of a parametric Gaussian Mixture Model, a formalism common in computer vision. We utilise such a network to predict the shortest routes between two nodes of a graph by learning the adjacency matrix using the GeoSeq2Seq formalism; our results show that for such a problem the probabilistic representation of the latent embedding outperforms the non-probabilistic embedding by 10-15\%.
workshop-papers
The reviewers found the paper meaningful but noted that they were not convinced by the experiments as they stand and that the presentation was too dense.
train
[ "HJlgLNYxf", "rkU2dUDlf", "HknoNEWZz", "rJLKk-67G", "SkSV0xp7f", "HyNaTgpmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "==== UPDATE AFTER REVIEWER RESPONSE\n\nI apologize to the authors for my late response.\n\nI appreciate the reviewer responses, and they are helpful on a number of\nfronts. Still, there are several problematic points.\n\nFirst, as the authors anticipated, I question whether the geometric encoding\noperations can be included in an end-to-end learning setting. I can imagine\nseveral arguments why an end-to-end algorithm may not be preferred, but the\nauthors do not offer any such arguments.\n\nSecond, I am still interested in more discussion of the empirical investigation\ninto the behavior of the algorithm. For example, \"Shortest\" and \"Successful\"\nin Table 1 still do not really capture how close \"successful but not shortest\"\npaths are to optimal.\n\nThe authors have addressed a number of my concerns, but there\nare a few outstanding concerns. Also, other reviewers are much more familiar\nwith the work than myself. I defer to their judgement after the updates.\n\n==== Original review\n\nIn this work, the authors propose an approach to adapt latent representations to account for local geometry in the embedding space. They show modest improvement compared to reasonable baselines.\n\nWhile I find the idea of incorporating information geometry into embeddings very promising, the current work omits a number of key details that would allow the reader to draw deeper connections between the two (specific comments below). Additionally, the experiments are not particularly insightful.\n\nI believe a substantially revised version of the paper could address most of my concerns; still, I find the current version too preliminary for publication.\n\n=== Major comments / questions\n\nThe transformation from context vectors into Fisher vectors is not clear. Presumably, shortest paths in the training data have different lengths, and thus produce different numbers of context vectors. Does the GMM treat all of these independently (regardless of sample)? or is a separate GMM somehow trained for each training sequence? The same question applies to the VLAD-based approach.\n\nIn a related vein, it is not clear to what extent this method depends on the sequential nature of the considered networks. In particular, could a similar approach be applied to latent space embeddings from non-sequential models?\n\nIt is not clear if the geometric encoding operations are differentiable, or more generally, the entire training algorithm is not clear.\n\nThe choice to limit the road network graph feels quite arbitrary. Why was this done?\n\nDeep models are known to be sensitive to the choice of hyperparameters. How were these chosen? was a validation set used in addtion to the training and testing sets?\n\nThe target for training is very unclear. Throughout Sections 1 and 2, the aim of the paper appears to be to learn shortest paths; however, Section 3 states that the “network is capable of learning the adjacency matrix”, and the caption for Figure 2 suggests that “[t]he adjacency matrix is iteratively learnt (sic)....” However, calculating such training error for back-propagation/optimization would seem to rely on *already knowing* the adjacency matrix.\n\nThe performed experiments are minimal and offer very little insight into what is learned. For example, does the model predict “short” shortest paths better than longer ones? what do the “valid but not optimal” paths look like? are they close to optimal? what do the invalid paths look like? does it seem to learn parts of the road network better than others? 
sparse parts of the network? dense parts?\n\n=== Minor comments / questions\n\nThe term “context vector” is not explicitly defined or described. Based on the second paragraph in the “Fisher encoding” section, I assume these are the latent states for each element in the shortest path sequences.\n\nIs the graph directed? weighted? by Euclidean distance? (Roads are not necessarily straight, so the Euclidean distance from intersection to intersection may not accurately reflect the distance in some cases.)\n\nAre the nodes sampled uniformly at random for creating the training data?\n\nIs the choice to use a diagonal covariance matrix (as opposed to some more flexible one) a computational choice? or does the theory justify this choice?\n\nRoughly, what are the computational resources required for training?\n\nThe discussion should explain “condition number” in more detail.\n\nDo the “more precise” results for the Fisher encoding somehow rely on an infinite mixture? or, how much does using only a single component in the GMM affect the results?\n\nIt is not clear what “features” and “dictionary elements” are in the context of VLAD.\n\nWhat value of k was used for K-means clustering for VLAD?\n\nIt is not possible to assess the statistical significance of the presented experimental results. More datasets (or different parts of the road network) or cross-validation should be used to provide an indication of the variance of each method.\n\n=== Typos, etc.\n\nThe paper includes a number of runon sentences and other small grammatical mistakes. I have included some below.\n\nThe first paragraph in Section 2.2 in particular needs to be edited.\n\nThe references are inconsistently and improperly (e.g., “Turing” should be capitalized) formatted.\n\nIt seems like that $q_{ik} \\in \\{0,1\\}$ for the hard assignments in clustering.\n", "The paper proposes a method for augmenting sequence-to-sequence (seq2seq) methods with Fisher vector encodings, allowing the decoder to better model the geometric structure of the embedding space. Experiments are performed on a shortest-route problem, where augmenting standard seq2seq architectures with Fisher vectors improves performance.\n\nPros:\n- Combining deep learning with methods from information geometry is an interesting direction for research\n- Method is a generic drop-in replacement for improving any seq2seq architecture\n- Experimental results show modest performance improvements over vanilla seq2seq methods\n\nCons:\n- Missing references for prior work combining information geometry and deep learning\n- Insufficient explanation of the method\n- Only experimental results are a nonstandard route-finding task\n- Missing references and baselines for prior work on deep learning on graphs\n\nThe general research direction of combining deep learning with methods from information geometry is an exciting and fertile area for interesting work. Unfortunately this paper fails to cite or discuss much recent work in this area; for example natural gradient methods in deep learning have recently been explored in [1, 2, 3]; more closely related to the topic of this paper, [4] and [5] have combined Fisher vector encodings and deep networks for image classification tasks. Although these prior methods do not consider the use of recurrent networks, the authors should discuss how their method compares to the approaches of [4] and [5].\n\nThe method is not described in sufficient detail. How exactly is the Fisher encoding combined with the recurrent neural network? 
In particular, how is GMM fitting interleaved with learning the RNN? Do you backpropagate through the GMM fitting procedure in order to jointly learn the RNN parameters and the GMM for computing Fisher encodings? Or is GMM fitting an offline step done once, after which the RNN decoder is learned on top of the Fisher encodings? The paper should clarify along these points. As a side note, it also feels a little disingenuous to describe the method in terms of GMMs, but to perform all experiments with K=1 mixture components; in this setting the GMM degrades to a simple Gaussian distribution.\n\nThe proposed method could in theory be used as a drop-in replacement for seq2seq on any task. Given its generality, I am surprised at the nonstandard choice of route-finding in a graph of Minnesota roads as the only task on which the method is tested; as a minimum the method should have been tested on more than one graph.\n\nMore generally, I would have liked to see the method evaluated on multiple tasks, and on more well-established seq2seq tasks so that the method could be more easily compared with previously published work. Strong results on machine translation would be particularly convincing; the authors might also consider algorithmic tasks such as copying, repeat copying, sorting, etc. similar to those on which Neural Turing Machines were evaluated.\n\nI am not sure that seq2seq is the best approach for the route-finding task. In particular, since the input is encoded as a [source, destination] tuple it has a fixed length; this means that you could use a feedforward rather than recurrent encoder.\n\nThe paper also fails to cite or discuss recent work involving deep learning on graphs. For example Pointer Networks [6] use a seq2seq model with attention to solve convex hull, Delaunnay Triangulation, and traveling salesman problems; however Pointer Networks assume that the entire graph is provided as input to the model, while in this paper the network learns to specialize to a single graph. In that case, the authors might consider embedding the nodes of the graph using methods such as DeepWalk [7], LINE [8], or node2vec [9] as a preprocessing step rather than learning these embeddings from scratch.\n\nFrom Table 1, seq2seq + VLAD significantly outperforms seq2seq + FV. Given these results, are there any reasons why one should use seq2seq + FV instead of seq2seq + VLAD?\n\nOverall I think that this paper has some interesting ideas. 
However, due to a number of missing references, unclear description of the method, and limited experimental results I feel that the paper is not ready for publication in its current form.\n\n\nReferences\n\n[1] Grosse and Salakhutdinov, “Scaling Up Natural Gradient by Sparsely Factorizing the Inverse Fisher Matrix”, ICML 2015\n\n[2] Grosse and Martens, “A Kronecker-factored approximate Fisher matrix for convolution layers”, ICML 2016\n\n[3] Desjardins et al, “Natural Neural Networks”, NIPS 2015\n\n[4] Simonyan et al, “Deep Fisher Networks for Large-Scale Image Classification”, NIPS 2013\n\n[5] Sydorov et al, “Deep Fisher Kernels - End to End Learning of the Fisher Kernel GMM Parameters”, CVPR 2014\n\n[6] Vinyals et al, “Pointer Networks”, NIPS 2015\n\n[7] Perozzi et al, “DeepWalk: Online Learning of Social Representations”, KDD 2014\n\n[8] Tang et al, “LINE: Large-scale Information Network Embedding”, WWW 2015\n\n[9] Grover and Leskovec, “node2vec: Scalable Feature Learning for Networks”, KDD 2016", "In this paper, the authors propose to integrate the Fisher information metric with the Seq2Seq network, which abridges the gap between deep recurrent neural networks and information geometry. By considering of the information geometry of the latent embedding, the authors propose to encode the RNN feature as a Fisher kernel of a parametric Gaussian Mixture Model, which demonstrate an experimental improvements compared with the non-probabilistic embedding. \n\nThe idea is interesting. However, the technical contribution is rather incremental. The authors seem to integrate some well-explored techniques, with little consideration of the specific challenges. Moreover, the experimental section is rather insufficient. The results on road network graph is not a strong support for the Seq2Seq model application. \n\n\n", "We thank the Reviewer for his/her comments and for providing useful feedback. Since, we were constrained on time and resources (to train on a machine translation task), in our updated version, we have now introduced two benchmark problems such as copying and recalling sequences. On both problems, the GeoSeq2Seq network has been as successful as the Neural Turing Machine, albeit without a need for an external memory module.\n\nWe have also added a related work section as well as additional information for model construction. In our related work section, we have discussed all of the missing prior work suggested by the Reviewer including, natural neural networks, Fisher vectors for image classification, pointer networks, etc.\n\nAs the work of Sydorov et al. (2014) suggests it is indeed possible to learn the Fisher kernel parameters in an end-to-end manner. In our current work, we have first used the latent vectors learnt using a vanilla Seq2Seq training process to initiate a GMM, and thereof the Fisher kernel, subsequently a decoder is trained on the Fisher kernel to generate a prediction. \n\nIndeed, we agree with the reviewer that a Seq2Seq network may not be the best approach for route finding task; our motivation to use a route finding problem is to enable us to control task difficulty, and not for replacing algorithms such as meta-heuristics, integer programming, etc.\n\nThe idea to embed the nodes of the graph using methods like node2vec, DeepWalk and LINE are very useful and we anticipate them to finesse the accuracy of the GeoSeq2Seq. 
We would undoubtedly explore this avenue for our future work.\n\nWe have now explained why seq2seq + VLAD significantly outperforms seq2seq + FV and therefore it is preferable to use it instead of seq2seq+FV. We anticipate the performance being directly related to the condition number (the ratio of the largest to smallest singular value in the singular value decomposition of a matrix) of the Fisher Information Matrix.", "We thank the Reviewer for his/her comments and for providing useful feedback. We have now included further information for constructing the Fisher Vectors, along with the relevant references in computer vision, where it has been routinely utilised. \n\nShortest paths in the training data do have different lengths, but we build one context vector for each source-destination tuple and all context vectors have the same fixed length (we choose either 256 or 512). We then train the GMM on the context vectors obtained from the training sequences and finally use the means and variances to build the Fisher vectors. Similarly, for the VLAD-based approach, we train the centers and assignments from K-means and KD-trees on the context vectors obtained from the training sequences and use them to generate the VLAD encoding.\n\nThe formulation of the Fisher information metric, in its current form, can be directly applied to latent space embeddings from non-sequential models. However, as mentioned in the discussion: The Riemannian metric for a recurrent network can be evaluated in two ways -- one where we describe the probability distribution over the entire sequence and another where we describe a conditional distribution at time i conditioned on time i-1. We anticipate that the latter is more suited to a dynamic scenario (where the structure of the graph may be slowly changing) while the former is more suitable for static graphs. Analytically, averaging over time and assuming ergodicity, both metric should be fairly close to one another, nonetheless, it is only with further experiments we can demonstrate the value of one over the other. \n \nIt is a very good question whether the geometric encoding operations are differentiable. We anticipate this goes on to enquire if end-to-end training of Fisher encoding for Seq2Seq models can be attained. Infact Sydorov et al. (2014) have shown just this for convolutional neural networks.\n\nThe selection of the road network graph was not arbitrary. In fact, our motivation for using a route finding problem is to enable us to control task difficulty, and not for replacing algorithms such as meta-heuristics, integer programming, etc. In our updated version, we have added two other algorithmic tasks – copying and associative recall of sequences.\n\nIn the interest of time and resources, we have not utilised hyper-parameter optimization for this paper. In future, with more compute resources, we anticipate utilizing Bayesian optimization to obtain hyperparameters during the validation phase. Also, we were constrained by time and resources (to train on a machine translation task), in our updated version, we have now introduced two benchmark problems such as copying and recalling sequences. On both problems, the GeoSeq2Seq network has been as successful as the Neural Turing Machine, albeit without a need for an external memory module. Again due to limited computation resources, we had to limit our experiments to a diagonal covariance matrix and K=1 as the number of components of the Gaussian Mixture Model. 
There is no reason why a full covariance matrix or more mixture components cannot be used.\n\nOf course, finding shortest routes consisting of few nodes is easier. Since the training set is built by sampling source and destination nodes uniformly at random, we can have routes with many nodes and our algorithm can reproduce the shortest path correctly (see for example Figure 3b). It is also possible to find a route between source and destination nodes that is not the shortest (i.e. the sum of the distances between the nodes in the path is greater than the ground truth one). We also reported the accuracy in this case, because in some applications it may be enough to reach the destination even if the path is not the shortest. “Invalid” paths are routes that diverge in wrong directions and do not reach the destination. \n\nThe considered graph is undirected and weighted by Euclidean distance. Roads may not be straight but they can be approximated by straight parts from intersection (node) to intersection (node).\n\nWe have now included a formal definition of the condition number in the main text.\n\nSydorov et al, “Deep Fisher Kernels - End to End Learning of the Fisher Kernel GMM Parameters”, CVPR 2014", "We thank the Reviewer for his/her comments. The specific challenge that this paper sets out to address is to come up with a methodology to increase the temporal memory of a recurrent neural network. In order to achieve this, we utilise the 2nd order geometry of the latent embeddings, instead of invoking an external memory unit, as in the Neural Turing Machine. A road network graph gives us a straightforward way to control the length of temporal information required to be stored by the neural network. An additional benefit of using the road network graph is that, unlike other benchmark toy problems, the shortest route problem on graphs not only has an illustrious history but is also a real-life scenario. In our updated version, we have now introduced two benchmark problems: copying and recalling sequences. We have also added a related work section as well as additional information for model construction." ]
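A minimal sketch of the Fisher encoding step described in the responses above: fit a single-component (K=1), diagonal-covariance Gaussian to the fixed-length context vectors collected from the training sequences, then encode each context vector by the gradient of its log-likelihood with respect to the mean and variance. The variable names, dimensions, and the final L2 normalization are assumptions for illustration and are not taken from the authors' code.

```python
import numpy as np

def fit_diag_gaussian(contexts):
    """K = 1 'GMM': the mean and per-dimension variance of all training context vectors."""
    mu = contexts.mean(axis=0)
    var = contexts.var(axis=0) + 1e-6
    return mu, var

def fisher_vector(x, mu, var):
    """Gradients of log N(x; mu, diag(var)) w.r.t. mu and var, concatenated.
    With a single component the mixture-weight gradient vanishes."""
    d_mu = (x - mu) / var
    d_var = 0.5 * (((x - mu) ** 2 / var) - 1.0) / var
    fv = np.concatenate([d_mu, d_var])
    return fv / (np.linalg.norm(fv) + 1e-12)      # L2 normalization (assumed)

# One fixed-length context vector per (source, destination) training pair.
rng = np.random.default_rng(0)
contexts = rng.normal(size=(1000, 256))           # e.g. 1000 training routes, dim 256
mu, var = fit_diag_gaussian(contexts)

encoded = np.stack([fisher_vector(c, mu, var) for c in contexts])
print(encoded.shape)                              # (1000, 512): fed to the decoder
```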
[ 5, 4, 5, -1, -1, -1 ]
[ 2, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SkRsFSRpb", "iclr_2018_SkRsFSRpb", "iclr_2018_SkRsFSRpb", "rkU2dUDlf", "HJlgLNYxf", "HknoNEWZz" ]
iclr_2018_ryALZdAT-
Feature Incay for Representation Regularization
Softmax-based loss is widely used in deep learning for multi-class classification, where each class is represented by a weight vector and each sample is represented as a feature vector. Different from traditional learning algorithms, where features are pre-defined and only weight vectors are tunable through training, in deep learning the feature vectors are also tunable as part of representation learning. Thus we investigate how to improve the classification performance by better adjusting the features. One main observation is that elongating the feature norm of both correctly-classified and mis-classified feature vectors improves learning: (1) increasing the feature norm of correctly-classified examples induces smaller training loss; (2) increasing the feature norm of mis-classified examples can upweight the contribution from hard examples. Accordingly, we propose feature incay to regularize representation learning by encouraging larger feature norm. In contrast to weight decay, which shrinks the weight norm, feature incay is proposed to stretch the feature norm. Extensive empirical results on MNIST, CIFAR10, CIFAR100 and LFW demonstrate the effectiveness of feature incay.
workshop-papers
+ An intriguing novel regularization method: encouraging larger norms for the feature vector input to the last softmax layer of a classifier. + Reasonably extensive experimental validation shows that it improves test accuracy to some degree. - While a motivation is given, the formal analysis of what is really going on remains very superficial and limited. Technical note: Simply scaling the softmax layer's input would not change class rankings, so any positive effect of this regularizer on classification performance is due to it changing the learning dynamic in the upper layers as well. The paper could be much stronger if it provided an analysis of how the global learning dynamic in all layers is affected by the interaction between weight decay and the last layer's feature incay.
train
[ "SkNxPOYlf", "BkEcWHKlf", "ryRBHPFxz", "H1ASOQr-f", "HkjBl2jZf", "HkbH3Pobz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The analyses of this paper (1) increasing the feature norm of correctly-classified examples induce smaller training loss, (2) increasing the feature norm of mis-classified examples upweight the contribution from hard examples, are interesting. The reciprocal norm loss seems to be reasonable idea to improve the CNN learning based on the analyses. \n\nHowever, the presentation of this paper need to be largely improved. For example, Figure 3 seems to be not relevant to Property2 and may be show the feature norm is lower when the samples is hard example. Therefore, the author used reciprocal norm loss which increases feature norm as shown in Figure 4. However, both Figures are not explained in the main text, and thus hard to understand the relation of Figure 3 and 4. The author should refer all Figures and Tables. \n\nOther issues are:\n-Large-margin Soft max in Figure 2 is not explained in the introduction section. \n-In Eq.(7), P_j^I is not defined. \n- In the Property3, The author wrote “ where r is lower bound of feature norm”. \n However, r is not used.\n-In the experimental results, “RN” is not defined.\n-In the Table3, the order of \\lambda should be increasing or decreasing order. \n- Table 5 is not referred in the main text. \n\n== Updated review == \nThe presentation has been improved, I have increased the rate from 5 to 6. \nFollowing are further comments for presentation. \n\n-\tFig.2 “ the increasing L2 norm “ seems to “the order of L2 norm ”\n-\tPp.4 the first sentence above Eq.(7) “According to definition …” should be improved . \n-\tpp.5, the first sentence of the second paragraph “The feature norm can be optimized ..” is not clear. \n-\tIt would be better put Figure 5 under Property3. \n-\tD should be defined in Property3. \n-\tpp.8 wrote “However, 259-misclassfied examples are further introduced”. However, in Table 5, it seems to be 261. \n-\tSection 5. is “Conclusion and future work”. However, future work is not mentioned. \n", "Pros:\n1. It provided theoretic analysis why larger feature norm is preferred in feature representation learning.\n\n2. A new regularization method (feature incay) is proposed.\n\nCons:\nIt seems there is not much comparison between this proposed method and the concurrent work \"COCO(Liu et al. (2017c))\".", "The manuscript proposes to increase the norm of the last hidden layer to promote better classification accuracy. However, the motivation is a bit less convincing. Here are a few motivations that are mentioned.\n(1) Increasing the feature norm of correctly classified examples helps cross entropy, which is of course correct. However, it only decreases the training loss. How do we know it will not lead to overfitting?\n(2) Increasing the feature norm of mis-classified examples will make gradient larger for self-correction. And the manuscript proves it in property 2. However, the proof seems not complete. In Eq (7), increasing the feature norm would also affect the value of the term in parenthesis. As an example, if a negative example is already mis-classified as a positive, and its current probability is very close to 1, then further increasing feature norm would make the probability even closer to 1, leading to saturation and smaller gradient.\n(3) Figure 1 shows that examples with larger feature norm tend to be predicted well. However, it is not very convincing since it is only a correlation rather than causality. Let's use simple linear softmax regression as a sanity check, where features to softmax are real features rather than hidden units. 
Increasing the feature norm seems to be against the best practice of feature normalization in which each feature after normalization is of variance 1.\n\nThe manuscript states that the feature norm won't be infinitely increased since there is an upper bound. However, the proof of property 3 seems to only apply to the certain cases where K<2D. In addition, alpha is in the formula of upper bound, but what is the upper bound of alpha?\n\nThe manuscript does comprehensive experiments to test the proposed method. The results are good, since the proposed method outperforms other baselines in most datasets. But the results are not impressively strong.\n\nMinor issues:\n(1) For proof of property 3, it seems that alpha and beta are used before defined. Are they the radius of two circles?", "Thanks a lot for your positive and constructive comments!\n\nWe provide the response to \"It seems there is not much comparison between this proposed method and the concurrent work 'COCO(Liu et al. (2017c))'.\"\n\n(1) Both \"COCO\" and \"Feature Incay\" increase the L2-norm of feature representations, which is the common reason for performance improvement. \n\n(2) There are several clear differences. \n a. \"COCO\" normalizes and rescales all features to have the same L2-norm while\"Feature Incay\" adds a new regularizer that prefers features with larger L2-norm. \"COCO\" uses the optimal scale value that is fixed during training while \"Feature Incay\" increases the feature norm without constraining the scale value. \n b. “COCO” optimizes feature embedding spreading on a hypersphere while “Feature Incay” optimizes feature embedding located between two hyperspheres with different radiuses. (see Property 3) \n c. \"COCO\" proposes a novel congenerous cosine loss while \"Feature Incay\" uses the original softmax loss: \"Feature Incay\" is simpler than \"COCO\" and it can be easily plugged into almost all the related works that use softmax loss. \n\n(3) We compare the \"COCO\" with \"RN + COCO\" on CASIA-WebFace with SphereNet-20 and find that \"Feature Incay\" can help improve the performance of \"COCO\". e.g., \"RN + COCO\" improves \"COCO\" from 98.90% to 99.02%.\n\n", "Thanks a lot for your insightful comments.\n\n-1- Increasing the feature norm of correctly classified examples helps cross entropy, which is of course correct. However, it only decreases the training loss. How do we know it will not lead to overfitting?\n\nGood question. In our experiments, we don't find the feature incay will lead to overfitting. e.g., by considering feature incay, RN + Softmax decreases the training loss and improves the Softmax from 91.41% to 92.16% on the test set of CIFAR10. It remains an open problem to provide theoretical analysis about whether increasing the feature norm will lead to overfitting currently. \n\n\n-2- Increasing the feature norm of mis-classified examples will make gradient larger for self-correction. And the manuscript proves it in property 2. However, the proof seems not complete. In Eq (7), increasing the feature norm would also affect the value of the term in parenthesis. As an example, if a negative example is already mis-classified as a positive, and its current probability is very close to 1, then further increasing feature norm would make the probability even closer to 1, leading to saturation and smaller gradient.\n\nThanks for pointing this problem. The proof of property 2 is indeed complete. 
In your described case, increasing the feature norm will not lead to smaller gradients for both the weight vectors of the ground truth category and the wrongly predict category , which instead will have larger gradients. \n\nWe give the reasons below. For a mis-classified sample i with ground truth label y_i. It is true that \"When the mis-classified f_i has probability of class k(k!=y_i) close to 1, then increase the feature norm of f_i will make the probability of class k even closer to 1\", but this will not cause \"saturation and smaller gradient\" for all w_k and w_(y_i). According to Equation (7):\n\n(1) the gradient of w_(y_i) : when j=y_i, h(i)=1, P_j^i is close to 0, then (P_j^i-h(i)) is close to -1, so the gradients for the weight vector of ground truth category can be increased by increasing the norm of f_i; \n(2) the gradient of w_(k): when j=k, h(i)=0, as that P_k^i is close to 1, (P_k^i-h(i)) is close 1, the gradients for weight vector of the wrongly predict category can be increased by increasing the norm of f_i. \n(3) the gradients of other w_j: when j!=y_i && j!=k , (P_j^i-h(i)) is close to 0, thus the gradients is close to zero.\n \n\n\n-3- Figure 1 shows that examples with larger feature norm tend to be predicted well. However, it is not very convincing since it is only a correlation rather than causality. Let's use simple linear softmax regression as a sanity check, where features to softmax are real features rather than hidden units. Increasing the feature norm seems to be against the best practice of feature normalization in which each feature after normalization is of variance 1.\n\nThanks for pointing out this interesting problem. As we observe that the feature norm and the classification accuracy is positively related, and we investigate whether increasing the feature norm explicitly could improve the performance and find that the classification accuracy is improved with the feature incay. It also remains an open problem to provide theoretical analysis about whether it is correlation or causality currently.\nIncreasing the feature norm is not against the best practice of feature normalization. In fact, increasing the feature norm before normalization can also help improve the final performance, which is shown in Table 1 and stated in the last sentence of Section 4.2.(\"feature incay can even promote the A-softmax with normalized features by elongating the features before normalization\")\n\n\n\n-4- The manuscript states that the feature norm won't be infinitely increased since there is an upper bound. However, the proof of property 3 seems to only apply to the certain cases where K<2D. In addition, alpha is in the formula of upper bound, but what is the upper bound of alpha?\n\nThanks for pointing out this issue. Our property essentially is not limited to K<2D. We updated Property 3 for both K<2D and K>=2D case: \n\"...(2) to ensure the maximal intra-class distance is smaller than the minimal inter-class distance, the upper bound of feature norm is 3*alpha, especially when K < 2D, the upper bound in a tighter range of [(1 + sqrt(2))*alpha, 3*alpha]\". So 3*alpha is a general upper bound whether K<2D or K>=2D. Especially, when K<2D, we can formulate a tighter range for the upper bound.\n\nWhat's the upper bound of alpha is an interesting problem, but it is not our current interest. The main point of Property 3 lies in that the ratio of the upper bound beta to the lower bound alpha is bounded: beta/alpha <= 3. 
\n\n-5- For proof of property 3, it seems that alpha and beta are used before defined. Are they the radius of two circles?\n\nYes, they are the radius of the two circles. ", "Thanks for your comments. \n\n-1- The presentation of this paper need to be largely improved.\nWe have improved the presentation of our paper and updated the pdf files according to your advice. \n\n-2- Figure 3 seems to be not relevant to Property 2 and may be show the feature norm is lower when the samples is hard example. \nActually, Figure 3 is relevant to Property 2. We revised the description and re-plot Figure 3 in the paper to make their relation much clearer and avoid the possible misunderstanding.\n\nWe provide a short explanation below. The purpose of Figure 3 is to show that the mis-classified examples(we can also call them \"hard examples\") tend to be of small feature norm, which has been re-plot based on your advice. Property 2 is proposed to state that we need to increase the feature norm of mis-classified examples(tend to with small feature norm), which makes larger gradient and helps correcting the mis-classified examples.\nEspecially, the fifth column in Table 5 shows that by increasing the feature norm of mis-classified examples, the \"RN + Softmax\" correctly classifies 336 examples that are mis-classified by \"Softmax\". \n\n-3- The author used reciprocal norm loss which increases feature norm as shown in Figure 4. However, both Figures are not explained in the main text, and thus hard to understand the relation of Figure 3 and 4.\nThanks for pointing out this problem. We now added the explanations in the main text. Figure 4 is used to show the Reciprocal Norm Loss can result in more intra-class compactness by increasing the small feature norm faster than the large ones. Figure 3 is not related to Figure 4, and it is about Property 2. \n \n-4- Large-margin Soft max in Figure 2 is not explained in the introduction section. \nThanks for pointing out this problem. We provided the explanation of Large-margin Softmax loss in the first paragraph of Section 2 (Related work). We will put it to the introduction section if it is necessary.\n\n-5- In Eq.(7), P_j^I is not defined. \nWe have added the definition of P_j^i in Eq.(7) in the updated paper. In fact, we have also defined P_j^i in Property 4.\n\n-6- In the Property 3, The author wrote “ where r is lower bound of feature norm”. However, r is not used.\nThanks for pointing out this problem, which is a typo. \"r\" should be replaced with \"alpha\".\n\n-7- In the experimental results, “RN” is not defined.\n \"RN\" refers to the feature incay with form of Reciprocal Norm. We have added that \"... RN(Reciprocal Norm loss) plus the baseline method. e.g., RN + Softmax means combining the feature incay with Softmax loss.\" in the updated paper.\n\n-8- In the Table 3, the order of \\lambda should be increasing or decreasing order. \nWe have resorted it in decreasing order.\n\n-9- Table 5 is not referred in the main text. \nThanks for pointing out this problem. We have discussed the results in Table 5 in the first paragraph in Section 4.5 and added the reference to Table 5 in the updated paper.\n" ]
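A minimal PyTorch-style sketch of what an "RN + Softmax" objective, as discussed above, could look like: softmax cross-entropy plus a reciprocal-norm penalty on the features feeding the final classifier, weighted by a coefficient lambda. The exact functional form of the penalty (1/||f||^2 here), the toy architecture, and all names are illustrative assumptions; only the general idea of adding a term that rewards larger feature norms comes from the paper and the responses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IncayClassifier(nn.Module):
    """Toy model: a feature extractor followed by a linear softmax classifier."""
    def __init__(self, in_dim=784, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      nn.Linear(256, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)          # features whose norm the regularizer acts on
        return self.classifier(f), f

def incay_loss(logits, features, targets, lam=1e-3, eps=1e-8):
    """Softmax cross-entropy plus a reciprocal-norm 'feature incay' term.
    Minimizing 1/||f||^2 encourages larger feature norms (assumed form)."""
    ce = F.cross_entropy(logits, targets)
    rn = (1.0 / (features.pow(2).sum(dim=1) + eps)).mean()
    return ce + lam * rn

model = IncayClassifier()
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
logits, feats = model(x)
loss = incay_loss(logits, feats, y)
loss.backward()                        # weight decay would still be applied by the optimizer
print(float(loss))
```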
[ 6, 6, 6, -1, -1, -1 ]
[ 3, 2, 4, -1, -1, -1 ]
[ "iclr_2018_ryALZdAT-", "iclr_2018_ryALZdAT-", "iclr_2018_ryALZdAT-", "BkEcWHKlf", "ryRBHPFxz", "SkNxPOYlf" ]
iclr_2018_HkmaTz-0W
Visualizing the Loss Landscape of Neural Nets
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
workshop-papers
This work proposes an improved visualisation technique for ReLU networks that compensates for filter scale symmetries/invariances, thus allowing a more meaningful comparison of low-dimensional projected optimization landscapes between different network architectures. - The visualisation techniques are a small variation over previous works. + Extensive experiments provide nice visualisations and yield a clearer visual picture of some properties of the optimization landscape of various architectural variants. A promising research direction, which could be further improved by providing more extensive and convincing support for the significance of its contribution in comparison to prior techniques, and to its empirically derived observations, findings and claims.
train
[ "SkB16fKxf", "ByfiU65gf", "S1O4Hinlf", "rJkCfR2Gz", "SJ6ZPA3zz", "SJJJIAhzz", "BJwKFTtMM", "r14kg0WfG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "This paper provides visualizations of different deep network loss surfaces using 2D contour plots, both at minima and along optimization trajectories. They mention some subtle details that must be taken into account, such as scaling the plot axes by the filter magnitudes, in order to obtain correctly scaled plots. \n\nOverall, I think there is potential with this work but it feels preliminary. The visualizations are interesting and provide some general intuition, but they don't yield any clear novel insights that could be used in practice. Also, several parts of the paper spend too much time on describing other work or on implementation details which could be moved to the appendix.\n\nGeneral Comments:\n- I think Sections 2, 3, 4 are too long, we only start getting to the results section at the end of page 4. I suggest shortening Section 2, and it should be possible to combine Sections 3 and 4 into a page at most. 1D interpolations and 2D contour plots can be described in a few sentences each. \n- I think Section 5 can be put in the Appendix - it's essentially an illustration of why the weight scaling is important. Once these details are done correctly, the experiments support the relatively well-accepted hypothesis that flat minima generalize better. \n- The plots in Section 6 are interesting, it would be nice if the authors had an explanation of why the loss surface changes the way it does when skip connections are added. \n- In Section 7, it's less useful to spend time describing what happens when the visualization is done wrong (i.e. projecting along random directions rather than PCA vectors) - this can be put in the Appendix. I would suggest just including the visualizations of the optimization trajectories which are done correctly and focus on deriving interesting/useful conclusions from them. ", "The main concern of this submission is the novelty. Proposed method to visualize the loss function sounds too incremental from existing works. One of the main distinctions is using filter-wise normalization, but it is somehow trivial. In experiments, no comparisons against existing works is performed (at least on toy/controlled environments). Some findings in this submission indeed look interesting, but it is not clear if those results are something difficult to find with other existing standard ways, or even how reliable they are since the effectiveness has not been evaluated. \n\nMinor comments: \nIn introduction, parameter with zero training error doesn't mean it's a global minimizer\nIn section 2, it is not clear that visualizing loss function is helpful in see the reasons of generalization given minima. \nIn figure 2, why do we have solutions at 0 for small batch size and 1 for large batch size case? (why should they be different?)\n", "\n* In the \"flat vs sharp\" dilemma, the experiments display that the dilemma, if any, is subtle. Table 1 does not necessarily contradict this view. It would be a good idea to put the test results directly on Fig. 4 as it does not ease reading currently (and postpone ResNet-56 in the appendix).\n\nHow was Figure 5 computed ? It is said that *a* random direction was used from each minimiser to plot the loss, so how the 2D directions obtained ?\n\n* On the convexity vs non-convexity (Sec. 6), it is interesting to see how pushing the Id through the net changes the look of the loss for deep nets. 
The difference VGG - ResNets is also interesting, but it would have been interesting to see how this affects the current state of the art in understanding deep learning, something that was done for the \"flat vs sharp\" dilemma, but is lacking here. For example, does this observation that the local curvature of the loss around minima is different for ResNets and VGG allows to interpret the difference in their performances ?\n\n* On optimisation paths, the choice of PCA directions is wise compared to random projections, and results are nice as plotted. There is however a phenomenon I would have liked to be discussed, the fact that the leading eigenvector captures so much variability, which perhaps signals that optimisation happens in a very low dimensional subspace for the experiments carried, and could be useful for optimisation algorithms (you trade dimension d for a much smaller \"effective\" d', you only have to figure out a generating system for this subspace and carry out optimisation inside). Can this be related to the \"flat vs sharp\" dilemma ? I would suppose that flatness tends to increase the variability captured by leading eigenvectors ?\n\n\nTypoes:\n\nLegend of Figure 2: red lines are error -> red lines are accuracy\nTable 1: test accuracy -> test error\nBefore 6.2: architecture effects -> architecture affects", "We thank the reviewer for the valuable feedback. The main contribution of this paper is not the filter normalization scheme itself, but rather the first thorough empirical investigation of neural loss functions. While the filter normalization scheme is also a contribution, it is merely a means to an end; it enables us to plot different loss functions and minimizers on a normalized scale so they can be compared side-by-side. Loss function visualizations reveal a number of important things that have not been observed in the literature. This includes the transition between smooth and chaotic loss landscapes with increased network depth, the important role that these qualitative differences play in generalization error, and the dramatic effect of skip connections of loss function structure. We have added a section on our contributions (Section 1.1) to help the reader navigate this paper.\n\nQ1: “One of the main distinctions is using filter-wise normalization, but it is somehow trivial. In experiments, no comparisons against existing works is performed...\"\nA: We make fairly extensive comparisons against an existing and commonly used method (linear interpolation). We feel that Section 5 makes a convincing argument for why filter normalization advances the state of the art; without it, one cannot make meaningful sharpness vs flatness comparisons between different minima. This is demonstrated in Fig 2 and Table 1, which show that side-by-side comparisons of minima are not meaningful when linear interpolation (the current STOA) is used, and Fig. 4 which shows that filter normalization makes sharpness correlate with generalization error. We validate this using network architectures, different optimizers, and different optimization parameters (batch size and weight decay). \n\nWhile the filter normalization scheme is indeed quite simple (which we view as a merit), it yields a nontrivial improvement over existing methods. We think this observation is significant because of the pervasive use of linear interpolation methods to visualize sharpness, which we show to produce misleading results due to the scaling effect. 
\n\nFinally, we note that filter normalization is only one of many contributions of this paper . Please see Section 1.1 in the new draft, which lists our contributions. \n\nQ2: “it is not clear if those results are something difficult to find with other existing standard ways, or even how reliable they are since the effectiveness has not been evaluated.”\nA: Section 5 shows that loss surfaces cannot be compared meaningfully without filter normalization, and that loss surface sharpness with filter normalization correlates with generalization error for a range of different architectures and training methods. Also, this is the first article to present high resolution visualizations of loss functions that reveal the dramatic qualitative differences between network architectures. We think this is a major contribution of the paper, and the significance of this result does not depend on the novelty of filter normalization (which is merely a tool for making side-by-side comparisons of sharpness between different plots). \n\nQ3: “In introduction, parameter with zero training error doesn't mean it's a global minimizer”\nA: Thanks for pointing out the typo. We mean zero training “loss” not “error’’. Since cross-entropy loss is non-negative, any zero loss minimizer is a global minimizer.\n\nQ4: “In section 2, it is not clear that visualizing loss function is helpful in see the reasons of generalization given minima.“\nA: Section 2 is meant to review theoretical results on the structure of loss functions. Later in the paper, we investigate two ways in which loss characteristics affect generalization, and both of these characteristics are easily explored via visualization. In Section 5, we show that the sharpness of filter-normalized plots correlates with generalization error. In Section 6, we also show that chaotic loss landscape geometry also results in poor generalization. We add Section 6.5 which discusses how loss function geometry effects initialization, and reasons why it is not possible to train neural networks effectively once loss landscapes get sufficiently chaotic. \n\nQ5: “In figure 2, why do we have solutions at 0 for small batch size and 1 for large batch size case? (why should they be different?)”\nA: We use the same setting as Keskar et. al, 2017, which compare the small/large-batch solutions using the linear interpolation method. Given two solutions trained with different batch size, \\theta_s and \\theta_l, we can linearly interpolate them using the formula (1-\\alpha)*\\theta_s + \\alpha*\\theta_l. For each value of \\alpha, we compute the loss function for the corresponding interpolated parameters. The plots in Figure 2 have \\alpha on the x-axis . When \\alpha=0, this is the loss of \\theta_s, and when \\alpha=1, this is the loss of \\theta_l. ", "We thank the reviewer for the kind feedback and constructive suggestions. We agree with the reviewer that some sections could be shortened or moved to appendix and more efforts should be focused on the interpretation of results. We have made major revisions to the paper to address these issues. In particular, we shortened the first 3 sections of the paper, and we added several discussions into Section 6 that specifically address ramifications of our findings, and how loss landscape geometry effects trainability and generalization error.\n\nFinally, please see Section 1.1 of the new draft, which lists our contributions. 
We think there are a number of new discoveries in this paper (in particular our realizations about the transition between convex and chaotic landscapes) that the reviewer may have overlooked. We have done a lot of writing to change the focus of our paper to analyze in detail the observations we make about our visualizations. \n\nThe reviewer also seems concerned that this paper is overly long. This is largely due to the number of figures. In fact, we have nearly 4 pages of figures (and about 8 pages of text, which is on par with the suggested length). This is a paper on visualization methods, and as a result it’s hard to chop down on these space consuming figures without losing important content.\n\nWe answer a few of the reviewer’s direct questions below.\n\nQ1: I think Sections 2, 3, 4 are too long, we only start getting to the results section at the end of page 4. I suggest shortening Section 2, and it should be possible to combine Sections 3 and 4 into a page at most. 1D interpolations and 2D contour plots can be described in a few sentences each. \nA: We have shortened Sections 3 and 4 to 1.5 pages total. We think there is some important discussion to be had here, in particular to justify the reasoning for the filter normalization. Unfortunately, not all readers will be familiar with issues like scale invariance, batch normalization, and various plotting methods. These methods form the foundation for the paper, so we don’t want to gloss over these too lightly.\n\nQ2: I think Section 5 can be put in the Appendix - it's essentially an illustration of why the weight scaling is important. Once these details are done correctly, the experiments support the relatively well-accepted hypothesis that flat minima generalize better. \nA: There are two reasons for including Section 5: First, we reveal that much of the work documenting that flat minimizers are better is actually false of misleading. Several other authors have noted this, and some even claim to have refuted the sharp vs flat hypothesis (see Dinh, 2017, “Sharp Minima Can Generalize for Deep Nets”).\n\nSecond, it is important to validate that filter normalization produces plots with sharpness that actually correlates with generalization error. Without Section 5, there would be no validation of the accuracy of our method for comparing loss functions.\n", "We thank the reviewer for the valuable feedback and constructive suggestions. Here are our thoughts on the comments:\n\nQ1: “In the \"flat vs sharp\" dilemma, the experiments display that the dilemma, if any, is subtle. Table 1 does not necessarily contradict this view”.\nA: The purpose of Table 1 is to contradict the notion that 1D linear interpolation is a meaningful view of sharpness/flatness. Please examine Fig. 2 in our paper. The top two figures show small batches producing flatter minimizers and Table 1 shows that flat minimizers produce good generalization. However, the bottom 2 figures (with weight decay) reverse this result, in which the large batch solutions produce “flatter minimizers,” even though these minimizers have worse generalization error than the “sharp” looking small-batch minimizers. In other words, the apparent sharpness/flatness of 1D interpolation is easily manipulated, and does not correspond to generalization.\n\nThe problem we have revealed is that 1D linear interpolation is predominantly visualizing the scale of the weights rather than the endogenous sharpness/flatness. 
We show that the filter-normalized view is a more reliable way to make visual comparisons of the sharpness among minima. With filter normalization, flatness of the resulting visualizations corresponds to increased generalization ability. \n\nFinally, we note that the differences in sharpness/flatness in Figure 4 are indeed subtle. We view this as one of our contributions: previous work using 1D interpolation has depicted these differences as being extremely dramatic, but we show that these dramatic differences are largely a distortion caused by differences in weight scaling.\n\nWe have revised section 5 to make our contributions, and the purpose of the figures, more clear.\n\nQ2: “It would be a good idea to put the test results directly on Fig. 4 as it does not ease reading currently”\nA: Great idea - we agree and we have added the test errors under each subfigure for easier comparison.\n\nQ3: “How was Figure 5 computed ? It is said that *a* random direction was used from each minimiser to plot the loss, so how the 2D directions obtained ?” \nA: To plot the 2D contours, we choose two random directions (say, a and b) and normalize them at the filter level. This means that, for each convolutional filter in the network, the corresponding entries in “a” contain a random (Gaussian) vector with the same dimensions and the same norm as that filter. For each point (\\alpha, \\beta) in the figure, we calculate the loss value L(\\theta + \\alpha * a + \\beta * b). We described the method of plotting 2D contours in section 3, and we will clarify it in section 5. We have also added equation (1) in the new draft, which clarifies how these plots are made.\n\nQ4: “it would have been interesting to see how this affects the current state of the art in understanding deep learning, something that was done for the \"flat vs sharp\" dilemma...”\nA: We think the observations in Section 6 say a lot about why certain networks perform better than others. The local curvature around minima is very helpful in interpreting/explaining the performance difference between ResNets and VGG-like networks. \n The 2D plots in Section 6 go beyond sharp vs flat, and reveal another important phenomenon that seems to have gone unnoticed in the literature; as network depth increases, loss landscapes suddenly transition from being smooth and dominated by nearly-convex regions, to being chaotic and highly non-convex. Interestingly, the neural nets with smooth convex-like landscapes have low generalization error, whereas the chaotic landscapes yield high error. We can improve the generalization of deep nets by taking measures to convexify loss landscape. Skip connections preserve smoothness for deeper networks, and we see that these convex-like landscapes produce low error (Table 2). Another approach is to widen the network, which also preserves smoothness for deeper networks. \n\nTo address the issue raised by the reviewer, we have re-written Sections 6.2-6.4, and added Sections 6.5 and 6.6, which discuss the issues of generalization error and trainability. We will also label the plots in Figure 6 to show the generalization error.\n\nQ5: “Can the fact that the leading eigenvector captures so much variability be related to the \"flat vs sharp\" dilemma ?”\nA: Good question. One striking thing that we can observe with confidence is that the high amount of variability captured using only 2 dimensions (sometimes as high a 90% for both dimensions combines) indicates that optimization trajectories lie in a very low dimensional space. 
This could be because well-behaved loss landscapes have large, flat, nearly-convex structures, and iterates move predominantly in the direction towards the nearest minimizer. To address the reviewer's question, we have added a discussion of this at the end of Section 7.2.\n", "Thanks for your interest in our work and your efforts to reproduce the results! We really appreciate the detailed comments and suggestions. We will add more details in our next version soon; here we would like to clarify some missing details: \n\nQ1: “we believe it would have been useful to have known the computing resources used to train the nets and generate the figures as well as an estimate of the total time to do so.”\nA: We will add descriptions about the computing resources and the estimated time. Our PyTorch code can be executed on a multi-GPU workstation as well as an HPC with hundreds of GPUs using mpi4py. The computation time depends on the model’s inference speed on the training set, the resolution of the plots and the number of GPUs. For example, a 2D contour of the ResNet-56 model with a (relatively low) resolution of 51x51 will take about 1 hour on a workstation with 4 GPUs (Titan X Pascal or 1080 Ti). A much higher resolution version could take 4 days on 64 GPUs.\n\nQ2: “whether or not the weights stored in the batch normalization layer should be taken into account when generating the high dimensional gaussian vectors and if they should be used in the proposed filter-wise normalisation technique”\nA: In the 1D linear interpolation methods, the BN parameters including the “running mean” and “running variance” need to be considered as part of \theta. If these parameters are not considered, then it is not possible to reproduce the loss accurately for both minimizers. In the filter-normalized visualization, the random direction applies to all weights but not the weights in BN. Note that the filter normalization process removes the effect of weight scaling, and so the batch normalization can be ignored.\n\nQ3: The VGG-9 architecture details and parameters for Adam\nA: VGG-9 is a cropped version of VGG-16, which keeps the first 7 Conv layers in VGG-16 with 2 FC layers. A BN layer is added after each conv layer and the first FC layer. A detailed description of the VGG-9 architecture can also be found in https://arxiv.org/pdf/1706.02379.pdf. We find VGG-9 is an efficient network with better performance compared to VGG-16 on CIFAR-10. We use the default values for betas and epsilon in Adam (http://pytorch.org/docs/master/optim.html#torch.optim.Adam) with the same learning rate schedule as used in SGD.\n\nQ4: “In the description of the filter-wise normalization formula of section 4, “d_{f}” represents the i_th filter of d—shouldn’t it be “d_{i}”? Additionally, is ||d_{i}|| to be interpreted as the Frobenius norm as well?”\nA: Thanks for pointing out the typos. Yes, “d_{f}” should be “d_{i}”. We will correct them in the updated version. Here ||d_{i}|| is calculated with the Frobenius norm.\n\nQ5: “In Figure 3, the number of bins used to generate the histogram was not specified as well as whether or not we should count the weights from the batch normalization layer(s). If given the total number of weights (i.e |𝜭|), this could have given us an insight on how to better generate / replicate the graphs.”\nA: We use 100 bins for all histograms in Figure 3. The histogram does count the weights from BN layers. 
Since the number of BN parameters is very small in comparison with the total number of weights, it may not significantly change the shape of the histogram if the BN weights are not counted. Please refer to our answer to Q2.\n\nQ6: “We were unsure of what the x-axis is representing in figure 4, as it isn’t labelled. We assumed it represented alpha, the parameter used to scale the random high-dimensional gaussian vector , and a vector approach to generate other graphs with which we evaluate the accuracy. Similarly, we assumed the same for Figure 5 on both the x and y axes.”\nA: Yes, the axis of Figure 4 is alpha. The x and y axes of Figure 5 are alpha and beta, which are the step sizes for the two random directions. \n\nQ7: “We didn’t attempt to re-implement “Wide” ResNets as they require fine tuning on the ImageNet dataset which we were unable to train on due to space and computing constraints. ”\nA: The “Wide” ResNets are originally designed for ImageNet but we trained them from scratch on CIFAR-10.\n\nQ8: “It is not specified whether training or testing losses were used to generate the graphs in figure 5.”\nA: All the contours are training losses; it would be interesting to draw test contours. However, the loss surface being optimized by SGD is the training loss, not the test loss, and so this is what we visualized.\n\nQ9: “For Figures 2,3,4,5,6,7,8, the step-sizes for the alpha and beta values of the different gaussian vectors to generate the mesh grid is not specified.”\nA: We will add the details of the resolutions used in each contour. The default resolution used for the 2D contours in Figures 5 and 6 is 51x51. We use higher resolutions (251x251) for the ResNet-56-noshort used in Figure 1 to show more details. The resolution for Figure 4 is 401.", "We attempted to re-implement a subset of the results of this paper as part of the ICLR 2018 Reproducibility Challenge (http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html) in order to report on the feasibility of reproducing the authors’ findings. The authors described three different sets of experiments (small vs. large batch minimizer sharpness, convexity structure of loss surfaces, and optimizer trajectory visualization).\nThe following is a breakdown of our approach and some comments regarding each of the sections we attempted to re-implement. Our approach was mainly limited by external factors such as time constraints and memory/machine requirements. The reproduction of this paper would have been more accurate and less time-consuming had we been provided the code used to generate the various figures and underlying network architectures. Moreover, we believe it would have been useful to have known the computing resources used to train the nets and generate the figures as well as an estimate of the total time to do so.\n\nSECTION 5: We trained a 9-layer VGG based off of the previous work referenced in the paper. We were unable to use a large batch-size of 8192 and had to reduce to a batch size of 512, which most likely impacted our results. We used all other hyperparameters as specified by the authors. We did not note a particularly large difference between the sharpness/flatness of minima in the regular linear-interpolation vs. filter-normalized graphs, but this likely had to do with the significantly smaller large-batch size we ended up using. 
Moreover, although stating that the VGG9 network used to pilot the experiments had batch regularization, it is not clear where in the network these layers should be used (i.e after every convolution layer? Every layer?) and whether or not the weights stored in the batch normalization layer should be taken into account when generating the high dimensional gaussian vectors and if they should be used in the proposed filter-wise normalisation technique. Finally, the architecture of the VGG9 network doesn’t seem to be part of one of predefined reference models used by the authors who create them. As such, we had to make guesses as to how many convolution layers should be used before max pooling the their outputs. We also assumed the proposed default settings for the Adam optimizer for beta 1 and 2 as well as for epsilon from the original paper.\nWe noted a few minor typos, omissions, or otherwise unclear aspects that somewhat hindered our attempt.\n- In the description of the filter-wise normalization formula of section 4, “d_{f}” represents the i_th filter of d—shouldn’t it be “d_{i}”? Additionally, is ||d_{i}|| to be interpreted as the Frobenius norm as well?\n- There are some ambiguities regarding various figure captions and axis labels. The caption of figure 2 states the red lines represent error rather than accuracy. Similarly, the caption of table 1 describes its contents as accuracy rather than error.\n- \tIn Figure 3, the number of bins used to generate the histogram was not specified as well as whether or not we should count the weights from the batch normalization layer(s). If given the total number of weights (i.e |𝜭|), this could have given us an insight on how to better generate / replicate the graphs.\n- We were unsure of what the x-axis is representing in figure 4, as it isn’t labelled. We assumed it represented alpha, the parameter used to scale the random high-dimensional gaussian vector , and a vector approach to generate other graphs with which we evaluate the accuracy. Similarly, we assumed the same for Figure 5 on both the x and y axes.\n \nSECTION 6: We only re-implemented a portion of the experiments in this section, since we were limited in time and computing power. We didn’t attempt to re-implement “Wide” ResNets as they require fine tuning on the ImageNet dataset which we were unable to train on due to space and computing constraints. We attempted to use the original caffe model for normal CIFAR-10 ResNet but it proved incompatible with our hardware and thus resorted to the official Tensorflow implementation which we were able to train for sizes 20, 56, 110. Unfortunately extracting loss contour plots out of these off-the-shelf models proved to be difficult and required lengthy computations so we chose to prioritize the VGG visualizations.\nSome general observations:\n- It is not specified whether training or testing losses were used to generate the graphs in figure 5.\n-\tFor Figures 2,3,4,5,6,7,8, the step-sizes for the alpha and beta values of the different gaussian vectors to generate the mesh grid is not specified. This could, in turn, affect the resolution, or granularity, of our generated figures.\n" ]
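The responses above describe the plotting recipe: draw random directions, rescale them filter-wise to the Frobenius norm of the corresponding weights, and evaluate the loss on an (alpha, beta) grid around the trained parameters. The snippet below is only an illustrative approximation, not the authors' released code: it normalizes per weight tensor rather than per convolutional filter, and the toy quadratic loss and all variable names are placeholders standing in for a real network and its training loss.

```python
# Illustrative sketch (not the authors' code) of filter-normalized loss surface plotting.
import numpy as np

def normalize_direction(direction, theta):
    # Rescale each random tensor to the Frobenius norm of the matching weights.
    # (The paper normalizes per convolutional filter; per-tensor is a simplification.)
    return [d * (np.linalg.norm(w) / (np.linalg.norm(d) + 1e-10))
            for d, w in zip(direction, theta)]

def loss_surface(loss_fn, theta, alphas, betas, rng):
    a = normalize_direction([rng.standard_normal(w.shape) for w in theta], theta)
    b = normalize_direction([rng.standard_normal(w.shape) for w in theta], theta)
    grid = np.zeros((len(alphas), len(betas)))
    for i, al in enumerate(alphas):
        for j, be in enumerate(betas):
            perturbed = [w + al * da + be * db for w, da, db in zip(theta, a, b)]
            grid[i, j] = loss_fn(perturbed)   # L(theta + alpha*a + beta*b)
    return grid

# Toy stand-in for a trained network and its training loss (a quadratic bowl).
rng = np.random.default_rng(0)
theta_star = [rng.standard_normal((3, 3)) for _ in range(4)]
toy_loss = lambda ps: sum(np.sum((p - t) ** 2) for p, t in zip(ps, theta_star))
grid = loss_surface(toy_loss, theta_star, np.linspace(-1, 1, 51), np.linspace(-1, 1, 51), rng)
print(grid.shape, grid.min())   # 51x51 grid; minimum at the center (alpha = beta = 0)
```

A real run would substitute the network's parameters for theta_star and a function evaluating the training loss over the dataset for toy_loss, which is where the multi-GPU cost reported above comes from.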
[ 5, 4, 5, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkmaTz-0W", "iclr_2018_HkmaTz-0W", "iclr_2018_HkmaTz-0W", "ByfiU65gf", "SkB16fKxf", "S1O4Hinlf", "r14kg0WfG", "iclr_2018_HkmaTz-0W" ]
iclr_2018_SJTB5GZCb
Extending the Framework of Equilibrium Propagation to General Dynamics
The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists. Two major reasons are that neurons would need to send two different types of signal in the forward and backward phases, and that pairs of neurons would need to communicate through symmetric bidirectional connections. We present a simple two-phase learning procedure for fixed point recurrent networks that addresses both these issues. In our model, neurons perform leaky integration and synaptic weights are updated through a local mechanism. Our learning method extends the framework of Equilibrium Propagation to general dynamics, relaxing the requirement of an energy function. As a consequence of this generalization, the algorithm does not compute the true gradient of the objective function, but rather approximates it at a precision which is proven to be directly related to the degree of symmetry of the feedforward and feedback weights. We show experimentally that the intrinsic properties of the system lead to alignment of the feedforward and feedback weights, and that our algorithm optimizes the objective function.
workshop-papers
+ interesting novel extension of equilibrium propagation, as a biologically more plausible alternative to backpropagation, with encouraging initial experimental validation. - currently lacks theoretical guarantees regarding convergence of the algorithm to a meaningful result - experimental study should be more extensive to support the claims
test
[ "rJNtVBtgM", "ryzHMf9gz", "HJqm1ilzG", "SJcRZdpXz", "rkyMfdaQz", "BJnjZdp7G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The manuscript discusses a learning algorithm that is based on the equilibrium propagation method, which can be applied to networks with asymmetric connections. This extension is interesting, but the results seem to be incomplete and missing necessary additional analyses. Therefore, I do not recommend acceptance of the manuscript in its current form. The main issues are:\n\n1) The theoretical result is incomplete since it fails to show that the algorithm converges to a meaningful learning result. Also the experimental results do not sufficiently justify the claims.\n\n2) The paper further makes statements about the performance and biological plausibility of the proposed algorithm that do not hold without additional justification.\n\n3) The paper does not sufficiently discuss and compare the relevant neuroscience literature and related work.\n\nDetails to major points:\n\n1) The presentation of the theoretical results is misleading. Theorem 1 shows that the proposed neuron dynamics has a fixed point that coincides with a local minimum of the objective function if the weights are symmetric. However, this was already clear from the original equilibrium propagation paper. The interesting question is whether the proposed algorithm automatically converges to the condition of symmetric weights, which is left unanswered. In Figure 3 experimental evidence is provided, but the results are not convincing given that the weight alignment only improves by ~1° throughout learning (compared to >45° in Lillicrap et al., 2017). It is even unclear to me if this effect is statistically significant. How many trials did the authors average over here? The authors should provide standard statistical significance measures for this plot. Since no complete theoretical guarantees are provided, a much broader experimental study would be necessary to justify the claims made in the paper.\n\n2) Throughout the paper it is claimed that the proposed learning algorithm is biologically plausible. However, this argument is also not sufficiently justified. Most importantly, it is unclear how the proposed algorithm would behave in a biologically realistic recurrent networks and it is unclear how the different learning phases should be realized in the brain.\n\nNeural networks in the brain are abundantly recurrent. Even in the layered structure of the neocortex one finds dense lateral connectivity between neurons on each layer. It is not clear to me how the proposed algorithm could be applied to such networks. In a recurrent network, rolled-out over time, information would need to be passed forward and backwards in time. The proposed algorithm does not seem to provide a solution to this temporal credit assignment problem. Also in the experiments the algorithm is applied only to feedforward architectures. What would happen if recurrent networks were used to learn temporal tasks like TIMIT? Please discuss.\n\nIn the discussion on page 8 the authors further argue that the learning phases of the proposed algorithm could be implemented in the cortex through theta waves that modulate long-term plasticity. To support this theory the authors cite the results from Orr et al., 2001, where hippocampal place cells in behaving rats were studied. To my knowledge there is no consensus on the precise nature of this modulation of plasticity. E.g. in Wyble et al. 2003, it was observed that application of learning protocols at different phases of theta waves actually leads to a sign change in learning, i.e. 
long term potentiation was modulated to depression. It seems to me that the algorithm is not compatible with these other experimental findings, since gradients only point in the correct direction towards the final phase and any non-zero learning rate in other phases would therefore perturb learning. Did the authors try non-optimal learning rate schedules in the experiments (including sign change etc.) to test the robustness of the proposed algorithm? Also to my knowledge, the modulatory effect of theta rhythms has so far only been described in the CA1 region of rodent hippocampus which is a very specialized region of the brain (see Hanslmayr et al., 2016, for a review and a modern hypothesis on the role of theta rhythms in the brain).\n\nFurthermore, the discussion of the possible implementation of the learning algorithm in analog hardware on page 8 is missing an explanation of how the different learning phases of the algorithm are controlled on the chip. One of the advantages of analog hardware is that it does not require global clocking, unlike classical digital hardware, which is expensive in wiring and energy requirement. It seems to me that this advantage would disappear if the algorithm was brought to an analog chip, since global information about the learning phase has to be communicated to each synapse. Is there an alternative to a global wiring scheme to convey this information throughout the whole chip? Please discuss this in more depth.\n\n3) The authors apply the learning algorithm only to the MNIST dataset, which is a relatively simple task. Similar results were also achieved using random feedback alignment (Lillicrap et al., 2017). Also, the evolutionary strategies method (Salimans et al., 2017), was recently used for learning deep networks and applied to complex reinforcement learning problems and could likewise also be applied to simple classification tasks. Both these methods are arguably as simple and biologically plausible as the proposed algorithm. It would be good to try other standard benchmark tasks and report and compare the performance there. Furthermore, the paper is missing a broader “related work” section that discusses approaches for biologically plausible learning rules for deep neural architectures.\n\n\nMinor points:\n\nThe proposed algorithm uses different learning rates that shrink exponentially with the layer number. Have the authors explored whether the algorithm works for really deep architectures with several tens of layers? It seems to me that the used learning rate heuristic may hinder scalability of equilibrium propagation.\n\nOn page 5 the authors write: \"However we observe experimentally that the dynamics almost always converges.\" This needs to be quantified. Did the authors find that the algorithm is very sensitive to initial conditions?\n\n\nReferences:\n\nBradley P. Wyble, Vikas Goyal, Christina A. Rossi, and Michael E. Hasselmo. Stimulation in Hippocampal Region CA1 in Behaving Rats Yields Long-Term Potentiation when Delivered to the Peak of Theta and Long-Term Depression when Delivered to the Trough James M. Hyman. Journal of Neuroscience. 2003.\n\nSimon Hanslmayr, Bernhard P. Staresina, and Howard Bowman. Oscillations and Episodic Memory: Addressing the Synchronization/Desynchronization Conundrum. Trends in Neurosciences. 2016.\n\nTim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. Arxiv. 
2017.\n", "tl;dr: The paper extends equilibrium propagation to recurrent networks, but doesn't test the algorithm on a dataset requiring a recurrent architecture. \n\nThe experimental results are extremely weak, just for MNIST. There are two problems with this. Firstly, the usual issues with MNIST being too easy, idiosyncratic in many ways, and over-studied. It is a good sanity check but not enough for an ICLR paper. Secondly, and more importantly, MNIST does not require a recurrent architecture. Applying an RNN to MNIST (as opposed to, say, permuted MNIST) is a strange thing to do. The authors should investigate datasets with sequential structure. There *tons* of examples in audio, language, etc. \n\nAs a consequence of the extremely limited experiments, it is difficult to know how much to trust the papers claims (top of page 5, top of page 7, near the end of page 8) about the algorithm optimizing the objective “experimentally”. Yes, it does so for MNIST. What about in more difficult cases?\n\nDetailed comments:\n“We use different learning rates for the different layers in our experiments. We do not have a clear explanation for why this improves performance ...” Introducing an additional hyperparameter per layer is a major drawback of the approach.\n", "The paper proposes a new learning algorithm for learning neural networks that may be biologically plausible. The paper builds upon the paper by Scellier & Bengio but doesn't assume symmetric weights. I couldn't judge how solid the biological plausibility argument but my understanding is that there is no universal agreement in neuroscience about it so I would tend to be open to most of the suggestions. As a non-expert in this field, I found this result of this paper pretty interesting, given experimentally the algorithm does work well for MNIST (which is already interesting to me, given the limited progress in this area).\n\n \n\n", "Thanks for reviewing our submission.\n\nOur algorithm does not apply to sequential data. As explained throughout the paper, we are interested in the standard supervised setting (predicting y given x).\n\nOur algorithm is not an extension of equilibrium propagation to RNN - the original algorithm of equilibrium propagation is already a recurrent model (with fixed input). Recurrent model does not necessarily mean sequential input data - biological networks are recurrent networks and they work in a recurrent manner even when presented with fixed input signals.\n\nOur contribution is not to beat a benchmark on standard machine learning datasets, but to propose a learning algorithm similar in spirit to backpropagation and more faithful to current knowledge in neuroscience.\n", "We thank the reviewer for their thorough review. \n\nWe agree with each of the points 1, 2, and 3.\n\n1) The plot corresponds to a single trial (but different trials typically always show the same curve). We agree that the alignment effect is not very convincing. However, further experiments which we have carried out in the meantime show that the objective function J always decreases, even when the weights are totally misaligned (e.g. 
when one initializes the feedback weights W_ji to be the opposite of the feedforward weights W_ij, that is W_ji = -W_ij)\n\n2) Regarding the objection concerning the abundance of lateral connections in biological networks, note that our theory also applies to neural architectures that include lateral connections (although for simplicity of presentation we have considered the case of multi-layer networks with neither lateral nor skip-layer connections).\n\nOur algorithm only applies to the standard supervised scenario where one predicts y given x. It is unclear how to extend the theory to sequential data.\n\nRegarding the possible implementation on analog circuits, the way we conceive it is that global clocking for switching phases would be done digitally. Only the phases themselves would be performed analogically.\n\nOur algorithm does not scale well with the number of layers (yet!), both because of the learning rates and because of the lengthy \"free relaxation phase\".\n", "We thank the reviewer for their remarks.\n\nIt is true that there is little progress in biologically plausible \"backpropagation\" in general. Our work is a small step towards achieving this goal.\n" ]
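For readers unfamiliar with the "free relaxation phase" mentioned in the response above, the following is a rough, generic sketch of a leaky-integrator network relaxing to a fixed point under a clamped input. It is not the authors' exact dynamics or learning rule; the weight matrix W, the nonlinearity rho, the clamping scheme, the step size and the iteration count are all illustrative assumptions.

```python
# Rough, generic sketch of a leaky-integrator network relaxing to a fixed point
# (a "free relaxation phase"). NOT the authors' exact dynamics or update rule;
# W, rho, the clamping and the step size are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_inputs = 20, 5
W = 0.1 * rng.standard_normal((n_units, n_units))  # possibly asymmetric weights
rho = np.tanh                                       # pointwise nonlinearity
x = rng.standard_normal(n_inputs)                   # fixed input, clamped below

s = np.zeros(n_units)
dt = 0.1
for _ in range(500):                 # Euler steps of ds/dt = -s + W @ rho(s)
    s = s + dt * (-s + W @ rho(s))
    s[:n_inputs] = x                 # keep the input units clamped
residual = (-s + W @ rho(s))[n_inputs:]
print("residual at the free units:", np.linalg.norm(residual))  # ~0 at a fixed point
```

In the two-phase procedure described in the abstract, a second, weakly clamped phase would then nudge the output units toward their targets and the local weight update would compare the two resulting fixed points; none of that is shown in this sketch.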
[ 4, 3, 6, -1, -1, -1 ]
[ 4, 4, 2, -1, -1, -1 ]
[ "iclr_2018_SJTB5GZCb", "iclr_2018_SJTB5GZCb", "iclr_2018_SJTB5GZCb", "ryzHMf9gz", "rJNtVBtgM", "HJqm1ilzG" ]
iclr_2018_SkmiegW0b
Challenges in Disentangling Independent Factors of Variation
We study the problem of building models that disentangle independent factors of variation. Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis. As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. We introduce an autoencoder model and train it through constraints on image pairs and triplets. We show the role of feature dimensionality and adversarial training theoretically and experimentally. We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used. The numerical value of a factor has different meaning in different reference frames. When the reference depends on other factors, transferring that factor becomes ambiguous. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs.
workshop-papers
The paper proposes a method to disentangle style from content (two-factor disentanglement) using weak labels (information about the common factor for a pair of images). It is similar to earlier work by Mathieu et al (2016), with the main novelty being the use of a discriminator that operates on pairs of images in the proposed method. The authors also make some theoretical statements about two challenges in disentangling the factors, but reviewers have complained about the missing connection between theory and experiments, and about the exposition in general. The idea has novelty, although somewhat limited in the light of the earlier work by Mathieu et al (2016), and the theoretical statements are also of interest, but reviewers still feel the paper needs improvement in writing and presentation of results. I would recommend an invitation to the workshop track.
train
[ "rycISJNgz", "H1P_fBdeM", "HJbE6CKlM", "rJvWy-Cbf", "B1e06xCZM", "rycS6xC-G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Quality\nThe method description, particularly about reference ambiguity, I found difficult to follow. The experiments and analysis look solid, although it would be nice to see experiments on more challenging natural image datasets. \n\nClarity\n“In general this is not possible… “ - you are saying it is not possible to learn an encoder that recovers disentangled factors of variation? But that seems to be one of the main goals of the paper. It is not clear at all what is meant here or what the key problem is, which detracts from the paper’s motivation.\n\nWhat is the purpose of R_v and R_c in eq 2? Why can these not be collapsed into the encoders N_v and N_c?\n\nWhat does “different common factor” mean?\n\nWhat is f_c in proof of proposition 1? Previously f (no subscript) was referred to as a rendering engine.\n\nT(v,c) ~ p_v and c ~ p_c are said to be independent. But T(v,c) is explicitly defined in terms of c (equation 6). So which is correct?\n\nOverall the argument seems plausible - pairs of images in which a single factor of variation changes have a reference ambiguity - but the details are unclear.\n\nOriginality\n\nThe model is very similar to Mathieu et al, although using image pairs rather than category labels directly. The idea of weakly-supervised disentangling has also been explored in many other papers, e.g. “Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis”, Yang et al. The description of reference ambiguity seems new and potentially valuable, but I did not find it easy to follow.\n\nSignificance\n\nDisentangling factors of variation with weak supervision is an important problem, and this paper makes a modest advance in terms of the model and potentially in terms of the theory. The analysis in figure 3 I found particularly interesting - illustrating that the encoder embedding dimension can have a drastic effect on the shortcut problem. Overall I think this can be a significant contribution if the exposition can be improved.\n\nPros\n- Proposed method allows disentangling two factors of variation given a training set of image pairs with one factor of variation matching and the other non-matching.\n- A challenge inherent to weakly supervised disentangling called reference ambiguity is described.\n\nCons\n- Only two factors of variation are studied, and the datasets are fairly simple.\n- The method description and the description of reference ambiguity are unclear.", "The paper considers the challenges of disentangling factors of variation in images: for example disentangling viewpoint from vehicle type in an image of a car. They identify a well-known problem, which they call \"reference ambiguity\", and show that in general without further assumptions one cannot tell apart two different factors of variation. \n\nThey then go on to suggest an interesting AE+GAN architecture where the main novelty is the idea of taking triplets such that the first two instances vary in only one factor of variation, while the third instance varies in both from the pair. This is clever and allows them to try and disentangle the variation factors using a joint encoder-decoder architecture working on the triplet.\n\nPros:\n1. Interesting use of constructed triplets.\n2. Interesting use of GAN on the artificial instance named x_{3 \\oplus 1}\n\nCons:\n1. Lack of clarity: the paper is hard to follow at times. It's not entirely obvious how the theoretical part informs the practical part. See detailed comments below.\n2. 
The theory addresses two widely recognized problems as if they're novel: \"reference ambiguity\" and \"shortcut problem\". The second merely refers to the fact that unconstrained autoencoders will merely memorize the instance. \n3. Some of the architectural choices (the one derived from \"shortcut problem\") are barely explained or looked into.\n\nSpecific comments:\n\n1. An important point regarding the reference ambiguity problem and eq. (2): a general bijective function mixing v and c would not have the two components as independent. The authors could have used this extremely important aspect of the generative process they posit in order to circumvent the problem of ambiguity. In fact, I suspect that this is what allows their method to succeed.\n\n2. I think the intro could be made better if more concrete examples be made earlier on. Specifically the car-type/viewpoint example, along with noting what weak labels mean in that context.\n\n3. In presenting autoencoders it is crucial to note that they are all built around the idea of compression. Otherwise, the perfect latent representation is z=x.\n\n4. I would consider switching the order of sections 2 and 3, so the reader will be better grounded in what this paper is about before reading the related work.\n\n5. In discussing attributes and \"valid\" features, I found the paper rather vague. An image has many attributes: the glint in the corner of a window, the hue of a leaf. The authors should be much more specific in this discussion and definite explicitly and clearly what they mean when they use these terms.\n\n6. In equation (5), should it be p(v_1,v_2)? Or are v_1 and v_2 assumed to be independent? \n\n7. Under equation (5), the paper mentions an \"autoencoder constraint\". Such a constraint is not mentioned up to this point in the paper if I'm not mistaken. \n\n8. Also under equation (5): is this where the encoder requirements are defined? If so, please be more explicit about it. Also note that you should require c_1 \\neq c_2. \n\n9. In proof of Proposition 1, there is discussion of N_c. N_c was mentioned before but never properly defined; same for R_c and C^-1. These should be part of the proposition statement or defined formally. Currently they are only discussed ad-hoc after equation (5). \n\n10 .In the proof of Proposition 1, what is f_c^-1 ? It's only defined later in the paper.\n\n11. In general, what promises that f_c^-1 and f_v^-1 are well defined? Are f_c and f_v injective? Why? \n\n12. Before explaining the training of the model, the task should be defined properly. What is the goal of the training? \n\n13. In eq. (15) I am missing a term which addresses \"the shortcut problem\" as defined in the previous page.\n\n14. The weak labels are never properly defined and are discussed in a vague manner. Please define what does that term mean in your context and what were the weak labels in each experiment.\n\n15. In the conclusion, I would edit to say the \"our trained model works well on *several* datasets\". \n\n\nMinor comments:\nPlease use \\citep when appropriate. Instead of \"Generative Adversarial Nets Goodfellow et al. (2014)\", you should have \"Generative Adversarial Nets (Goodfellow et al., 2014)\"", "This paper studies the challenges of disentangling independent factors of variation under weakly labeled data. 
\n\nA term \"reference ambiguity\" is introduced, which refers to the fact that there is no guarantee that two data points with same factor of variation will be mapped to the same point if there is only weakly labeled data to that extend. \n\nI am having a hard time understanding the message of the paper. The proof in section 3.1, although elementary, is nice. But then the authors train fairly standard networks in experiments (section 4) for datasets studied with these methods in the literature, and they fail to draw any connection to the introduced reference ambiguity concept. \n\nAs written, the paper to me looks like two separate white papers: \n{beginning - to -end of section 3}: as a theoretical white paper that lacks experiments, and \n{section 4}: experiments with some recent methods / datasets (this part is almost like a cute course project). \n\nEither the paper lacks to harmonically present the over arching goal, or I have missed if there was any such message that was implicit in between the lines. A rewrite with strong connection between the theory and the experiments is required.", "Thank you for your feedback.\n\nThe reference ambiguity is an inherent ambiguity in the disentanglement task itself. So there is no algorithm that provably solves it. The emphasis here is on the \"provably\" part. In practice most methods work on most datasets. The question is why? This is a very similar question to why neural networks learn useful features in an autoencoder or why the features are transferrable to learning tasks other than they were trained on.\n\nOur work is a step towards understanding this problem, as it highlights that we cannot do that by reasoning in terms of attribute/feature distributions. As a consequence when the reference ambiguity arises, a better GAN objective will most likely not solve the issue.\n\nSpecific questions:\n1. N_v and N_c refers to possible trained encoders. R_v and R_c is meant to create an equivalence relation between the possible encoders.\n2. \"different common factor\" meant different c.\n3. f_c^-1 is the inverse of the rendering engine for the c factor. We revised the proof to make it clearer.\n4. T(v, c) functionally depends on c, but it is designed in a way that they are statistically independent.\n", "Thank you for your feedback. First we would like to address your main complaints of the paper:\n\n1. lack of clarity:\nAs multiple reviewers pointed out, the paper needs improvement in the presentation. We revised the paper accordingly, and we hope we could make it easier to understand.\n2. reference ambiguity and shortcut problem\nAs far as we know the description of the reference ambiguity for the weakly labelled case of the disentanglement problem is novel. If it is well known, please provide a reference.\nThe shortcut problem occurs, when only one feature chunk contains the information about the input, and the decoder ignores the other chunk. Memorisation does not play a role in this.\n3. architecture not explained\nWe did not provide a lot of details of encoder/decoder/discriminator components, because we borrowed them from referred works, and that was not our main focus. We did specify however the most important architectural choice regarding the shortcut problem, namely the feature size. Moreover we did ablation studies on that.\n\nSpecific comments:\n1. In general N_v(f(v,c)) and N_c(f(v,c)) are not independent. When N_v and N_c are invariant to c and v respectively, they are independent. 
The reference ambiguity states the reverse is not true. We can find N_v and N_c that are independent, but N_v is still not invariant to c. Therefore training them to be independent is not enough. And training N_v to be invariant to c is not possible because of the lack of labels.\n2. Thank you for this suggestion. We revised the introduction to visualise the attribute transfer along with the challenges: reference ambiguity and shortcut problem. We also clarify what the weak labelling means early on.\n3. The shortcut problem is related to the compression indeed. Using higher compression and low dimensional features, the shortcut problem can be avoided. The role adversarial term is to make sure the disentanglement works regardless of the feature size.\n4. We revised the intro instead.\n5. Valid feature meant that it is part of the decoders domain, i.e. computed by the encoder from an image. In the revised paper we clarified these terms.\n6. Yes, the image pairs have independent v attributes (viewpoints), this is our model assumption.\n7. The autoencoder constraint is equation 3.\n8. We show here that the weak labelling determines N_c up to a bijection.\n9. N_c is the encoder (defined before equation 1), R_c is a bijection (defined in equation 2) and C is defined as C(c) = N_c(f(v,c)).\n10. Thank you for noticing this, we fixed in the revised paper.\n11. Our model assumption is that f is smooth and invertible. Smooth because a small change in the attribute should change the image a small amount and vice versa. We also assume the attributes are readily apparent from the image, hence f is invertible.\n12. The goals of disentanglement are to achieve the feature and image/data level disentanglement (equation 2 and 4). The goal of the paper is to study how/if we can achieve them (provably).\n13. The composite loss includes both AE and GAN terms. The GAN term addresses the shortcut problem.\n14. We revised paper to better explain weak labels.\n\n", "Thank you for your feedback.\n\nThe goal of the paper is to describe the challenges of disentangling factors of variation:\n- reference ambiguity: inherently present in the task\n- shortcut problem: specific to the swapping auto-encoder setting\n- we introduce a novel method for disentangling\n\nOur method has the advantage over previous methods (Mathieu etal.), that it does not need the common factor labels as inputs, keeping the trainable parameters constant. This is arguably an incremental improvement, but definitely novel and not a recent (existing) method. We also prove that our method solves the shortcut problem.\n\nIn the experiments we show:\n- detailed ablation studies on the shortcut problem\n- in practice the reference ambiguity only appears on a complex dataset and not on the simpler ones\n\nWe revised the paper in order to:\n- better highlight the shortcut problem and the related proofs and experiments\n- better experiment on the effects of the reference ambiguity\n\n" ]
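As a rough illustration of the swap constraint discussed in the reviews and responses above (two encoders N_v and N_c, a decoder, and image pairs that share the common factor c), the sketch below shows one plausible forward pass and reconstruction loss. The linear modules, dimensions and random tensors are placeholders rather than the paper's architecture, and the adversarial term on the composite triplet instance x_{3 \oplus 1} is omitted.

```python
# Minimal sketch (assumptions flagged above) of the swap / autoencoder constraint:
# for a pair (x1, x2) sharing the common factor, decoding the varying code of x1
# together with the common code of x2 should reconstruct x1.
import torch
import torch.nn as nn

d_in, d_v, d_c = 64, 8, 8
N_v = nn.Linear(d_in, d_v)        # encoder for the varying factor (e.g. viewpoint)
N_c = nn.Linear(d_in, d_c)        # encoder for the common factor (e.g. car type)
Dec = nn.Linear(d_v + d_c, d_in)  # decoder; toy linear stand-ins throughout

x1 = torch.randn(16, d_in)        # a weakly labeled pair: same c, different v
x2 = torch.randn(16, d_in)

recon = Dec(torch.cat([N_v(x1), N_c(x2)], dim=1))
swap_loss = ((recon - x1) ** 2).mean()
swap_loss.backward()
print(float(swap_loss))
```

A full implementation would replace the linear maps with the image encoder and decoder used in the paper and add the GAN term that judges the composite instance built from the third image of each triplet.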
[ 6, 5, 5, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_SkmiegW0b", "iclr_2018_SkmiegW0b", "iclr_2018_SkmiegW0b", "rycISJNgz", "H1P_fBdeM", "HJbE6CKlM" ]
iclr_2018_HkpYwMZRb
Gradients explode - Deep Networks are shallow - ResNet explained
Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities ``solve'' the exploding gradient problem, we show that this is not the case and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why exploding gradients occur and highlight the {\it collapsing domain problem}, which can arise in architectures that avoid exploding gradients. ResNets have significantly lower gradients and thus can circumvent the exploding gradient problem, enabling the effective training of much deeper networks, which we show is a consequence of a surprising mathematical property. By noticing that {\it any neural network is a residual network}, we devise the {\it residual trick}, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success.
workshop-papers
The paper sets out to analyze the problem of exploding gradients in deep nets which is of fundamental importance. Reviewers largely acknowledge the novelty of the main ideas in the paper towards this goal, however it is also strongly felt that the writing/presentation of the paper needs significant improvement to make it into a coherent and clean story before it can be published. There are also some concerns on networks used in the experiments not being close to practice. I recommend invitation to the workshop track as it has novel ideas and will likely generate interesting discussion.
train
[ "B1E_6q8NM", "Byt6QYOxf", "Skuwdz5eM", "rJ4WHpjgM", "HyYweyxzf", "B15gl1xMz", "H1-9y1efM", "ByqOJylff", "BkZGR0JzM", "B1TiCL51z", "HkbTdUcyf", "Bk46DI9yG", "B1EqOIqJf", "B1HYDI5kz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "Dear Reviewer,\n\nThank you for your response. You criticize that our revised version contained re-orderings of the text rather than substantive changes. I apologize if I was not able to address your criticisms in the way you wanted in the revised version.\n\nAs far as I could tell, your criticisms of the original paper as well as the criticisms of the other reviewers hinged almost exclusively on the writing style of the paper, i.e. \"central point gets lost\", \"relying on supplement\", \"too long\" etc. Therefore that is exactly what I worked on addressing in the revision: the writing. I inserted summaries and added detail to introduction and conclusion as well as removing several propositions. So I am confused when you say that I \"only\" changed the writing when I thought that was precisely the reason you disliked the paper. \n\nI am sorry if I misinterpreted your original review. Could you explain in more detail what you are still unhappy with?\n\nThanks,\n\n", "Summary of paper - The paper introduces the Gradient Scale Coefficient and uses it to demonstrate issues with the current understanding of where and why exploding gradients occur. \n\nReview - The paper attempts to contribute to the discussion about the exploding gradient problem by both introducing a metric for discussing this issue and by showing that current understanding of the exploding gradient problem may be incorrect. It is admirable that the authors are seeking to add to the understanding about theory of neural nets instead of contributing a new architecture with better error rates but without understanding why said error rates are lower. While the authors list 7 contributions, the current version of the text is a challenge to read and makes it challenging to distill an overarching theme or narrative to these contributions. \n\nThe authors do mention experiments on page 8, but confess that some of the results are somewhat underwhelming. Unfortunately, all tables with the experimental results are left to the appendix. As this is a mostly theoretical paper, pushing experimental results to the appendix does make sense, but the repeated references to these tables suggest that these experimental results are crucial for the authors’ overall points.\n\nWhile the authors do attempt to accomplish a lot in these nearly 16 pages of text, the authors' main points and overall narrative gets lost due to the writing that is a bit jumbled at times and that relies heavily on the supplement. There are several places where it is not immediately clear why a certain block of text is included (i.e. the proof outlines on pages 8 and 10). At other points the authors default to an chronological narrative that can be useful at times (i.e. page 9), but here seems to distract from their overall narrative. \n\nThis paper has a lot of content, but not all of it appears to be relevant to the authors’ central points. Furthermore, the paper is nearly double the recommended page length and has a nearly 30 page supplement. My biggest recommendations for this paper are for the authors to 1) articulate one theme and then 2) look at each part (whether that be section, paragraph, or sentence) and ask what does that part contribute to that theme. \n\n\n\nPros - \n* This paper attempts to add the understanding of neural nets instead of only contributing better error rates on benchmark datasets. \n* At several points, the authors seek to make the work accessible by offering lay explanations for their more technical points. 
\n* The practical suggestions on page 16 are a true highlight and could provide an outline for possible revisions. \n\n\nCons - \n* The main narrative is lost in the text, leaving a reader unsure of the authors main points and contributions as they read. For example, the authors’ first contribution is hidden among the text presentation of section 2. \n* The paper relies heavily on the supplement to make their central points. \n* It is nearly double the recommended page length with a nearly 30 page supplement\n\n\nMinor issues - \n* Use one style for introducing and defining terms either use italics or single quotes. The latter is not recommended since the authors use double quotes in the abstract to express that the exploding gradient problem is not solved. \n* The citation style of Authors (YEAR) at times leads to awkward sentence parsing. \n* Given that many figures have several subfigures, the authors should consider using a package that will denote subfigures with letters. \n* The block quotes in the introduction may be quite important for points later in the paper, but summarizing the points of these quotes may be a better use of space. The authors more successfully did this in paragraph 2 of the introduction. \n* All long descriptions of the appendix should be carefully revisited and possibly removed due to page length considerations. \n* In the text, figure 4 (which is in the supplement) is referenced before figure 3 (which is in the text).\n\n=-=-=-= Response to the authors\n\nDuring the initial reviewing period, I was unable to distill the significance of the authors’ contributions from the current literature in large part due to the nature of the writing style. After reading the authors responses and consulting the differences between the versions of the paper, my review remains the same. It should be noted that all three reviewers pointed out the length of the paper as a weakness of the paper, and that in the most recent draft, the authors made the main text of the paper longer. \n\nConsulting the differences between the paper revisions, I was initially intrigued with the volume of differences that shown in the summary bar. Upon closer inspection, I read a much stronger introduction and appreciated the summaries at the ends of sections 4.4 and 6. However, I did notice that the majority of these changes were superficial re-orderings of the original text. Given the limited substantive changes to the main text, I did not deeply re-read the text of the paper beyond the introduction.", "Paper Summary:\nThis is a very long paper (55 pages), and I did not read it in its entirety. The first part (up to page 11), focuses on better understanding the exploding gradients problem, and challenges the fact that current techniques to address gradient explosion work as claimed. To do so, they first motivate a new measure of gradient size, the Gradient Scale Coefficient which averages the singular values of the Jacobian and takes a ratio of different layers. The motivation for this measure is that it is invariant to simple rescaling of layers that preserves the function. (I would have liked to have seen what was meant by preserved the function here -- did you mean preserve the same class outputs e.g.?) \n\nThey focus on linear MLPs in the paper for computational simplicity. With this setup, and assuming the Jacobian decomposes, they prove that the GSC increases exponentially (Proposition 5). 
They empirically test this out for networks 50 layers deep and 100 layers wide, where they find that some architectures have exploding gradients after random initialization, and others do not, but those that do not have other drawbacks. \n\nThey then overview the notion of effective depth for a residual network: a linear residual network can be written as a product of terms of the form (I + r_i). Expanding out, each term is a product of some of the r_i and some of the identities I. If all r_i have a norm < 1, then the terms that dominate will be those that consist of fewer r_i, resulting in a lower effective depth. This is described in Veit et al, 2016. While this analysis was originally used for residual networks, they relate this to any network by letting I turn into an arbitrary initial function. Their main theoretical result from this is that deeper networks take exponentially longer to train (under certain conditions), which they test out with (linear?) networks of depth 50 and width 100.\n\nThey also propose that the reason gradients explode is because networks try to preserve their domain going forward, which requires Jacobians to have determinant 1 and leads to a higher Q-norm.\n\nMain Comments:\nThis could potentially be a very nice paper, but I feel the current presentation is not ready for acceptance. In particular, the paper would benefit greatly from being made much shorter, and having more of the important details or proof outlines for the various propositions in the main text. Right now, it is quite confusing to follow, and I fail to see the motivation for some of the analysis. For example, the Gradient Scale Coefficient appears to be motivated because (bottom page 3), with other norm measurements, we could take any architecture and rescale the parameters, and inversely scale the gradients to make it \"easy to train\". But typically easy to train does not involve a specific preprocessing of gradients. Other propositions e.g. Theorem 1, proposition 6, could do with clearer intuition leading to them. I think the assumptions made in the results should also be clearer. (It's fine to have results, but currently I can't tell under what conditions the results apply and under what conditions they don't. E.g. are there any extensions of this that apply to non-linear networks?)\n\nI also have issues with their experimental setup: why choose to experiment on networks of depth 50 and width 100? This doesn't really look anything like networks that are trained in practice. Calling these \"popular architectures\" is misleading. \n\nIn summary, I think this paper needs more work on the presentation to make clear what they are proving and under what conditions, and with experiments that are closer to those used in practice to support their claims.\n", "The paper makes some bold claims. In particular about commonly accepted intuition for avoiding exploding/vanishing gradients and why all the recent bag of tricks (BN, Adam) do not actually address the problems they set out to alleviate. \n\nThis is either a very important paper or the analysis is incorrect but it's not my area of expertise. Actually understanding it at depth and validating the proofs and validity of the experiments will require some digestion. 
It's possible some of the issues arise from the particular architectures they choose to investigate and demonstrate on (eg I have mostly seen ResNets in the context of CNNs but they analyze on FC topologies, the form of the loss, etc) but that's a guess and there are some further analyses in the supp material for these networks which I haven't looked at in detail. \n\nRegardless - an important note to the authors is that it's a particularly long and verbose paper, coming in at 16 pages of the main paper(!) with nearly 50 (!) pages of supplementary material where the heart and meat of the proofs and experiments reside. As such it's not even clear if this is proper for a conference. The authors have already provided several pages worth of additional comments on the website on further related work. I view this as an issue in and of itself. Being succinct and applying rigour in editing is part of doing science and reporting findings, and a wise guideline to follow. While the authors may claim it's necessary to use that much space to make their point, I will argue that this length is uncalibrated to standards. I've seen many papers that need to go through much more complicated derivations and theory and remain within an 8-10 page limit by being precise and strictly to the point. Perhaps Gödel could be a good inspiration here, with a 21 page PhD thesis that fundamentally changed mathematics.\n\nIn addition to being quite bold in claims, it is also somewhat confrontational in style. I understand the authors are trying to make a very serious claim about much of the common wisdom, but again, having reviewed papers for many years, this is highly unusual and it is questionable whether it is necessary. \n\nSo, while I cannot vouch for the correctness, I think it can and should go through a serious revision to make it succinct, and that will likely considerably help in making it accessible to a wider readership and aligned to the expectations from a conference paper in the field. ", "Dear Reviewer,\n\nThank you for your review. \n\nWe think our paper makes important contributions to deep learning theory, architecture design and optimization and presents a valuable addition to the recent line of work exploring the properties of deep gradients and the impact of skip connections (e.g. [1,2,3,4]). Therefore, we are disappointed that the paper was awarded a low rating without its scientific merit being criticized. Do you believe that our analysis is correct? Do you believe we succeed in supporting the claims we make in the introduction and conclusion of our paper?\n\nI just uploaded a revised version of the paper. We address the points raised in your review in this revision as well as in the comments below.\n\n###\n\n\"It is nearly double the recommended page length ... This paper has a lot of content, but not all of it appears to be relevant to the authors’ central points.\"\n\nIn the revision, we removed some of the less central results (propositions 7 through 9) as well as high-level commentary to make the paper more focused.\n\nThis paper pays attention to details that other papers often gloss over, such as the rigorous definition of exploding gradients or effective depth and the careful setting of layerwise step sizes. This rigor is what enables us to obtain important results. 
Also note that an important predecessor work [3] from NIPS 2017 is also 55 pages long.\n\nMuch of our appendix is strictly optional for readers interested in certain specifics, such as implementation details for those interested in replicating our results or the extended related work section for those interested in pursuing research in deep learning theory. Do you see providing such details as a weak point of the paper?\n\nIf you believe there are still specific results in the revision that you consider unimportant and should thus be moved to the appendix or removed entirely, please let us know.\n\n###\n\n\"While the authors list 7 contributions, the current version of the text is a challenge to read and makes it challenging to distill an overarching theme or narrative to these contributions. ... the authors main points and overall narrative gets lost due to the writing that is a bit jumbled at times... For example, the authors’ first contribution is hidden among the text presentation of section 2. \"\n\nWe expand both the introduction and conclusion section in the revision to make the implications and contributions of this work more clear and explicit as well as adding summary sections throughout the paper to remind the reader of the overarching goals, including that of contribution 1. We removed high-level commentary from the main paper in favor of low-level explanations and summaries.\n\nThe overarching goal of the paper is to advance the theoretical understanding of the gradient properties of deep networks and provide practical insight for designing neural architectures. This is the same aim as many predecessor works (e.g. [1,2,3,4]). All these works combine a range of theoretical and experimental studies to paint an overall picture, just as we do. Our \"narrative\" is summarized in the new \"Summary\" section on page 16, which is followed by an extended list of practical recommendations and research implications. \n\nIn the revision, is there still a specific goal of the paper that is unclear? Is there a term or piece of notation is not defined? Is there a statement that is ambiguous? Is there a paragraph you think is redundant or out of place?", "###\n\n\"The paper relies heavily on the supplement to make their central points.\"\n\nWe moved both table 1 and table 2 to the main body of the paper in the revision.\n\nBecause this paper is detail-oriented and each reader cares about a different set of details, we chose (a) to provide as much detail as possible and (b) move those details to the appendix into dedicated sections so that they would be easy to find by specific interested parties.\n\nDo you think there is any particular section, paragraph or detail from the appendix that should still be moved to the main body? If so, we would be glad to know and to fulfil such a request if it could be aligned with the preferences of the other reviewers.\n\n###\n\n\"... confess that some of the results are somewhat underwhelming.\"\n\nThe goal of sections 3 through 6 is to demonstrate the pathologies of exploding gradients and collapsing domains. We made our neural networks very deep precisely so that these pathologies would be very clear and measurable. Pathological architectures, by definition, suffer from high errors. We include this information explicitly in the revision. \n\nUsing very deep MLPs to study gradient pathologies is a well-established practice from previous works closely related to this paper (e.g. [1,2,4]). 
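As a rough illustration of the kind of measurement being discussed (not the authors' implementation: the loss, the value of sigma_w, and the use of raw per-layer gradient norms rather than the paper's GSC are assumptions made only for this sketch), one can propagate a simple loss back through a 50-layer, width-100 tanh MLP with Gaussian initialization and record how the gradient norm changes from layer to layer:

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width, sigma_w = 50, 100, 1.5   # depth/width from the text; sigma_w is an illustrative choice
Ws = [rng.normal(0.0, sigma_w / np.sqrt(width), size=(width, width)) for _ in range(depth)]

x = rng.normal(size=width)
activations = [x]
for W in Ws:                            # forward pass through tanh layers
    x = np.tanh(W @ x)
    activations.append(x)

g = x.copy()                            # gradient of 0.5 * ||output||^2 w.r.t. the output
norms = [float(np.linalg.norm(g))]
for W, a_in in zip(reversed(Ws), reversed(activations[:-1])):   # backward pass
    pre = W @ a_in
    g = W.T @ (g * (1.0 - np.tanh(pre) ** 2))   # chain rule through tanh, then the linear map
    norms.append(float(np.linalg.norm(g)))

print([round(n, 3) for n in norms[::10]])       # gradient norm every 10 layers, output to input
```

The GSC additionally normalizes by the forward activation norms, so this only shows the raw-gradient behaviour; it is meant as a sketch of how such per-layer measurements can be made, not as a reproduction of the paper's results.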
\n\nNote that we contrast these high error values with those achieved by ResNet and looks-linear initialized ReLU networks, which acheive much lower error, in section 7 / table 2.\n\n###\n\n\"Unfortunately, all tables with the experimental results are left to the appendix.\"\n\nTables 1 and 2 have been moved to the main body in the revision as per this request.\n\n###\n\n\"There are several places where it is not immediately clear why a certain block of text is included (i.e. the proof outlines on pages 8 and 10).\"\n\nIn the revision, the proof outline of theorem 1 was removed and replaced by an informal explanation of the underlying mechanisms preceding the theorem. The proof outline of theorem 2 exists to highlight the important intermediate result that surjective endomorphisms exhibit an expected absolute determinant of 1, which leads to an expected qm norm greater than 1, which causes exploding gradients. We've added more references to this important relationship throughout the revision.\n\n###\n\nMinor issues:\n- We used single quotes to define terms and italic to highlight important concepts. In the revision, we use single quotes to define terms AND important concepts for increased consistency. We still use italic to highlight important concepts.\n- We use the citation style provided by the ICLR latex template. I would prefer not to alter this setting. Also, the vast majority of ICLR 2018 submission use (YEAR) in their citations. However, I did miss some brackets around citations in the original version of the paper. Those brackets have been added in the revision.\n- Letters have been added to the subfigures. Thank you for this advice.\n- We removed 2 of the 4 block quotes in the introduction. We would like to keep the remaining ones to underscore the difference between our results and popular wisdom.\n- Appendix length: see above\n- Figures are numbered according to the order in which they appear in the paper, not the order in which they are referenced. Again, this is the default of the ICLR latex template / latex itself. Let me know if you would like me to alter this.\n\n###\n\n\nWe hope that we have addressed your concerns in this comment and the revised version of the paper. If you agree that the contributions of our paper are significant and have been sufficiently demonstrated, we hope that you agree that our paper is well-placed at ICLR. We look forward to hearing your thoughts and comments.\n\n[1] Schoenholz et al. Deep Information Propagation. ICLR 2107. https://arxiv.org/abs/1611.01232\n\n[2] Balduzzi et al. The Shattered Gradients Problem: If resnets are the answer, then what is the question?. ICML, 2017. https://arxiv.org/abs/1702.08591\n\n[3] Saxe et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. 2014. https://arxiv.org/abs/1312.6120\n\n[4] Yang & Schoenholz. Mean field residual networks: on the edge of chaos. NIPS, 2017. https://papers.nips.cc/paper/6879-mean-field-residual-networks-on-the-edge-of-chaos\n", "Dear Reviewer,\n\nThank you for your review. As far as I can tell, you agree that our paper makes important contributions to deep learning theory, architecture design and optimization. I just uploaded a revised version of the paper. 
We address the points raised in your review in this revision as well as in the comments below.\n\n###\n\n\"In particular, the paper would benefit greatly from being made much shorter, ...\" \n\nIn the revision, we removed some of the less central results (propositions 7 through 9) as well as high-level commentary to make the paper more focused.\n\nThis paper pays attention to details that other papers often gloss over, such as the rigorous definition of exploding gradients or effective depth and the careful setting of layerwise step sizes. This rigor is what enables us to obtain important results. Also note that an important predecessor work [3] from NIPS 2017 is also 55 pages long.\n\nMuch of our appendix is strictly optional for readers interested in certain specifics, such as implementation details for those interested in replicating our results or the extended related work section for those interested in pursuing research in deep learning theory. Do you see providing such details as a weak point of the paper?\n\nIf you believe there are still specific results in the revision that you consider unimportant and should thus be moved to the appendix or removed entirely, please let us know.\n\n###\n\n\"... and having more of the important details or proof outlines for the various propositions in the main text.\"\n\nIn the revision, we add a significant number of further explanations and clarifications throughout the main body of the paper.\n\nBecause this paper is detail-oriented and each reader cares about a different set of details, we chose (a) to provide as much detail as possible and (b) move those details to the appendix into dedicated sections so that they would be easy to find by specific interested parties. We hope that you will find this strategy acceptable and refer you to sections E and F for theoretical details.\n\nNonetheless, we do outline the proofs for theorems 1 and 2 in the main paper. (The proof of theorem 3 is already quite short.) \n\nIf you believe there is any particular section, paragraph or detail from the appendix that should still be moved to the main body, we would be glad to know and to fulfil such a request if it could be aligned with the preferences of the other reviewers.\n\n###\n\n\"I also have issues with their experimental setup: why choose to experiment on networks of depth 50 and width 100? This doesn't really look anything like networks that are trained in practice. Calling these \"popular architectures\" is misleading.\"\n\nIn the revision, we replace the phrase \"popular architectures\" with \"architectures with popular layer types\".\n\nWe agree that 50-layer MLPs without skip connections are seldom used in practice. However, this is mainly because of the very pathologies explored in this paper that lead to training difficulty. We deliberately chose this high depth so that those difficulties could be clearly demonstrated on those networks. Using very deep MLPs to study gradient pathologies is a well-established practice from previous works (e.g. [1,2,3,5]). There is significant evidence that the deep learning theory community cares about those kinds of networks. Also, we believe that our networks are not that far removed from networks used in practice. For example, we use MLPs with tanh nonlinearities and MLPs with ReLU nonlinearities and batch normalization, which are popular choices.\n\nIn addition to plain MLPs, we investigate MLPs with skip connections (ResNets). For ResNet, a depth of 50 is not impractical. 
\n\nRegarding the layer width of 100: we studied different layer widths in section 3. There was no evidence that any results presented in this paper depend on layer width in a significant way. We don't think using a width of, say, 1000, would have made a difference.\n\nWe did not extend our results to convolutional networks due to space reasons, though we plan to study this case in future work. Again, many recent works also focused on MLPs.", "###\n\n\"I would have liked to have seen what was meant by preserved the function here -- did you mean preserve the same class outputs e.g\"\n\nYes, we mean the value of the prediction and error layers is invariant. In the revision, we have amended the text to reflect this.\n\n###\n\n\"They focus on linear MLPs in the paper for computational simplicity.\"\n\nWhen you say \"linear MLPs\", do you mean MLPs containing only linear layers? Note that all of the MLPs we study in this paper contain nonlinear layers (ReLU, tanh, SeLU, batch normalization and layer normalization) and many also contain skip connections. We do not study linear MLPs in this paper.\n\n###\n\n\"But typically easy to train does not involve a specific preprocessing of gradients.\"\n\nIn the revision, we replace \"easy to train\" with \"can be successfully trained\" and make clear that this includes gradient rescaling. In the paper, we aim to contrast training difficulty that can be overcome by rescaling the gradient versus training difficulty that cannot be overcome in this way, as encapsulated by theorem 1. We agree that it is not always obvious how to scale the gradient in practice, but point out that techniques such as Adam, vSGD [4] or heuristics such as \"scale the gradient to be proportial to the size of the weight matrix\" are often quite successful.\n\n###\n\n\"Other propositions e.g. Theorem 1, proposition 6, could do with clearer intuition leading to them. I think the assumptions made in the results should also be clearer.\"\n\nWe added additional explanations to the leadup of both theorem 1 and proposition 6 in the revision.\n\nUnfortunately, we were unable to include the assumptions made in theoretical results in the main body of the paper due to space reason, and because we think it would significantly detract from the readability of the paper. For example, consider the full statement of theorem 1. While some readers will be interested in this full statement, other readers may find it distracting. However, the assumptions are given and discussed in detail in sections E and F. Is there a specific section, paragraph or detail from the appendix you believe we should include in the main body?\n\n###\n\nWe hope that we have addressed your concerns in this comment and the revised version of the paper. We also refer you to our new introduction and conclusion section that make the contributions and implications of our paper even more clear. If you agree that our paper makes important contributions that are also well-supported (taking into account that we do not just use linear MLPs) we hope that you agree our paper is well-placed at ICLR. We look forward to hearing your thoughts and comments.\n\n[1] Schoenholz et al. Deep Information Propagation. ICLR 2107. https://arxiv.org/abs/1611.01232\n\n[2] Balduzzi et al. The Shattered Gradients Problem: If resnets are the answer, then what is the question?. ICML, 2017. https://arxiv.org/abs/1702.08591\n\n[3] Yang & Schoenholz. Mean field residual networks: on the edge of chaos. NIPS, 2017. 
https://papers.nips.cc/paper/6879-mean-field-residual-networks-on-the-edge-of-chaos\n\n[4] Schaul et al. No More Pesky Learning Rates. ICML, 2013. https://arxiv.org/abs/1206.1106\n\n[5] Saxe et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. 2014. https://arxiv.org/abs/1312.6120", "Dear Reviewer,\n\nThank you for your review and for your honesty in stating that this paper does not fall within your area of expertise. I just uploaded a revised version of the paper. We address the points raised in your review in this revision as well as in the comments below.\n\n### \n\n\"an important note to the authors is that it's a particularly long and verbose paper\"\n\nIn the revision, we removed some of the less central results (propositions 7 through 9) as well as high-level commentary to make the paper more focused.\n\nThis paper addresses many issues in detail that other papers often gloss over, such as the rigorous definition of exploding gradients or effective depth and the careful setting of layerwise step sizes. This rigor is what enables us to obtain important results. Also note that an important predecessor work [3] from NIPS 2017 is also 55 pages long. \n\nMuch of our appendix is strictly optional for readers interested in certain specifics, such as implementation details for those interested in replicating our results or the extended related work section for those interested in pursuing research in deep learning theory.\n\n### \n\n\"It's possible some of the issues arise from the particular architectures they choose to investigate and demonstrate on\"\n\nWhile we believe that all results discussed in the paper apply to convolutional and other networks in a similar fashion, we do not discuss or test the applicability to these networks specifically, for space reasons. However, using very deep MLPs as a testbed to advance the study of exploding gradients and related problems is a well-established practice (e.g. [1,2,3,4]).\n\n### \n\n\"it is also somewhat confrontational in style\"\n\nI apologize if my writing style appeared confrontational. Do you mean the paragraph that starts with \"These claims are mistaken. ...\"? I reformulated that paragraph in the revision. It now starts with \"We argue that these claims are overly optimistic...\"\n\n### \n\n\"making it accessible to a wider readership and aligned to the expectations from a conference paper in the field\"\n\nWe do not necessarily agree that ICLR papers should appeal to a wide readership. Many program synthesis papers are targeted at those interested in program synthesis. Many machine translation papers are targeted at those interested in NLP etc. Our paper is targeted at those interested in the theory of neural networks and foundational principles of neural network architecture design. We accept that this is a subset of the entire ICLR audience and do not see anything wrong with that.\n\n[1] Schoenholz et al. Deep Information Propagation. ICLR 2107. https://arxiv.org/abs/1611.01232\n\n[2] Balduzzi et al. The Shattered Gradients Problem: If resnets are the answer, then what is the question?. ICML, 2017. https://arxiv.org/abs/1702.08591\n\n[3] Yang & Schoenholz. Mean field residual networks: on the edge of chaos. NIPS, 2017. https://papers.nips.cc/paper/6879-mean-field-residual-networks-on-the-edge-of-chaos\n\n[4] Saxe et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. 2014. 
https://arxiv.org/abs/1312.6120\n", "The legend located in the top center graph in figure 5 is incorrect. From top to bottom it should be layer-tanh, batch-tanh, layer-ReLU, batch-ReLU, layer-SeLU. This colors match those in figure 3. ", "Dear Reviewers,\n\nI have recently become aware of two lines of work that are quite relevant to this work: ODE-based ResNets and Mean field analysis of deep networks. I will address both these strands in the next revision of the paper, mostly in section 9 but making references throughout the main body of the paper where appropriate. Below, I give a preview (note that this is split between 3 comments).\n\n\n+++++ Mean field analysis +++++\n\n[2] and its precessor [1] are the closest works to our paper. The authors use infinitely wide networks to study the expected behavior of forward activations and gradients in the initialized state. They identify two distinct regimes, order and chaos, based on whether an infinitesimal perturbation shrinks or grows in expectation respectively as it is propagated forward. This corresponds to the expected qm norm of the layer-Jacobian being smaller or larger than 1 respectively. They show that in the chaotic regime, gradients explode whereas in the ordered regime, gradients vanish. Further, they show that for tanh MLPs the correlation between forward activations corresponding to two different data inputs converges to 1 (`unit limit correlation') in the ordered regime as activations are propagated forward and to some value less than 1 in the chaotic regime. Specifically, in a tanh MLP without biases, in the chaotic regime, the correlation converges to 0.\n\nLike [1,2], much of our analysis relies on the expected behavior of networks in their randomly initialized state. Further, it is clear that the order / chaos dichotomy bears similarity to the exploding gradient problem / collapsing domain problem dichotomy as presented in this paper. However, there are also important differences.\n\n- We argue in this paper that the GSC is a better measure for the presence of pathological exploding or vanishing gradients than the raw scale of the gradient. Using the GSC, we obtain very different regions of order, chaos and stability for popular architectures. For a tanh MLP with no biases, using raw gradients, order is achieved for $\\sigma_w < 1$, stability for $\\sigma_w = 1$ and chaos for $\\sigma_w > 1$. For a tanh MLP with no biases, using the GSC, order is impossible, stability is achieved for $\\sigma_w \\le 1$ and chaos for $\\sigma_w > 1$. For a ReLU MLP with no biases, using raw gradients, order is achieved for $\\sigma_w < \\sqrt{2}$, stability for $\\sigma_w = \\sqrt{2}$ and chaos for $\\sigma_w > \\sqrt{2}$. For a ReLU MLP with no biases, using the GSC, stability is inevitable.\n- While [1] showed that order / chaos corresponds to unit limit correlation / non-unit limit correlation in a tanh MLP, this is not true in general. In a ReLU MLP with no biases and $\\sigma_w > \\sqrt{2}$, infinitesimal noise grows (chaos), yet correlation still converges to 1. Exploding gradient problem / collapsing domain problem is not a strict dichotomy and is thus able to accomodate such cases. \n\nSimilarly, the concepts of unit limit correlation and the collapsing domain problem are not the same. In fact, the former can be seen as a special case of the latter. In a tanh MLP with no bias and $\\sigma_w$ slightly larger than 1, correlation converges to 0 and eventually, gradients explode. 
Yet the domain can still collapse dramatically in the short term as shown in figure 1 to cause pseudo-linearity. In a tanh MLP with no bias and $\\sigma_w$ very large, again, correlation converges to 0 and gradients explode. However, the tanh layer maps all points close to the corners of the hypercube, which corresponds to domain collapse.", "+++++ Mean field analysis continued +++++\n\n[3] uses a framework similar to [1,2] to propose to combat gradient growth by downscaling the weights on the residual path in a ResNet. This corresponds to increased dilution, which indeed reduces gradient growth as shown in section 7. However, we also show in proposition 10 that the reduction achievable in this way may be limited. [3] also proposes to combat the exploding gradient problem by changing the width of intermediate layers. Our analysis in section 4.4 strongly suggests that this is not effective in reducing the growth of the GSC. [3] concludes that changing the width combats the exploding gradient problem because they implicitly assume that the pathology of exploding gradients is determined by the scale of individual components of the gradient vector rather than the length of the entire vector or the GSC. They do not justify this assumption. We propose the GSC as a standard for assessing pathological exploding gradients to avoid such ambiguity.\n\n\n[1] B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, S. Ganguli. Exponential expressivity in deep neural networks through transient chaos. NIPS 2016. https://arxiv.org/abs/1606.05340v1\n\n[2] S. Schoenholz, J. Gilmer, S. Ganguli, J. Sohl-Dickstein. Deep information propagation. ICLR 2017. https://openreview.net/forum?id=H1W1UN9gg\n\n[3] Anonymous. Deep Mean Field Theory: Variance and Width Variation by Layer as Methods to Control Gradient Explosion. ICLR 2018. https://openreview.net/forum?id=rJGY8GbR-\n\n\nPS: There seems to be another relevant paper: \"Mean Field Residual Networks: On the Edge of Chaos\" that will be published at NIPS this year. Unfortunately, I have been unable to obtain a copy so far. If you have a link to this paper, I would love to have it.", "+++++ Mean field analysis continued +++++\n\nWe do not use the assumption of infinite width in our analysis. The only possible exception is that the SSD assumption in proposition 10 can be viewed as implying infinite width. \n\nWhile [2] conjectures that stability is necessary for training very deep networks, our paper provides somewhat contrary evidence. Our two best performing vanilla architectures, SeLU and layer-tanh, are both inside the chaotic regime whereas ReLU, layer-ReLU and tanh, which are all stable, exhibit a higher training classification error. Clearly, chaotic architectures avoid pseudo-linearity. The difference between our experiments and those in [2] is that we allowed the step size to vary between layers. This had a large impact, as can be seen in table 2. We believe that our results underscore the importance of choosing appropriate step sizes when comparing the behavior of different neural architectures or training algorithms in general.\n\nIn section 4, we present a rigorous argument for the harmful nature of exploding gradients, and thus of chaos, at high depth. \n\nIt is not clear a priori whether a unit limit correlation is harmful for accuracy. After all, correlation information is a rather small part of the information present in the data, so the remaining information might be sufficient for learning. 
In section 6, we show how pseudo-linearity can arise under unit limit correlation and explain how it can harm expressivity and thus accuracy.", "+++++ ODE-based ResNets +++++\n\n\nRecently, [1-4] proposed ResNet architectures inspired by dynamical systems and numerical methods for ordinary differential equations. The central claim is that these architectures are stable at arbitrary depth, i.e. both forward activations and gradients (and hence GSC) are bounded as depth goes to infinity. They propose four practical strategies for building and training ResNets: (a) ensuring that residual and skip functions compute vectors orthogonal to each other by using e.g. skew-symmetric weight matrices (b) ensuring that the Jacobian of the skip function has eigenvalues with negative real part by using e.g. weight matrices factorized as -C^TC (c) scaling each residual function by 1/B where B is the number of residual blocks in the network and (d) regularizing weights in successive blocks to be similar via a fusion penalty.\n\n\narchitecture GSC (base 10 log) GSC dilution-corrected (base 10 log)\nbatch-ReLU (i) 0.337 4.23\nbatch-ReLU (ii) 0.329 4.06\nbatch-ReLU (iii) 6.164 68.37\nbatch-ReLU (iv) 0.313 7.22\nlayer-tanh (i) 0.136 2.17\nlayer-tanh (ii) 0.114 1.91\nlayer-tanh (iii) 3.325 5.46\nlayer-tanh (iv) 0.143 2.31\nTable 1\n\n\nWe evaluated those strategies empirically. In table 1, we show the value of the GSC across the network for 8 different architectures in their initialized state applied to Gaussian noise (see section 9.9.2 for details). All architectures use residual blocks containing a single normalization layer, a single nonlinearity layer and a single linear layer. We initialize the linear layer in four different ways: (i) Gaussian initialization, (ii) skew-symmetric initialization, (iii) initialization as -C^TC where C is Gaussian initialized and (iv) Gaussian initialization where weight matrices in successive blocks have correlation 0.5. Initializations (ii), (iii) and (iv) mimic strategies (a), (b) and (d) respectively. To enable the comparison of the four initialization styles, we normalize each weight matrix to have a unit qm norm. We study all four initializations for both batch-ReLU and layer-tanh. \n\nInitialization (ii) improves slightly over initialization (i). This is expected given theorem 3. One of the key assumptions is that skip and residual function be orthogonal in expectation. While initialization (i) achieves this, under (ii), the two functions are orthogonal with probability 1. \n\nInitialization (iii) has gradients that grow much faster than initialization (i). On the one hand, this is surprising as [2] states that eigenvalues with negative real parts in the residual Jacobian supposedly slow gradient growth. On the other hand, it is not surprising because introducing correlation between the residual and skip path breaks the conditions of theorem 3. \n\nInitialization (iv) performs comparably to initialization (i) in reducing gradient growth, but requires a larger amount of dilution to achieve this result. Again, introducing correlation between successive blocks and thus between skip and residual function breaks the conditions of theorem 3 and weakens the power of dilution.\n\nWhile we did not investigate the exact architectures proposed in [2,3], our results show that more theoretical and empirical evaluation is necessary to determine whether architectures based on (a), (b) and (d) are indeed capable of increasing stability. 
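For concreteness, below is a minimal NumPy sketch of one plausible way to construct the four linear-layer initializations (i)-(iv) described above. It is not the code behind Table 1, and the normalization to "unit qm norm" is approximated by a Frobenius norm scaled by the square root of the width, which may differ from the exact definition used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                              # layer width, as in the experiments above

def normalize(W):
    # Assumed stand-in for "unit qm norm": Frobenius norm scaled by sqrt(width).
    return W / (np.linalg.norm(W) / np.sqrt(W.shape[0]))

W_i = normalize(rng.normal(size=(n, n)))             # (i) plain Gaussian initialization

A = rng.normal(size=(n, n))
W_ii = normalize(A - A.T)                            # (ii) skew-symmetric initialization

C = rng.normal(size=(n, n))
W_iii = normalize(-C.T @ C)                          # (iii) factorized as -C^T C

rho = 0.5                                            # (iv) correlation 0.5 with the previous block's weights
W_prev = rng.normal(size=(n, n))
W_iv = normalize(rho * W_prev + np.sqrt(1.0 - rho ** 2) * rng.normal(size=(n, n)))

# Sanity checks: (ii) is skew-symmetric, (iii) has no positive eigenvalues.
print(np.allclose(W_ii, -W_ii.T), float(np.max(np.linalg.eigvalsh(W_iii))) <= 1e-8)
```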
Of course, those architectures might still confer benefits in terms of e.g. inductive bias or regularization.\n\nFinally, strategy (c), the scaling of either residual and/or skip function with constants is a technique already widely used in regular ResNets. In fact, our study suggests that in order to bound the GSC at arbitrary depth in a regular ResNet, it is sufficient to downscale each residual function by only 1/sqrt(B) instead of 1/B as [1-4] suggest. \n\n\n[1] E. Haber, L. Ruthotto, E. Holtham. Learning Across Scales - Multiscale Methods for Convolution Neural Networks. arXiv 2017. https://xtract.ai/wp-content/uploads/2017/05/Learning-Across-Scales.pdf\n\n[2] E. Haber, L. Ruthotto. Stable Architectures for Deep Neural Networks. arXiv 2017. https://arxiv.org/abs/1705.03341\n\n[3] B. Chang, L. Meng, E. Haber, L. Ruthotto, D. Begert, E. Holtham. Reversible Architectures for Arbitrarily Deep Residual Neural Networks. arXiv 2017. https://export.arxiv.org/abs/1709.03698\n\n[4] Anonymous. Multi-level residual networks from dynamical systems view. ICLR 2018 submission. https://openreview.net/forum?id=SyJS-OgR-\n" ]
[ -1, 3, 5, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, 4, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Byt6QYOxf", "iclr_2018_HkpYwMZRb", "iclr_2018_HkpYwMZRb", "iclr_2018_HkpYwMZRb", "Byt6QYOxf", "Byt6QYOxf", "Skuwdz5eM", "Skuwdz5eM", "rJ4WHpjgM", "iclr_2018_HkpYwMZRb", "iclr_2018_HkpYwMZRb", "iclr_2018_HkpYwMZRb", "iclr_2018_HkpYwMZRb", "iclr_2018_HkpYwMZRb" ]
iclr_2018_Syr8Qc1CW
DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images
Disentangling factors of variation has always been a challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, poor quality of images generated from encodings, lack of identity information, etc. In this paper, we propose a supervised algorithm called DNA-GAN that tries to disentangle different attributes of images. The latent representations of images are DNA-like, in which each individual piece represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of two latent representations, we obtain two new representations that can be decoded into images. In order to obtain realistic images and disentangled representations, we introduce a discriminator for adversarial training. Experiments on the Multi-PIE and CelebA datasets demonstrate the effectiveness of our method and its advantage in overcoming limitations of existing methods.
workshop-papers
The method proposed in the paper for latent disentanglement and attribute-conditional image generation is novel to the best of my understanding, but reviewers (Anon1 and Anon3) have expressed concerns about the quality of the results (CelebA images) as well as about the technical presentation and claims in the paper. Given the novelty of the proposed method, I would *not* like to recommend a "reject" for this paper, but the concerns raised by the reviewers about the quality of results and the lack of quantitative results seem valid. The authors rule out the possibility of any quantitative results in their response, but I am not fully convinced -- in particular, the effectiveness of attribute-conditional image generation can be captured by first training an attribute classifier on the generated images and then measuring how often the predicted attributes are flipped when the conditioning signal is changed. There are also other metrics in the literature for evaluating generative models. I would recommend inviting it to the workshop track, given that the work is novel and interesting but has scope for improvement.
val
[ "Bk6FQyPVG", "ryz0obDxM", "BJlAlKOgM", "rkqvQmKxM", "H19Hbk-bz", "ryFXlRgZG", "SyD336yZf", "Hy-aBlxbM", "BJ3U_JlWG", "BkCwgT1Zz", "SJbRMqk-G", "HJuqGBnef", "SkEBPDExM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Thanks for the rebuttal. It clears up some confusion but my score remains slightly negative. ", "This paper proposes to disentangle attributes by forcing a representation where individual components of this representation account for individual attributes. \n\nPros: \n+ The idea of forcing different parts of the latent representation to be responsible for different attributes appears novel. \n+ A theoretical guarantee of the efficiency of an aspect of the proposed method is given.\n\nCons: \n- The results are not very appealing visually. The results from the proposed method do not seem much better than the baselines. What is the objective for the images in Fig. 4? For example I'm looking at the bottom right, and that image looks more like a merger of images, than a modification of the image in the top-left but adding the attributes of choice.\n- Quantitative results are missing. \n- Some unclarity in the description of the method; see below.\n\nQuestions/other:\n- What is meant by \"implicit\" models? By \"do not anchor a specific meaning into the disentanglement\"? By \"circumscribed in two image domains\"? \n- Why does the method require two images? \n- In the case of images, what is a dominant vs recessive pattern? \n- It seems artificial to enforce that \"the attribute-irrelevant part [should] encode some information of images\". \n- Why are (1, 0) and (1, 1) not useful pairs?\n- Need to be more specific: \"use some channels to encode the id information\". \n", "Summary:\nThis paper investigated the problem of attribute-conditioned image generation using generative adversarial networks. More specifically, the paper proposed to generate images from attribute and latent code as high-level representation. To learn the mapping from image to high-level representations, an auxiliary encoder was introduced. The model was trained using a combination of reconstruction (auto-encoding) and adversarial loss. To further encourage effective disentangling (against trivial solution), an annihilating operation was proposed together with the proposed training pipeline. Experimental evaluations were conducted on standard face image databases such as Multi-PIE and CelebA.\n\n== Novelty and Significance ==\nMulti-attribute image generation is an interesting task but has been explored to some extent. The integration of generative adversarial networks with auto-encoding loss is not really a novel contribution.\n-- Autoencoding beyond pixels using a learned similarity metric. Larsen et al., In ICML 2016.\n\n== Technical Quality == \nFirst, it is not clear how was the proposed annihilating operation used in the experiments (there is no explanation in the experimental section). Based on my understanding, additional loss was added to encourage effective disentangling (prevent trivial solution). I would appreciate the authors to elaborate this a bit.\n\nSecond, the iterative training (section 3.4) is not a novel contribution since it was explored in the literature before (e.g., Inverse Graphics network). The proof developed in the paper provides some theoretical analysis but cannot be considered as a significant contribution.\n\nThird, it seems that the proposed multi-attribute generation pipeline works for binary attribute only. However, such assumption limits the generality of the work. 
Since the title is quite general, I would assume to see the results (1) on datasets with real-valued attributes, mixture attributes or even relative attributes and (2) not specific to face images.\n-- Learning to generate chairs with convolutional neural networks. Dosovitskiy et al., In CVPR 2015.\n-- Deep Convolutional Inverse Graphics Network. Kulkarni et al., In NIPS 2015.\n-- Attribute2Image: Conditional Image Generation from Visual Attributes. Yan et al., In ECCV 2016.\n-- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. Chen et al., In NIPS 2016.\n\nAdditionally, considering the generation quality, the CelebA samples in the paper are not the state-of-the-art. I suspect the proposed method only works in a more constrained setting (such as Multi-PIE where the images are all well aligned).\n\nOverall, I feel that the submitted version is not ready for publication in the current form.\n", "Pros:\n1. A new DNA structure GAN is utilized to manipulate/disentangle attributes.\n\n2. Non attribute part (Z) is explicitly modeled in the framework.\n\n3. Based on the experiment results, this proposed method outperformed previous methods (TD-GAN, IcGAN).\n\nCons:\n1. It assumes that each individual piece represents an independent factor of variation, which can not hold all the time. The authors also admit that when two factors are dependent, this method might fail.\n\n2. In Lreconstruct, only min difference between A and A1 is considered. How about A and A2 here? It seems that A2 should also be similar with A since only one bit in A2 and A1 is different.\n\n3. Only one attribute can be \"manipulated\" each time? Is it possible to change more than one attribute each time in this method?", "Thanks for your feedback.\n\nI appreciate for your rigorous argument. The Hat example I mentioned before was to make you aware of the difference between our method and other method, though it was not elaborate in our original paper. Because I thought the idea and structure of DNA-GAN is naturally distinct from others, thus it is not necessary to point out that. Please do not have a glimpse of the framework (Fig. 1), and find there is an encoder, a decoder and a discriminator, and say 'Oh, it is nothing new to me'. There are many models that used them, but the idea and detail of each model is totally different. \n\nAll in all, I wish my previous comments could help correct your misunderstandings towards our paper. I hope that you read our paper again and evaluate our contribution and originality.", "Your answer: \"Many existing methods are only able to add one kind of EYEGLASSES to a certain image. But our method can add various kinds of EYEGLASSES by swapping the attribute-part in latent encodings.\"\n\nMy question: \"I couldn't find evidence in the paper that the proposed method has the capacity to generate diverse-looking EYEGLASSES.\"\n\nYour answer: \"in Figure 5, (j) and (l) display two different kinds of generated HATS at top of the same person.\"\n\nIf you want to claim something in your paper and rebuttal, please make sure it is (1) accurate and (2) concrete.\n\nRegarding \"However, CycleGAN, DTN and UNIT are not able to generate diverse hats\", I am quite skeptical about your comment. If you really want to make such argument, please provide both qualitative and quantitative analysis.\n\n\n", "Dear authors,\n\nThank you for your feedback!\n\n1. \"Many existing methods are only able to add one kind of eyeglasses to a certain image. 
But our method can add various kinds of eyeglasses by swapping the attribute-part in latent encodings.\"\nI am not convinced by the argument made here. \n-- First, can you link me to the \"many existing methods\" you referred to? \nCycleGAN is an exception (I don't think it is generative model since no stochastity is involved).\nBoth DTN and UNIT can generate diverse-looking samples.\n-- Second, I couldn't find evidence in the paper that the proposed method has the capacity to generate diverse-looking eyeglasses.\n\n2. Thank you for the explanation. But I don't see much novelty from the proposed annihilating (it is basically augmenting the sampling distribution that discourages trivial solution).\n\n3. \"These were not explored in the previous literature. \"\nI agree the theoretical analysis is your contribution but I don't quite agree with the argument made here. Can you possibly summarize the differences against Inverse Graphics Networks?\n\n4. When presenting your work, please make it crystal clear you are targeting at face images with binary attributes.\n\n5. Current results are not very convincing. Please improve the current form if you think your results are preliminary (not ready for publication). Another suggestion is to demonstrate your approach on more challenging datasets.\n", "Thanks for your feedbacks. \n\n1. In Fig. 4, the right bottom image was generated from the top left image with two attributes from bottom left and top right images. Figure. 3 displays the baseline of TD-GAN and IcGAN in one-attribute case. The left two columns are original images. Since TD-GAN encountered the problem of trivial solutions and IcGAN cannot generate real-looking images in single-attribute case, so we did not show the their results in the multi-attribute case. Actually, the multi-attribute case is much harder than the single-attribute case. Actually the visual effect is not bad. (Please see Figure 2.) if you would like to compare results of celebA on the 64*64 resolution level, please look at VAE-GAN (https://github.com/anitan0925/vaegan). \n\nAn important factor that renders the unfairness of comparison is DNA-GNA is able to do image generation by exemplar, which is much more difficult than many other models. As shown in Figure 5, (j) and (l) display two different kinds of generated hats at top of the same person. Many other models are not able to add different hats to the same image. This is what I mean image generation by exemplar. \n\nMoreover, the overall focus of our paper is not image generation. The visual effect can be improved by extensive hyper-parameter tuning, but these figures are used to demonstrate that our model can learn disentangled representations in our latent representations. The contribution of a novel learning disentangled representations using weakly labeled multi-image should be cared about. The iterative training strategy addresses the problem of training on unbalanced dataset and improves the training efficiency. The annihilating operation makes the image generation by exemplar works. Theoretical connection between the training efficiency and the balancedness was given. These are the focus of our paper.\n\n2. There is no reasonable quantitative measure for GAN related papers. So we did not tune our parameters heavily as many other papers did. The figures in our paper were used to demonstrate that multiple attributes were indeed disentangled in our latent representations. The visual effect can be improved better, but it is not the focus of our paper. 
We care more about the advantage of our model in overcoming difficulties existing in other models, such as 1) difficulty of training on unbalanced multi-attribute datasets, 2) trivial solutions without preserving id information in other methods of image generation by exemplar 3) training on weakly labeled dataset. \n\n3. \"Implicit models\" means the probability distribution of training samples can not be explicitly formulated.\n\n\"do not anchor a specific meaning into the disentanglement\" means: we cannot predict what factors of variation beforehand in unsupervised methods.\n\n\"circumscribed in two image domains\" means CycleGAN, DTN, UNIT, GeneGAN are only able to do image translation between two image domains with respect to one attribute. However, our model are able to do multi-attribute image generation.\n\n4. We need two images with different attributes at i-th position each time. In our model, we swapped the latent encodings to generate novel crossbreeds, which can be decoded into novel images with new attributes.\n\n5. Dominant pattern means the i-th label is 1, while the recessive pattern means the i-th label is 0.\n\n6. The attribute-irrelevant part z_a or z_b is left for encoding information of background or image identity. Because the attribute-related parts are only able to represent part of the image information. For example, in three-attribute case, [Bangs, Eyeglasses, Smiling], there are many other information in images except for Bangs, Eyeglasses and Smiling, such as wearing hat, mustache, the person identity and background.\n\n7. (1, 0) and (1, 1) are useful pairs. The sentence below Fig. 2 \"Because they are not useful pairs, thus do not participated in training\" means {(1, 0) and (1, 0)} or {(1, 1) and (1, 1)} is not useful pairs.\n\n8. The id information was similarly encoded in z. The latent representations are 4-d tensor, each dimension of which represents batch size, height, width, channel. We divide some channels to encode the id information.\n", "Thanks for your feedback!\n\n1. For example, in Figure 5, (j) and (l) display two different kinds of generated hats at top of the same person. However, CycleGAN, DTN and UNIT are not able to generate diverse hats images given the same input image. This is because the attribute information is disentangled from input images in our model.\n\n2. The word we used is annihilating not annealing. It is not related to simulated annealing. I would like to explain the annihilating operation again: replacing the tensor b_i with tf.zeros_like(b_i).\nThis operation is necessary for the success of image generation by exemplar. Directly swapping attribute part a_i and b_i would cause the network converge to trivial solutions. The failure case of TD-GAN is a good example. \n\n3. DC-IGN randomly selects an active attribute and feeds the other attributes by the average in a mini-batch each time; the iterative training in DNA-GAN is: each attribute was repeatedly selected to be the active attribute and useful pairs are fed for training. \n\nThe differences is: in our model, training with random pairs can be viewed as randomly selecting an active attribute, because the active attribute was chosen according to the different position in two images' labels. This is theoretically proved to be less effective than the iterative training with useful pairs. Besides, we do not need to feed other attributes by the average.\n\n4. Thanks for your advice. \n\n5. 
The figures in our paper were used to demonstrate that multiple attributes were indeed disentangled in our latent representations. The visual effect can be improved better, but it is not the focus of our paper. (That should be the focus of this paper. https://openreview.net/forum?id=Hk99zCeAb&noteId=Hk99zCeAb) We should realize that no single model is perfect in any case by no free lunch theorem. But we should care more about the advantage of every model in overcoming the difficulties in other models as well as its limitation. Our paper addressed 1) difficulty of training on unbalanced multi-attribute datasets, 2) trivial solutions without preserving id information in other methods of image generation by exemplar 3) training on weakly labeled dataset. Of course, I would like show better results in the modified version later.\n", "Thanks for your feedback.\n\n1. There are many model structures that integrate of generative adversarial networks with auto-encoding loss, such CycleGAN, DTN, UNIT, etc. But this is not our contribution. Instead, the key point is to learn the disentangled representations by iteratively swapping the attribute of two images. This idea can help address the problem of underdetermined attribute pattern. For example, we want to generate a facial images with the particular eyeglasses in another image, not a general eyeglasses. Many existing methods are only able to add one kind of eyeglasses to a certain image. But our method can add various kinds of eyeglasses by swapping the attribute-part in latent encodings. Besides, it is not easy to make this idea work, we need annihilating operation to prevent from trivial solutions. \n\n- CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks\n- DTN: Unsupervised cross-domain image generation\n- UNIT: Unsupervised image-to-image translation networks\n\nWe will add VAE-GAN in our reference.\n\n2. As we explained in Section 3.3, the annihilating operation is to replace a tensor by a zero tensor of the same size.\nThe footnote 1 also explains the tensorflow implementation: tf.zeros_like(). No additional loss is necessary. Section 3.3 gives a simple example to illustrate why the solution would be trivial without the annihilating operation. I would appreciate that you read Section 3.3 for details.\n\nThe comparison experiments with TD-GAN demonstrate the importance of the annihilating operation. Without it, TD-GAN encodes all information into the attribute part. As a consequence, two original images rather than the attribute get swapped.\n\n3. The motivations of iterative training comes from the difficulty of training on the unbalanced dataset. The iterative training strategy was employed to overcome this difficulty and increase the training efficiency. Besides, the theoretical parts pointed out the close connection between training efficiency and the balancedness of dataset. These were not explored in the previous literature. \n\n4. The current pipeline is indeed only for binary attribute. But the requirement for weakly supervised label 0/1 is an advantage to some extent. In our experiments on the MultiPie dataset, the illumination factor was only labeled for dark (0) to light (1), but our model can interpolate the illumination ranging from dark (0) to light (1), which is a real-value. Most image datasets are discretely labeled, therefore I believe our model can further apply to many other datasets with cheap expense of labeling (0/1). 
Of course many unsupervised methods are naturally suitable for this case, since they do not need label. But we cannot predict what factors of variation beforehand. Instead, we make up stories after they work.\n\n5. There is no reasonable quantitative measure for GAN related papers. So we did not tune our parameters heavily as many other papers did. The figures displayed in the paper are the initial successful results. What we cares is a generally effective method. I believe our model can achieve very impressive results if more machines and efforts being devoted. I don't want to select very good pictures as IcGAN did but totally useless in practice. Besides, do remember that our model can do image generation by exemplars, rather than simply adding mean attributes to images. This is particular difficult in the multi-attribute case. As far as I know, many other methods are not able to to this. (e.g. TD-GAN needs the labeled id information when swapping the attribute)\n\n \n", "Thanks for your review and comments. \n\n1. When two factors are statistically dependent with each other, many similar methods would fail, either. For example, considering two attribute male and mustache, they appears or disappears almost simultaneously since they statistically dependent with each other. The model would consider them as on attribute. Anyway, it is a fundamental and hard problem in disentangled representation learning.\n\n2. In our model framework, A2 should display the person from A without the i-th attribute a_i, and B2 should display the person from B with the i-th attribute a_i. We cannot enforce reconstruction loss between A2 and A, because they are the same person with different attribute. Imaging that A is a person with eyeglasses and A2 should be the person without eyeglasses, it is not reasonable to enforce the reconstruction loss between them, since they looks different.\n\n3. In the training process, we only need to swap only one attribute each time. By iterative training, DNA-GAN could disentangle multiple attributes. If we change two or more attributes in the training process, then the number combination of all attributes would be exponentially large. For example, if we have three attributes in total, then the number of combinations is 2^3-1=7. Then the training process would become inefficient. This is why we adopt the strategy of iterative training, which is theoretically proved to be better. Of course, DNA-GAN could manipulate multiple attributes in the test phase, as shown in Fig. 4 and Fig. 5.\n", "Thanks for pointing out that paper. We tried reproducing the Fader-Networks, however failed to generate real-looking images as shown in the paper. The images were blurry generally even with extensive hyper-parameter tuning. Sometimes it failed on one attribute when training with respect to two attributes. Besides, we have noticed that several reproduction available in github are not able to reproduce the results, either. e.g. https://github.com/hjweide/fader-networks, https://github.com/hardikbansal/Fader-Networks-Tensorflow. \n\nWe would like to cite that paper if the authors could release their official codes. For your information, DNA-GAN works even though we did not carefully select hyper-parameters in our experiment. I believe the visual results could get better with extensive hyper-parameter tuning. ", "The Fader Network architecture also deals with the learning of disentangled representations on multi-attribute images:\n https://arxiv.org/abs/1706.00409 . 
It is probably a relevant paper to cite, and would provide a better comparison than IcGAN." ]
[ -1, 4, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Hy-aBlxbM", "iclr_2018_Syr8Qc1CW", "iclr_2018_Syr8Qc1CW", "iclr_2018_Syr8Qc1CW", "ryFXlRgZG", "BJ3U_JlWG", "BkCwgT1Zz", "ryz0obDxM", "SyD336yZf", "BJlAlKOgM", "rkqvQmKxM", "SkEBPDExM", "iclr_2018_Syr8Qc1CW" ]
iclr_2018_ryDNZZZAW
Multiple Source Domain Adaptation with Adversarial Learning
While domain adaptation has been actively researched in recent years, most theoretical results and algorithms focus on the single-source-single-target adaptation setting. Naive application of such algorithms on multiple source domain adaptation problem may lead to suboptimal solutions. We propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances. Compared with existing bounds, the new bound does not require expert knowledge about the target distribution, nor the optimal combination rule for multisource domains. Interestingly, our theory also leads to an efficient learning strategy using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the multiple domain shifts while still being discriminative for the learning task. To this end, we propose two models, both of which we call multisource domain adversarial networks (MDANs): the first model optimizes directly our bound, while the second model is a smoothed approximation of the first one, leading to a more data-efficient and task-adaptive model. The optimization tasks of both models are minimax saddle point problems that can be optimized by adversarial training. To demonstrate the effectiveness of MDANs, we conduct extensive experiments showing superior adaptation performance on three real-world datasets: sentiment analysis, digit classification, and vehicle counting.
workshop-papers
Pros -- Lays out bounds for multi-domain adaptation based on earlier work on a single source-target domain pair. -- Shows gains over choosing the best source domain for a target domain, or naively combining domains. Cons -- The reviewers agree that the results are relatively straightforward extensions of the single source-target pair setting. -- Hard-max doesn’t consider the partial contribution of multiple source domains, and considers the worst-case scenario. -- Soft-max addresses some of these issues; the authors provide reasonable justification for the algorithm, but it’s not clear that the specific choice of \alphas leads to the tightest bound. The reviewers noted that the authors significantly improved the paper during the revision process. The AC feels that the presented techniques would be of interest to the community and would help steer discussions towards theoretically optimal ways to do domain adaptation given multiple domains. The authors are therefore encouraged to submit to the workshop track.
val
[ "ryY-CNPEG", "SJ0ahZwEf", "rkT23LLEz", "S1yZTj_lz", "HyxIlUFlz", "SkmS_5--z", "Hkuk_f7ZG", "rkwtwWmNG", "B1x2PDezz", "S1nuuvxzf", "ryXLuDgfz", "r1-VOPlff", "rkpWuPlfG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "I thank the authors for their responsiveness. It seems that we reach a common ground. The authors added a comment about the possibility to combine k single-source bounds to obtain a possibly tighter bound (I appreciate the honesty). Intuitively, the fact that we loosen the bound to obtain a more desirable trade-off still makes me believe that it must exist another theoretical analysis of multi-source domain adaptation.\n\nHowever, the paper experiments show that MDAN achieves generally better empirical result than the best single DANN, which suggests that the authors’ analysis captures something meaningful about the multi-source problem. This is why I consider that the paper contribution is worthy.\n", "We would like to thank Reviewer 3 again for the insightful thoughts and comments. We appreciate your positive feedback for the theoretical study of the smoothed version of the algorithm. We agree with the reviewer that using the single-source-single-target bound k times with union bound can lead to an upper bound that has the same asymptotic order in terms of both m and k. As the reviewer has pointed out, due to the sub-additivity of the max function, this bound is actually tighter. In fact, using the minimax inequality, we can precisely bound the relation of these two lambdas as follows:\n\nR3's lambda = max_{i \\in [k]} min_{h} eps_T(h) + eps_{S_i}(h) <= min_{h} max_{i \\in [k]} eps_T(h) + eps_{S_i}(h) = Our lambda\n\nHowever, our bound has the advantage that it nicely decouples all the four terms in Thm. 3.4 so that once the dataset and the hypothesis class have been fixed, minimizing the upper bound amounts to minimizing the first two terms. Besides providing an intuitive explanation, the minimization of our upper bound can directly lead to practical learning algorithms (the hard/soft versions) that can be implemented and used. On the other hand, as a comparison, although the alternative upper bound is slightly tighter, it does not admit practical instantiation because minimizing this upper bound requires us to:\n\n1. Compute all the k single-source-single-target bounds. \n\n2. Choose the maximum one to minimize.\n\nThe first step cannot be implemented as it depends on unknown quantity, i.e., the lambda_i for each source domain. In fact, for most interesting hypothesis class, the estimation of lambda_i itself is computationally intractable. Hence, given that both bounds share the same asymptotic complexity and our bound can lead to practical learning algorithms, we still choose to use the current bound. \n\nWe've updated the paper to add this discussion in the remark under Thm 3.4 and acknowledge the Reviewer’s comments. \n", "I warmly welcome the theoretical study of the smoothed version of the algorithm.\n\nHowever, I maintain my score since I'm still skeptical about the advantage of Theorem 3.4 compared to the maximum over the k single-source bounds. Specifically:\n\n1 - The authors argued that \"in order to use Thm. 1 in Blitzer 2008 to achieve the same result, because of the union bound, one will have to incur an additional square root of log(k) term\". This is right, but Theorem 3.4 also contains a square root of log(k). \n\n2 - The authors argued that \"the \\lambda (error achieved by the optimal hypothesis on S and T) defined in Blitzer 2008 depends on both S and T, hence when combining the k bounds, there does not necessarily exist a single optimal hypothesis h^* that makes this bound hold for all k pairs\". 
But, if one takes the maximum over the k bounds (using the union bound as discussed above), one will consider a single pair S and T, which will be valid. In fact, the \"multi source\" lambda defined in the paper also considers a single pair, given by the minimum of the maximum individual target+source risks (see bottom of page 3). The difference is that the alternative approach will amount to taking the maximum of the minimum individual source risks (which might even give a tighter bound).\n", "Quality:\nThe paper appears to be correct.\n\nClarity:\nThe paper is very clear.\n\nOriginality:\nThe theoretical contribution extends the seminal work of Ben-David et al.; the idea of using adversarial learning is not new; the novelty is medium.\n\nSignificance:\nThe theoretical analysis is interesting but, in my view, limited; the idea of the algorithm is not new, but as far as I know this is the first time it is explicitly presented for the multi-source setting. \n\nPros:\n-new theoretical analysis for the multi-source problem\n-clear paper\n-the smoothed version is interesting\nCons:\n-Learning bounds from a worst-case standpoint are probably not the best analysis for multi-source learning\n-experimental evaluation is limited in the sense that similar algorithms in the literature are not compared\n-Extension is a bit direct from the seminal work of Ben-David et al.\n\n\nSummary:\nThis paper presents a multiple-source domain adaptation approach based on adversarial learning.\nThe setting considered contains multiple source domains with labeled instances and one target domain with unlabeled instances. The authors propose learning bounds in this context that extend the seminal work of Ben-David and co-authors (2007, 2010), where they essentially consider the max source error and the max divergence between target and source, together with empirical estimates.\nThen, they propose an adversarial algorithm to optimize this bound, with another version optimizing a smoothed objective, following the approach of Ganin et al. (2016). \nAn experimental evaluation on 3 known tasks is presented.\n\nComments:\n\n-I am not particularly convinced that the proposed theory best explains multi-source learning. In multi-source learning, you expect that one source may compensate for the others when needed for the classification of particular instances. The paper considers a kind of worst case by taking the max error over the sources and the max divergence between target and source, which is not really representative of what happens in real problems, in the sense that you do not take into account how the different sources interact.\nThe experimental results actually confirm this aspect.\nMaybe the authors could propose a learning bound that corresponds to the smoothed version proposed in the paper, which works best.\n\nThe Hard version of the algorithm seems here to comply with the bound, while the algorithm that is really interesting is the smoothed version.\n\n-The experimental evaluation is a bit limited: there is no comparison with other (deep learning) methods tackling multi-source scenarios (or equivalent), while I think it is easy to find related approaches:\n-E. Tzeng, J. Hoffman, T. Darrell, K. Saenko. Simultaneous Deep Transfer Across Domains and Tasks. ICCV 2015.\n-I-H Jhuo, D Liu, D.T. Lee, and S-Fu. Chang. Robust visual domain adaptation with low-rank reconstruction. In IEEE CVPR, 2012.\n-Muhammad Ghifary, W. Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Domain generalization for object recognition with multi-task autoencoders. 
In IEEE International Conference on Computer Vision (ICCV), 2015.\n-Chuang Gan, Tianbao Yang, and Boqing Gong. Learning attributes equals multi-source domain generalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.\n-R. Gopalan,R. Li,and R. Chellappa. Unsupervised Adaptation Across Domain shifts by generating intermediate data representations. PAMI, 36(11), 2014.\n\nNote also this paper at CVPR'17: about Domain adversarial adaptation.\nE. Tzeng, J. Hoffman, K. Saenko, T. Darrell. Adversarial Discriminative Domain Adaptation, CVPR 2017.\n\n\n-Nothing is said about the complexity of applying the algorithm on the different datasets (convergence, tuning, ...)\nFor the smoothed version, it could be interesting to see if the weights w_i associated to each source are related to each (original) source error and see how the sources are complementary. \n\n--\nAfter rebuttal\n--\nThe new results and experimental evaluation have improved the paper. I increased my score.", "The paper builds on the previous work of Ganin et al. (2015, 2016), that introduced a domain adversarial neural network (DANN) for single source domain adaptation. Whereas Ganin et al. (2016) were building directly on the (single source) domain adaptation theorem of Ben-David et al., the authors prove a similar result for the multiple sources case. \n\nThis new result appears to be a simple extension of the single source theorem. A similar result to Theorem 3.1 can be obtained by considering the maximum over the k bounds obtained by considering the k pairs source-target one by one, using Theorem 2.1 of Blitzer et al. (2008). In fact, the latter bound might even be tighter, as Theorem 3.1 considers the maximum over the three components of the domain adaptation bound separately (the source error, the discrepancy and the lambda term). The same observation holds for Theorem 3.4, which is very similar to Theorem 1 of Blitzer et al. (2008). This made me doubt that the derived theorem is studying multi-source domain adaptation in an optimal way. \nThat being said, the authors show in their experiments that their multiple sources network (named MDAN), which is based on their theoretical study, generally achieves better accuracy than the best single source DANN algorithm. This succeeds in convincing me that the proposed approach is of interest. At least, these empirical results could be used as non-trivial benchmarks for further development. \n\nNote that the fact that the \"smoothed version\" of MDAN performs better than the \"hard version\", while the latter is directly backed by the theory, also suggests that something is not captured by the theorem. The authors suggest that it can be a question of \"data-efficiency performance\": \"We argue that with more training iterations, the performance of Hard-Max can be further improved\" (page 8). This appears to me to be the weakest claim of the paper, since it is not backed by an empirical or a theoretical study. 
\n\nPros:\n- Tackles an important problem that is not studied as much as it deserves.\n- Based on a theoretical study of the multi-source domain adaptation problem.\n- The empirical study is exhaustive enough to show that the proposed algorithm actually works.\n- May be used as a benchmark for further multi-source domain adaptation research.\n\nCons:\n- The soft-max version of the algorithm - which obtains the best empirical results - is not backed by the theory.\n- It is not obvious that the theoretical study and the proposed algorithm are actually the right thing to do.\n\nMinor comment:\n- Section 5: It seems that the benchmarks named \"sDANN\" and \"cDANN\" in 5.1 are the same as \"best-Single-DANN\" and \"Combine-DANN\" in 5.2. If I am right, the nomenclature must be uniformized. \n", "The generalization bounds proposed in this paper are an extension of Blitzer et al. 2007. The previous bounds were proposed for the single-source, single-target setting, and this paper extends them to the multiple-source-domain setting. \n\nThe proposed bound is presented in Theorem 3.4, showing some interesting observations, such as that the performance on the target domain depends on the worst empirical error among the multiple source domains. The proposed bound reduces to Blitzer et al. 2007’s when there is only a single source domain. \n\nPros \n+ The proposed bound is of some interest.\n+ The bound leads to an efficient learning strategy using adversarial neural networks.\n\nCons:\n- My major concern is that the baselines evaluated in the experiments are quite limited. There are other publications working on the multi-source-domain setting, which were not mentioned/compared in the submission.\n", "This work presents a bound to learn from multiple source domains for domain adaptation using adversarial learning. This is a simple extension to the previous work based on a single source domain. The adversarial learning aspect is not new.\n\nThe proposed method (MDAN) was evaluated on 3 known data sets. Overall, the improvements from using MDAN were consistent and promising.\n\nThe bound used in the paper accounts for the worst case scenario, which may not be a tight bound when some of the source domains are very different from the target domain. Therefore, it does not completely address the problem of learning from multiple source domains. The fact that soft-max performs better than hard-max suggests that some form of domain selection or weighting might lead to a better solution. The empirical results in the third experiment (Table 4) also suggest that the proposed solution does not generalize well to domains that are less similar to the target domain.\n\nSome minor comments:\n- Section 2: \"h disagrees with h\" -> \"h disagrees with f\".\n- Theorem 3.1: move the \\lambda term to the end to be consistent with equation 1.\n- Last line of Section 3: \"losses functions\" -> \"loss functions\".\n- Tables 1 and 2: the shorthands sDANN, cDANN, H-Max and S-Max used here are not consistent with those used in subsequent experiments. It's good to be consistent.\n- In Section 5.2, it was conjectured that the poorer performance of MDAN on SVHN is due to its dissimilarity to the other domains. However, given that the best-single results are close to the target only results, SVHN should be similar to one or more of the source domains. MDAN is probably hurt by the worst case bound.\n- In Table 4, the DANN performance for S=6 and T=A is off compared to the rest. Any idea?\n\n", "Thanks a lot for increasing the rating score of our paper. 
We appreciate your time and response!", "We thank all the reviewers for the time devoted to provide thoughtful comments and suggestions. We have uploaded a new paper version, in which, following the suggestions of the reviewers, we prove a new generalization bound of the smoothed version and compare with more baselines in the experiments. We attempt to answer the questions of the reviewers below:\n\nAs suggested by reviewers, in the revised version we also prove a new generalization bound where the minimization of the smoothed version corresponds to the minimization of this upper bound, which provides a theoretical justification for the optimization of (5). We precisely state this theorem in Theorem 4.1 and Theorem 4.2 in the revised version, with detailed proof shown in Appendix C.5 and C.6. As a high-level summary, instead of considering the worst-case scenario, this new bound is obtained by considering interactions between multiple source domains, i.e., all the source domains contribute to the upper bound (not just the worst one as stated in Thm. 3.4), and the combination weight of each source domain depends exactly on its empirical error and its distance to the target domain. One can also see that up to constant that does not depend on the training errors of multiple domains, the new upper bound given by Thm. 4.1 is tighter than that of Thm. 3.4. On the other hand, both sample complexity bounds given in Thm. 3.4 and Thm. 4.1 are optimal in terms of the number of training instances m in each source domain, as it matches the \\Omega(sqrt{1/m}) lower bound in the non-realizable binary classification scenario (see Remark under Thm 4.2).\n\nWe extensively evaluate the proposed methods on three real-world datasets: sentiment analysis, digit classification, and vehicle counting, all showing superior adaptation performance over the baselines. We thank the reviewers to agree with the consistent and promising improvements. We also want to thank Reviewer 1 and Reviewer 4 for pointing out other methods tackling domain adaptation problems, especially for the computer vision problems. While our primary goal is not to achieve state-of-the-art results on specific datasets, we are happy to discuss them in the related work. Besides, we also add more comparisons with these methods in the revised version (Section 5.2). Among the related approaches suggested by Reviewer 1, we found three papers provide codes (Tzeng et al., ICCV 2015; Ghifary et al., ICCV 2015; Tzeng et al., CVPR 2017). We add comparisons and analysis with work (Ghifary et al., ICCV 2015; Tzeng et al., CVPR 2017) in Section 5.2 of the revised version. As the work (Tzeng et al., ICCV 2015) is developed for supervised or semi-supervised domain adaptation (requires some labels for the target domain), while our work is on unsupervised domain adaptation (no label for the target domain)), we didn’t compare with this paper. We also review and add more comparisons with the multi-source-domain adaptation method (Zhang et al. 2015) in Section 5.2 of the revised version as Reviewer 4 suggested. Experimental results still show that our method achieves superior performance for multi-source domain adaptation.\n\nIn addition to the above responses, we reply to each reviewer individually for some specific comments.\n", "We would like to thank the reviewer for providing accurate comments. We have incorporated more comparisons with another three related works for multisource domain adaptation in the experiments. 
Please check the revised paper (Section 5.2) and the Common Remarks for more details. \n", "We would like to thank reviewer 3 for providing thoughtful comments. Please see the revised paper (Theorem 4.1 and Theorem 4.2 in Section 4) for a new bound we proved to justify the smoothed version. \n\nBoth Thm. 3.4 and Thm. 4.1 are optimal in terms of the number of training instances m in each source domain, as it matches the \\Omega(sqrt{1/m}) lower bound in the non-realizable binary classification scenario. By using different distance measure for distributions, one might get other kinds of bounds that reflect the underlying distance measure (Mansour et al. 2009 a, b, c), but in general those bounds are incomparable to ours, and depending on the concrete settings, one might be tighter than the other.\n\nWe would like to point out that the simple strategy by applying the single-source-single-target bound k times cannot be used to derive a bound as we achieved in Thm. 3.4 for the following reasons: the \\lambda (error achieved by the optimal hypothesis on S and T) defined in Blitzer 2008 depends on both S and T, hence when combining the k bounds, there does not necessarily exist a single optimal hypothesis h^* that makes this bound hold for all k pairs. Second, in order to use Thm. 1 in Blitzer 2008 to achieve the same result, because of the union bound, one will have to incur an additional square root of log(k) term. On the other hand, this combination technique can indeed be used to show that the asymptotic dependency of the upper bound on m is O(\\sqrt{1/m}). \n\nWe have changed the nomenclature so that they are consistent in both experiments. \n", "Q: “This is a simple extension to the previous work based on a single source domain. The adversarial learning aspect is not new. The bound used in the paper accounts for the worst case scenario, which may not be a tight bound when some of the source domains are very different from the target domain. Therefore, it does not completely address the problem of learning from multiple source domains.”\n\nThanks for all the suggestions. We would like to take the chance to explain that our theoretical results and algorithms are novel and nontrivial. To our best knowledge, there is no existing work showing the similar theoretical results as ours. Besides, we provide detailed comparisons with existing work (in Section 3 \"Comparison with Existing Bounds\"). The def. 3.1 is our novel extension to multisource domains, and it’s not easy to see how to use the convexity property of the max function to obtain a proper upper bound. We also prove a new generalization bound where the minimization of the smoothed version corresponds to the minimization of this upper bound, which provides a theoretical justification for the optimization of (5). We precisely state this theorem in Theorem 4.1 and Theorem 4.2. Please see the revised version of the paper about the new upper bound we proved for the smoothed version\n\nAs explained in the Common Remarks, instead of considering the worst-case scenario, this new bound is obtained by considering interactions between multiple source domains, i.e., all the source domains contribute to the upper bound (not just the worst one as stated in Thm. 
3.4), and the combination weight of each source domain depends exactly on its empirical error and its distance to the target domain.\n\nThose theoretical results are of insights and practical impacts, providing an effective way to train DNN on multiple datasets with good guarantee of the performance. Both sample complexity bounds given in Thm. 3.4 and Thm. 4.1 are optimal in terms of the number of training instances m in each source domain, as it matches the \\Omega(sqrt{1/m}) lower bound in the non-realizable binary classification scenario (see Remark under Thm 4.2). The proposed multi-domain adversarial network is new architecture with impressively good performance.\n\n\nSome minor comments:\nThanks for the detailed review. We have incorporated the minor comments 1~4 in the revised version of the paper.\n\nQ: In Section 5.2, it was conjectured that the poorer performance of MDAN on SVHN is due to its dissimilarity to the other domains. However, given that the best-single results are close to the target only results, SVHN should be similar to one or more of the source domains. MDAN is probably hurt by the worst case bound.\n\nThough the “Hard-Max” has less accuracy, we would like to point out that the smoothed version (“Soft-Max” in table 3) still achieves better performance than the best-Single-Source. Directly applying DANN to the combined source results in even more degraded accuracy compared to “Hard-Max” and “Soft-Max” of MDAN (0.776 v.s. 0.802 & 0.816).\n\nQ: In Table 4, the DANN performance for S=6 and T=A is off compared to the rest. Any idea?\n\nThe bad performance of DANN for S=6 and T=A proves our conjecture that directly applying DANN to the combined source leads to suboptimal solutions. We rank the source cameras by their proxy A-distance from the target camera and add them into the source of the experiments one by one. When S=6, the newly added camera is already quite different from the target camera. Without a good mechanism designed for multi-source domain adaptation, directly training DANN with such source data results in lower accuracy. This phenomenon further verifies the necessity of our proposed methods.\n", "We thank the reviewer for agreeing that the theoretical analysis is new and interesting. Besides learning bounds with worst case, we also proposed a smoothed version (Equation (5) in Section 4) and prove a new generalization bound where the minimization of the smoothed version corresponds to the minimization of this upper bound (Theorem 4.1 and Theorem 4.2 in Section 4), which provides a theoretical justification for the optimization of Equation (5). As explained in the Common Remarks, instead of considering the worst-case scenario, this new bound is obtained by considering interactions between multiple source domains, i.e., all the source domains contribute to the upper bound (not just the worst one as stated in Thm. 3.4), and the combination weight of each source domain depends exactly on its empirical error and its distance to the target domain.\n\nWe would like to point out that this paper is not just a direct extension of the seminal work of Ben-David et al. We provide detailed comparisons with existing work (in Section 3 \"Comparison with Existing Bounds\"). For the def. 3.1, it’s not easy to see how to use the convexity property of the max function to obtain a proper upper bound. Both sample complexity bounds given in Thm. 3.4 and Thm. 
4.1 are optimal in terms of the number of training instances m in each source domain, as they match the \\Omega(sqrt{1/m}) lower bound in the non-realizable binary classification scenario (see Remark under Theorem 4.2). These theoretical results offer insights and have practical impact, providing an effective way to train DNNs on multiple datasets with good performance guarantees. The proposed multi-domain adversarial network is a new architecture with impressively good performance.\n\nFor the experimental evaluation, we extensively evaluate the proposed methods on three real-world datasets: sentiment analysis, digit classification, and vehicle counting, all showing superior adaptation performance over the baselines. We thank the reviewer for suggesting some related work and have added three of the suggested methods as new baselines in the revised version. Please refer to the Common Remarks for more explanation. Experimental results still show that our method achieves state-of-the-art performance for multi-source domain adaptation.\n" ]
[ -1, -1, -1, 6, 6, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, 5, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ "SJ0ahZwEf", "rkT23LLEz", "ryXLuDgfz", "iclr_2018_ryDNZZZAW", "iclr_2018_ryDNZZZAW", "iclr_2018_ryDNZZZAW", "iclr_2018_ryDNZZZAW", "S1yZTj_lz", "iclr_2018_ryDNZZZAW", "SkmS_5--z", "HyxIlUFlz", "Hkuk_f7ZG", "S1yZTj_lz" ]
iclr_2018_H1I3M7Z0b
WSNet: Learning Compact and Efficient Networks with Weight Sampling
We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via \emph{ad hoc} processing such as model pruning or filter factorization. Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces {parameter sharing} throughout the learning process. We demonstrate that such a novel weight sampling approach (and the induced WSNet) favorably promotes both weight and computation sharing. By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters. Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification. Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet. Combined with weight quantization, the resulting models are up to \textbf{180×} smaller and theoretically up to \textbf{16×} faster than the well-established baselines, without a noticeable performance drop.
workshop-papers
The paper received generally positive reviews, but the reviewers also had some concerns about the evaluations. Pros: -- An improvement over HashNet: the model ties weights more systematically and gets better accuracy. Cons: -- Tying weights to compress models has already been tried before. -- Tasks are all small and/or audio related. -- Unclear how well the results will generalize to 2D convolutions. -- HashNet results are preliminary; comparisons with HashNet are missing for audio tasks. Given the expert reviews, I am recommending the paper to the workshop track.
train
[ "Bkc2TkFlG", "S1xBMQtgG", "rJRJeMoxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this work, the authors propose a technique to compress convolutional and fully-connected layers in a network by tying various weights in the convolutional filters: specifically within a single channel (weight sampling) and across channels (channel sampling). When combined with quantization, the proposed approach allows for large compression ratios with minimal loss in performance on various audio classification tasks. Although the results are interesting, I have a number of concerns about this work, which are listed below:\n\n1. The idea of tying weights in the neural network in order to compress the model is not entirely new. This has been proposed previously in the context of feed-forward networks [1], and convolutional networks [2] where the choice of parameter tying is based on hash functions which ensure a random (but deterministic) mapping from a small set of “true” weights to a larger set of “virtual” weights. I think it would be more fair to compare against the HashedNet technique.\n\nReferences:\n[1] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2015. Compressing neural networks with the hashing trick. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (ICML'15), Francis Bach and David Blei (Eds.), Vol. 37. JMLR.org 2285-2294.\n[2] Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2016. Compressing Convolutional Neural Networks in the Frequency Domain. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, New York, NY, USA, 1475-1484. DOI: https://doi.org/10.1145/2939672.2939839\n\n2. Given that the experiments are conducted on tasks where there isn’t a large amount of training data, one concern is that the baseline model used by the authors might be overparameterized. It would be interesting to see how performance varies as a function of number of parameters for these tasks without any “compression”, i.e., just by reducing filter sizes, for example.\n\n3. It seems somewhat surprising that repeating the filter weights across channels as is done in the channel sharing technique yields no loss in accuracy, especially for the deeper convolutional layers. Could this perhaps be a function of the tasks that the binary “music detection” task that these models are evaluated on? Do the authors have any comments on why this doesn't hurt performance?\n\n4. In citing relevant previous work, the authors should also include student-teacher approaches [1, 2] and distillation [3], and work by Denil et al. [4] on compression.\nReferences:\n[1] C. Bucilua, R. Caruana, and A. Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541. ACM, 2006\n[2] J. Ba and R. Caruana. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654–2662, 2014.\n[3] G. Hinton, O. Vinyals, J. Dean. Distilling the Knowledge in a Neural Network, NIPS 2014 Deep Learning Workshop. 2014.\n[4] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.\n\n5. Section 3, where the authors describe the proposed techniques is somewhat confusing to read, because of a lack of detailed mathematical explanations of the proposed techniques. This makes the paper harder to understand, in my view. 
Please re-write these sections in order to clearly express the parameter tying mechanism. In particular, I had the following questions:\n- Are weights tied across layers i.e., are the “weight sharing” matrices shared across layers?\n- There appears to be a typo in Equation 3: I believe it should be m = m* C.\n- Filter augmentation/Weight quantization are applicable to all methods, including the baseline. It would therefore be interesting to examine how they affect the baseline, not just the proposed system.\n- Section 3.5, on using the “Integral Image” to speed up computation was not clear to me. In particular, could the authors re-write to explain how the computation is computed efficiently with “two subtraction operations”. Could the authors also clarify the savings achieved by this technique?\n\n6. Results are reported on the various test sets without any discussion of statistical significance. Could the authors describe whether the differences in performance on the various test sets are statistically significant?\n\n7. On the ESC-50, UrbanSound8K, and DCASE tasks, it is a bit odd to compare against previous baselines which use different input features, use different model configurations, etc. It would be much better to use one of the previously published configurations as the baseline, and apply the proposed techniques to that configuration to examine performance. In particular, could the authors also use log-Mel filterbank energies as input features similar to (Piczak, 2015) and (Salomon and Bello, 2015), and apply the proposed techniques starting from those input features? Also, it would be useful when comparing against previously published baselines to indicate total number of independent parameters in the system in addition to accuracy numbers.\n\n8. Minor Typographical Errors: There are a number of minor typographical/grammatical errors in the paper, some of which are listed below:\n- Abstract: “Combining weight quantization ...” → “Combining with weight quantization ...”\n- Sec 1: “... without sacrificing the loss of accuracy” → “... without sacrificing accuracy”\n- Sec 1: “Above experimental results strongly evident the capability of WSNet …” → “Above experimental results strongly evidence the capability of WSNet …”\n- Sec 2: “... deep learning based approaches has been recently proven ...” → “... deep learning based approaches have been recently proven ...”\n- The work by Aytar et al., 2016 is repeated twice in the references.", "The paper presents a method to compress deep network by weight sampling and channel sharing. The method combined with weight quantization provides 180x compression with a very small accuracy drop. \n\nThe method is novel and tested on multiple audio classification datasets and results show a good compression ratio with a negligible accuracy drop. The organization of the paper is good. However, it is a bit difficult to understand the method. Figure 1 does not help much. Channel sharing part in Figure 1 is especially confusing; it looks like the whole filter has the same weights in each channel. Also it isn’t stated in Figure and text that the weight sharing filters are learned by training.\n\nIt would be a nice addition to add number of operations that are needed by baseline method and compressed method with integral image.\n\nTable 5: Please add network size of other networks (SoundNet and Piczak ConvNet). For setting, SoundNet has two settings, scratch init and unlabeled video, what is that setting for WSNet and baseline? 
\n", "This paper presents a method for reducing the number of parameters of neural networks by sharing the set of weights in a sliding window manner, and replicating the channels, and finally by quantising weights. The paper is clearly written and results seem compelling but on a pretty restricted domain which is not well known. This could have significance if it applies more generally.\n\nWhy does it work so well? Is this just because it acts on audio and these filters are phase shifted?\nWhat happens with 2D convnets on more established datasets and with more established baselines?\nWould be interesting to get wall clock speed ups for this method?\n\nOverall I think this paper lacks the breadth of experiments, and to really understand the significance of this work more experiments in more established domains should be performed.\n\nOther points:\n- You are missing a related citation \"Speeding up Convolutional Neural Networks with Low Rank Expansions\" Jaderberg et al 2014\n- Eqn 2 should be m=m* x C\n- Use \\citep rather than \\cite" ]
[ 6, 6, 5 ]
[ 4, 3, 5 ]
[ "iclr_2018_H1I3M7Z0b", "iclr_2018_H1I3M7Z0b", "iclr_2018_H1I3M7Z0b" ]
iclr_2018_rybAWfx0b
COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS
Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition. Performance has further been improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.
workshop-papers
Pros -- A novel way to incorporate an LM into an end-to-end model, with good adaptation results. Cons -- Lacks results on public corpora, or the results are not close to SOTA (e.g., for LibriSpeech). Given the reviews, it is clear that the experimental evaluations can be improved, but the presented approach is novel and interesting. Therefore, I am recommending the paper to the workshop track.
train
[ "Sy0xMaHlG", "ryGQ4uugM", "B10RWItgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a novel approach to integrate a language model (LM) to a seq2seq based speech recognition system (ASR). The LM is pretrained on separate data (presumably larger, potentially not the same exact distribution). It has a similar flavor as DeepFusion (DF), a previous work which also integrated an LM to a ASR in a similar way, but where the fusion is also trained. This paper argues this is not good as the ASR decoder and LM are trying to solve the same problem. Instead, ColdFusion first trains the LM, then fixes it and trains the ASR, so it can concentrate on what the LM doesn't do. This makes a lot of sense.\n\nExperiments on private data show that the ColdFusion approach works better than the DeepFusion approach. Sadly these experiments are done on private data and it is thus hard to compare with benchmark models and datasets.\n\nFor instance, it is possible that the relative capacity (number of layers, number of cells, etc) for each of the blocs need to vary differently between the baseline, the ColdFusion approach and the DeepFusion approach. It is hard to say with results on private data only, as it cannot be compared with strong baselines available in the literature.\n\nUnless a second series of experiments on known benchmarks is provided, I cannot propose this paper for acceptance.\n\n***********\nI have read the revised version. I applaud the use of a public dataset to\ndemonstrate some of the results of the new algorithm, and for this I am raising\nmy score. I am concerned, though, that while ColdFusion is indeed better than\nDeepFusion on LibriSpeech, both of them are significantly worse than the\nresults provided by Wav2Letter on word error rates (although better on\ncharacter error rates, which are usually not as important in that literature).\nIs there any reason for this?\n\n", "The paper proposes a new way of integrating a language model into a seq2seq network: instead of adding the language model only during decoding, the model has access to a pretrained language model during training too. This makes the training and testing conditions more similar. Moreover, only the logits of the pretrained language model are used, making it possible to swap language models post-training.\n\nThe experiments show that the proposed language model fusion is effective, and works well even when different, domain-dependent language models are used during training and testing. Further experiments indicate that through the integration of a language model at training time the seq2seq's decoder can be smaller as it is relieved of language modeling.\n\nQuality:\nThe paper is well executed, the experiments do basic validation of the model (ablation plus a specially designed task to show model effectiveness)\n\nClarity:\nWell written, easy to understand.\n\nOriginality:\nThe main idea is new.\n\nSignificance:\nBetter language model integration and easier adaptation to new domains of seq2seq models is important.\n\nPros and cons:\npros : see above\n\ncons:\nMy problem with the paper is lack of experiments on public datasets. The efficacy of the method is shown on only one task on a proprietary corpus engineered for domain mismatch and the method may be not so efficient under other circumstances. Besides presenting results on publicly available data, the paper would also be improved by adding a baseline in which the logits of the language model are added to the logits of the seq2seq decoder at training time. 
Similarly to cold-fusion, this baseline also allows swapping of language models at test time. In contrast, the baselines presented in the paper are weaker because they don't use a language model during training time.", "This paper presents a simple but effective approach to utilizing language model information in a seq2seq framework. The experimental results show improvement for both baseline and adaptation scenarios.\n\nPros:\nThe approach is adapted from deep fusion, but the results are promising, especially for the off-domain setup. The analysis is also well motivated regarding why cold-fusion outperforms deep-fusion.\n\nCons:\n(1) I have some questions about the baseline. Why is the decoder a single layer while the LM has 2 layers? I suspect the LM may add something to it. For my own Seq2seq models, a 2-layer decoder is always better than one. Also, what is the HMM/DNN/CTC baseline? Since they use an internal dataset, it's hard to know how good the seq2seq numbers are. The authors also didn't compare with a re-scoring method.\n\n(2) It would be more interesting to test it on more standard speech corpora, for example, SWB (conversational) and LibriSpeech (reading task). Then it's easier to reproduce and measure the quality of the model.\n\n(3) This paper only reports results on speech recognition. It would be more interesting to test it in more areas, e.g., machine translation. \n\nMissing citation: In (https://arxiv.org/pdf/1706.02737.pdf) section 3.3, they also pre-trained an RNN-LM on a more standard speech corpus. Also, there is a need to compare with this type of shallow fusion.\n\nUpdates: \n\nhttps://arxiv.org/pdf/1712.01769.pdf (Google's End2End system) uses a 2-layer LSTM decoder. \nhttps://arxiv.org/abs/1612.02695, https://arxiv.org/abs/1707.07413 and https://arxiv.org/abs/1506.07503 are small tasks. \nThe Battenberg et al. paper (https://arxiv.org/abs/1707.07413) uses Seq2Seq as a baseline and didn't show any combined results of different numbers of decoder layers vs. different LM integration methods. My point is how a stronger decoder affects the results with different LM integration methods. In the paper, the comparison is still only with deep fusion using one decoder layer. \n\nAlso, why is shallow fusion only compared with the CTC model? I suspect a deep decoder + shallow fusion could already provide good results. Or is the gain additive?\n\nThanks a lot for adding the LibriSpeech results. But why use the Wav2Letter paper (instead of referring to a peer-reviewed paper)? The Wav2Letter paper didn't compare with any baseline on LibriSpeech (probably because LibriSpeech isn't a common dataset, but at least the Kaldi baseline is there). \n\nIn short, I still think this is a good paper, but it is still slightly below the acceptance threshold." ]
[ 5, 6, 5 ]
[ 5, 5, 5 ]
[ "iclr_2018_rybAWfx0b", "iclr_2018_rybAWfx0b", "iclr_2018_rybAWfx0b" ]
iclr_2018_S1pWFzbAW
Weightless: Lossy Weight Encoding For Deep Neural Network Compression
The large memory requirements of deep neural networks strain the capabilities of many devices, limiting their deployment and adoption. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x; with the same model accuracy, this results in up to a 1.51x improvement over the state-of-the-art.
workshop-papers
Pros: -- Use of Bloomier filters for lossy compression of nets is novel and well motivated, with interesting compression performance. Cons: -- Does lossy compression for transmission only; doesn’t address the FLOPS required for runtime execution. Often, client devices do not have enough CPU to run large networks (the title should be updated to reflect compression and transmission). -- Missing results for the full network and for larger, deeper networks. Overall, the content is novel and interesting, so I would encourage the authors to submit to the workshop track.
train
[ "HynYT5_xz", "SyJvYjugz", "Bk_dwGqeG", "rJG33LYzf", "BJ2whUKzM", "H1J4o8KGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary: The paper addresses the actual problem of compression of deep neural networks. Authors propose to use another technique for sparse matrix storage. Namely, authors propose to use Bloomier filter for more efficient storage of sparse matrices obtained from Dynamic Network Surgery (DNS) method. Moreover, authors propose elegant and efficient trick for mitigating errors of Bloomier filter. Overall, the paper present clear and completed research of the proposed technique.\nClarity and Quality: The paper is well structured and easy to follow. Authors provide a reader with a large amount of technical details, which helps to reproduce their method. The paper contains detailed investigation of every step and aspect in the proposed pipeline, which made this research well-organized and complete.\nThough the presentation of some results can be improved. Namely, core values for compression and improvement are presented only for two biggest layers in networks, but more important values are compression and improvement for whole networks.\nOriginality and Significance: The main contribution of the paper is the adaptation of Bloomier filter for sparse network obtained from almost any procedure of networks sparsification. However, this adaptation is almost straightforward, except the proposed trick of network fine-tuning for compensating false positive values of Bloomier filter. Significance of the results is hard to estimate because of several reasons:\nValues of compression and improvement are presented only for two layers, not for the whole network.\nAccording to Fig. 4, encoding of sparse matrices via Bloomier filter is efficient (compared to CSR) only for matrices with nonzero ratio greater than 0.04. So this method can’t be applied to all layers in network, that can significantly influence overall compression.\n\nOther Comments:\nThe procedure of network evaluation is totally omitted in the paper. So a model supposed to be “unpacked” (to the dense or CSR format) before evaluation. Considering this, comparison with CSR could be made only for sending the model over a network. Since, CSR format can be efficiently used during evaluation.\nMinor Comments:\n(Page 5) “Because we encode $k$ clusters, $t$ must be greater than $\\lceil \\log_2 k\\rceil$”. Perhaps, “...$r$ must be greater than $\\lceil \\log_2 k\\rceil$” would be better for understanding.\n(Page 7) “Bloomier filter encoding increased the top-1 accuracy by 2.0 percentage points”. Perhaps, authors have meant top-1 error.", "The problem of lossy compression of neural networks is essentially important and relevant. The paper proposes an interesting usage of Bloomier filters in lossy compression of neural net weights. The Bloomier filter is proposed by others. It is a data structure that maps from sparse indices to their corresponding values with chances that returns incorrect values for non-existing indices. The paper compares its method with two baseline methods (Magnitude and Dynamic network surgery DNS) to demonstrates its performance.\n\nI find the paper fairly interesting but still have some concerns in the technical part and experiments.\n\nPros:\n1. The paper seems the first to introduce Bloomier filter into the network compression problem. I think its contribution is novel and original. The paper may interest those who work in the network compression domain.\n2. The method works well in the demonstrated experimental cases.\n\nCons:\n1. The technical part is partially clear. 
It might be worthwhile to briefly describe the encoding/construction algorithm used in the paper. It is recommended to describe a bit more details about how such encoding/decoding methods are applied in reducing neural net weights.\n2. One drawback of the proposed method is that it has to work with sparse weights. That requires the method to be used together with network pruning methods, which seems limiting its applicability. I believe the paper can be further improved by including a study of the compression results without a pruning method (e.g., comparing with Huffman in table 3). \n3. What is the reason there is no DNS results reported for VGG-16? Is it because the network is deeper?\n4. The experimental part can be improved by reporting the compression results for the whole network instead of a single layer.\n5. It seems the construction of Bloomier filter is costly and the proposed method has to construct Bloomier filters for all layers. What is the total time cost in terms of encoding and decoding those networks (LeNet and VGG)? It would be nice to have a separate comparison on the time consumption of different methods.\n6. Figure 4 seems a bit misleading. The comparison should be conducted on the same accuracy level instead of the ratio of nonzero weights. I recommend producing another new figure of doing such comparison.\n7. The proposed idea seems somewhat related to using low rank factorization of weight matrices for compression. It might be worthwhile to compare the two approaches in experiments.\n8. I am specifically interested in discussions about the possibility of encoding the whole network instead of layer-by-layer retraining.", "This paper proposes an interesting approach to compress the weights of a network for storage or transmission purposes. My understanding is, at inference, the network is 'recovered' therefore there is no difference in processing time (slight differences in accuracy due to the approximation in recovering the weights).\n\n- The idea is nice although it's applicability is limited as it is only for distribution of the model and storing (is storage really a problem?). \n\nMethod:\n- the idea of using the Bloomier filter is new to me. However, the paper is miss-leading as the filtering is a minor part of the complete process. The paper introduces a complete pipeline including quantization, and pruning to maximize the benefits of the filter and an additional (optional) step to achieve further compression. \n\n- The method / idea seems simply and easy to reproduce (except the subsequent steps that are not clearly detailed).\n\nClarity\n\n- The paper could improve its clarity. At the moment, the Bloomier is the core but needs many other components to make it effective. Those components are not detailed to the level of being reproducible.\n- One interesting point is the self-implementation of the Deep compression algorithm. The paper claims this is a competitive representation as it achieves better compression than the original one. However, those numbers are not clear in tables (only in table 3 numbers seem to be equivalent to the ones in the text). This needs clarification, CSR achieves 81.8% according to Table 2 and 119 according to the text.\n\nResults:\n- Current results are interesting. However I have several concerns:\n1) it is not clear to me why assuming similar performance. While Bloomier is weightless the complete process involves many retraining steps involving performance loss. 
Analysis on this would be nice to see (I doubt it ends exactly at the same number). Section 3 explicitly suggest there is the need of retraining to mitigate the effect of false positives which is then increased with pruning and quantization. Therefore, would be nice to see the impact in accuracy (even it is not the main focus of the work). \n\n2) Resutls are focused on fully connected layers which carry (for the given models) the larger number of weights (and therefore it is easy to get large compression numbers). What would happen in newer models where the fully connected layer is minimal compared to conv. layers? What about the accuracy impact there? Let's say in a Resnet-34.\n3) I would like to see further analysis on why Bloomier filter encoding improves accuracy (or is a typo and meant to be error?) by 2%. This is a large improvement without training from scractch.\n4) It is interesting to me how the retraining process is 'hidden' all over the paper. At the beginning it is claimed that it takes about one hour for VGG-16 to compute the Bloomier filters. Howerver, that is only a minimal portion of the entire pipeline. Later in the experimental section it is mentioned that 'tens of epochs' are needed for retraining (assuming to compensate for errors) after retraining for compensating l1 pruning?.... tens of epochs is a significant portion of the entire training process assuming VGG is trained for 90epochs max.\n\n5) Interestingly, as mentioned in the paper, this is 'static compression'. That is, the model needs to be completely 'restored' before inference. This is miss-leading as an embedded device will need the same requirements as any other at inferece time(or maybe I am missing something). That is, the benefit is mainly for storing and transmission.\n\n6) I would like to see the sensibility analysis with respect to t and the number of clusters. \n\n7) As mentioned before, LeNet is great but would be nice to see more complicated models (even resnet on CIFAR). These models are not only large in terms of parameters but also quite sensitive to modifications in the weight structure.\n\n8) Results are focused on a single layer. What happens if all the layers are considered at the same time? Here I am also concerned about the retraining process (fixing one layer and retraining the deeper ones). How is this done using only fully connected layers? What is the impact of doing it all over the network (let's say VGG-16 from the first convolutional layer to the very last).\n\nSummary:\n\nAll in all, the idea has potential but there are many missing details. I would like to see clearer and more comprehensive results in terms of modern models and in the complete model, not only in the FC layer, including accuracy impact. ", "\"Values of compression and improvement are presented only for two layers, not for the whole network.\nAccording to Fig. 4, encoding of sparse matrices via Bloomier filter is efficient (compared to CSR) only for matrices with nonzero ratio greater than 0.04. So this method can’t be applied to all layers in network, that can significantly influence overall compression.\"\n\nFigure 4 is meant to demonstrate that Weightless scales better with increasing sparsity than Deep Compression. The results we present in Table 2 show that Weightless does not require a non-zero ratio of 4%; CNN-1 in LeNet5 with magnitude pruning has a non-zero ratio of 7% and it still outperforms Deep Compression. \n\n\n\"The procedure of network evaluation is totally omitted in the paper. 
So a model supposed to be “unpacked” (to the dense or CSR format) before evaluation. Considering this, comparison with CSR could be made only for sending the model over a network. Since, CSR format can be efficiently used during evaluation.\"\n\nYou are correct. In its current form, a model would need to be “unpacked” to the dense format before evaluation, but this paper is meant to focus solely on compression for over the wire transmission. We feel this is an important problem facing companies deploying deep learning models. We are currently investigating building specialized hardware for efficient sparse processing that would enable evaluating models in the encoded space. \n", "1. The technical part is partially clear. It might be worthwhile to briefly describe the encoding/construction algorithm used in the paper. It is recommended to describe a bit more details about how such encoding/decoding methods are applied in reducing neural net weights.\n\nWe have included a brief description of construction in the appendix. Also, as now mentioned in the paper, we will release an implementation with the publication of the paper.\n\n2. One drawback of the proposed method is that it has to work with sparse weights. That requires the method to be used together with network pruning methods, which seems limiting its applicability. I believe the paper can be further improved by including a study of the compression results without a pruning method (e.g., comparing with Huffman in table 3). \n\nYou are correct that sparsity is necessary. This is also true for competing encoding techniques (namely Deep Compression). For the large VGG16 fully connected layer we ran Huffman encoding on un-pruned, clustered weights and got an 12.8x compression factor, which is an order of magnitude less than the reported results. \n\n3. What is the reason there is no DNS results reported for VGG-16? Is it because the network is deeper?\n\nNo, it was because the weights were not made available and were unclear how to tune the DNS hyperparameters to effectively prune the VGG16 weights. We would like to include a DNS version of VGG16 as the suspected improvement in sparsity would likely significantly improve our results. We are actively working on more advanced pruning techniques, different model types, and datasets to demonstrate the benefits of lossy encoding. We feel this paper presents the core technique and benefits of the proposed method over the state-of-the-art. \n\n4. The experimental part can be improved by reporting the compression results for the whole network instead of a single layer.\n\nWe focused on the largest layers to get the most benefit. In the final version we can report the overall compression if the reviewers feel it is beneficial.\n\n5. It seems the construction of Bloomier filter is costly and the proposed method has to construct Bloomier filters for all layers. What is the total time cost in terms of encoding and decoding those networks (LeNet and VGG)? It would be nice to have a separate comparison on the time consumption of different methods.\n\nConstruction times for LeNet300-100, LeNet5, and VGG16 are 6 seconds, 23 seconds, and 517 seconds, respectively. Decoding takes 11, 12, and 505 seconds for each of the aforementioned models. We believe that these one-time overheads are negligible considering the significant reductions in model size.\n\n6. Figure 4 seems a bit misleading. The comparison should be conducted on the same accuracy level instead of the ratio of nonzero weights. 
I recommend producing another new figure of doing such comparison.\n\nThank you for the suggestion. We believe that this suggestion can improve the paper. As a result, we conducted additional experiments on iso-accuracy comparison between Weightless and Deep Compression in Figure 7 (appendix).\n\n7. The proposed idea seems somewhat related to using low rank factorization of weight matrices for compression. It might be worthwhile to compare the two approaches in experiments.\n\nWe believe the benefits of low rank factorization lie in efficient execution by reducing the number of computations requires. A byproduct of low rank factorization is compression on the order of 50%. However, when specifically targeting over the wire compression, this is not competitive with existing techniques. If you feel that this is an important distinction that must be made, we will happily add it in the related work section. \n\n8. I am specifically interested in discussions about the possibility of encoding the whole network instead of layer-by-layer retraining.\n\nWe can encode the whole network by eliminating the retraining steps. However, this will come at the expense of either model accuracy (if we use the same t value as with retraining) or overall compression (if we increase t). For example, without retraining, VGG16 can lose 2% absolute accuracy as shown in Figure 5 (appendix). Previously, we tried using an auxiliary data structure to fix false positives (called exception lists), this proved to incur significant storage overheads. As a result, we strongly believe that retraining is an integral part of mitigating the effects of false positives.", "\"The paper could improve its clarity...\"\nThe reason for the brevity on pruning and clustering was because we viewed these aspects as prior work and did not want to spend time discussing materials we did not deem research contributions of our manuscript.\n\n\"One interesting point is the self-implementation of the Deep compression algorithm....This needs clarification, CSR achieves 81.8% according to Table 2 and 119 according to the text.\"\nWe understand your confusion and will clarify the text, but these numbers are indeed correct. The 81.8x is encoding only (using CSR) and the 119x is CSR+Huffman. The distinction is there to compare Bloomier filters with Bloomier+LZMA (i.e., encoding in 4.1 with compression in 4.2).\n\n1) \"It is not clear to me why assuming similar performance...\"\nThis is an excellent point and to address it we have added a plot (see Figure 5) to the appendix of the paper that shows how model performance is regained with retraining. \n\n\"Analysis on this would be nice to see...\"\nYou are correct that it’s not the exact same number (see Figure 5) but we are careful to make sure that the final test-accuracy reported is the same as the baseline or better by a small amount (e.g., VGG16 experiences an absolute improvement of less than 0.1% overall test accuracy). \nFigure 7 further shows how Weightless offers better compression vs. error scaling than CSR.\n\n2) \"Resutls are focused on fully connected layers which carry (for the given models)...\"\nThe models we chose were done so as they are the ones most commonly used in the literature (Deep Compression, DNS, and HashedNets). We also consider CNNs and show that Bloomier filters perform well on them (see LeNet5). 
Our findings suggest that so long as weights exhibit sufficient sparsity, the method is effective.\n\nAs future work, we are actively looking into more advanced pruning techniques to achieve the necessary sparsity to encode networks like ResNet-34. We recently evaluated magnitude pruning on ResNet-34, but saw substantial increase in model error which we felt would be an unfair comparison.\n\n3) \"I would like to see further analysis on why Bloomier filter encoding improves accuracy (or is a typo and meant to be error?) by 2%...\"\nYou are correct in that this is a typo. It should be error and this is corrected.\n\n4) \"It is interesting to me how the retraining process is 'hidden' all over the paper...\"\nWe have now included a plot (Figure 5) which shows how retraining recovers accuracy in encoded layers. We have also included numbers for construction and reconstruction for all the largest layers in the models used (at the request of another reviewer). We find that on a modern machine, the longest construction takes is 8.5 minutes; the machines we used originally were older and part of a cluster being used by others.\n\n5) \"Interestingly, as mentioned in the paper, this is 'static compression'...\"\nThat is correct. We are looking into ways to compute in the compressed space with Weightless, but to be competitive it will likely require special hardware and require a deeper investigation.\n\nWe did not intend to mislead the reader, this is a compression paper for efficient weight transmission (and storage). If there is a way we could fix this we will gladly amend the paper. \n\n6) \"I would like to see the sensibility analysis with respect to t and the number of clusters.\"\nWe have added a plot (Figure 6) to show this to the appendix.\n\n7) \"As mentioned before, LeNet is great but would be nice to see more complicated models (even resnet on CIFAR)...\"\nSee above.\n\n8) \"Results are focused on a single layer. What happens if all the layers are considered at the same time?...\"\nIf all the layers are considered (i.e., encoded) at the same time, there is no opportunity for deeper layers to be retrained to compensate for errors in the earlier layers. In this scenario, one would likely have to increase the t value to mitigate false positives or incur a slight increase in model error. \n\nIf each layer is encoded individually, the process occurs precisely as specified in the paper. Each layer is encoded and the deeper layers are retrained around their false positives.\n" ]
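The responses above repeatedly compare Bloomier-filter encoding against the CSR baseline used by Deep Compression, and note that an encoded model must be "unpacked" to dense or CSR form before evaluation. As a rough, hedged illustration of what that CSR baseline stores, the sketch below builds a CSR representation of a pruned weight matrix with SciPy; the layer shape, the ~90% sparsity level, and the seed are made-up example values, not figures from the paper.

```python
# Illustrative sketch only: what a CSR-encoded pruned layer stores.
# The layer size and ~90% sparsity are assumed for the example, not taken from the paper.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024)).astype(np.float32)
W[np.abs(W) < 1.645] = 0.0            # crude magnitude pruning to roughly 90% sparsity

W_csr = csr_matrix(W)
dense_bytes = W.nbytes
csr_bytes = W_csr.data.nbytes + W_csr.indices.nbytes + W_csr.indptr.nbytes
print(f"nonzeros: {W_csr.nnz}, dense: {dense_bytes} B, CSR: {csr_bytes} B")

# Evaluation can run directly on the CSR matrix (one reason it is a strong baseline),
# whereas a Bloomier-encoded model would first need to be unpacked.
x = rng.standard_normal(1024).astype(np.float32)
y = W_csr @ x
```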
[ 6, 6, 4, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_S1pWFzbAW", "iclr_2018_S1pWFzbAW", "iclr_2018_S1pWFzbAW", "HynYT5_xz", "SyJvYjugz", "Bk_dwGqeG" ]
iclr_2018_S1Auv-WRZ
Data Augmentation Generative Adversarial Networks
Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data. In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 59.5% to 61.3%).
workshop-papers
Building on conditional GANs, the paper develops a data augmentation GAN to deal with unseen classes of data. It introduces modifications to each component and designs its network structure using ideas from state-of-the-art nets. As pointed out by reviewers 1 & 2, the technical contribution is not sufficient. We hence recommend it for workshop publication.
train
[ "H1O8xnDlM", "Hym3oxKlf", "S1vTg99gz", "SJxgSOxzG", "SyMfNdxGG", "HkpnsPxfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a conditional Generative Adversarial Networks that is used for data augmentation. In order to evaluate the performance of the proposed model, they use Omniglot, EMNIST, and VGG-Faces datasets and uses in the meta-learning task and standard classification task in the low-data regime. The paper is well-written and consistent. \n\nEven though this paper learns to do data-augmentation (which is very interesting ) rather than just simply applies some standard data augmentation techniques and shows improvements in some tasks, I am not convinced about novelty and originality of this paper, especially on the model side. To be more specific, the paper uses the previously proposed conditional GAN as the main component of their model. And for the one-shot learning tasks, it only trains the previously proposed models with these newly augmented data. \n\nIn addition, there are some other works that used GAN as a method for some version of data augmentation:\n- RenderGAN: Generating Realistic Labeled Data\n https://arxiv.org/abs/1611.01331\n-Data Augmentation in Emotion Classification Using Generative Adversarial Networks\nhttps://arxiv.org/abs/1711.00648\n\nIt is fair to say that their model shows improvement on the above tasks but this improvement comes with a cost of training of GAN network. \n\nIn summary, the idea of the paper is very interesting to learn data-augmentation but yet I am not convinced the current paper has enough novelty and contribution and see the contribution of paper as on more the application side rather than on model and problem side. That said I'd be happy to hear the argument of the author about my comments. ", "In this paper, the authors have proposed a GAN based method to conduct data augmentation. The cross-class transformations are mapped to a low dimensional latent space using conditional GAN. The paper is technically sound and the novelty is significant. The motivation of the proposed methods is clearly illustrated. Experiments on three datasets demonstrate the advantage of the proposed framework. However, this paper still suffers from some drawbacks as below:\n(1)\tThe illustration of the framework is not clear enough. For example, in figure 3, it says the GAN is designed for “class c”, which is ambiguous whether the authors trained only one network for all class or trained multiple networks and each is trained on one class.\n(2)\tSome details is not clearly given, such as the dimension of the Gaussian distribution, the dimension of the projected noise and .\n(3)\tThe proposed method needs to sample image pairs in each class. As far as I am concerned, in most cases sampling strategy will affect the performance to some extent. The authors need to show the robustness to sampling strategy of the proposed method.\n", "This paper is good at using the GAN for data augmentation for the one shot learning, and have demonstrated good performance for a variety of datasets.\nHowever, it seems that the main technique contribution is not so clear. E.g., it is not clear as shown in Figure 3, what is key novelty of the proposed DAGAN, and how does it improve from the existing GAN work. It seems that the paper is a pipeline of many existing works.\nBesides, it will also be interested to see whether this DAGAN can help in the training of prevailing ImageNet and MS COCO tasks.", "Thanks for your review and time. I am very glad you like our work. 
I will address your concerns using the same identifiers you have used.\n\nI agree with your observation, it seems to be a common issue in all 3 reviews that we need a better illustration/more textual description in Figure 3. Furthermore, we trained 1 DAGAN for all classes. This is key in fact, since we condition the GAN on an image from the class we want to generate from. Thus training for all classes allows the generator to learn augmentations from all the classes and apply them to different classes in a way that allows the samples to remain within their original class, therefore leveraging our data more efficiently.\nThe dimension of the Gaussian is 100-dimensional. The projected noise dimensionality is different from dataset to dataset depending on the image dimensionality. In all cases we make sure that the projected noise matches the size of the encoder embedding.\nYes, the sampling strategy is perhaps one of the most important parts of our methodology. When constructing a new training sample we choose 1 class and then 2 samples from that class using a uniform distribution to use for x_i and x_j whilst making sure the 2 samples are different samples and not identical. This way we are providing the network with 2 unique samples that are always varied at each iteration. There is no label information provided to the DAGAN as we want the Generator to learn to one-shot generate samples that are within the same class of the conditional image, thus pushing the Generator to implicitly learn a manifold around a data sample within which the sample remains in the same class but is augmented enough to be a different sample than the conditional one. The augmentations learned are learned from the whole dataset and often we see the transfer of augmentations from one class to another, only where it makes sense (i.e. add lipstick to females but not males).\nOnce again, thanks for your review and time. I’d be more than happy to discuss any other concerns you might have.\n", "Thanks for your review and time. I will address your concerns in sections:\n\nModel Novelty:\n\nThe model is not a standard conditional GAN as it’s actually image conditioned and not label conditioned as per RenderGAN. A DAGAN is attempting to meta-learn to one-shot generate plausible versions of a provided image, which are varied by the random injected noise. To do so we use a novel training scheme/setup which is where the key contribution of the work lies. By allowing the Generator to learn classes implicitly rather than explicitly we open the possibility for the DAGAN to one shot generate samples on previously unseen classes. In the one shot case, we don’t just augment the training set, we are actually producing on the fly generated samples conditioned on unseen classes in training, validation and test times. By doing so we are converting the one-shot setup, to a few shot setup. \n\nFurther Novel Contributions:\n\nWhen we generate samples on the fly for the matching network we also provide the matching network with information on the source of the images (i.e. real/fake) by doing so the network can learn how much trust to put in fake augmented examples, and adjust the embedding based upon the real/fake label, which improves accuracy performance. In addition we also learn a network that given the target image for a certain episode can generate the best Z for that specific task, which again improves performance. 
\n\nArchitectural Contributions:\n\nWe have built a novel generator architecture which combines ideas from ResNets, DenseNets and U-Nets to generate very high quality results. Furthermore we use batch renormalization which in our empirical evaluation is shown to greatly enhance sample quality and sample variation, therefore providing evidence for some of the theoretical claims made in the original Batch Renormalization paper. https://arxiv.org/abs/1702.03275\n\nFlexibility of method:\n\nA DAGAN is compatible with any few-shot learning technique so that as new few-shot learning ideas are created, DAGANs can be used to squeeze out extra information from the data thus building more data efficient systems. In addition DAGAN can be further improved by future advances in training GANs. \n\nAlso to summarise our improvements over the mentioned papers:\n\nRenderGAN uses label conditioned GANs which cannot be used for one-shot generation on unseen classes. DAGAN can do one-shot generation on unseen classes as it learns the concept of a class implicitly and is conditioned on an image, rather than a label.\nData Augmentation in Emotion Classification Using GAN: Here the authors use a CycleGAN which requires 2 Generators, 2 Discriminators and a rather complicated amount of loss functions to train. Our model only requires 1 Generator and 1 Discriminator and the GAN Loss. This makes it far less computationally expensive and also produces results that at least visually appear to be much higher quality. In addition in their paper they do not mention whether the model can one-shot generate good samples from unseen classes which is something DAGAN does very well. \n\nAs far as the cost of training the GAN, a UResNet grade Omniglot network needs about 12 hours on a single Titan X Pascal and was then used to run one-shot generation on both Omniglot and EMNIST (thus showing how well the DAGAN can generate samples from unseen classes or in this case, unseen datasets). Yes, it requires additional computational overhead but as demonstrated the improvements are well worth it.\n\nI’d be more than happy to discuss any other concerns you might have and I will make a good attempt to improve the clarity of the illustration of the network and emphasize our contributions. Once again, thank you for your review, looking forward to your reply.\n", "Thanks for your review and time. The key contribution of the paper is a new GAN training setup with which one can use existing GAN framework (i.e. WGAN GP or Standard GAN) to learn to one-shot generate plausible interpolations of data samples. Intuitively the model learns a manifold around a data point within which a sample remains in the same class. Furthermore, the concept of class is extracted directly from the image pairs passed to the discriminator and implicitly learned by the Generator network as a result of backpropagation. One of the novelties of this Data Augmentation technique using GANs is that at generation time you are not restricted by the classes you have already learned (i.e. No Labels are passed to the generator) rather the generator can one-shot generate from unseen-class data points which is where the true power of the DAGAN lies. In fact when training we take note of the WGAN validation loss such that we do not overfit that measure. This allows the network to be used not only for data augmentation in classification but also in the few-shot learning scheme. 
In terms of attempting experiments on ImageNet and MS COCO we were unfortunately computationally constrained and thus unable to run those experiments within a suitable time frame.\n" ]
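To make the pair-sampling strategy described in the author responses above more concrete, here is a hedged sketch of how one might draw the (x_i, x_j) conditioning/target pairs for a DAGAN training batch. The data layout, batch size, and helper names are assumptions for illustration and are not the authors' code.

```python
# Illustrative sketch of DAGAN pair sampling (not the authors' implementation).
# data[c] is assumed to be an array of images for class c, shape (n_c, H, W, C).
import numpy as np

def sample_pairs(data, batch_size, rng):
    """Sample (x_i, x_j) pairs: two distinct images drawn uniformly from one class."""
    x_i, x_j = [], []
    classes = list(data.keys())
    for _ in range(batch_size):
        c = classes[rng.integers(len(classes))]                   # pick a class uniformly
        i, j = rng.choice(len(data[c]), size=2, replace=False)    # two distinct samples
        x_i.append(data[c][i])                                    # conditioning image
        x_j.append(data[c][j])                                    # real target for the discriminator
    return np.stack(x_i), np.stack(x_j)

# The generator then receives (x_i, z) with z ~ N(0, I) and no class label,
# and the discriminator compares the pair (x_i, x_j) against (x_i, G(x_i, z)).
```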
[ 4, 9, 6, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1 ]
[ "iclr_2018_S1Auv-WRZ", "iclr_2018_S1Auv-WRZ", "iclr_2018_S1Auv-WRZ", "Hym3oxKlf", "H1O8xnDlM", "S1vTg99gz" ]
iclr_2018_rJrTwxbCb
Empirical Analysis of the Hessian of Over-Parametrized Neural Networks
We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, (2) and outliers away from the bulk. We present numerical evidence and mathematical justifications to the following conjectures laid out by Sagun et. al. (2016): Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the *flatness* of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading. And that the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create *large* connected components at the bottom of the landscape. Second, the dependence of a small number of large eigenvalues to the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that it would shed light on the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction but we show that they are in fact connected through their flat region and so belong to the same basin.
workshop-papers
Pros:\n+ Builds in important ways on the work of Sagun et al., 2016.\nCons:\n- The reviewers were very concerned that the assumption in the paper that the second term of Equation (6) is negligible was insufficiently supported, and this concern remained after the discussion and the revision.\n- The paper needs to be more precise in its language about the Hessian, particularly in distinguishing between ill conditioning and degeneracy.\n- The reviewers did not find the experiment very convincing because it relied on initializing the small-batch optimization from the end point of the large-batch optimization. Again, this concern remained following the discussion and revision.\nThe area chair agrees with the authors' comments in their OpenReview post of 08 Jan. 2018 "A remark on relative evaluation," and has discounted the reviewers' comments about the relative novelty of the work. It is important not to penalize authors for submitting their papers to conferences with an open review process, especially when that process is still being refined. However, even discounting the remarks about novelty, there are key issues in the paper that need to be addressed to strengthen it (the 3 "cons" above), so this paper does not quite meet the threshold for ICLR Conference acceptance. However, because it raises really interesting questions and is likely to provoke useful discussions in the community, it might be a good workshop track paper.
train
[ "S1Zf_tBNM", "ryIbx22yz", "rJT6jEcgz", "HkeyY0M-z", "BJ1qruaQG", "ByLNSd6XM", "H1bMzuamM", "SkTaZOamG", "rJHmeupmz", "ryoh5LEMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Thanks for attempting to address my concerns. But the responses to Point 2 and Point 3 are not still convincing to me. In particular, the soundness of the assumption for the mathematical justification is still not addressed and the experimental setting comparing SB against LB is not well designed. Considering the overall novelty and the contribution of this paper, I keep my rating.", "The authors perform a set of experiments in which they inspect the Hessian matrix of the loss of a neural network, and observe that most of the eigenvalues are very close to zero. This is a potentially important observation, and the experiments were well worth performing, but I don't find them fully convincing (partly because I was confused by the presentation).\n\nThey perform four sets of experiments:\n\n1) In section 3.1, they show on simulated data that for data drawn from k clusters, there are roughly k significant eigenvalues in the Hessian of the solution.\n\n2) In section 3.2, they show on MNIST that the solution contains few large eigenvalues, and also that there are negative eigenvalues.\n\n3) In section 3.3, they show (again on MNIST) that at their respective solutions, large batch and small batch methods find solutions with similar numbers of large eigenvalues, but that for the large batch method the magnitudes are larger.\n\n4) In section 4.1, they train (on CIFAR10) using a large batch method, and then transition to a small batch method, and argue that the second solution appears to be better than the first, but that they are a part of the same basin (since linearly while interpolating between them they don't run into any barriers).\n\nI'm not fully convinced by the second and third experiments, partly because I didn't fully understand the plots (more on this below), but also because it isn't clear to me what we should expect from the spectrum of a Hessian, so I don't know whether the observed specra have fewer large eigenvalues, or more large eigenvalues, then would be \"natural\". In other words, there isn't a *baseline*.\n\nFor the fourth experiment, it's unsurprising that the small batch method winds up in a different location in the same basin as the large batch method, since it was initialized to the large batch method's solution (and it doesn't appear to me, in figure 9, that the small batch solution is significantly different).\n\nSection 2.1 is said to contain an argument that the second term of equation 5 can be ignored, but only says that if \\ell' and \\nabla^2 of f are uncorrelated, then it can be ignored. I don't see any reason that these two quantities should be correlated, but this is not an argument that they are uncorrelated. Also, it isn't clear to me where this approximation was used--everywhere? In section 3.2, it sounds as if the exact Hessian is used, and at the end of this section the authors say that figure 6 demonstrates that the effect of this second term is small, but I don't see why this is, and it isn't explained.\n\nMy main complaint is that I had a great deal of difficulty interpreting the plots: it often wasn't clear to me what exactly was being plotted, and most of the language describing them was frustratingly vague. For example, figure 6 is captioned \"left edge of the spectrum, eigenvalues are scaled by their ratio\". The text explains that \"left edge of the spectrum\" means \"small but negative eigenvalues\" (this would be better in the caption), but what are the ratios? Ratio of what to what? 
I think it would greatly enhance clarity if every plot caption described exactly, and unambiguously, what quantities were plotted on the horizontal and vertical axes.\n\nSome minor notes:\n\nThere are a number of places where \"it's\" is used, where it should be \"its\".\n\nIn the introduction, the definition of \\mathcal{L}' is slightly confusing, since it's an expectation, but the use of \"'\" makes one expect a derivative. Perhaps use \\hat{\\mathcal{L}} for the empirical loss, and \\mathcal{L} for the expected one?\n\nOn the bottom of page 4, \"if \\ell' and \\nabla f are not correlated\": I think the \\nabla should be \\nabla^2.\n\nIt's \"principal components\", not \"principle components\".", "This paper studies the spectrum of the Hessian matrix for neural networks. To explain the observation that the spectrum of Hessian is composed of a bulk of eigenvalues centered near zero and several outliers away from the bulk, it applies the generalized Gauss-Newton decomposition on the Hessian matrix and argues that the Hessian can be approximated by the average of N rank-1 matrices. It also studies the effects on the spectrum from the model size, input data distribution and the algorithm empirically. Finally, this paper revisits the issue that if SGD solutions with different batch sizes converge to the same basin. \n\nPros:\n1. The spectra of the Hessians with different model sizes, input data distributions and algorithms are empirically studied, which provides some insights into the behavior of over-parameterized neural networks. \n2. A decomposition of the Hessian is introduced to explain the degeneracy of the Hessian. Although no mathematical justification for the key approximation Eq. (6) is provided, the experiments in Sec. 3 and Sec. 4 seem to suggest the analysis and support the approximation. \n\nCons:\n1. The paper's contributions seem to be marginal. Many arguments in the paper have been first brought out in Sagun et. al.(2016) and Keskar et. al.(2016): the degeneracy of the Hessian, the bulk and outlier decomposition of the Hessian matrix and the flatness of loss surface at basins. The authors failed to show the significance of their results. For example, what further insights do the results in Sec. 3 provide to the community compared with Sagun et. al.(2016) and Keskar et. al.(2016).\n\n2. More mathematical justification is needed. For example, in the derivation of Eq (6), why can we assume l'(f) and the gradient of f to be uncorrelated? How does this lead to the vanishing of the second term in the decomposition? \n\n3. More experiments are needed to support the arguments. For example, Sec. 4.1 shows that the solutions of SB SGD and LB SGD fall into the same basin, which is opposed to the results of Keskar et. al. (2016). However, this conclusion is not convincing. First, this result is drawn from one dataset. Second, the solution of SB SGD is initialized from the solution of LB SGD. As claimed in Keskar et. al. (2016), the solution of LB SGD may already get trapped at some bad minimum and it is not certain if SB SGD can escape from that. If it can't, then SB and LB can still be in the same basin as per the setting in this paper. So I'd like to suggest the author compare SB and LB when random initializations are conducted for both algorithms.\n\n4. In general, this paper is easy to read. However, it is not well organized. 
In the introduction, the authors spent several paragraphs for line search and expensive computation of GD and the Hessian, which I don't think are very related to the main purpose of this paper. Besides, the connection between the analysis and the experimental results is very weak and should be better established. \n\nMinor points:\n1. Language is awkward in section 3.1 and 3.2: 'contains contains', 'more smaller than', 'very close zero'...\n2. More experimental details need to be included, such as the parameters used in training and generating the synthetic dataset.\n3. The author needs to provide an explanation for the disagreement between Figure (10) and the result of Keskar et. al.(2016). What's the key difference in experimental settings?\n\n", "This paper has at its core an interesting, novel, tentative claim, backed up by simple experiments, that small batch gradient descent and large batch gradient descent may converge to points in the same basin of attraction, contrary to the discussion (but not the actual experimental results) of Keskar et al. (2016). In general, there is a pressing need for insight into the qualitative behavior of gradient-based optimization and this area is of immense interest to many machine learning practitioners. Unfortunately the interesting tentative insights are surrounded by many unsubstantiated and only tangentially related theoretical discussions. Overall the paper has the appearance of lacking a sharp focus. This is a shame since I found the core of the paper very interesting and thought provoking.\n\nMajor comments:\n\nWhile the paper has some interesting tentative experimental insights, the relationship between theory and experiment is complicated. The theoretical claims are vague and wide ranging, and are not all individually well supported or even tested by the experiments. Rather than including lots of small potential insights which the authors have had about what may be going on during gradient-based optimization, I'd prefer to see a paper with much tighter focus with a small number of theoretical claims well supported by experiments (it's fine if the experiments are simplistic as here; that's still interesting).\n\nA large amount of the paper hinges on being able to ignore the second term in (6), and this fact is referred to many times, but the theoretical and experimental justification for this claim is very thin.\n\nThe authors mention overparameterization repeatedly, and it's in the title, but they never define it. It also doesn't appear to take center stage in their experimental investigations (if it is in fact critical to the experiments then it should be made clearer how).\n\nThroughout this paper there is not a clear distinction between eigenvalues being zero and eigenvalues being close to zero, or similarly between the Hessian being singular and ill-conditioned. This distinction is particularly important in the theoretical discussion.\n\nIt would be helpful to be clearer about the differences between this work and that presented in Sagun et al. (2016).\n\nMinor comments:\n\nThe assumption that the target y is real is at odds with many regression problems and practically all classification. It might be worth generalizing the discussion to multidimensional targets.\n\nIt would be good to have some citations to support the claim that often \"the number of parameters M is comparable to the number of examples N (if not much larger)\". 
With 1-dimensional targets as considered here, that sounds like a recipe for extreme overfitting and poor generalization. Generically based on counting constraints and free parameters one would expect to be able to fit exactly any dataset of N output values using a model with M free parameters. (With P-dimensional targets the relevant comparison would be M vs N P rather than M vs N).\n\nAt the end of intro to section 1, \"loss is non-degenerate\" should be \"Hessian of the loss is non-degenerate\"? Also, didn't the paper cited assume at least one negative eigenvalue at any saddle point, rather than non-degeneracy?\n\nIn section 1.1, it would be helpful to explain the precise sense in which \"overparameterized\" is being used. Hopefully it is in the sense that there are more parameters than needed for good performance at the true global minimum (the additional parameters helping with the process of *finding* a good minimum rather than its existence) or in the sense that M -> infinity for N \"equal to\" infinity. If it is in the sense that M >> N then I'm not sure of the relevance to practical machine learning.\n\nIt would be helpful to use a log scale for the plot in Figure 1. The claim that the Hessian is ill-conditioned depends on the condition number, which is impossible to estimate from the plot.\n\nThe fact that \"wide basins, as opposed to narrow ones, generalize better\" is not a new claim of the Keskar et al. paper. I'd argue it's well-known and part of the classical explanation of why maximum likelihood methods overfit and Bayesian ones don't. See for example MacKay, Information Theory Inference and Learning Algorithms.\n\n\"It turns out that the Hessian is degenerate at any given point\" makes it sound like the result is a theoretical one. As I understand it, the experimental investigation in Sagun et al. (2016) just shows that the Hessian may often be ill-conditioned. As above, more clarity is also needed about whether it is literally degenerate or just approximately so, in which case ill-conditioned is probably a more appropriate word. Ill-conditioned is also more appropriate than singular in \"slightly singular but extremely so\".\n\nHow much data was used for the simple experiments in Figure 1? Infinite data? What data was used?\n\nIt would be helpful to spell out the intuition in \"Intuitively, this kind of singularity...\".\n\nI don't think the decomposition (5) is required to \"explain why having more parameters than samples results in degenerate Hessian matrices\". Generically one would expect that with 1-dimensional targets, N datapoints and N + Q parameters, there would be a Q-dimensional submanifold of parameter space on which the loss would be zero. Of course there would be a few conditions needed to make this into a precise statement, but no need for assuming the second term is negligible.\n\nIs the conventional decomposition of the loss into l o f used for the generalized Gauss Newton that f is a function only of the input to the neural net and the model parameters, but not the target? I could be wrong, but that was always my interpretation.\n\nIt's not clear whether the phrase \"bottom of the landscape\" used several times in the paper refers to the neighborhood of local minima or of global minima.\n\nWhat is the justification for assuming l'(f(w)) and grad f(w) are not correlated? That seems unlikely to be true in general! Also spell out why this implies the second term can be ignored. I'm a bit skeptical of the claim in general. 
It's easy to come up with counterexamples. For example take l to be the identity (say f has a relu applied to it to ensure everything is well formed).\n\n\"Immediately, this implies that there are at least M - N trivial eigenvalues of the Hessian\". Make it clear that trivial here means approximately not exactly zero (in which case a good word would be \"small\"); this follows since the second term in (5) is only approximately zero. In fact it should be possible to prove there are M - N values which are exactly zero, but that doesn't follow from the argument presented. As above I'd argue this analysis is somewhat beside the point since N should be greater than M in practice to prevent severe overfitting.\n\nIn section 3.1, \"trivial eigenvalues\" should be \"non-trivial eigenvalues\".\n\nWhat's the relevance of using PCA on the data in Figure 2 when it comes to analyzing training neural nets? Also, is there any reason 2 classes breaks the trend?\n\nWhat size of data was used for the experiments to plot figure 2 and figure 3? Infinite?\n\nIt's not completely clear what the takeaway is from Figure 3. I presume this is supporting the point that the eigenvalues of the Hessian at convergence consist of a bulk and outliers. The could be stated explicitly. Is there any significance to the fact that the number of clusters is equal to the number of outliers? Is this supporting some broader claim of the paper?\n\nFigure 4, 5, 6 would benefit from being log plots, and make the claim that the bulk has the same shape independent of data much stronger.\n\nThe x-axis in Figure 5 is not \"ordered counts of eigenvalues\" but \"index of eigenvalues\", and in Figure 6 is not \"ratios of eigenvalues\" but ratio of the index. In the caption for Figure 6, \"scaled by their ratio\" is not clear.\n\nI don't follow why Figure 6 confirms that \"the effects of the ignored term in the decomposition is small\" for negative eigenvalues.\n\nIn section 3.3, when saying the variances of the steps are different but the means are similar, it may interesting to note that the variance is often the dominant term and much greater in magnitude than the mean when doing SGD (at least that's what I've experienced).\n\nWhat's the meaning of \"elbow at similar levels\"? What's the significance?\n\nIn section 4 it is claimed that overparameterization is what \"leads to flatness at the bottom of the landscape which is easy to optimize\". The bulk-outlier view suggests that adding extra parameters may just add extra dimensions to the flat region, but why is optimizing 100 values in a flat 100-dimensional space easier than optimizing 10 values in a flat 10-dimensional space?\n\nIn section 4.1, \"fair comparison\" is misleading since it depends on perspective. If one cares about compute time then certainly measuring steps rather than epochs would not be a fair comparison!\n\nWhat's the relevance of the fact that random initial points in high-dimensional spaces are almost always nearly orthogonal (N.B. the \"nearly\" should be added)? This seems to be assuming something about the mapping from initial point to basin of attraction.\n\nWhat's the meaning of \"extending away from either end points appear to be confirming the sharpness of [the] LB solution\"? Is this shown somewhere?\n\nIt would be helpful to highlight the key difference to Keskar et al. (which I believe is initializing SB training from LB point rather than from scratch). I presume the claim is that Keskar et al. 
only found their \"inverted camel hump\" linear interpolation results due to the random initialization, and that this would also often be observed for, say, two random LB-from-scratch trainings (which may randomly fall into different basins of attraction). If this is the intended point then it would be good to make this explicit.\n\nIn \"the first terms starts to dominate\", to dominate what? The gradient, or the second term in (5)? If the latter, what is the relevance of this?\n\nWhy \"even\" in \"Even when the weight space has large flat regions\"?\n\nIn the last paragraph of section 4.1, it might be worth spelling out that (as I understand it) the idea is that the small batch method finds itself in a poor region to begin with, since the average loss over an SB-noise-sized neighborhood of the LB point is actually not very good, and so there is a non-zero gradient through flat space to a place where the average loss over an SB-noise-sized neighborhood is good.\n\nIn section 5, \"we see that even large batch methods are able to get to the level where small batch methods go\" seems strange. Isn't this of training set loss? Isn't the \"level\" people care about the test set loss?\n\nIn appendix A, the meaning of consecutive in \"largest consecutive gap\" and \"largest consecutive ratio\" was not clear to me.\n\nAppendix B is only referred to in a footnote. What is its significance for the main theme of the paper? I'd suggest either making it more prominent or putting it in a separate paper.\n\n", "Thank you very much for your question. The experiments are the exact Hessian calculation, therefore, it reflects the existing negative eigenvalues. Of course, they can only come from the second term of the decomposition. We modified the text to reflect this.", "Thank you very much for your time to review our paper. We have added a joint response that should cover most of the issues addressed, and we updated the pdf file for our work. Here are some specific comments:\n\nThe phenomena of a large number of eigenvalues being small (e.g. for Figure 1, 95% of the eigenvalues for the final point are within the band of [-10^(-4), -10^(-4)]) is a geometrical feature of the landscape that may change our way of visualizing the landscape. For instance, if one is to consider a random polynomial of degree 3 or more, and in a large number of variables, at a local minimum, the histogram of the eigenvalues of the loss function will be a shifted semi-circle distribution which is drastically different. Or in another context, if one is interested in the sample covariance function as in a perfect solution that ignores the second term, and if M < N and the data are iid then the spectrum would have a Marcenko Pastur part (see Appendix for more details). However, what we observe here doesn't fit into such provable cases, and to the best of our knowledge, there is no mathematically sound theoretical argument that would provide us with an explanation for the case at hand. Therefore, the scope of our work is to stick to the experiments and gain insight into what may actually be happening at the bottom of the loss landscape. \n\nThank you also for pointing out the improvements, we have edited the text to reflect on increasing the clarity of our experiments, and exposition. We hope that our message is better conveyed this way. ", "We thank all three reviewers for their time to evaluate our work, here we craft a response that we believe should address some of the points commonly raised by all three reviewers. 
We have edited the paper to enhance the message we are trying to convey, and we hope it is more expressive in its new state. \n\nThe focus of our work: The landscape at the bottom is flatter than the picture depicted in many recent papers (some of which are other fellow ICLR submissions e.g. https://openreview.net/forum?id=rJma2bZCW). Therefore we should revise our notions of 'basin' in a way that will address this feature. \n\nOur work is phenomenological, and it addresses the shortcomings of certain ways of picturing the landscape, and it calls for a change. To this end:\n(1) We demonstrate the local geometry at the bottom of the landscape and its intricate relations with the data, model, and algorithm.\n(2) Then we show how the space of solutions can be vastly connected if one avoids rather simple pitfalls.\n\nGeneral remarks:\n\n- Our work is an enhancement over Sagun et. al. (2016) in the following ways: (1) We present more experiments of the spectrum of the Hessian in various different setups, as well as a possible explanation. Therefore we solidify the claims in a more robust way. (2) Based on the key insight from the previous part, we present an experiment where two qualitatively different solutions are connected, thereby challenging some of the recent work by pointing out the fact that certain ways of visualization techniques can be misleading. (3) Finally, to the best of our knowledge, Sagun et. al. (2016) hasn't been published anywhere besides the ArXiv. We believe that our contribution is the necessary addition that would build on top of that work. This can also be seen by the reviews of that work has got: https://openreview.net/forum?id=B186cP9gx \n\n- Our experiments don't rely on the decomposition. The decomposition is a tool to analyze the results and make predictions to be tested experimentally. All experiments are standalone. We edited the text to better reflect this fact. We also added more details on the experimental procedures.\n\n- Certain notions such as the data complexity and over-parametrization are vague since making them more precise would require the details of the architecture, as well. Our focus is on the flattened weight vector, therefore, for now, it would be enough to consider cases where M>>N. However, future work will take a more detailed look into this.\n", "Thank you very much for the comments. We have addressed some of the main concerns above in a general statement. Please consider that response in your re-evaluation, as well. We fixed the minor points and added more details to the experiments. We also changed the structure of the paper to emphasize our contribution and make it clearer. \n\nTo be more precise, we have a simple perspective that can also be interpreted as a warning sign when one is interested in questions related to the geometry of the bottom of the landscape. We have insights derived from our simple experiments and a demonstration how common ways of visualizations can be misleading. As pointed out, we improved our presentation in this regard. \n\nRegarding point 3: we present two solutions that are qualitatively different and they show signs of being in different basins (sharp/narrow) but they are in the same basin. We also point out that the barriers between solutions can appear depending on internal symmetries of the system, and our experiment addresses this issue as well. Please refer to the general comment above and the updated text for further details. 
", "Thank you very much for the helpful review, please note that some of the major themes are addressed in the general comment above. \n\n- In the decomposition, multi-target case can be covered by\n$\\ell(s_y, y) = -s_y + \\log\\sum_{y'}\\exp{s_{y'}}$. It is indeed the case that the output independent of the target would be a conventional way to go, to do that, we will expand the decomposition to cover vector-valued outputs, too.\n\n- The strict saddle property in Lee et. al. assumes isolated (therefore non-degenerate) critical point.\n \n- Log scale plot for Figure 1 doesn't produce a meaningful plot, however, it might be worthwhile to note that 95% of the eigenvalues for the final point are within the band of [-10^(-4), -10^(-4)]\n\n- For Figure 1, 2, and 3 a thousand samples are generated from Gaussian clusters. This point is also addressed in Section 3.1. Also, the takeaway of section 3.1 and 3.2 is the relation between the outliers and data (and not the size of the model).\n\n- By the bottom of the landscape, we mean loss values near zero (but not at zero). To be more precise, for a non-negative function, f, we mean an element from the set {w:f(w)<epsilon}. To the best of our knowledge, the values of the global minimum, and/or the local minima are unknown in the case of deep learning loss functions. \n\n- Regarding the correlation in the second term, that's right, a more plausible argument would be the perfect classifier that has zero gradients on each of the examples.\n\n- In most cases, M>N without 'severe' overfitting, for example, for CIFAR-10 N=50K and M is usually several million.\n\n- PCA was a way to assess the complexity of the data and show its relation to the eigenvalues. But we decided to remove it since a notion of complexity of the data in this context should take the architecture into account. We added a remark on this in the text, as well. \n\n- In our experience, relative values of the variance and the mean of the gradients in SGD depends on the phase of the training. We will look into this in more detail. \n\n- By a 'fair comparison', we mean a fair comparison of what algorithm finds what kind of solution, assuming one is interested in the behavior of the algorithm itself. Otherwise, the real-life computational challenges depend on the hardware, too. For instance, one could increase the batch-size up to the saturation of the GPU and not lose time on it. Therefore, scaling the time axis with the number of epochs can be misleading in a broader context.\n\n- If one is to select random points on the sphere, the selected points become more and more orthogonal as the dimension of the sphere increases. We have experiments that show that this orthogonality is preserved for the trained points, too, if one starts from orthogonal initial points. This is not surprising given the geometry of high dimensional spaces. But we can follow up on this in another work. \n\n- \"In the last paragraph of section 4.1, it might be worth spelling out that (as I understand it) the idea is that the small batch method finds itself in a poor region to begin with, since the average loss over an SB-noise-sized neighborhood of the LB point is actually not very good, and so there is a non-zero gradient through flat space to a place where the average loss over an SB-noise-sized neighborhood is good.\" This is a great point, but we are curious about the following: Is it the size of the noise, or the shape of it? We believe this should be investigated further in a separate context. 
\n\n- \"In section 5, \"we see that even large batch methods are able to get to the level where small batch methods go\" seems strange. Isn't this of training set loss? Isn't the \"level\" people care about the test set loss?\" Right, we meant 'the same basin'.\n\n- By largest consecutive gap, we mean the largest element of the set of consecutive gaps of eigenvalues when they are ordered on the real line. And similarly with the largest consecutive ratio. They are just ways of finding a separator in the spectrum. Some which seem to work better than the others but such a separator should depend on the notion of the complexity of the dataset, as well. Also, we added a note to explain the relevance of Appx B. The theorem there is a tool that maps the eigenvalues of the population matrix to the sample covariance matrix but it is only valid for independent data. We also provide an example where it can work and fail at the end of the appendix.", "Interesting and much need line of research. I have one question regarding your experiments.\n\nIn equation (6), you approximate a Hessian as sum of rank-1 matrices of the form vv^T. But these kind of matrices formed are always Positive Semi-definite. Given that how did you estimate the Hessian's to have negative eigenvalues in Figures. 1, 5, 6, 8?" ]
[ -1, 5, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "SkTaZOamG", "iclr_2018_rJrTwxbCb", "iclr_2018_rJrTwxbCb", "iclr_2018_rJrTwxbCb", "ryoh5LEMM", "ryIbx22yz", "iclr_2018_rJrTwxbCb", "rJT6jEcgz", "HkeyY0M-z", "iclr_2018_rJrTwxbCb" ]
iclr_2018_B1Z3W-b0W
Learning to Infer
Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs). In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients. Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings. We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.
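As a rough reading of the abstract, the iterative inference loop can be pictured as repeatedly feeding the current variational parameters and the gradient of the ELBO into a learned update network. The sketch below is only a schematic of that idea, with placeholder function names, and is not taken from the paper's implementation.

```python
# Schematic of an iterative inference update (placeholder functions, not the paper's code).
# elbo_grad(x, lam) returns d ELBO / d lam for data x and variational parameters lam,
# and f_phi is the learned inference model mapping (lam, gradient) to new parameters.
def iterative_inference(x, lam_init, elbo_grad, f_phi, num_steps=5):
    lam = lam_init
    for _ in range(num_steps):
        g = elbo_grad(x, lam)      # gradient of the variational lower bound
        lam = f_phi(lam, g)        # learned update replaces a hand-designed optimizer step
    return lam
```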
workshop-papers
This paper is interesting but has a few flaws that still need to be addressed. As one reviewer noted, "the authors seems to have simply applied the method of Andrychowicz et al. If they added some discussion and experiments clearly showing why this is a better way to improve the existing inference methods, the paper might have more impact." Overall, this work builds on existing work but does not dig deep enough for answers to the questions raised by the reviewers. The committee still feels this paper will be of great value at ICLR and recommends it as a workshop paper.
train
[ "SkLbR5mSf", "S1W-El7rG", "rykU00Y4f", "BkKZyyP4z", "SkZMqqHEG", "SkeA8hENf", "rJuH-vKeG", "Sy3-NV9xG", "Bk8FeZjgf", "BkIzESGGM", "H1aYeSMGz", "HJ5tTEGGf" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We’re glad that you find the revised version is an improvement and more clearly conveys the contributions of the paper.\n\nIterative inference model parameter gradients are obtained using the reparameterization trick, as with standard inference models. The difference is that these gradients are obtained and averaged over inference iterations. We will clarify this point in the caption of Figure 8.\n\nWe are unclear what is meant by proposing and comparing ‘other’ ways of connecting the gradient with the inference network. We followed the method of Andrychowicz et al. for inputting the gradient, using the sign and log of the gradient. If the reviewer means that exploring other methods of processing the gradient may be useful, then we agree, but this does not impact the main contribution of this paper: one can learn to infer using gradients. We hope to further explore this technical detail for the final version of the paper.\n\nAs far as desirable properties for an inference procedure, it is a speed accuracy trade-off. We want a model that is capable of arriving at near-optimal inference estimates in a reasonable amount of time. We have demonstrated that iterative inference models outperform standard inference models in terms of accuracy, achieving similar performance as variational EM in a fraction of the time. We have additionally shown that encoding errors and/or the data can arrive at similar or improved estimates even faster. Please let us know if this point is unclear in the paper so that we can clarify it further.", "Thanks for the revised version. I think Figure 8 helps to clarify the contribution a bit more. I think adding a caption and clearly explaining and how phi is obtain from the gradient would be useful.\n\nIt is important to propose and compare several 'other' ways of connecting the gradient with the inference network. This will help to understand why the proposed method is a good way to do so? Also, what kind of properties we would want in an inference optimizer to be able to improve over Variational EM as well as VAE. Currently, paper proposes one method but does not add much to the understanding on what kind of methods will generally lead to an improvement over traditional inference methods. In my opinion, if done well, this will help the community move forward.", "We have uploaded a revised version of the submission, which attempts to take the reviewers’ comments into account. Again, we thank the reviewers for their help with this process. We specifically highlight the following additions:\n\n- Additional empirical results on the sparse, high-dimensional RCV1 text data set (Appendix D), following Krishnan, et al. Using a multinomial output, we also observe empirical benefits using iterative inference models over standard inference models. We also include Figure 11, which further illustrates learned inference optimization on this data set.\n- Figure 8, which shows unrolled computational graphs for each inference scheme. We hope this helps in clarifying each process.\n- Clarification of distinctions / novel aspects of this work over previous methods at the end of Section 3.1.\n- Clarification of the relative number of input parameters in each model in Section 5.2.\n- Discussion of difficulty of training iterative inference models in Appendix B.4.\n- Clarification on where we report ELBO values and NLL values in Section 5 (first paragraph).\n- Additional sentences in Section 3.1 (2nd and 3rd paragraphs) discussing the amortization gap, i.e. 
the gap in performance by assuming an amortized inference scheme.\n- Citations for Hoffman et al., 2013; Krishnan et al., 2017; Cremer et al., 2017.\n\nFinally, we would like to close by stating that we feel the content of this submission provides many useful insights to the larger Bayesian deep learning community. We have taken the VAE, one of the most popular models in this area, and provided a novel method by which to perform inference optimization. While the idea of iterative inference optimization may initially seem counterintuitive, we have empirically demonstrated that moving beyond the typical data-encoding paradigm has clear advantages in terms of modeling performance. We have demonstrated the feasibility and success of our method on multiple data sets using various output modeling distributions. This work provides a more detailed view of inference optimization and will hopefully enable further work in amortized variational inference and learned optimization.", "I look forward to the revised version.", "Thank you for your interest in our submission. As it happens, we are currently finishing up a revised version of the paper, which we intend to upload this coming weekend. We hope you will look at the revised paper, as it will include additional clarifications on the points raised by you and the other reviewers.\n\nWith regards to “the advantages of the proposed method,” we seem to misunderstand your comment. We have shown that iterative inference models consistently outperform comparable standard inference models in terms of log-likelihood performance. In other words, iterative inference models result in generative models that are better at fitting data distributions. This is, in and of itself, an advantage of our method. And our experiments on increasing the number of samples and inference iterations demonstrate that we even have the ability to enlarge this advantage. Furthermore, we have shown that iterative inference models converge to similar approximate inference estimates far faster than traditional optimization-based methods. Iterative inference models are therefore more computationally efficient than these methods. This is another clear advantage of our method. As these baselines are the primary methods by which deep latent variable models are currently trained, our work provides the community with an improved method for generative modeling of data.\n\nWe hope you find the revised version of the paper expresses these points more clearly. Please let us know if you have any further comments or questions.", "Is there a revision of the paper available? I am assuming there is none because I don't see it in this page.\n\nAfter reading the rebuttal and other reviews, I think that the paper needs plenty of work on clarifying the writing, and as I said in my review, to clarify (and show) the advantages of the proposed method. For the current version, my opinion has not changed (although I have gained clarify about the work and I do think that this work could make an interesting paper).", "This paper proposes an iterative inference scheme for latent variable models that use inference networks. Instead of using a fixed-form inference network, the paper proposes to use the learning to learn approach of Andrychowicz et. al. The parameter of the inference network is still a fixed quantity but the function mapping is based on a deep network (e.g. 
it could be RNN but the experiments uses a feed-forward network).\n\nMy main issue with the paper is that it does not do a good job justifying the main advantages of the proposed approach. It appears that the iterative method should result in \"direct improvement with additional samples and inference iterations\". I am supposing this is at the test time. It is not clear exactly when this will be useful. \n\nI believe an iterative approach is also possible to perform with the standard VAE, e.g., by bootstrapping over the input data and then using the iterative scheme of Rezende et. al. 2014 (they used this method to perform data imputation).\n\nThe paper should also discuss the additional difficulty that arises when training the proposed model and compare them to training of standard inference networks in VAE.\n\nIn summary, the paper needs to do a better job in justifying the advantages obtained by the proposed method. ", "This paper proposes a learning-to-learn approach to training inference networks in VAEs that make explicit use of the gradient of the log-likelihood with respect to the latent variables to iteratively optimize the variational distribution. The basic approach follows Andrychowicz et al. (2016), but there are some extra considerations in the context of learning an inference algorithm.\n\nThis approach can significantly reduce the amount of slack in the variational bound due to a too-weak inference network (above and beyond the limitations imposed by the variational family). This source of error is often ignored in the literature, although there are some exceptions that may be worth mentioning:\n* Hjelm et al. (2015; https://arxiv.org/pdf/1511.06382.pdf) observe it for directed belief networks (admittedly a different model class).\n* The ladder VAE paper by Sonderby et al. (2016, https://arxiv.org/pdf/1602.02282.pdf) uses an architecture that reduces the work that the encoder network needs to do, without increasing the expressiveness of the variational approximation.\n* The structured VAE paper by Johnson et al. (2016, https://arxiv.org/abs/1603.06277) also proposes an architecture that reduces the load on the inference network.\n* A very recent paper by Krishnan et al. (https://arxiv.org/pdf/1710.06085.pdf, posted to arXiv days before the ICLR deadline) is probably closest; it also examines using iterative optimization (but no learning-to-learn) to improve training of VAEs. They remark that the benefits on binarized MNIST are pretty minimal compared to the benefits on sparse, high-dimensional data like text and recommendations; this suggests that the learning-to-learn approach in this paper may shine more if applied to non-image datasets and larger numbers of latent variables.\n\nI think this is good and potentially important work, although I do have some questions/concerns about the results in Table 1 (see below). \n\n\nSome more specific comments:\n\nFigure 2: I think this might be clearer if you unrolled a couple of iterations in (a) and (c).\n\n(Dempster et al. 1977) is not the best reference for this section; that paper only considers the case where the E and M steps can be done in closed form on the whole dataset. A more relevant reference would be Stochastic Variational Inference by Hoffman et al. (2013), which proposes using iterative optimization of variational parameters in the inner loop of a stochastic optimization algorithm.\n\nSection 4: The statement p(z)=N(z;mu_p,Sigma_p) doesn’t quite match the formulation of Rezende&Mohamed (2014). 
First, in the case where there is only one layer of latent variables, there is almost never any reason to use anything but a normal(0, I) prior, since the first weight matrix of the decoder can reproduce the effects of any mean or covariance. Second, in the case where there are two or more layers, the joint distribution of all z need not be Gaussian (or even unimodal) since the means and variances at layer n can depend nonlinearly on the value of z at layer n+1. An added bonus of eliminating the mu_p, Sigma_p: you could get rid of one subscript in mu_q and sigma_q, which would reduce notational clutter.\n\nWhy not have mu_{q,t+1} depend on sigma_{q,t} as well as mu_{q,t}?\n\nTable 1: These results are strange in a few ways:\n* The gap between the standard and iterative inference network seems very small (0.3 nats at most). This is much smaller than the gap in Figure 5(a).\n* The MNIST results are suspiciously good overall, given that it’s ultimately a Gaussian approximation and simple fully connected architecture. I’ve read a lot of papers evaluating that sort of model/variational distribution as a baseline, and I don’t think I’ve ever seen a number better than ~87 nats.", "Instead of either optimization-based variational EM or an amortized inference scheme implemented via a neural network as in standard VAE models, this paper proposes a hybrid approach that essentially combines the two. In particular, the VAE inference step, i.e., estimation of q(z|x), is conducted via application of a recent learning-to-learn paradigm (Andrychowicz et al., 2016), whereby direct gradient ascent on the ELBO criteria with respect to moments of q(z|x) is replaced with a neural network that iteratively outputs new parameter estimates using these gradients. The resulting iterative inference framework is applied to a couple of small datasets and shown to produce both faster convergence and a better likelihood estimate.\n\nAlthough probably difficult for someone to understand that is not already familiar with VAE models, I felt that this paper was nonetheless clear and well-presented, with a fair amount of useful background information and context. From a novelty standpoint though, the paper is not especially strong given that it represents a fairly straightforward application of (Andrychowicz et al., 2016). Indeed the paper perhaps anticipates this perspective and preemptively offers that \"variational inference is a qualitatively different optimization problem\" than that considered in (Andrychowicz et al., 2016), and also that non-recurrent optimization models are being used for the inference task, unlike prior work. But to me, these are rather minor differentiating factors, since learning-to-learn is a quite general concept already, and the exact model structure is not the key novel ingredient. That being said, the present use for variational inference nonetheless seems like a nice application, and the paper presents some useful insights such as Section 4.1 about approximating posterior gradients.\n\nBeyond background and model development, the paper presents a few experiments comparing the proposed iterative inference scheme against both variational EM, and pure amortized inference as in the original, standard VAE. While these results are enlightening, most of the conclusions are not entirely unexpected. For example, given that the model is directly trained with the iterative inference criteria in place, the reconstructions from Fig. 
4 seem like exactly what we would anticipate, with the last iteration producing the best result. It would certainly seem strange if this were not the case. And there is no demonstration of reconstruction quality relative to existing models, which could be helpful for evaluating relative performance. Likewise for Fig. 6, where faster convergence over traditional first-order methods is demonstrated; but again, these results are entirely expected as this phenomena has already been well-documented in (Andrychowicz et al., 2016).\n\nIn terms of Fig. 5(b) and Table 1, the proposed approach does produce significantly better values of the ELBO critera; however, is this really an apples-to-apples comparison? For example, does the standard VAE have the same number of parameters/degrees-of-freedom as the iterative inference model, or might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs? Overall, I wonder whether iterative inference is better than standard inference with eq. (4), or whether the recurrent structure from eq. (5) just happens to implicitly create a better neural network architecture for the few examples under consideration. In other words, if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.\n\n\nOther minor comment:\n* In Fig. 5(a), it seems like the performance of the standard inference model is still improving but the iterative inference model has mostly saturated.\n* A downside of the iterative inference model not discussed in the paper is that it requires computing gradients of the objective even at test time, whereas the standard VAE model would not.", "Thank you for your feedback. We hope to clarify points that were unclear through this reply as well as revisions to the paper.\n\n``A very recent paper by Krishnan et al. (https://arxiv.org/pdf/1710.06085.pdf, posted to arXiv days before the ICLR deadline) is probably closest; it also examines using iterative optimization (but no learning-to-learn) to improve training of VAEs. They remark that the benefits on binarized MNIST are pretty minimal compared to the benefits on sparse, high-dimensional data like text and recommendations; this suggests that the learning-to-learn approach in this paper may shine more if applied to non-image datasets and larger numbers of latent variables.\"\n\nWe became aware of the work by Krishnan et al. after the deadline, and we will cite them as concurrent work. We find it interesting that they did not see a larger improvement on binarized MNIST, as this may point to qualitative differences between their approach and learned optimization. We plan to include additional experiments in the appendix applying iterative inference models to sparse data.\n\n``Figure 2: I think this might be clearer if you unrolled a couple of iterations in (a) and (c).\"\n \nThank you for the suggestion. We plan to include an additional figure in the appendix showing these iterative approaches unrolled in time.\n\n``(Dempster et al. 1977) is not the best reference for this section; that paper only considers the case where the E and M steps can be done in closed form on the whole dataset. A more relevant reference would be Stochastic Variational Inference by Hoffman et al. (2013)...\"\n\nWe will cite Hoffman et al. (2013). 
We were initially hesitant to cite this reference as they make use of natural gradients, which are absent in this work.\n\n`` The statement p(z)=N(z;mu_p,Sigma_p) doesn’t quite match the formulation of Rezende&Mohamed (2014…in the case where there are two or more layers, the joint distribution of all z need not be Gaussian (or even unimodal)…\"\n\nWe chose this formulation in the derivation because it provides a more general treatment. As pointed out, it is unnecessary in the case of a one-level model. However, this formulation is applicable in the hierarchical case, where the prior is typically some arbitrary factorized Gaussian density. The discussion in Section 4 applies to one-level models, which are most commonly used in practice. You are correct that a hierarchical prior need not take the form of a Gaussian, and we discuss this model form in further detail in Appendix A.6. We will attempt to make this point clearer.\n\n``Why not have mu_{q,t+1} depend on sigma_{q,t} as well as mu_{q,t}?\"\n \nThis is, in fact, what we do in practice. VAEs have typically been presented as having separate functions for each approximate posterior term, which then share parameters to simplify the model and make learning more efficient. We followed this convention.\n\n``The gap between the standard and iterative inference network seems very small (0.3 nats at most). This is much smaller than the gap in Figure 5(a).\"\n \nTable 1 presents negative log-likelihood estimates using 5,000 importance weighted samples, whereas all other figures show lower bound estimates using a single sample. The gap between negative log-likelihood estimates and lower bound estimates need not be the same, as they depend on the tightness of the bounds. We will make this distinction clearer in the paper.\n\n``The MNIST results are suspiciously good overall...I don’t think I’ve ever seen a number better than ~87 nats.\"\n \nOur results agree with (Sønderby et al., 2016), who report a NLL of ~85 nats for a nearly identical model architecture (compare with our ~84 nats for a standard inference model). As in their experiments, we use the dynamically binarized version of MNIST, which results in higher log-likelihoods as compared with statically binarized MNIST. The additional ~1 nat gap is likely due to different activation functions and encoding architecture. We used exponential linear units (ELU), which we have always found to yield superior performance over leaky ReLUs used in (Sønderby et al., 2016). We also used residual encoding networks, which tend to perform better.", "Thank you for your feedback. We hope to clarify points that were unclear through this reply as well as revisions to the paper.\n\nRegarding the utility of our method:\n``It appears that the iterative method should result in \"direct improvement with additional samples and inference iterations\"... It is not clear exactly when this will be useful…the paper needs to do a better job in justifying the advantages obtained by the proposed method.\"\n\nAdditional samples and inference iterations help at both training and test time. We presented these experiments to show two aspects of iterative inference models that are distinct from standard inference models, helping readers to distinguish between these models. The main advantage of iterative inference models is that they outperform similar standard inference models in terms of log likelihood, i.e. iterative inference models are better able to capture the data distribution. 
Increasing the number of samples or inference iterations provides two additional knobs with which to widen this performance gap. We will attempt to make this clearer in the revised paper.\n\nRegarding iterative approaches with VAEs:\n``I believe an iterative approach is also possible to perform with the standard VAE, e.g., by bootstrapping over the input data and then using the iterative scheme of Rezende et. al. 2014 (they used this method to perform data imputation).\"\n \nSuch an approach would be qualitatively different than the approach presented here. The data imputation scheme from in (Rezende et. al. 2014) involves iteratively encoding partial observations or reconstructions. If we understand your comment, at best, that approach could only perform as well as a VAE with full observations. Encoding reconstructions would likely introduce further errors.\n\nRegarding training difficulty:\n``The paper should also discuss the additional difficulty that arises when training the proposed model and compare them to training of standard inference networks in VAE.\"\n \nWe found training iterative inference models to be relatively straightforward and easy to implement. There were no tricks necessary to train these models, and we found that iterative inference models start learning to improve their inference estimates almost immediately. We will include further discussion of this point in Appendix B to assure readers. We will also release code upon publication.", "Thank you for your feedback. We hope to clarify points that were unclear through this reply as well as revisions to the paper.\n\nRegarding novelty:\n``…the paper…represents a fairly straightforward application of (Andrychowicz et al., 2016). …learning-to-learn is a quite general concept already, and the exact model structure is not the key novel ingredient.\"\n\nWhile our work is related to that of (Andrychowicz et al., 2016), there are several novel distinctions:\n1.\twe apply learned optimization models to variational inference,\n2.\twe empirically demonstrate that feedforward networks can perform optimization, whereas previous works required recurrent networks,\n3.\twe develop a novel encoding form that approximates derivatives.\nTo the best of our knowledge, these findings are not fully discussed or demonstrated in the literature.\nUnlike learning, variational inference optimization operates over fewer steps and is performed separately for each example, rather than across different tasks. Furthermore, our experiments with hierarchical latent variable models demonstrate a qualitatively different form of optimization model, split across separate networks on multiple levels of optimized variables.\nThe optimization model architecture is an important contribution, as all previous works have only used recurrent neural networks, implicitly assuming that learned optimization requires coordination over multiple steps. We have shown that feedforward networks can learn to perform optimization, outperforming optimizers like ADAM and RMSProp that capture additional curvature information from decaying moments. \nThe reviewer states, “the paper presents some useful insights such as Section 4.1 about approximating posterior gradients.” We have shown that computing approximate posterior gradients is unnecessary; a model can learn to optimize using locally computed errors. 
To the best of our knowledge, this is the first time this observation has been explicitly identified in the literature, providing a novel form of learned optimization models.\n\nRegarding seemingly unsurprising results:\n``...most of the conclusions are not entirely unexpected.\"\n``...these results are entirely expected as this phenomena has already been well-documented in (Andrychowicz et al., 2016).\"\n\nThe results on inference optimization capabilities (Section 5.1 and Figure 6) are interesting for the reason that they are what we would expect. It’s un-intuitive and surprising that an iterative inference model can learn to optimize a generative model, and our results verify that this is done in a reasonable manner. Few works in the VAE literature have discussed optimization performance, so it is instructive to visualize and quantify how various methods compare.\n\nRegarding experimental comparisons:\n``...is this really an apples-to-apples comparison?…might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs?...if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.\"\n\nThe gradient encoding iterative inference model (eq. 5) in Figure 5b has fewer input parameters (256 vs. 784), yet outperforms the standard inference model. We will clarify this point. As the reviewer points out, a perfect comparison of models is difficult to perform: varying numbers of inputs result in varying numbers of input parameters. Yet, the number of parameters processing information from the data is constant across both models, showing that gradients and errors contain additional information. Regarding our results, we found that iterative inference models outperformed standard models across a variety of architectures (varying network/latent width, residual/dense connections, batch norm, etc.) on the benchmark data sets. The experiments are representative of this finding, which we hope to clarify in the revised paper.\n\nMiscellaneous:\n``A downside…is that it requires computing gradients of the objective even at test time...\"\n\nIterative inference models that encode gradients require these gradients at test time, which we will state more clearly. However, the error encoding models that we introduce do not require these gradients, one of their benefits that we highlight.\n\n``…there is no demonstration of reconstruction quality relative to existing models, which could be helpful for evaluating relative performance.\"\n\nThe purpose of Figure 4 is to provide a qualitative verification of our inference optimization, not to demonstrate superior reconstruction quality. It would also be difficult for humans to visually inspect these differences, as they likely involve slight differences in pixel intensities. \n\n``In Fig. 5(a), …the standard inference model is still improving but the iterative inference model has mostly saturated.\"\n\nWe agree, but this does not impact the main empirical findings from section 5.2: iterative inference models improve significantly with more approximate posterior samples." ]
[ -1, -1, -1, -1, -1, -1, 5, 6, 5, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, -1, -1, -1 ]
[ "S1W-El7rG", "SkZMqqHEG", "iclr_2018_B1Z3W-b0W", "BkIzESGGM", "SkeA8hENf", "H1aYeSMGz", "iclr_2018_B1Z3W-b0W", "iclr_2018_B1Z3W-b0W", "iclr_2018_B1Z3W-b0W", "Sy3-NV9xG", "rJuH-vKeG", "Bk8FeZjgf" ]
iclr_2018_HkCnm-bAb
Can Deep Reinforcement Learning solve Erdos-Selfridge-Spencer Games?
Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. Here we consider a family of combinatorial games, arising from work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning. These games have a number of appealing features: they are challenging for current learning approaches, but they form (i) a low-dimensional, simply parametrized environment where (ii) there is a linear closed form solution for optimal behavior from any state, and (iii) the difficulty of the game can be tuned by changing environment parameters in an interpretable way. We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set.
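As a concrete illustration of the "linear closed form solution" mentioned in the abstract, the following is a minimal sketch (not taken from the paper; the per-level piece-count encoding and the weights 2^{-(K-l)} are assumptions based on the discussion in the reviews below) of the ESS potential function and the defender move it prescribes:

```python
# Illustrative sketch (not from the paper): the Erdos-Selfridge-Spencer potential
# and the closed-form defender move, assuming the state is a vector of piece
# counts per level 0..K and a piece at level l carries weight 2^{-(K-l)}.

def potential(counts, K):
    """Potential phi(S) = sum over levels l of counts[l] * 2^{-(K-l)}."""
    return sum(n * 2.0 ** -(K - l) for l, n in enumerate(counts))

def defender_move(counts_A, counts_B, K):
    """Destroy the partition with the larger potential.

    If phi(A) + phi(B) < 1, this choice keeps the total potential below 1
    on every subsequent turn, so the defender is guaranteed a win
    (the Erdos-Selfridge argument referenced in the reviews below).
    """
    phi_A, phi_B = potential(counts_A, K), potential(counts_B, K)
    return "destroy A" if phi_A >= phi_B else "destroy B"

if __name__ == "__main__":
    K = 3
    A = [0, 1, 0, 0]   # one piece at level 1 -> phi(A) = 0.25
    B = [0, 0, 1, 0]   # one piece at level 2 -> phi(B) = 0.5
    print(potential(A, K) + potential(B, K))  # 0.75 < 1: forced defender win
    print(defender_move(A, B, K))             # "destroy B"
```

Under this reading, an initial potential below 1 is a forced win for the defender, which is the property the environment uses to tune difficulty by changing the starting position.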
workshop-papers
The paper introduces an interesting family of two-player zero-sum games with tunable complexity, called Erdos-Selfridge-Spencer games, as a new domain for RL. The authors report on extensive empirical results using a wide variety of training methods, including supervised learning and several flavors of RL (PPO, A2C, DQN) as well as single-agent vs. multi-agent training. The reviewers agree that the method appears to be technically correct, clearly written, and easy to read. A drawback of the paper is that it does not make a *significant* contribution to the field. In combing through the reviewer comments, none of them identify a significant contribution. Even in the text of the paper, the authors nowhere claim to have made a significant contribution. As the paper is still interesting, the committee would like to recommend this for the workshop track. Pros: an interesting domain with tunable complexity; high-quality, extensive empirical results; clear writing. Cons: lacks a significant contribution; appears to overlook self-play, the dominant RL training paradigm for decades (multi-agent training appears to be related but different); per Reviewer 3, "I remain unconvinced that these games are good general tests for Deep RL".
train
[ "SJNicmVeM", "rynBydweG", "r16fdy3xG", "ryyIUs7VM", "HkBezd67G", "H18csR2mz", "SJreVx2Xf", "SJK3Xl2mz", "SJIqzlhXz", "r1hM8u5Xz", "r1aTS_qQM", "ByYUrO5QM", "Sk187Bmff", "SJVXQS7Gf", "H1KaWSmff", "Sy39ZHQMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "The paper presents Erdos-Selfridge-Spencer games as environments for investigating\ndeep reinforcement learning algorithms. The proposed games are interesting and clearly challenging, but I am not sure what they tell us about the algorithms chosen to test them. There are some clarity issues with the justification and evaluation which undermine the message the authors are trying to make.\n\nIn particular, I have the following concerns:\n\n • these games have optimal policies that are expressible as a linear model, meaning that if the architecture or updating of the learning algorithm is such that there is a bias towards exploring these parts of policy space, then they will perform better than more general algorithms. What does this tell us about the relative merits of each approach? The authors could do more to formally motivate these games as \"difficult\" for any deep learning architecture if possible.\n • the authors compare linear models with non-linear models at some point for attacker policies, but it is unclear whether these linear models are able to express the optimal policy. In fact, there is a level of non-determinism in how the attacker policies are encoded which means that an optimal policy cannot be (even up to soft-max) expressed by the agent (as I read things the number of pieces chosen in level l is always chosen uniformly randomly).\n • As the authors state, this paper is an empirical evaluation, and the theorems presented are derived from earlier work. There is possibly too much focus on the proofs of these theorems.\n • There are a number of ambiguities and errors which places difficulties on the interpretation (and potential replication) of the experiments. As this is an empirical study, this is the yardstick by which the paper should be judged. In particular, this relates to:\n ◦ The architecture of each of the tested Deep RL methods.\n ◦ What is done to select appropriate tuning parameters of the tested Deep RL methods, if anything.\n ◦ It is unclear whether 'incorrect actions' in the supervised learning evaluations, refer to non-optimal actions, or simply actions that do not preserve the dominance of the defender, e.g. both partitions may have potential >0.5\n ◦ Fig 4. right looks like a reward signal, but is labelled Proportion correct. The text is not clear enough to be sure which it is.\n ◦ Fig 4. left and right has 4 methods: rl rewards, rl correct actions, sup rewards, and sup correct actions. The specifics of how these methods are constructed is unclear from the paper.\n ◦ What parts of the evaluation explores how well these methods are able to represent the states (feature/representation learning) and what parts are evaluating the propagation of sparse rewards (the reinforcment learning core)? The authors could be clearer and more targetted with respect to this question.\n\nThere is value in this work, but in its current state I do not think it is ready for publicaiton.\n\n# Detailed notes\n\n[p4, end of sec 3] The authors say that the difficulty of the games can be varied with \"continuous changes in potential\", but the potential is derived from the discrete initial game state, so these values are not continuously varying (even though it is possible to adjust them by non-integer amounts).\n\n[p4, sec 4.1]\n\"strategy unevenly partitions the occupied levels...with the proportional difference between the two sets being sampled randomly\"\nWhat is meant by this? 
The proportional difference between the two sets is discussed as if it is a continuous property, but must be chosen from the discrete set of all available partitions. If one partition is chosen uniformly at random from all possible sets A, B (and the potential proportion calculated) then I don't know why it would be written in this way. That suggests that proportions that are closer to 1:1 are chosen more often than \"extreme\" partitions, but how? This feels a little under-justified.\n\"very different states A, B (uneven potential, disjoint occupied levels)\"\nAre these states really \"very different\", or at least for the reasons indicated? Later on (Theorem 3) we see how an optimal partition is generated. This chooses a partition where one part contains all pieces in layer (l+1) and above and one part with all pieces in layer (l-1) and below, with layer l being distributed between the two parts. The first part will typically have a slightly lower potential than the other and all layers other than layer l will be disjoint.\n\n\n[p6, Fig 4] The right plot y-limits vary between -1 and 1 so it cannot represent a proportion of correct actions. Also, in the text the authors say:\n >> The results, shown in Figure 4 are surprising. Reinforcement learning \n >> is better at playing the game, but does worse at predicting optimal moves.\nI am not sure which plot shows the playing of the game. Is this the right hand plot? In which case are we looking at rewards? In fact, I am a little confused as to what is being shown here. Is \"sup rewards\" a supervised learning method trained on rewards, or evaluated on rewards, or both? And how is this done? The text is just not clear enough.\n\n[p7 Fig 6 and text] Here the authors are comparing how well agents select the optimal actions as compared to how close they are to the end of the game. This relates to the \"surprising\" fact that \"Reinforcement learning is better at playing the game, but does worse at predicting optimal moves.\". I think an important point here is how many training/test examples there are in each bin. If there are more in the range 3-7 moves from the end of the game, than there are outside this range, then the supervised learner will\n\n[p8 proof of theorem 3] \n\"φ(A_{l+1}) < 0.5 and φ(A_l) > 0.5.\"\nIs it true that both these inequalities are strict?\n\"Since A_l only contains pieces from levels K to l + 1\"\nIn fact this should read from levels K to l.\n\"we can move k < m − n pieces from A_{l+1} to A_l\"\nDo the authors mean that we can define a partition A, B where A = A_{l+1} plus some (but not all) elements in level l (A_{l}\setminus A_{l+1})?\n\"...such that the potential of the new set equals 0.5\"\nIt will equal exactly 0.5 as suggested, but the authors could make it more precise as to why (there is a value n+k < l (maybe <= l) such that (n+k)*2^{-(K-l+1)} = 0.5, guaranteed). They should also indicate why this then justifies their proof (namely that phi(S0)-0.5 >= 0.5).\n\n[p8 parametrising action space] A comment: this doesn't give as much control as the authors suggest. Perhaps the agent should also choose the proportion of elements in layer l to set A. For instance, if there are a large number of elements in l, and/or phi(A_{l+1}) is very close to 0.5 (or phi(A_l) is very close to 0.5) then this doesn't give the attacker the opportunity to fine-tune the policy to select very good partitions. 
It is unclear what level of control agents can be expected to have under various conditions (K and starting states).\n\n[p9 Fig 8] As the defender's score is functionally determined by the attacker's score, it doesn't help to include this on the plot. It just distracts from the signal.\n", "This paper presents a study of reinforcement learning methods applied to Erdos-Selfridge-Spencer games, a particular type of two-agent, zero-sum game. The authors describe the game and some of its properties, notably that there exists a tractable potential function that indicates optimal play for each player for every state of the board. This is used as a sort of ground truth that enables study of the behavior of certain reinforcement learning algorithms (for just one or both players). An empirical study is performed, measuring the performance of both agents, tuning the difficulty of the game for each agent by changing the starting position of the game.\n\n- The comparison of supervised learning vs RL performance is interesting. Is the supervised algorithm only able to implement Markovian policies? Is the RL agent able to find policies with longer-term dependence that it can follow? Is that what is meant by the sentence on page 6 \"We conjecture that reinforcement learning is learning to focus most on moves that matter for winning\"? \n\n- Why do you think the defender trained as part of a multiagent setting generalizes better than the single agent defender? Is there something different about the distribution of policies seen by each defender? \n\nQuality: The method appears to be technically correct, clearly written, and easy to read.\n\nOriginality: I believe this is the first use of ESS games to study RL algorithms. I am also not aware of previous attempts to use games with known potential functions/optimal moves as a way to study the performance of RL algorithms.\n\nImpact: I think this is an interesting and creative contribution to studying RL, particularly the use of an easy-to-analyze game in an RL setting. ", "This paper presents an adversarial combinatorial game, the Erdos-Selfridge-Spencer attacker-defender game, with the goal of using it as a benchmark for reinforcement learning. It first compares PPO, A2C, and DQN on the task of defending vs. an epsilon-sub-optimal attacker, with varying levels of difficulty. Secondly it compares RL and supervised learning (as they know the optimal action at all times). Then it trains (RL) the attacker, and finally trains the attacker and the defender (each a separate model) jointly/concurrently.\n\nVarious points:\n - The explanation of the Erdos-Selfridge-Spencer attacker-defender game is clear.\n - As noted by the authors in section 5, with this featurization, the network only has to learn the weight \"to multiply\" (the multiplication is already the inner product) the feature x_i to be 2^{-(K-i)}, K is fixed for an experiment, and i is the index of the feature, thus can be matched by the index of the weight (vector or diagonal matrix). The defender network has to do this to the features of A and of B, and compare the values; the attacker (with the action space following theorem 3) has to do this for (at most) K progressive partitions. All of this leads me to think that a linear baseline is a must-have in most of the plots, not just Figure 15 in the appendix on one task, more so as the environment (game) is new. 
A linear baseline also allows for easy interpretation of what is learned (is it the exact formula of phi(S)?), and can be parametrized to work with varying values of K.\n - In the experimental section, it seems (due to transparent coloring in the plots, which I understand to be the minimum and maximum values as said in the text in section 4.1, or is that a confidence interval or standard deviation(s)? In any case:) that 3 random seeds are sometimes not enough to derive strong conclusions, in particular in Figure 9.\n - Everything leads me to believe that, up to 6.2, the game is only dealt with as a fixed MDP to be overfit by the model through RL:\n - there is no generalization from K=k (train) to K > k (test).\n - sections 6.2, 6.3 and the appendix are more promising but there is only one experiment with potential=1.0 (which is the most interesting operating point for multiagent training) in Figure 8, and potential=0.999 in the appendix. There is no study of the dynamics of attacks/defenses (best responses or Nash equilibrium).\n\nNits:\n - in Figure 8, there is no need to plot both the attacker and defender rewards.\n - Figure 3 overwrites the x axis of the top figures.\n - Figure 4 right y-axis should be \"average rewards\".\n\nIt seems the game is easy from a reinforcement learning standpoint, and this is not necessarily a bad thing, but then the experimental study should be more rigorous in terms of convergence, error bars, and baselines.", "Thank you for running the additional experiment, comparing your supervised learning setup to the RL approach. I think this is an interesting empirical distinction, that a supervised learner makes more fatal mistakes as the game becomes more complex. But I think this raises more questions: What is the fundamental difference between these two approaches that leads to this performance gap? Can RL be seen as a minimax routine, whereas the supervised learner may perform well on average? Is there a way to fairly compare/construct the different objectives these two approaches may be optimizing? I believe you've highlighted an interesting phenomenon, but I wish more understanding had been cultivated by its analysis. \n\nI am keeping the same score; I still think this is interesting work, but I think the paper can be improved by coupling the empirical study with more analysis. ", "Thank you for your detailed responses to the rebuttal. We have responses to your two main concerns, which we hope can help address the issues you raise.\n\n#### I remain unconvinced that these games are good general tests for deep reinforcement learning. I think this would require more theoretical justification of why a deep learner (or shallow learner) simply cannot learn them efficiently, and I am not sure that is possible. #####\n\nNote that the data distribution, while linearly separable, has an exponentially small margin: there can be as little as 2^{-K} difference between the two sets A, B. In many contexts, this exponentially small margin typically results in exponentially large sample complexity for learning linear separators, e.g. the paper [1], or the lecture notes [2]. \n\n[1]: Sivan Sabato, Nathan Srebro, Naftali Tishby. Tight Sample Complexity of Large Margin Learning. 
Journal of Machine Learning Research 14 (2013) 2119-2149.\n[2]: https://www.cs.princeton.edu/courses/archive/fall16/cos402/lectures/402-lec4.pdf \n\nWhile we do not have a proof in our case, we believe that it may be possible to use similar arguments based on the exponentially small margin to try showing a lower bound that shallow learners cannot learn optimal play efficiently here as well. \n\n\n#### Under these circumstances both moves can lead to success, and so both are optimal. To put this another way, a perfect player (one that never lost when it could win) could chose the set with the lower potential under these conditions and still win every time. ####\n\nThanks for raising this issue; we agree with your point, namely that there’s only truly a \"right\" answer when one of A or B has potential < 0.5 and the other has potential > 0.5. (In any other case, as you note, while it still seems the most natural to delete the set of higher potential, it doesn't actually \"matter\" from the point of view of preserving the minimax value in the game tree.)\n\nBased on the initial set of reviews, we took this issue into account, and have added two new results to the topic in Section 5 (on Supervised Learning vs. RL) to address the point. Both of these new results address the question of which moves in the game we should be using to test the supervised learner relative to the RL agent.\n\nTo describe these, suppose that p_A is the potential of set A, and p_B is the potential of set B, and let us rename the sets if necessary so that p_A \\leq p_B. In both results, we look only at a subsequence of moves, rather than all moves.\n\nIn the first new result, left pane in Figure 6 in the revised version, we look only at moves where the current potential is < 1, but there is a “wrong move” (a “fatal mistake”) that can make the potential in the next configuration > 1. In particular, this means that p_A + p_B < 1 (so the current position is a forced win for the defender) but p_B \\geq 0.5 (so that if A is removed, this is a fatal mistake that converts the game to a forced loss for the defender).\n\nIn the second new result, which we also worked out but haven’t reported in the current revision, we consider the case raised in your discussion -- the subset of moves where it matters for the minimax value which of A or B is chosen. This corresponds to p_A < 0.5 and p_B \\geq 0.5.\n\nIn both cases, then, we look only at the subsequence of moves specified by the indicated predicate on potential functions, and we see which of the supervised learning algorithm or the RL agent has better performance. The results are very similar in the two cases -- the RL agent has better performance, and significantly so as K increases.\n\nAs we note in the revised Section 5, this adds to the discussion of the contrast between the supervised learning method and the RL agent. Essentially, it suggests that while the mathematical theory of ESS games suggests a simple closed-form optimal solution -- to always delete the set of lower potential -- this is not (as you observe) the only optimal solution, since it doesn’t matter which set is deleted when max(p_A,p_B) < 0.5 or when min(p_A,p_B) \\geq 0.5. And while supervised learning is better at matching the simple closed-form solution from the discrete math literature, it performs worse than the RL agent when restricted to the subsequence of moves that actually matter. 
In this way, the RL agent is performing better overall in winning the game -- while it deviates more from the strategy suggested in the mathematical literature, the strategy it actually arrives at is effective for obtaining high reward.\n\nOur revised text tries to reflect this distinction, and in a further revision we will include the second of the two additional tests above, and emphasize this point additionally in the text.\n", "Thanks for the additional work, I think it makes the paper better, and puts data behind things that were previously claimed with too little support. I am still a bit unconvinced as far as the modeling goes, as there indeed seems to be overfitting going on, but I revise my overall mark from 5 to 6, as your update renders the ESS game more interesting as a \"simple\" benchmark for RL algorithms (at least your update gave me a better comprehension of why ESS is interesting).", ">> ####” [p4, sec 4.1] \"strategy unevenly partitions the occupied levels...with the proportional difference between the two sets being sampled randomly\"” ######\n>> To clarify the method: we randomly pick a proportion C that is **bounded away*** from 0.5, and let the potential of the first set be C*potential. We then greedily fill up a set until its potential first crosses C*potential. The remainder of the pieces go to the other set. The states are typically of significantly different potentials when sampled this way, due to the ability to increment by very small amounts (amounts of 2^-K). \n>> \n\nThank you for the clarification. Does this **bounded away** notion mean that you can actually end up choosing a partition where one set has potential <0.5, even if there exists a partition where both sets have potentials >0.5? The random element suggests this is so.\n\n>> ##### Later on (Theorem 3) we see how an optimal partition is generated….The first part will typically have a slightly lower potential than the other and all layers other than layer l will be disjoint. #####\n>> In Theorem 3, we show that there is a way to form the partition A, B with **almost** disjoint support: as you’ve written above, A will contain all pieces from (l+1) up, and B from (l-1) and below, ***but*** they might share pieces in level l (with the optimal splitting done by the environment.) As a result, there is no bias towards the first set having a slightly lower potential. \n>> \n\nBut these partitions are a very restricted subset of the possible (optimal) playing choices.\n\n\n>> ##### [p7 Fig 6 and text] ######\n>> It looks like your comment here is unfinished? More than data being in different bins, the important aspect to make the comparison fair is that the RL agent and the Supervised Model see exactly the same data, which we ensure by first generating the data with the RL agent interacting with the environment, and using that data for supervised training.\n>> \n\nYou are right. Please accept my apologies. I meant to say that if there are lots of training examples containing states that appear most often 3-7 moves from the end of the game, then this will influence the performance in this region of the graph. I am now not sure that this is correct though. I think it is more likely to relate to the difference in loss function of the two approaches (as discussed above).\n\n>> ##### [p8 proof of theorem 3] #####\n>> Thank you for the questions about the proof, we’ve corrected the indexing typo and we hope the argument is clearer now (in the v2 uploaded!) 
We’d be happy to take additional questions on this.\n>> \n\nYes, this is now clearer.\n\n>> ##### [p9 Fig 8] #####\n>> We’ve edited the figure to remove the dashed lines which hopefully makes the curves clearer.\n\nThank you, these are now clearer.", ">> We investigate this further in Figure 6 with a new plot (left pane), through studying “fatal mistakes” -- errors made that take the agent from a winning state to a losing state. We find that Supervised Learning is much more prone to fatal mistakes than RL, suggesting a natural basis for the worse performance.\n>> \n\nYes, this is a helpful piece of analysis.\n\n>> ##### Hyperparameter Choice for Deep RL architectures #####\n>> Aside from experiments to determine the effect of depth and width of architectures, we used minimal hyperparameter tuning. While additional hyperparameter tuning would have likely helped improve performance, the overall conclusions drawn from the paper (better generalization in multiagent vs single agent, ability of RL to avoid “fatal mistakes” made by supervised learning, decrease in rewards as game difficulty increased) would not have been affected by hyperparameter changes.\n>> \n\nThis is a very strong claim. I am not sure I am able to to assess its validity without a theoretical justification or empirical evidence that hyperparameter tuning has no qualitative effect.\n\n\n>> ##### “ As the authors state, this paper is an empirical evaluation, and the theorems presented are derived from earlier work” #####\n>> This is not completely the case: Theorem 3 is a theoretical contribution that is original to this paper; and it is an important component of the paper since without it, training an attacker agent would be intractable. The earlier theorems are explained in detail since the central approach of the paper is based on the linearly expressible potential function and its connection to the optimal policy, and one needs the proofs of these earlier theorems -- not simply their statements -- in order to understand this structure.\n>>\n\nIt would have helped to be clearer about this in the paper.\n\n \n>> ##### “The authors compare linear models with non-linear models for attacker policies” #####\n>> This is incorrect -- we don’t use linear models for attacker policies.\n>> \n\nThank you for the clarification.\n\n\n>> #### [p4, end of sec 3] Continuous changes of potential ####\n>> The potential changes are indeed due to the discrete initial game state, but for a game with K levels, we can adjust the potential in increments of 2^-K (e.g. for K=20, we can adjust the potential in increments of ~0.0000009) which seems to be a reasonable approximation to continuous. \n>> \n\nHowever, it is not continuous. Inclusion of the word 'effectively' would have avoided this criticism. For the reader to understand the methods and claims, surely it is reasonable to expect precision.\n", "Thank you for your thoughtful responses. I have tried to respond to each below. I think that the work is interesting, but I am keeping my recommendation as weak reject. In particular, the first two comments below indicate the issues I consider most problematic.\n\n>> ##### “Optimal policies expressible as a linear model” #####\n>> We have added a linear baseline (Figure 2 and Section 4.1.1) showing that these games are hard to learn well with what is theoretically an expressive enough model. This is additional motivation for studying deeper architectures.\n>> \n\nJust to be precise, it doesn't show that the games are hard to learn with any method. 
It shows that the current methods do not learn well on these games. The fact that there is an optimal policy based on a linear function of the input space means that a deep learner isn't actually needed. The authors could do more in the paper to give some intuition to why these policies are hard to learn. Is it because the weights of the optimal policy are so different in terms of magnitude? If so, perhaps a linear model with the weights trained in the log domain would suffice to learn these games efficiently. Also, what features are the deep learners actually learning that enables them to improve beyond that of a linear learner? Some investigation of that would help.\n\nI remain unconvinced that these games are good general tests for deep reinforcment learning. I think this would require more theoretical justification of why a deep learner (or shallow learner) simply cannot learn them efficiently, and I am not sure that is possible.\n\n\n>> #### “'incorrect actions' in the supervised learning evaluations” ####\n>> The natural formulation of the optimal policy is for the agent to keep the potential in the next state reached as small as possible; this ensures that the agent preserves the minimax value of the game from all states. This policy corresponds to choosing the set of larger potential to delete; correspondingly, an incorrect action is one where the agent does not choose the set of larger potential. In the supervised learning setting, we have a starting potential < 1, so the defender, if playing according to the optimal policy, is guaranteed a win and one of A, B will always have potential < 0.5. Even if, due to suboptimal play, there was a state where A, B both have potential > 0.5, the policy that minimizes the potential in the next state would still have a well-defined move, which is to choose the set of larger potential to delete; we would view this as the correct move under optimal play.\n>> \n\nUnder these circumstances both moves can lead to success, and so both are optimal. To put this another way, a perfect player (one that never lost when it could win) could chose the set with the lower potential under these conditions and still win every time. The loss function in the RL domain is whether the game is won or lost, while the loss function in your supervised learning evaluation is different. While this doesn't mean the two can be compared, I would recommend discussing the meaning of this with more care.\n\n\n>> ##### Figure 4 (in old version) now Figure 5 ####\n>> Setup for Figure 4: we first train a RL defender agent to play the game, and store all the game trajectories that it sees. We then take each state in the game, and label it with the correct action (as described in the comment above, i.e. the correct action is picking the `larger’ set to destroy, which is what the optimal policy does.) We train a model in a supervised fashion on this labelled dataset \n>> \n>> We now have edited Figure 4 based on your feedback to make it clearer (it is Figure 5 in the new version.) The left pane shows the proportion of correct actions for different K achieved by RL and Supervised Learning. \n>> \n>> Unsurprisingly, we see that supervised learning is better than RL in terms of number of correct actions. However, RL is better at playing the game: we take a model trained in (1) supervised fashion (2) with RL and test it on the environment, and find that RL achieves significantly higher reward, particularly for larger K. \n>> \n\nAgain, this relates to the loss function. 
In the supervised learning case, you are penalising \"incorrect\" actions uniformly, whereas in the reinforcement learning case the learner will place more emphasis on some actions than others.\n\nThere is a related notion that the best action in the context of an optimal player, may not always be the best action in the context of a suboptimal player. Your notion of incorrect action assumes that you are playing an optimal player. It may be worth making the adversarial nature of the domain more explicit.\n", "Dear Reviewer,\n\nHappy new year! We would be very grateful to know your thoughts on our paper revision and rebuttal, which we hope has answered the points raised.\n\nBest,\n\nThe Authors", "Dear Reviewer,\n\nHappy new year! We would be very grateful to hear your responses to our paper revision and rebuttal. In particular, we believe that the paper revision has additional figures that answer some of your questions. \n\nBest,\n\nThe Authors", "Dear Reviewer,\n\nHappy new year! We would be very grateful to hear your responses to our paper revision and rebuttal.\n\nBest,\n\nThe Authors", "####” [p4, sec 4.1]\n\"strategy unevenly partitions the occupied levels...with the proportional difference between the two sets being sampled randomly\"” ######\nTo clarify the method: we randomly pick a proportion C that is **bounded away*** from 0.5, and let the potential of the first set be C*potential. We then greedily fill up a set until its potential first crosses C*potential. The remainder of the pieces go to the other set. The states are typically of significantly different potentials when sampled this way, due to the ability to increment by very small amounts (amounts of 2^-K). \n\n##### Later on (Theorem 3) we see how an optimal partition is generated….The first part will typically have a slightly lower potential than the other and all layers other than layer l will be disjoint. #####\nIn Theorem 3, we show that there is a way to form the partition A, B with **almost** disjoint support: as you’ve written above, A will contain all pieces from (l+1) up, and B from (l-1) and below, ***but*** they might share pieces in level l (with the optimal splitting done by the environment.) As a result, there is no bias towards the first set having a slightly lower potential. \n\n##### [p7 Fig 6 and text] ######\nIt looks like your comment here is unfinished? More than data being in different bins, the important aspect to make the comparison fair is that the RL agent and the Supervised Model see exactly the same data, which we ensure by first generating the data with the RL agent interacting with the environment, and using that data for supervised training.\n\n##### [p8 proof of theorem 3] #####\nThank you for the questions about the proof, we’ve corrected the indexing typo and we hope the argument is clearer now (in the v2 uploaded!) We’d be happy to take additional questions on this.\n\n##### [p9 Fig 8] #####\nWe’ve edited the figure to remove the dashed lines which hopefully makes the curves clearer.\n", "Thank you for your time in reviewing the paper and your comments! We’ve uploaded a new version of the paper based on the feedback, and have addressed specific points below.\n\n##### “Optimal policies expressible as a linear model” #####\nWe have added a linear baseline (Figure 2 and Section 4.1.1) showing that these games are hard to learn well with what is theoretically an expressive enough model. 
This is additional motivation for studying deeper architectures.\n\n#### “'incorrect actions' in the supervised learning evaluations” ####\nThe natural formulation of the optimal policy is for the agent to keep the potential in the next state reached as small as possible; this ensures that the agent preserves the minimax value of the game from all states. This policy corresponds to choosing the set of larger potential to delete; correspondingly, an incorrect action is one where the agent does not choose the set of larger potential. In the supervised learning setting, we have a starting potential < 1, so the defender, if playing according to the optimal policy, is guaranteed a win and one of A, B will always have potential < 0.5. Even if, due to suboptimal play, there was a state where A, B both have potential > 0.5, the policy that minimizes the potential in the next state would still have a well-defined move, which is to choose the set of larger potential to delete; we would view this as the correct move under optimal play.\n\n##### Figure 4 (in old version) now Figure 5 ####\nSetup for Figure 4: we first train a RL defender agent to play the game, and store all the game trajectories that it sees. We then take each state in the game, and label it with the correct action (as described in the comment above, i.e. the correct action is picking the `larger’ set to destroy, which is what the optimal policy does.) We train a model in a supervised fashion on this labelled dataset \n\nWe now have edited Figure 4 based on your feedback to make it clearer (it is Figure 5 in the new version.) The left pane shows the proportion of correct actions for different K achieved by RL and Supervised Learning. \n\nUnsurprisingly, we see that supervised learning is better than RL in terms of number of correct actions. However, RL is better at playing the game: we take a model trained in (1) supervised fashion (2) with RL and test it on the environment, and find that RL achieves significantly higher reward, particularly for larger K. \n\nWe investigate this further in Figure 6 with a new plot (left pane), through studying “fatal mistakes” -- errors made that take the agent from a winning state to a losing state. We find that Supervised Learning is much more prone to fatal mistakes than RL, suggesting a natural basis for the worse performance.\n\n##### Hyperparameter Choice for Deep RL architectures #####\nAside from experiments to determine the effect of depth and width of architectures, we used minimal hyperparameter tuning. While additional hyperparameter tuning would have likely helped improve performance, the overall conclusions drawn from the paper (better generalization in multiagent vs single agent, ability of RL to avoid “fatal mistakes” made by supervised learning, decrease in rewards as game difficulty increased) would not have been affected by hyperparameter changes.\n\n##### “ As the authors state, this paper is an empirical evaluation, and the theorems presented are derived from earlier work” #####\nThis is not completely the case: Theorem 3 is a theoretical contribution that is original to this paper; and it is an important component of the paper since without it, training an attacker agent would be intractable. 
The earlier theorems are explained in detail since the central approach of the paper is based on the linearly expressible potential function and its connection to the optimal policy, and one needs the proofs of these earlier theorems -- not simply their statements -- in order to understand this structure.\n\n##### “The authors compare linear models with non-linear models for attacker policies” #####\nThis is incorrect -- we don’t use linear models for attacker policies.\n\n#### [p4, end of sec 3] Continuous changes of potential ####\nThe potential changes are indeed due to the discrete initial game state, but for a game with K levels, we can adjust the potential in increments of 2^-K (e.g. for K=20, we can adjust the potential in increments of ~0.0000009) which seems to be a reasonable approximation to continuous. \n", "Thank you for your time in reviewing the paper and your comments! We’ve uploaded a new version of the paper based on the feedback, and have addressed specific points below.\n\n##### ”Supervised Learning vs RL” #####\nIn this setting, both Supervised Learning and RL learn markovian policies because there is no additional dependence on previous states. However, supervised learning is less able to associate important moves with their **time delayed** reward. Motivated by your comments, we ran another experiment to explore this (Figure 6 in the new version), where we looked at the number of “fatal mistakes” made by supervised learning vs RL: a fatal mistake being one where the agent makes an irrecoverable error. We found that supervised learning is *much* more prone to fatal mistakes, explaining the worse performance, and validating our conjecture that “reinforcement learning is learning to focus most on moves that matter for winning”\n\n##### “ Why does multiagent generalize better than single agent defender” #####\nTraining in the multiagent setting likely means the defender sees a greater diversity in the data, resulting in a more robust learned policy. Exploring this further could be interesting future work!\n\n##### Summary #####\nThank you for the kind comments! We also believe that it is valuable and unique contribution to have a challenging game but with linearly expressible optimal policy to study RL, make comparisons to Supervised Learning and explore Generalization.\n", "Thank you for your time in reviewing the paper and your comments! We’ve uploaded a new version of the paper based on the feedback, and have addressed specific points below.\n\n##### “A linear baseline is a must have” #####\nWe’ve added a new subsection and results (Figure 2, section 4.1.1.) where we show the performance of linear models that are trained with PPO, A2C and DQN. While theoretically, a linear model is expressive enough to learn the optimal policy, in practice, we see a large improvement in using a deeper model. (We had also observed this in initial experiments with the environment, but omitted it due to the better performance with the deeper models.)\n\n##### “Three random seeds not enough in particular in Figure 9” ######\nWe reran the experiment for the paper v1 figure 9, now figure 10, with 8 random seeds. Due to the larger number of seeds, we plotted the mean and show shaded the standard deviation. 
The figure shows even more clearly the better generalization of multiagent training over single agent training.\n\n##### Figure 4 (in original paper) now Figure 5 #####\nThanks for your comments, we’ve edited Figure 4 (old version), now figure 5 to make the main message clearer: supervised learning does better on a per move basis, but does worse at playing the game.\n\n##### Fatal Mistakes (Figure 6) #####\nWe’ve also interpreted this result further, and show that this performance difference is likely due to supervised learning making many more fatal mistakes (Figure 6) -- errors in play that cannot be recovered from.\n\n#####Figure 8 (original paper) now Figure 9#####\nThanks for the comment, we’ve removed the dashed lines to make the performance clearer.\n\n#####Multiagent training at potential 1.0#####\nThere aren’t many plots in the paper with potential=1.0 for multiagent training because training an attacker agent successfully is much harder than training the defender agent (larger action space), and for larger K, the attacker performs poorly at potential~1.0, with the defender typically dominating.\n\n##### “There is no generalization from K=k train to K > k (test)” #####\nAs we understand it, this would involve picking a K_0 for training, and then testing on K > K_0 during test time. But we don't see a way for this to produce useful insights, since if the model has never seen pieces at levels other than the K_0 levels shown at train time, we cannot expect that it will learn the correct weighting for the levels it hasn’t seen at all.\nWe therefore try the converse of this, where we train on K_0, and test on K, K < K_0, and find that decreasing K does not improve play. While this does suggest that the model is overfitting to K_0, we believe this is an interesting phenomena, highlighting some of the weaknesses of the current methodology. Determining how to adapt our methods to enable this generalization across different K would be exciting to explore in the future.\n\n####Summary####\nWe believe we have addressed the main points of your response (linear baselines, more seeds, additional interpretation plots) as well as clarified certain points of confusion (multiagent training at potential 1, generalization at different levels.) Our results present a environment that has variable difficulty and is challenging to learn, but also a known, simple optimal policy to compare to. The environment demonstrates many of the typical phenomena observed with RL and provides insights into Supervised Learning vs RL, the effects of multiagent play, and also generalization and catastrophic forgetting. We strongly believe that further work on this environment will help develop more robust RL methods.\n" ]
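The responses above repeatedly refer to the game's potential function and to the optimal defender policy of destroying the set with the larger potential. A minimal sketch of that computation is given below, assuming the usual convention that a piece at level l contributes 2^-l to the potential (so a game with K levels can change the potential in increments of 2^-K); the function names and the example state are illustrative assumptions, not taken from the paper's code.

    # Sketch of the potential function and the greedy "destroy the larger set"
    # defender move discussed above.  Level 0 is taken to be the attacker's goal,
    # so a piece at level l contributes 2 ** -l to the potential (an assumption).

    def potential(levels):
        """Sum of 2^-l over all pieces; per the discussion above, the defender is
        guaranteed a win under optimal play when the starting potential is below 1."""
        return sum(2.0 ** -l for l in levels)

    def optimal_defender_move(set_a, set_b):
        """Destroy whichever set has the larger potential, which keeps the potential
        of the next state as small as possible (the 'correct action' used to label
        states for the supervised-learning comparison)."""
        return "destroy A" if potential(set_a) >= potential(set_b) else "destroy B"

    # One piece near the goal outweighs several distant pieces:
    print(optimal_defender_move([1, 5], [4, 4, 4, 4]))  # destroy A (0.53125 vs 0.25)

A "fatal mistake", as used in the responses above, is then a move that takes the defender from a state it can still win to one it cannot.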
[ 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkCnm-bAb", "iclr_2018_HkCnm-bAb", "iclr_2018_HkCnm-bAb", "H1KaWSmff", "SJNicmVeM", "Sy39ZHQMf", "SJK3Xl2mz", "SJIqzlhXz", "SJNicmVeM", "SJNicmVeM", "rynBydweG", "r16fdy3xG", "SJVXQS7Gf", "SJNicmVeM", "rynBydweG", "r16fdy3xG" ]
iclr_2018_Bya8fGWAZ
Value Propagation Networks
We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We evaluate on configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes. Furthermore, we show that the module enables to learn to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.
workshop-papers
This paper and its reviews make for a difficult call. The reviewers appear to be in agreement that Value Propagation provides an interesting algorithmic advance over earlier work on Value Iteration networks. AnonReviewer1 gives a strong rationale why the advance is both original and significant. Their experiments also show very nice results with VProp and MVProp in 2-D grid-worlds. However, I also fully agree with AnonReviewer2 that testing in other domains beyond 2-D grid-world is necessary. Earlier work on VIN was also tested on a Mars Rover / continuous control domain, as well as a graph-based web navigation task. The authors' rebuttal on this point comes across as weak. In their view, they can't tackle real-world domains until VProp has been proven effective in large, complex grid-worlds. I don't buy this at all -- they could start initial experiments right away, which would perhaps yield some surprising results. Given this analysis, the committee recommends this paper for workshop. Pros: significant algorithmic advance, good technical quality and writeup, nice results in 2-D grid world. Con: Validation is only in 2-D grid-world domains.
train
[ "Sy5I_xKgM", "S1I3_bqgM", "rJRfJZKxf", "By8Sseq7G", "r1ewFxqXG", "rkRPdgqQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The original value-iteration network paper assumed that it was trained on near-expert trajectories and used that information to learn a convolutional transition model that could be used to solve new problem instances effectively without further training.\n\nThis paper extends that work by\n- training from reinforcement signals only, rather than near-expert trajectories\n- making the transition model more state-depdendent\n- scaling to larger problem domains by propagating reward values for navigational goals in a special way\n\nThe paper is fairly clear and these extensions are reasonable. However, I just don't think the focus on 2D grid-based navigation has sufficient interest and impact. It's true that the original VIN paper worked in a grid-navigation domain, but they also had a domain with a fairly different structure; I believe they used the gridworld because it was a convenient initial test case, but not because of its inherent value. So, making improvements to help solve grid-worlds better is not so motivating. It may be possible to motivate and demonstrate the methods of this paper in other domains, however. The work on dynamic environments was an interesting step: it would have been interesting to see how the \"models\" learned for the dynamic environments differed from those for static environments.\n\n", "ORIGINALITY & SIGNIFICANCE\n\nThe authors build upon value iteration networks: the idea that the value function can be computed efficiently from rewards and transitions using a dedicated convolutional network. The authors point out that the original \"value iteration network” (Tamar 2016) did not handle non-stationary dynamics models or variable size problems well and propose a new formulation to extend the model to this case which they call a value propagation network. It seems useful and practical to compute value iteration explicitly as this will propagate values for us without having to learn the propagated form through extensive gradient update steps. Extending to the scenario of non-stationary dynamics is important to make the idea applicable to common problems. The work is therefore original and significant.\n\nThe algorithm is evaluated on the original obstacle grids from Tamar 2016 and larger grids generated to test scalability. The authors Prop and MVProp are able to solve the grids with much higher reliability at the end of training and converge much faster. The M in MVProp in particular seems to be very useful in scaling up to the large grids. The authors also show that the algorithm handles non-stationary dynamics in an avalanche task where obstacles can fall over time.\n\n\nQUALITY\n\nThe symbol d_{rew} is never defined — what does “new” stand for? It appears to be the number of latent convolutional filters or channels generated by the state embedding network. \n\nSection 2.2 Sentence 2: The final layer representing the encoding is given as ( R^{d_rew x d_x x d_y }.\nBased on the description in the first paragraph of section 2, it sounds like d_rew might be the number of channels or filters in the last convolutional layer. \n\nIn equation 1, it wasn’t obvious to me that the expression max_a q_{ij}^{k-1} q^{k} corresponds to an actual operation?\nThe h( \\Phi( x ), v^{k-1} ) sort of makes sense … value is only calculated with respect to only the observation of the maze obstacles but the policy \\pi is calculated with respect to the joint observation and agent state. 
\n\nThe expression \n\n h_{aid} ( \\phi(0), v ) = < Wa, [ \\phi(o) ; v ] > + b\n\nmakes sense and reminds me of the Value Iteration network work where we take the previous value function, combine it with the reward function and use convolution to compute the expectation (the weights Wa encode the effect of transitions). I gather the tensor Wa = R^{|A| x (d_{rew} x d_x x d_y } both converts the feature embedding \\phi{o} to rewards and represents the transition / propagation of reward across states due to transitions and discounts at the same time? \n\nI didn’t understand the r^in, r&out representation in section 4.1. These are given by the domain?\n\nI did get the overall idea of efficiently creating a local value function in the neighborhood of the current state and passing this to the policy so that it can make a local decision.\n\nA bit more detail defining terms, explaining their intuitive role and how the output of one module feeds into the next would be helpful.\n\n\nPOST REVISION COMMENTS:\n\n- I didn't reread the whole thing - just used the diff tool. \n- It looks like the typos in the equations got fixed\n- The new phrase \"enables to learn to plan\" seems pretty awkward\n\n", "The paper introduces two alternatives to value iteration network (VIN) proposed by Tamar et al. VIN was proposed to tackle the task of learning to plan using as inputs a position and an image of the map of the environment. The authors propose two new updates value propagation (VProp) and max propagation (MVProp), which are roughly speaking additive and multiplicative versions of the update used in the Bellman-Ford algorithm for shortest path. The approaches are evaluated in grid worlds with and without other agents.\n\nI had some difficulty to understand the paper because of its presentation and writing (see below). \n\nIn Tamar's work, a mapping from observation to reward is learned. It seems this is not the case for VProp and MVProp, given the gradient updates provided in p.5. As a consequence, those two methods need to take as input a new reward function for every new map. Is that correct?\nI think this could explain the better experimental results\n\nIn the experimental part, the results for VIN are worse than those reported in Tamar et al.'s paper. Why did you use your own implementation of VIN and not Tamar et al.'s, which is publicly shared as far as I know?\n\nI think the writing needs to be improved on the following points:\n- The abstract doesn't fit well the content of the paper. For instance, \"its variants\" is confusing because there is only other variant to VProp. \"Adversarial agents\" is also misleading because those agents act like automata.\n\n- The authors should recall more thoroughly and precisely the work of Tamar et al., on which their work is based to make the paper more self-contained, e.g., (1) is hardly understandable.\n\n- The writing should be careful, e.g., \nvalue iteration is presented as a learning algorithm (which in my opinion is not) \n\\pi^* is defined as a distribution over state-action space and then \\pi is defined as a function; ...\n\n- The mathematical writing should be more rigorous, e.g., \np.2:\nT: s \\to a \\to s', \\pi : s \\to a\nA denotes a set and its cardinal\nIn (1), shouldn't it be \\Phi(o)? all the new terms should be explained\np. 3:\ndefinition of T and R \nshouldn't V_{ij}^k depend on Q_{aij}^k?\nT_{::aij} should be defined\nIn the definition of h_{aij}, should \\Phi and b be indexed by a?\n\n- The typos and other issues should be fixed:\np. 
3:\nK iteration\nwith capable\np.4:\nclose 0\np.5:\nour our\ns^{t+1} should be defined like the other terms\n\"The state is represented by the coordinates of the agent and 2D environment observation\" should appear much earlier in the paper. \n\"\\pi_\\theta described in the previous sections\", notation \\pi_\\theta appears the first time here...\n3x3 -> 3 \\times 3\nofB\nV_{\\theta^t w^t}\np.6:\nthe the\nFig.2's caption:\nWhat does \"both cases\" refer to? They are three models.\nReferences:\net al.\nYI WU\n", "Thank you for reviewing our work. We would like to address your comment about the relevancy of gridworlds as testbeds for our own work by providing three counter-arguments:\n\n- First and foremost, we have decided to focus on gridworlds because they are a largely used benchmark for work such as ours, and as such it allows to quickly compare methods. In particular, while Tamar et al. have indeed provided a variegated experimental section, their work has been almost entirely evaluated and re-used in experiments on gridworld or gridworld-like environments, which biased our experimental section towards making sure that such users would find it especially compelling. Sections 2 and 3 of our manuscript present some of such papers (e.g. [1], [2], [3]).\n\n- On all applications with a 2D structure of the original VIN paper, our method can be used as a drop-in replacement for the VI module. Whereas only experiments could confirm that our approach works on these domains as well, we believe our approach should indeed work on them (as the structure of the problem is always similar).\n\n- Finally, gridworld environments, while of simple construction and reasoning, can provide challenges that current algorithms are clearly unable to solve. We have for instance shown that when the environment becomes even slightly larger than sizes commonly used, state-of-the-art models struggle to learn and converge smoothly. We would like the community to take our work as inspiration and try tackling gridworlds whose parameters (sizes, complexity of dynamics, sparsity of rewards, etc.) are pushed to areas that current algorithms cannot hope to solve. We for instance would like to reach a point where VProp we can tackle both _large_ and extremely _complex_ gridworlds, which would enable applied research to seriously consider it a planner that can be deployed in live systems.\n\n\n[1] https://arxiv.org/abs/1709.05273\n[2] https://arxiv.org/abs/1702.03920\n[3] https://arxiv.org/abs/1709.05706", "Thank you for reading our submission. Here's a response to the comments you made:\n\n- There is a single \"reward map\", as in Tamar et al. The reward used for the gradient update is that of the true task (e.g., -1 on hitting a wall, +1 on reaching the goal), not the reward map that is learnt.\n\n- As mentioned in Section 5.2, the best results we obtained from VIN _did_ match the numbers shown in the paper, however we saw a large variance in performance wrt random seeds when evaluated on many trials, even after some search on the RL hyperparameters. \nThe original code release provided code for the supervised learning experiments, so it wasn’t applicable to our setup. In any case, we are confident our Pytorch implementation of the VIN model is essentially identical to the one in Theano provided by the authors, as it’s a relatively simple architecture and there are multiple similar implementations online.\n\n- Thank you for pointing out the typos in the abstract and the rest of the paper. 
These are going to be fixed in the version we will upload in a couple of days. We would like to point out that equation (1) has a typo and is hard to parse because of missing spaces (please, see the answer to reviewer 1). We appreciate the comment on clarity, and we will add a simpler explanation of VIN in the background section, which should help make the explanation of the baseline more readable.\n\n- Regarding your comments on the formalism regarding value iteration (and \\pi), we will add a paragraph explaining that at training time we use stochastic policies, while testing with deterministic ones.\n\n- As far as we can see, the action set is usually denoted with the calligraphic letter, while the cardinal in standard uppercase when needed.\n\n- Further thanks for spotting the mistake in the definition of T (and R) on page 3. Please refer to our response to AnonReviewer1 (second point), we will correct the mistake.", "Thank you for reading and reviewing our work. We really appreciate the comments on novelty and significance.\n\n- As you indeed spotted, d_rew definition is implied in Section 2, where it indicates the number of feature channels extracted by a embedding function in the input. We used it to refer to Tamar et al. ‘16, but we agree that it’s a bit confusing if you don’t carefully read the section. We’ll rename it to d_feat.\n\n- Equation (1) is a literal translation of the paragraph above it (even though there is a typo). It is hard to parse because there are missing spaces between q^{k-1}_{aij} and q^k, so the reader can’t see there are two equalities; also we will make it clear that v^k depends on q^k and not q^{k-1}. That max operation in equation (1) is formalization of the max-pooling operation performed by the convnet at each iteration of k. In the case of VI it so happens that the operation is performed over the set of actions A, and it’s useful to point it out to the reader to provide a summary of the value iteration -> VI module mapping.\n\n- W_a indeed computes the transition map when d_rew := A and \\phi(o) := R. This particular formulation is useful when implementing a VI module, as it provides the dimensions for the module when using a single fully-connected linear layer to represent the transform.\n\n- Thank you for spotting the missing definition. r_in and r_out are the reward propagation maps that can be generated by reparametrizing the single VI reward map. They are properly defined and used in the following paragraph to define VProp’s value recurrence, but we’ll add a quick explanation where they are first mentioned." ]
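Both reviews above paraphrase the value-iteration (VI) module of Tamar et al. (2016) that VProp modifies: a reward map and the current value map are stacked, a convolution produces one Q-channel per abstract action, and a max over the action channels yields the next value map, repeated for K iterations. The sketch below is a minimal PyTorch-style rendering of that baseline recurrence only, not of the VProp/MVProp updates proposed in the paper; the module name, channel sizes and iteration count are illustrative assumptions.

    # Minimal sketch of a VIN-style value-iteration module (the baseline recurrence
    # described in the reviews above); names and sizes are illustrative only.
    import torch
    import torch.nn as nn

    class VIModule(nn.Module):
        def __init__(self, n_actions=8, k_iterations=20):
            super(VIModule, self).__init__()
            self.k = k_iterations
            # Maps the stacked [reward map; value map] to one Q map per action.
            self.q_conv = nn.Conv2d(2, n_actions, kernel_size=3, padding=1, bias=False)

        def forward(self, reward_map):
            # reward_map: (batch, 1, H, W), e.g. produced by embedding the observation.
            value = torch.zeros_like(reward_map)
            for _ in range(self.k):
                q = self.q_conv(torch.cat([reward_map, value], dim=1))  # (B, A, H, W)
                value, _ = q.max(dim=1, keepdim=True)                   # max over actions
            return value

    # Example: propagate values over a random 16x16 reward map.
    print(VIModule()(torch.randn(1, 1, 16, 16)).shape)  # torch.Size([1, 1, 16, 16])

VProp and MVProp replace the inner update with Bellman-Ford-style propagation (additive and multiplicative, respectively, per the review above), which this sketch does not attempt to reproduce.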
[ 5, 7, 5, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1 ]
[ "iclr_2018_Bya8fGWAZ", "iclr_2018_Bya8fGWAZ", "iclr_2018_Bya8fGWAZ", "Sy5I_xKgM", "rJRfJZKxf", "S1I3_bqgM" ]
iclr_2018_BkfEzz-0-
Neuron as an Agent
Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments. This paper proposes reward distribution using {\em Neuron as an Agent} (NaaA) in MARL without a TTP with two key ideas: (i) inter-agent reward distribution and (ii) auction theory. Auction theory is introduced because inter-agent reward distribution is insufficient for optimization. Agents in NaaA maximize their profits (the difference between reward and cost) and, as a theoretical result, the auction mechanism is shown to have agents autonomously evaluate counterfactual returns as the values of other agents. NaaA enables representation trades in peer-to-peer environments, ultimately regarding units in neural networks as agents. Finally, numerical experiments (a single-agent environment from OpenAI Gym and a multi-agent environment from ViZDoom) confirm that NaaA framework optimization leads to better performance in reinforcement learning.
workshop-papers
The reviewers have significantly different views, with one strongly negative, one strongly positive, and one borderline negative. However, all three reviews seem to regard the NaaA framework as a very interesting and novel approach to training neural nets. They also concur that the major issue with the paper is very confusing technical exposition regarding the motivation, math details, and how the idea works. The authors indicate that they have significantly revised the manuscript to improve the exposition, but none of the reviewers have changed their scores. One reviewer states that "technical details are still too heavy to easily follow." My own take regarding the current section 3 is that it is still very challenging to parse and follow. Given this analysis, the committee recommends this for workshop. Pros: interesting and novel framework for training NNs; the "Adaptive DropConnect" algorithm contribution; good empirical results in image recognition and ViZDoom domains. Cons: technical exposition is very challenging to parse and follow; some author rebuttals do not inspire confidence, e.g. motivating the method by the "$100 billion market cap of Bitcoin" and, in reply to the unconvincing neuroscience motivation, saying "throw away the typical image of auction."
train
[ "Bk3zRoBGz", "H12VRW9gM", "HJSqWxjez", "HkQCEwaXM", "SyM-3W87f", "BJgxl4WbM", "Bykv9-bWG", "H1mjFWW-G", "BJxz0TA1f", "BJTvnoCJG", "BkTpRSRkf", "SJPHs661G", "rkdWpPEJz", "ByRdDLV1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "This paper proposed a novel framework Neuron as an Agent (NaaA) for training neural networks to perform various machine learning tasks, including classification (supervised learning) and sequential decision making (reinforcement learning). The NaaA framework is based on the idea of treating all neural network units as self-interested agents and optimizes the neural network as a multi-agent RL problem. This paper also proposes adaptive dropconnect, which extends dropconnect (Wan et al., 2013) by using an adaptive algorithm for masking network topology.\n \nThis work attempts to bring several fundamental principles in game theory to solve neural network optimization problems in deep learning. Although the ideas are interest and technically sound, and the proposed algorithms are demonstrated to outperform several baselines in various machine learning tasks, there several major problems with this paper, including lacking clarity of presentation, insights and substantiations of many claims. These issues may need a significant amount of effort to fix as I will elaborate more below.\n \n1. Introduction\nThere are several important concepts, such as reward distribution, credit assignment, which are used (from the very beginning of the paper) without explanation until the final part of the paper.\n \nThe motivation of the work is not very clear. There seems to be a gap between the first paragraph and the second paragraph. The authors mentioned that “From a micro perspective, the abstraction capability of each unit contribute to the return of the entire system. Therefore, we address the following questions. Will reinforcement learning work even if we consider each unit as an autonomous agent ”\nIs there any citation for the claim “From a micro perspective, the abstraction capability of each unit contribute to the return of the entire system” ? It seems to me this is a very general claim. Even RL methods with linear function approximations use abstractions. Also, it is unclear to me why this is an interest question. Does it have anything to do with existing issues in DRL? Moreover, The definition of autonomous agent is not clear, do you mean learning agent or policy execution agent?\n \n“it uses \\epsilon-greedy as a policy, …” Do you mean exploration policy?\nI also have some concerns regarding the claim that “We confirm that optimization with the framework of NaaA leads to better performance of RL”. Since there are only two baselines are compared to the proposed method, this claim seems too general to be true.\n \nIt is not clear to why the authors mention that “negative result that the return decreases if we naively consider units as agents”. What is the big picture behind this claim?\n \n“The counterfactual return is that by which we extend reward …” need to be rewritten.\n \nThe last paragraph of introduction discussed the possible applications of the proposed methods without any substantiation, especially neither citations nor any related experiments of the authors are provided.\n \n2 Related Work\n \n“POSG, a class of reinforcement learning with multiple ..” -> reinforcement learning framework\n \n“Another one is credit assignment. Instead of reward.. ” Two sentences are disconnected and need to be rewritten.\n \n“This paper unifies both issues” sounds very weird. Do you mean “solves/considers both issues in a principled way”?\n \nThe introduction of GAN is very abrupt. 
Rather than starting from introducing those new concepts directly, it might be better to mention that the proposed method is related to many important concepts in game theory and GANs.\n \n“,which we propose in a later part of this paper” -> which we propose in this paper\n \n \n3. Background\n \n“a function from the state and the action of an agent to the real value” -> a reward function \n \nShould provide a citation for DRQN\n \nThere is a big gap between the last two paragraphs of section 3.\n \n4. Neuro as an agent\n \n“We add the following assumption for characteristics of the v_i” -> assumptions for characterizing v_i\n \n“to maximize toward maximizing its own return” -> to maximize its own return\n \nWe construct the framework of NaaA from the assumptions -> from these assumptions\n \n“indicates that the unit gives additional value to the obtained data. …” I am not sure what this sentence means, given that \\rho_ijt is not clearly defined.\n \n5. Optimization\n \n“NaaA assumes that all agents are not cooperative but selfish” Why? Is there any justification for such a claim?\n \n \nWhat is the relation between \\rho_jit and q_it ?\n \n“A buyer which cannot receive the activation approximates x_i with …” It is unclear why a buyer need to do so given that it cannot receive the activation anyway.\n \n“Q_it maximizing the equation is designated as the optimal price.” Which equation?\n \ne_j and 0 are not defined in equation 8\n \n \n6 Experiment\nsetare -> set are\n \nwhat is the std for CartPole in table 1\n \nIt is hard to judge the significance of the results on the left side of figure 2. It might be better to add errorbars to those curves\n \nMore description should be provided to explain the reward visualization on the right side of figure 2. What reward? External/internal?\n \n“Specifically, it is applicable to various methods as described below …” Related papers should be cited.", "In this paper, the authors present a novel way to look at a neural network such that each neuron (node) in the network is an agent working to optimize its reward. The paper shows that by appropriately defining the neuron level reward function, the model can learn a better policy in different tasks. For example, if a classification task is formulated as reinforcement learning where the ultimate reward depends on the batch likelihood, the presented formulation (called Adaptive DropConnect in this context) does better on standard datasets when compared with a strong baseline.\n\nThe idea proposed in the paper is quite interesting, but the presentation is severely lacking. In a work that relies heavily on precise mathematical formulation, there are several instances when the details are not addressed leading to ample confusion making it hard to fully comprehend how the idea works. For example, in section 5.1, notations are presented and defined much later or not at all (g_{jit} and d_{it}). Many equations were unclear to me for similar reasons to the point I decided to only skim those parts. Even the definition of external vs. internal environment (section 4) was unclear which is used a few times later. Like, what does it mean when we say, “environment that the multi-agent system itself touches”?\n\nOverall, I think the idea presented in the paper has merit, but without a thorough rewriting of the mathematical sections, it is difficult to fully comprehend its potential and applications.", "The authors consider a Neural Network where the neurons are treated as rational agents. 
In this model, the neurons must pay to observe the activation of neurons upstream. Thus, each individual neuron seeks to maximize the sum of payments it receives from other neurons minus the cost for observing the activations of other neurons (plus an external reward for success at the task). \n\nWhile this is an interesting idea on its surface, the paper suffers from many problems in clarity, motivation, and technical presentation. It would require very major editing to be fit for publication. \n\nThe major problem with this paper is its clarity. See detailed comments below for problems just in the introduction. More generally, the paper is riddled with non sequiturs. The related work section mentions Generative Adversarial Nets. As far as I can tell, this paper has nothing to do with GANs. The Background section introduces notation for POMDPs, never to be used again in the entirety of the paper, before launching into a paragraph about apoptosis in glial cells. \n\nThere is also a general lack of attention to detail. For example, the entire network receives an external reward (R_t^{ex}), presumably for its performance on some task. This reward is dispersed to the the individual agents who receive individual external rewards (R_{it}^{ex}). It is never explained how this reward is allocated even in the authors’ own experiments. The authors state that all units playing NOOP is an equilibrium. While this is certainly believable/expected, such a result would depend on the external rewards R_{it}^{ex}, the observation costs \\sigma_{jit}, and the network topology. None of this is discussed. The authors discuss Pareto optimality without ever formally describing what multi-objective function defines this supposed Pareto boundary. This is pervasive throughout the paper, and is detrimental to the reader’s understanding. \n\nWhile this might be lost because of the clarity problems described above, the model itself is also never really motivated. Why is this an interesting problem? There are many ways to create rational incentives for neurons in a neural net. Why is paying to observe activations the one chosen here? The neuroscientific motivation is not very convincing to me, considering that ultimately these neurons have to hold an auction. Is there an economic motivation? Is it just a different way to train a NN? \n\nDetailed Comments:\n“In the of NaaA” => remove “of”?\n“passing its activation to the unit as cost” => Unclear. What does this mean?\n“performance decreases if we naively consider units as agents” => Performance on what?\n“.. we demonstrate that the agent obeys to maximize its counterfactual return as the Nash Equilibrium“ => Perhaps, this should be rewritten as “Agents maximize their counterfactual return in equilibrium. \n“Subsequently, we present that learning counterfactual return leads the model to learning optimal topology” => Do you mean 
“maximizing” instead of learning. Optimal with respect to what task?\n“pure-randomly” => “randomly”\n “with adaptive algorithm” => “with an adaptive algorithm”\n“the connection” => “connections”\n“In game theory, the outcome maximizing overall reward is named Pareto optimality.” => This is simply incorrect. ", "We uploaded the revised version of our paper.\n\nAs you can see, over 80% of the paper is major edited to improve clarity while the claim is same as the previous version.\n\nEspecially, we make it clear the motivation and the method.\nPlease read throughout the paper again.", "Thank you for reading and commenting our paper. \nWe really appreciate your detailed comments. Most of them were very helpful to brush up our paper. \nWe are about to finalize the paper, and will upload a version which highly improved clarity at 5th Jan. \nSo, please look forward it.\n\nEnjoy the holidays & Have a happy new year.", "Thank you for reading and commenting our paper. We will polish our mathematical formulation to improve your understanding during this period.\n\n> Even the definition of external vs. internal environment (section 4) was unclear which is used a few times later. \n\nAn external environment is the original environment such as Doom and Atari, and an internal environment is a set of units. From an agent's perspective, other units are considered as an environment.\n\nThe quick reference can also be helpful.\n\n Environment for a unit State for a unit Observation for a unit \n ----------------------------------- ------------------------------------------- ---------------------------------------\nExternal original environment original state original observation \nInternal other units activation of all the other units activation of allocated units\nBoth - - be used to predict o_{ijt}\n\n Reward per unit Total reward over units\n ----------------------------------- -------------------------------------------------------------\nExternal original reward total original reward (designer's objective)\nInternal revenue from units - cost 0\nBoth units' objective total original reward (designer's objective)\n\n\n> “environment that the multi-agent system itself touches”\n\nTypically, there is boundary between an agent and an environment (e.g., a robot in a room). We wrote this situation that the agent with a NN (as the multi-agent system) touches the environment. \n\n> In section 5.1, notations are presented and defined much later or not at all (g_{jit} and d_{it}). \n> Many equations were unclear to me for similar reasons to the point \n> Without a thorough rewriting of the mathematical sections, it is difficult to fully comprehend its potential and applications.\n\nAs we will reflect the comments to our paper, please wait for it.", "Thank you for reading and commenting our paper. Let us answer the question first:\n\n> The model itself is also never really motivated. Why is this an interesting problem? \n\nDo you know “Blockchain”, the technology supporting most of virtual currencies such as Bitcoin and Ethereum, which has vast market cap of $100 billions? Also, there was a news that the price of one Bitcoin exceeded $10,000 last week though the price was $1,000 at the beginning of this year. The emerging technology enable us to send incentive among agents in a decentralized environment. 
If an agent can earn money with realistic way such as automatic financial trading for stock, debt and coins, the way agent takes will be reinforcement learning, and it would face a problem of POMDP, because the market is inefficient in which an agent who has informative data can gain advantage. Hence, the agent will buy the informative data from other agent by paying incentive over the multi-agent setting, and the incentive will be distributed in the Blockchain environment.\n\nSuch background raises the question in a face of the paper: \n “will reinforcement learning work even if we consider each unit as an autonomous agent?”\nwhich motivates our framework. To answer the question, there are several issues to address such as “how much is appropriate reward the agent should pay?” and “how to address the social dilemma?”. All the answers are written in the paper.\n\nIf the major problem is clarity as you mentioned, a month will be enough to solve it. \n\n\nResponse to Paragraph 3:\n\n> GANs\n\nAlthough our paper had nothing to do with GANs directly, we mentioned GAN as a game-theoretic approach to model the real environment.\n\n> POMDPs, never to be used again in the entirety of the paper \n\nNotation of POMDP is used in methods such as S_O (below Eq (3)) and \\gamma (in Eq (4)). \nBesides, Eq (1) in the section of PODMP is used to derive Eq (9).\n\n\nResponse to Paragraph 4:\n\n> It is never explained how this reward is allocated even in the authors’ own experiments. \n\nIn the classification and the single-agent setting, the reward is given only to the endpoint of agent. \n\nIn the multi-agent setting, the external reward (reward from the Doom environment) is given to the agents (a main player and a cameraman) with following ways.\n 1. Baseline: endpoint of the main player.\n 2. Comm: endpoint of the main player and the cameraman (the same configuration to the original paper of CommNet) \n 3. NaaA: endpoint of the main player. The reward is pour from the main player to the cameraman as an internal reward.\n\n> The authors state that all units playing NOOP is an equilibrium. While this is certainly believable/expected, such a result would depend on the external rewards R_{it}^{ex}, the observation costs \\sigma_{jit}, and the network topology. None of this is discussed. \n\nAlthough the few agents which can gain the external reward can survive, most of the agents whose R_{it}^{ex} equals to 0 becomes NOOP regardless of its network topology because \\sigma_{jit} will equal 0 at the convergence. As we will post the revised version which contains the proof, please wait for it. \n\n> The authors discuss Pareto optimality without ever formally describing what multi-objective function defines this supposed Pareto boundary. This is pervasive throughout the paper, and is detrimental to the reader’s understanding. \n\nThe objective function is return (discounted cumulative reward). That is,\n Σ_{t=0}^T [ γ^t R_t^{ex} ],\nwhere R_{ex,t} := Σ_i R_{it}^{ex} is overall reward from the external environment. Pareto optimal is defined for the objective functions of all the agents.\n\n(continues...)", "Response to Paragraph 5:\n\n> There are many ways to create rational incentives for neurons in a neural net. 
\n\nAs I don’t think there are many methods for our problem setting, please provide a link.\n \n> The neuroscientific motivation is not very convincing to me, considering that ultimately these neurons have to hold an auction.\n\nAuction is more than auction as it used in mechanism design to orchestrate the actions of agents with mechanism. So, think out of the box, and throw away the typical image of auction.\n\n> Is there an economic motivation? Is it just a different way to train a NN? \n\nYes, there is an economic motivation as well as to improve training a NN.\n\n\nResponse to Paragraph 6 (the detailed comments):\n\n> “passing its activation to the unit as cost” => Unclear. What does this mean?\n\n\"to observe their activation\" is correct.\n(As it was a mistake in the native check process, we will change the native checker later)\n\n> “performance decreases if we naively consider units as agents” => Performance on what?\n\nPerformance on the total cumulative external reward.\n\n> “Subsequently, we present that learning counterfactual return leads the model to learning optimal topology” => Do you mean 
“maximizing” instead of learning. Optimal with respect to what task?\n\nJust like the above answer, it will be optimal with respect to the total cumulative external reward.", "Thank you for being interested in.\n\n> Demo\n\nYou can see our demo for the multi-agent Doom in the following URL.\nhttps://youtu.be/paT2n40QHOA\n\n> Implementation of Adaptive Dropconnect\n\nImplementation is easy because you can use it by just replace a layer.\n\nHere is a sample vanilla code w/o Adaptive DropConnect in pytorch:\n\n 1 class Net(nn.Module): \n 2 def __init__(self): \n 3 super(Net, self).__init__() \n 4 self.conv1 = nn.Conv2d(3, 10, kernel_size=5) \n 5 self.conv2 = nn.Conv2d(10, 20, kernel_size=5) \n 6 self.fc1 = nn.Linear(500,100) \n 7 self.fc2 = nn.Linear(100, 10) \n 8 \n 9 def forward(self, x): \n 10 x = F.relu(F.max_pool2d(self.conv1(x), 2)) \n 11 x = F.relu(F.max_pool2d(self.conv2(x), 2)) \n 12 x = x.view(-1, 500) \n 13 x = F.relu(self.fc1(x, training=self.training)) \n 14 x = F.dropout(x, training=self.training) \n 15 x = self.fc2(x) \n 16 return F.log_softmax(x) \n\nYou can turn on Adaptive DropConnnect by just replace a line with\n 6 self.fc1 = nn.Linear(500,100) \n vvv\n 6 self.fc1 = TradeLinear(500,100,eps=0.2) \nTradeLinear is contained in our provided library, which supports Adaptive DropConnnect and NaaA.", "Good questions.\n\n> External/internal\n\nIn reinforcement learning (RL), there are two parts: an environment and an agent.\nIn \"deep\" RL, there is a neural network inside the agent as a value/policy function approximator.\nThe network contains bunch of units,\nand NaaA considers the network as a multi-agent system, and each unit as an agent.\nFrom perspective of the unit, the other units are considered as an environment.\nTo distinguish from the original environment, we call it an internal environment,\nand call the original environment an external environment.\n\nHere is a quick reference which can also be helpful.\n\n Environment for a unit State for a unit Observation for a unit \n ----------------------------------- ------------------------------------------- ---------------------------------------\nExternal original environment original state original observation \nInternal other units activation of all the other units activation of allocated units\nBoth - - be used to predict o_{ijt}\n\n Reward per unit Total reward over units\n ----------------------------------- -------------------------------------------------------------\nExternal original reward total original reward (designer's objective)\nInternal revenue from units - cost 0\nBoth units' objective total original reward (designer's objective)\n\n\n> Why not use simple neural network\n\nSuppose AIs had ego. 
That is, they maximize not total reward but their own reward in a multi-agent system.\nAlthough recent works such as CommNet supposed cooperate setting, say, all the agents have obtain total reward R,\nif the agents were selfish, there would be no incentive to cooperate, and hence they would not communicate each other.\nThe problem is known as social dilemma (e.g., prisoner's dilemma), and leads the overall reward.\nNaaA enables us to design such a multi-agent setting.\nAlso, NaaA can be used multi-agent setting in which the agents are made by other people.", "The method is general and that makes it widely applicable for many problems.\nI hope that the design concept will be a new basis of studies such as GAN.\n \nI found very challenging trying to find alternative AI patterns or routines\nbased on the cooperation of two AIs. I would like to see this happening more often in videogames.\n Are there any demo videos?\n \nHowever, your idea, Adaptive Dropconnect seems to be complicated to Implement.\nHow can we implement it?", "Why do you just divide the environment into two types: external/internal?\nThere is also another way to simply use neural network.", "\n1) The objective function is return (discounted cumulative reward). That is,\n Σ_{t=0}^T [ γ^t R_t^{ex} ],\n where R_{ex,t} := Σ_i R_{it}^{ex} is overall reward from the external environment.\n\n > POMDP/MDP\n The actual problem we want to solve is POMDP.\n However, we extended it to POSG, multi-agent problem, because we consider bunch of neurons as agents.\n That's why we distinguished it external/internal environment in the paper.\n\n2) Yes, we used DQN-like architecture (Q-learning with neural net and experience replay) to predict counterfactual return of j for i at t o_{ijt}. The detail is as below.\n - The input is a state s_{it}, a coupled vector made of an external state, input vector, and parameter (weight and bias).\n - The output is Q-value Q(s_{it}, g_{ijt}), where g_{ijt} \\in {0, 1} is allocation. Hence, there are 2 |N_i^{in}| Q-values per unit, where |N_i^{in}| is number of j's (indices of connected units from a unit v_i).\n - o_{ijt} is calculated from a pair of scalars from the output: Q(s_{it}, 1) - Q(s_{it}, 0).\n - The model made of one layer, but also deeper architectures can be introduced.\n\n3) ViZDoom partially supports multi-agent environment, but it does not supports communication among the agents.\n So, we extended it with writing the original code which supports communication.\n\n> def of (s_it^ex, \\tilde{x}, \\theta_i)\n\nThe coupled vectors are designed as a state to predict Q-values for o_{ijt}. \nHere is the definition of the each notation.\n- s_it^ex: external state\n- \\tilde{x}: the predicted input vector from limited information. \\tilde{x} := x * g + \\bar{x} * (1-g). \n \\bar{x} is mean value of x.\n- \\theta_i: the parameter of v_i. For example, weight and bias for linear unit.\nPlease also see our answer (2) in this post.", "Very interesting paper. It shows a novel framework to consider all the units as agents.\nEven though the problem setting is challenging, the paper solved it by converting it into a scheme of counterfactual return maximization using an elegant trick from auction theory.\n\nNonetheless, I have several questions about the paper.\n1. What is the objective function? While the author states the problem is POSG, I guess the problem is POMDP/MDP since the paper introduced a Doom-based environment as the experiment. I'm not sure to what the algorithm want to maximize actually.\n2. 
I'm unsure how to predict o_it actually. Though it seems to use Q-learn according to the paper, I want you to provide detail information of the architecture.\n3. As I guess ViZDoom is a single-agent platform, how do you realize the multi-agent setting? I mean, are there some special implementations?\n\nThere are minor comments which may improve your paper:\n - Definition of R and \\pi is missing. I supposed they are a reward function and a policy. \n - Provide definition of (s_it^ex, \\tilde x, \\theta_i) in line 12, algo 1." ]
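The numbered reply above describes how the counterfactual value o_{ijt} of an incoming connection is estimated: a critic takes the coupled state s_{it} (external state, predicted input, parameters) and outputs Q(s_{it}, g) for each allocation bit g in {0, 1} of each incoming unit, with o_{ijt} read off as Q(s_{it}, 1) - Q(s_{it}, 0). A minimal sketch of that read-out is below; the single linear layer and all names are illustrative assumptions, not the authors' released code.

    # Sketch of the Q-difference estimate of counterfactual returns described above:
    # two Q-values (allocation g = 0 or 1) per incoming connection of a unit.
    import torch
    import torch.nn as nn

    class CounterfactualCritic(nn.Module):
        def __init__(self, state_dim, n_inputs):
            super(CounterfactualCritic, self).__init__()
            self.n_inputs = n_inputs
            # One linear layer emitting Q(s, g=0) and Q(s, g=1) for each incoming unit.
            self.q = nn.Linear(state_dim, 2 * n_inputs)

        def counterfactual_returns(self, state):
            # state: (batch, state_dim) -- the coupled (s^ex, x~, theta) vector.
            q = self.q(state).view(-1, self.n_inputs, 2)  # (B, n_inputs, 2)
            return q[..., 1] - q[..., 0]                  # o_ij = Q(s, 1) - Q(s, 0)

    critic = CounterfactualCritic(state_dim=32, n_inputs=4)
    print(critic.counterfactual_returns(torch.randn(8, 32)).shape)  # torch.Size([8, 4])

In the paper these estimates feed the auction that prices other units' activations; that part of the mechanism is not reproduced in this sketch.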
[ 6, 7, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkfEzz-0-", "iclr_2018_BkfEzz-0-", "iclr_2018_BkfEzz-0-", "iclr_2018_BkfEzz-0-", "Bk3zRoBGz", "H12VRW9gM", "HJSqWxjez", "HJSqWxjez", "BkTpRSRkf", "SJPHs661G", "iclr_2018_BkfEzz-0-", "iclr_2018_BkfEzz-0-", "ByRdDLV1z", "iclr_2018_BkfEzz-0-" ]
iclr_2018_Hk91SGWR-
Investigating Human Priors for Playing Video Games
What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play.
workshop-papers
This paper turned out to be quite difficult to call. My take on the pros/cons is: 1. The research topic, how and why humans can massively outperform DQN, is unanimously viewed as highly interesting by all participants. 2. The authors present an original human subject study, aiming to reveal whether human outperformance is due to human knowledge priors. The study is well conceived and well executed. I consider the study to be a contribution by itself. 3. The study provides prima facie evidence that human priors play a role in human performance, by changing the visual display so that the priors cannot be used. 4. However, the study is not definitive, as astutely argued by AnonReviewer2. Experiments using RL agents (with presumably no human priors) yield behavior that is similar to human behavior. So it is possible that some factor other than human priors may account for the behavior seen in the human experiments. 5. It would indeed be better, as argued by AnonReviewer2, to use some information-theoretic measure to distinguish the normal game from the modified games. 6. The paper has been substantially improved and cleaned up from the original version. 7. AnonReviewer1 provided some thoughtful detailed discussion of how the authors may be overstating the conclusions that one can draw from the paper. Bottom line: Given the pros and cons of the paper, the committee recommends this for workshop.
val
[ "HJfzeB4Hz", "B1mN4AC4M", "ry07SzQgG", "S1sHPAWgz", "BJZ52L6lf", "Syy7rN67M", "SJML7NamG", "SkMvz4TQM", "B1LEVNTXG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We have revised our paper and added new experiments to address all of your previous concerns. It would be great if you can please find some time to look at our response and inform us of any other feedback or concerns. This would go a long way in helping us improve the paper further. Thank you so much once again!", "Thank you for upgrading the rating of our paper. We addressed the concerns raised in your review and reported all experiments you asked for. It would be great and helpful if you could provide details about your current concerns. Thanks a lot for your time!", "This paper investigates human priors for playing video games.\n\nConsidering a simple video game, where an agent receives a reward when she completes a game board, this paper starts by stating that:\n-\tFirstly, the humans perform better than an RL agent to complete the game board.\n-\tSecondly, with a simple modification of textures the performances of human players collapse, while those of a RL agent stay the same.\n\nIf I have no doubts about these results, I have a concern about the method. \nIn the case of human players the time needed to complete the game is plotted, and in the case of a RL agent the number of steps needed to complete the game is plotted (fig 1). Formally, we cannot conclude that one minute is lesser than 4 million of steps.\n\nThis issue could be easily fixed. Unfortunately, I have other concerns about the method and the conclusions.\n\nFor instance, masking where objects are or suppressing visual similarity between similar objects should also deteriorate the performance of a RL agent. So it cannot be concluded that the change of performances is due to human priors. In these cases, I think that the change of performances is due to the increased difficulty of the game.\n\nThe authors have to include RL agent in all their experiments to be able to dissociate what is due to human priors and what is due to the noise introduced in the game. \n\n\n", "Overall:\nI really enjoyed reading this paper and think the question is super important. I have some reservations about the execution of the experiments as well as some of the conclusions drawn. For this reason I am currently a weak reject (weak because I believe the question is very interesting). However, I believe that many of my criticisms can be assuaged during the rebuttal period.\n\nPaper Summary:\nFor RL to play video games, it has to play many many many many times. In fact, many more times than a human where prior knowledge lets us learn quite fast in new (but related) environments. The authors study, using experiments, what aspects of human priors are the important parts. \n\nThe authors’ Main Claim appears to be: “While common wisdom might suggest that prior knowledge about game semantics such as ladders are to be climbed, jumping on spikes is dangerous or the agent must fetch the key before reaching the door are crucial to human performance, we find that instead more general and high-level priors such as the world is composed of objects, object like entities are used as subgoals for exploration, and things that look the same, act the same are more critical.”\n\nOverall, I find this interesting. However, I am not completely convinced by some of the experimental demonstrations. \n\nIssue 0: The experiments seem underpowered / not that well analyzed. \nThere are only 30 participants per condition and so it’s hard to tell whether the large differences in conditions are due to noise and what a stable ranking of conditions actually looks like. 
I would recommend that the authors triple the sample size and be more clear about reporting the outcomes in each of the conditions. \n\nIt’s not clear what the error bars in figure 1 represent, are they standard deviations of the mean? Are they standard deviations of the data? Are they confidence intervals for the mean effect? \n\nDid you collect any extra data about participants? One potentially helpful example is asking how familiar participants are with platformer video games. This would give at least some proxy to study the importance of priors about “how video games are generally constructed” rather than priors like “objects are special”.\n\nIssue 1: What do you mean by “objects”?\nThe authors interpret the fact that performance falls so much between conditions b and c to mean that human priors about “objects are special” are very important. However, an alternative explanation is that people explore things which look “different” (ie. Orange when everything else is black). \n\nThe problem here comes from an unclear definition of what the authors mean by an “object” so in revision I would like authors to clarify what precisely they mean by a prior about “the world is composed of objects” and how this particular experiment differentiates “object” from a more general prior about “video games have clearly defined goals, there are 4 clearly defined boxes here, let me try touching them.”\n\nThis is important because a clear definition will give us an idea for how to actually build this prior into AI systems.\n\nIssue 2: Are the results here really about “high level” priors?\nThere are two ways to interpret the authors’ main claim: the strong version would maintain that semantic priors aren’t important at all.\n\nThere is no real evidence here for the strong version of the claim. A real test would be to reverse some of the expected game semantics and see if people perform just as well as in the “masked semantics” condition.\n\nFor example, suppose we had exactly the same game and N different types of objects in various places of the game where N-1 of them caused death but 1 of them opened the door (but it wasn’t the object that looked like a key). My hypothesis would be that performance would fall drastically as semantic priors would quickly lead people in that direction. \n\nThus, we could consider a weaker version of the claim: semantic priors are important but even in the absence of explicit semantic cues (note, this is different from having the wrong semantic cues as above) people can do a good job on the game. This is much more supported by the data, but still I think very particular to this situation. Imagine a slight twist on the game:\n\nThere is a sword (with a lock on it), a key, a slime and the door (and maybe some spikes). The player must do things in exactly this order: first the player must get the key, then they must touch the sword, then they must kill the slime, then they go to the door. Here without semantic priors I would hypothesize that human performance would fall quite far (whereas with semantics people would be able to figure it out quite well).\n\nThus, I think the authors’ claim needs to be qualified quite a bit. 
It’s also important to take into account how much work general priors about video game playing (games have goals, up jumps, there is basic physics) are doing here (the authors do this when they discuss versions of the game with different physics).", "The authors present a study of priors employed by humans in playing video\ngames -- with a view to providing some direction for RL agents to be more\nhuman-like in their behaviour.\n\nThey conduct a series of experiments that systematically elides visual\ncues that humans can use in order to reason about actions and goals in a\nplatformer game that they have a high degree of control over.\n\nThe results of the experiments, conducted using AMT participants, demonstrates\nthe existence of a taxonomy of features that affect the ability to complete\ntasks in the game to varying degrees.\n\nThe paper is clearly written, and the experiments follow a clean and coherent\nnarrative. Both the premises assumed and the conclusions drawn are quite\nreasonable given the experimental paradigm and domain in which they are\nconducted.\n\nThere were a couple of concerns I did have however:\n\n1. Right at the beginning, and through the manuscript, there is something of an\n apples-to-oranges comparison when considering how quickly humans can\n complete the task (order of minutes) and how quickly the SOTA RL agents can\n complete the task (number of frames).\n\n While the general spirit of the argument is somewhat understandable despite\n this, it would help strengthen any inference drawn from human performance\n to be applied to RL agents, if the comparison between the two were to be\n made more rigorous -- say by estimating a rough bijection between human and\n RL measures.\n\n2. And in a related note to the idea of establishing a comparison, it would be\n further instructive if the RL agents were also run on the different game\n manipulations to see what (if any) sense could be made out of their\n performance.\n\n I understand that at least one such experiment is shown in Figure 1 which\n involves consistent semantics, but it would be quite interesting to see how\n RL agents perform when this consistency is taken away.\n\nOther questions and comments:\n\n1. In the graphs shown in Figure 3, are the meaning of the 'State' variable is\n not clear -- is it the number of *unique* states visited? If not, is it the\n total number of states/frames seen? In that case, how is it different from\n 'Time'?\n\n2. The text immediately below Figure 3's caption seems to have an incorrect\n reference (referring to Figure 2(a) instead of Figure 3(a)).\n\nGiven recent advances in RL and ML that eschew all manner of structured\nrepresentations, I believe this is a well-timed reminder that being able to\ntransfer know-how from human behaviour to artificially-intelligent ones.\n", "We thank the reviewer for the detailed and very useful feedback. We have addressed all of your concerns below.\n\nIssue 0\n“..there are only 30 participants per condition and so it’s hard to tell whether the large differences in conditions are due to noise and what a stable ranking of conditions actually looks like …” \nA: Good point! As per your suggestion, we have increased the sample size substantially by recruiting a total of 120 subjects per condition. The results and conclusions remain unchanged. \n\n“... the error bars in figure 1 represent, are they standard deviations of the mean?... ”\nA: Sorry for the confusion. 
The error bars in Figure 1 represent standard error of the mean (we have added that clarification in the revision).\n\n“Did you collect any extra data about participants? One potentially helpful example is asking how familiar participants are with platformer video games …”\nA: Yes! For all of the games, we found only a moderate correlation (around 0.3) between familiarity with video games and average time taken to solve the game. This relatively moderate correlation indicates that familiarity with video games only results in slight improvement in performance of human players. \n\nIssue 1\n“What do you mean by “objects”?”\nA: Thank you for asking this question. We have clarified the definition of objects in the revised version of the manuscript. In the video game setting, objects are simply entities that are visibly distinct from their surroundings. The hypothesis is that humans use these visually distinct entities as subgoals, which results in more efficient exploration than random search. Performance of humans in game manipulation shown in Figure 2(c), demonstrates that when players cannot distinguish entities from the background, their performance drops significantly. We believe that mechanisms to bias exploration towards salient entities would be an interesting step towards improving the efficiency of RL agents.\n \nIssue 2.\n“There are two ways to interpret the authors’ main claim: the strong version would maintain that semantic priors aren’t important at all..”\nA: We are sorry for the confusion. Our claim is that while prior knowledge about semantics and affordances is important for human players, more general priors about objects (i.e. existence of visually salient entities that are subgoals; entities that look similar have the same semantics) are more critical to performance. In essence, we agree with the reviewer and we do not claim that semantic priors aren't important (just that general prior about objects are more critical). We have revised the manuscript to clarify this. As per your suggestion, we have also included an additional experiment (refer to section A in Appendix) and indeed find that reversing the semantics leads to worse performance than that of simply masking the semantics. \n\n“..Here without semantic priors I would hypothesize that human performance would fall quite far (whereas with semantics people would be able to figure it out quite well).”\nA: We completely agree. However, at the same time, we believe that the prior of treating visually distinct entities as sub-goals for exploration will be more important than the prior about semantics alone. \n\n“..It’s also important to take into account how much work general priors about video game playing (games have goals, up jumps, there is basic physics) are doing here..”\nA: Great point. Humans bring in various priors about general video game playing such as moving up or right in games is generally correlated with progress, games have goals etc. Quantifying the importance of such priors is an interesting direction of research and we will include this discussion in the next revision of the paper.\n", "We thank the reviewer for the positive and useful feedback. Our response to your concerns below:\n\nQ: “there is something of an apples-to-oranges comparison when considering how quickly humans can complete the task (order of minutes) and how quickly the SOTA RL agents can complete the task (number of frames).”\nA: Good point! 
In Figure 1 of the revised manuscript, we have now reported number of actions taken by both human players and RL agents to solve the games.\n\nQ: “... it would be further instructive if the RL agents were also run on the different game manipulations…” \nA: Thank you for this useful suggestion! We have included additional experiments that quantify the performance of RL agent on the different game manipulations (Section C, Appendix). The RL agent’s performance is unaffected in all game manipulations except for the version in which visual similarity is removed. \n\nResponse to additional comments:\nQ: “ .. graphs shown in Figure 3, are the meaning of the 'State' variable is not clear -- is it the number of *unique* states visited? ..”\nA: In graph 3, the 'State' variable indeed refers to the unique states visited which serves as a measure of how much players explore a game manipulation. We have clarified this in the revised manuscript.\n\nWe have made revisions to the text addressing other minor corrections pointed by you. ", "We thank the reviewers for their encouraging comments. We are glad that the reviewers found the questions addressed in the paper to be super important (R3) and the narrative to be coherent (R1). R1 says, “Given recent advances in RL ... it is a well-timed reminder that being able to transfer know-how from human behaviour to artificially-intelligent ones”. The reviewers also had a number of great suggestions that we have incorporated in the revised manuscript. The major changes are as follows:\n\na) We have rewritten the introduction to clarify the main claims of our paper. \n\nb) We increased the sample size significantly for all the human experiments to ensure robustness.\n\nc) We evaluated the performance of RL agent on various game manipulations to shed further light as to how RL agents differ from humans in terms of prior knowledge (Section C, Appendix).\n", "\nThe reviewer says “..Formally, we cannot conclude that one minute is lesser than 4 million of steps..”\n\nA: In Figure 1 of the revised manuscript, we have now reported the number of steps taken by both the RL agents and human players for direct comparison. Human players take three orders of magnitude fewer steps to solve the game. Further, please note that the main point of our work was not to compare absolute performance of humans against RL agents, but to show that the performance of human players changes significantly with re-rendering of the game which makes it hard for humans to use their prior knowledge, whereas the performance of RL agent is almost unchanged.\n\nThe reviewer says “..So it cannot be concluded that the change of performances is due to human priors. In these cases, I think that the change of performances is due to the increased difficulty of the game.” \n\nA: We are afraid that the reviewer might have misunderstood our experimental setup and methodology. First, note that all games are *exactly the same* in their reward and goal structure - the only difference between the different versions of the game is in the rendering of the game entities. Because there is no other difference between the original and the manipulated versions of the game, it can be inferred that drop in performance is due to the inability of humans to employ their prior knowledge and beliefs in those manipulated games. 
\n\nThe reviewer says, “...The authors have to include RL agent in all their experiments to be able to dissociate what is due to human priors and what is due to the noise introduced in the game”\n\nA: We do not agree with the reviewer’s comment because the performance of the RL agents on different manipulations of the game has no effect on the conclusions of the human study. At the same time, we do believe that studying the performance of RL agents on all game manipulations is an interesting question; one that is independent of the study of priors employed by humans. We have included the performance of RL agents for various game manipulations in Section C, Appendix. The RL agent’s performance is unaffected in all game manipulations except for the version in which visual similarity is removed. The results provide direct evidence that the reviewer’s claim that “masking where objects are ...should also deteriorate the performance of a RL agent.” is simply not true.\n" ]
[ -1, -1, 4, 5, 7, -1, -1, -1, -1 ]
[ -1, -1, 3, 4, 4, -1, -1, -1, -1 ]
[ "Syy7rN67M", "B1LEVNTXG", "iclr_2018_Hk91SGWR-", "iclr_2018_Hk91SGWR-", "iclr_2018_Hk91SGWR-", "S1sHPAWgz", "BJZ52L6lf", "iclr_2018_Hk91SGWR-", "ry07SzQgG" ]
iclr_2018_rJk51gJRb
Adversarial Policy Gradient for Alternating Markov Games
Policy gradient reinforcement learning has been applied to two-player alternate-turn zero-sum games; e.g., in AlphaGo, self-play REINFORCE was used to improve the neural net model after supervised learning. In this paper, we emphasize that two-player zero-sum games with alternating turns, which have been previously formulated as Alternating Markov Games (AMGs), are different from standard MDPs because of their two-agent nature. We exploit the difference in the associated Bellman equations, which leads to different policy iteration algorithms. As the policy gradient method is a kind of generalized policy iteration, we show how these differences in policy iteration are reflected in policy gradient for AMGs. We formulate an adversarial policy gradient and discuss possibilities for developing better policy gradient methods other than self-play REINFORCE. The core idea is to estimate the minimum rather than the mean for the “critic”. Experimental results on the game of Hex show the modified Monte Carlo policy gradient methods are able to learn better pure neural net policies than the REINFORCE variants. To apply the learned neural weights to Hex on multiple board sizes, we describe a board-size independent neural net architecture. We show that when combined with search, using a single neural net model, the resulting program consistently beats MoHex 2.0, the state-of-the-art computer Hex player, on board sizes from 9×9 to 13×13.
workshop-papers
The reviewers agree that the paper is below threshold for acceptance in the main track (one with very low confidence), but they favor submitting the paper to the workshop track. The paper considers policy gradient methods for two-player zero-sum Alternating Markov games. They propose adversarial policy gradient (fairly obviously), wherein the critic estimates min rather than mean reward. They also report promising empirical results in the game of Hex, with varying board sizes. I found the paper to be well-written and easy to read, possibly due to revisions in the rebuttal discussions. The reviewers consider the contribution to be small, mainly due to the fact that the key algorithmic insights were already published decades ago. Reintroducing them is a service to the community, but its novelty is limited. Other critiques mentioned that results in Hex only provide limited understanding of the algorithm's behavior in general Alternating Markov games. The lack of comparison with modern methods like AlphaGo Zero was also mentioned as a limitation. Bottom line: The paper provides a small but useful contribution to the community, as described above, and the committee recommends it for workshop.
train
[ "SkLrUaZWG", "rJFql_Nxz", "ByzeYntef", "SkNEyzcxG", "SkURGVXVf", "SkpoKqYzz", "SyowYgBfz", "rkJtIabWM", "SkGAjuFzz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thanks for your comment. \n\nIn the revised paper, we have added our neural net model to search, the resulting program is stronger than MoHex 2.0 on board sizes 9x9 to 13x13. We have also included a comparison with ExIt. It appears that ExIt might not as strong as MoHex 2.0 (the ExIt paper was comparing their player with MoHex 2011). Another advantage of our new player is that it is able to play on multiple board size with only one trained model, while ExIt is limited on 9x9. \n\nDetained responses are in below. \n\nThe methods ExIt (I assume you mean ExIt by saying HexIt) and AlphaGo Zero are similar. They work well but one problem is the computation cost is very high. For example, when applied to chess and shogi, it is mentioned that 5000 TPUs were used for MCTS self-play data generation. \n\nFor ExIt, by the time our paper is submitted, only first version is available on arxiv, though we are aware their work has been accepted in NIPS 2017. The newest version can be found from this URL.\n https://arxiv.org/abs/1705.08439\n\nThey did all experiments on 9x9 Hex. In the first version on arxiv, their player is a search+NN player not pure neural net. \nOn other hand hand, even if the learned neural net policy itself is strong by following MCTS, it is likely the playing strength of this pure neural net can be improved by doing a policy gradient on it, though after such a policy gradient, the policy might not good for Monte-carlo tree search any more (as shown by first Alphago paper). \n\nIn the newest version, they compared their policy_value net + MCTS player with MoHex 2011, however, there is MoHex 2.0, which is much stronger than MoHex 2011. \n\nExIt only conducts experiments on 9x9 Hex. It is not very clear how much time could be used to produce significant results on larger board size, such as 11x11, presumably, this is not a easy task with only one GPU computer. We note that even ExIt was specially applied only to this board size, MoHex 2.0 and our new program both seem to be able achieve better playing results than ExIt. \n\nOur AMCPG-A or AMCPG-B follows traditional “light self-play”. No tree was built. To estimate the “minimum” critic, extra roll-outs are conducted. But it is very much due to the Monte-Calro nature of the method, and that is why we mention an actor-critic style might be more efficient. Our methods work essentially similar as traditional policy gradient, that's why we only compared with REINFORCE variants. \n\nWe argue that it could be unfair to say that our better results compared to classic REINFORCE is merely due to extra roll-outs. One can see that in REINFORCE-B, extra roll-outs are also conducted the same way as AMCPG-A and AMCPG-B. Their extra computation costs due to extra roll-out are the same. However, the results in Figure 2 suggests that REINFORCE-B has similar performance as REINFORCE-A and REINFORCE-V. ", "This paper is outside of my area of expertise, so I'll just provide a light review:\n\n- the idea of assuming that the opponent will take the worst possible action is reasonable in widely used in classic search, so making value functions follow this intuition seems sensible,\n- but somehow I wonder if this is really novel? Isn't there a whole body of literature on fictitious self-play, including need RL variants (e.g. Heinrich&Silver, 2016) that approaches things in a similar way?\n- the results on Hex have some signal, but I don’t know how to calibrate them w.r.t. The state of the art on that game? 
A 40% win rate seems low, what do other published papers based on RL or search achieve?\n", "This paper introduces a variation over existing policy gradient methods for two players zero sum games, in which instead of using the outcome of a single policy network rollout as the return, they use the minimum outcome among a few rollouts either from the original position or where the first action from that position is selected uniformly among the top k policy outputs. \n\nThe proposed method supposedly provides slightly stronger targets, due to the extra lookahead / rollouts. Experiments show that this provides faster progress per iteration on the game of Hex against a fixed third party opponent.\n\nThere is no comparison against state of the art methods like AlphaGo Zero which uses MCTS root move distribution and MCTS rollouts outcome to train policy and value network, even though the author do cite this work. There is also no comparison with Hexit which also trains policy net on MCTS move distribution, and was also applied to Hex.\n\nThe actual proposed method is actually a one liner change, which could be introduced much sooner in the paper to save the reader some time. While the idea is interesting, the paper felt quite verbose on introducing notations and related work, and a bit lacking on actual change that is being proposed and the experiment to back it up.\n\nFor example, was it really necessary to introduce state transition probabilities p(s’, a, s) when all the experiments are done in the deterministic game of Hex ?\n\nAlso the experiment seems not fully fair to the reinforce baseline. My understand is that the proposed method is much more costly due to extra rollouts that are needed. It would be interesting to see the same learning curves as in Figure 2, but the x axis would be some computational budget (total number of network forward, or wall clock time). It is conceivable that the vanilla reinforce would do just as well as the proposed method if the plots were aligned this way. It would also be good to know the asymptotic behavior.\n\nSo even though the idea is interesting, it seems that much stronger methods AlphaGo Zero / Hexit are now available, and the experimental section is a bit weak. I would recommend to accept for a workshop paper but not sure about the main track.\n\n\n", "The paper makes the simple but important observation that (deep) reinforcement learning in alternating Markov games requires a min-max formulation of the Bellman equation as well as careful attention to the way in which one alternates solving for both players' policies in a policy iteration setting.\n\nWhile some of the core algorithmic insights regarding Algorithms 3 & 4 in the paper stem from previous work (Condon, 1990; Hoffman & Karp, 1966), I was not actually aware of these previous results until I reviewed this paper.\n\nA nice corollary of Algorithms 3 & 4 is that they make for a straightforward adaptation of policy gradient algorithms since when optimizing one policy, the other is fixed to the greedy policy.\n\nIn general, it would be nice to have the algorithms specified as formal algorithms as opposed to text-based outlines. 
I found myself reading and re-reading descriptions to make sure I understood what math was being implied by the descriptions.\n\nSection 6\n\n> Hex is simpler than Go in the sense that perfect play can \n> often be achieved whenever virtual connections are found \n> by H-Search\n\nIt is not clear here what virtual connections are, what H-Search is, and how these imply perfect play, if perfect play as previously discussed is unknown.\n\nOverall, the results on Hex for AMCPG-A and AMCPG-B vs. standard REINFORCE variants currently used are very encouraging. That said, empirically it is always a question of whether these results are specific to Hex. Because this paper is not proposing the best Hex player (i.e., the winning rate against Wolve never exceeds 0.5), I think it is quite reasonable to request the authors to compare AMCPG-A and AMCPG-B to standard REINFORCE variants on other games (they do not need to be as difficult as Hex).\n\nFinally, assuming that the results do generalize to other games, I am left wondering about the significance of the contribution. On one hand, the authors have introduced me to literature I was not aware of, but on the other hand, their actual novel contribution is a rather straightforward adaptation of ideas in the literature to policy gradients (that could be formalized in a more technically precise way) with an evaluation on a single type of game. This is a useful contribution no doubt, but I am concerned with whether it meets the significance level that I am used to with accepted ICLR papers in previous years.\n", "Overall, I like the paper (it makes a simple but important point) and the authors have addressed most of my concerns.\n\nThat said, the one major issue that remains with the paper is that I would like to see evaluations in a larger variety of domains -- I feel like I'm overfitting my understanding of the ideas in the paper to the game of Hex. For this reason, I feel that my current review score is appropriate. As another reviewer points out, this paper would be great for a workshop if it is not accepted to the main track.\n", "Thanks for the review. \n\nThe reviewer mentioned fictitious self-play (Heinrich&Silver, 2016), but it is primarily for imperfect-information games.\n\nWe focus on classic perfect-information two-player zero-sum games played in alternating turns. \n\nAdditionally, the reviewer was concerned about the state of the art in Hex. In the revised paper, we have shown that after combining our neural net with search, the state of the art in Hex is improved. Moreover, we used a single neural net model, with consistent improvement on multiple board sizes. \n", "\nWe thank the reviewer for the comments about the 'state-of-the-art' in Hex. We have updated our paper, in which we show that after combining our neural net model with search, better results than MoHex 2.0 are observed.\n\nWe summarize the changes below:\n\n\n1. we show that with our board-size-independent (as there are no fully connected layers) neural net architecture, a single model trained on 9x9 can generalize to other board sizes. When combined with search, the new program consistently defeats MoHex 2.0 on 9x9 to 13x13 (with the same number of simulations and the same computation time).\n\n2. we also compared our results with ExIt. We show that both MoHex 2.0 and our new program achieve better winrates against MoHex 2011 than ExIt, though ExIt concentrated only on 9x9. \n\n3. we show that minimum rollout return slightly improves Monte Carlo tree search.\n\n\n4. 
typos and grammatical errors are corrected. \n\nHowever, due to various constraints, we did not apply our methods to other games, though it would be interesting to do so. Hex is the game we are most familiar with. But on the other hand, we stress that, just as with REINFORCE, we did not make any special modifications when applying the AMCPG variants to Hex. \n", "Thanks for your comments. \n\nThe reviewer is concerned about the computation cost of each training run. In our experiments, training is very fast, taking only a few hours on 9x9 and 11x11 Hex. All training/evaluation was conducted on the same computer with a single GTX 1080 GPU. We briefly list the detailed training time for each method here: \n \n9x9 Hex: total time usage for 400 training iterations: \nAMCPG-A: k=1: about 1 h 40 m, k=3: about 2.5 h, k=6: 4h 10 minutes, k=9: about 6h\nAMCPG-B: similar to above\nREINFORCE-B: similar to above\nREINFORCE-A: 1 hour 15 minutes\nREINFORCE-V: 1 hour 20 minutes\n\n11x11 Hex: \nAMCPG-A: k=1: about 3h15 minutes, k=3: about 5h, k=6: about 9h, k=9: about 12h\nAMCPG-B: similar to above\nREINFORCE-B: similar to above\nREINFORCE-A: 2.5 hours\nREINFORCE-V: 2.5 hours\n\nIn fact, most of our time was not spent on training the neural net, but on evaluating the neural net model by playing against Wolve, as Wolve's search is orders of magnitude slower than a pure neural net player. \n\nWe note that, even though pure neural net self-play training might not be able to provide state-of-the-art play, such methods have their own merits. For example, due to their fast speed, the first version of AlphaGo used such a method for generating data to train a value net, which is useful in search. \n\nOn the other hand, even though search+NN self-play might also be able to learn a neural net policy that itself can play strongly, it is likely that such a neural net could be further improved by policy gradient. \n", "Thank you for your comments. \n\nYes, the key insights behind this paper are largely from the literature, i.e., (Condon, 1990; Hoffman & Karp, 1966; Littman 1996). But, as the reviewer has pointed out, perhaps because of the difference in terminology, those classic works were largely \"unknown\" to many researchers.\n\nIn this paper, we brought them to the community again; one goal is to stimulate more thorough thinking about the difference between two-player alternate-turn games and single-agent MDPs. It is apparent that two-player alternate-turn zero-sum games are more \"challenging\" in many aspects. A more careful examination of the fundamental differences between AMGs and MDPs will perhaps help people develop more effective/efficient RL methods specifically for this domain. \n\nWe only did our experiments on the game of Hex, primarily because this is the game we are most familiar with. But it should be noted that we didn't make any game-specific modifications when applying the AMCPG variants to this specific game, just as with REINFORCE. \n\nIt is true that doing more games would be more convincing; however, due to various constraints (i.e., hardware constraints and knowledge about other games), we did not manage to attempt this direction while writing this paper. \n\nAs for advancing the state of the art, the state of the art for Hex is still search-based methods. In the first version we submitted, we did not attempt to advance the state of the art, since we concentrated on introducing new, fast, and better policy gradient methods. 
\n\nHowever, after receiving the reviewers' comments about the state of the art, we proceeded to combine our neural net with search, and the resulting program is indeed able to surpass MoHex 2.0. \n\nMost notably, we use a single model for multiple board sizes, and the new program consistently defeats MoHex 2.0 on every board size. This is largely due to the architecture we introduced, where we deliberately removed fully connected layers, so that the learned parameter weights can generalize to multiple board sizes.\n\nSince expert data is often difficult to obtain or generate, while generating expert data on smaller boards is usually much easier and cheaper than on larger board sizes, our result provides an encouraging direction for more efficient learning on games which have similar characteristics to Hex (e.g., other connection games). \n\nWe have also investigated the “minimum return” in Monte-Carlo tree search; experimental results show that incorporating the “minimum playout” also improved MCTS. \n\nA future work direction is using a value net in pure neural net training, as well as using it to replace the playout in MCTS. However, different from previous work, we argue that a “min” operator might be able to lead to better results in alternating Markov games. \n\nWe have included pseudo-code for Algo.1, Algo.2 and Algo.3 in the appendix, which provides a more formal description of each procedure. Also, explanations of Virtual Connections and H-Search have been added in the revised paper. \n" ]
[ -1, 5, 5, 5, -1, -1, -1, -1, -1 ]
[ -1, 2, 4, 4, -1, -1, -1, -1, -1 ]
[ "ByzeYntef", "iclr_2018_rJk51gJRb", "iclr_2018_rJk51gJRb", "iclr_2018_rJk51gJRb", "SkGAjuFzz", "rJFql_Nxz", "iclr_2018_rJk51gJRb", "ByzeYntef", "SkNEyzcxG" ]
iclr_2018_BJInEZsTb
Learning Representations and Generative Models for 3D Point Clouds
Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep autoencoder (AE) network with excellent reconstruction quality and generalization ability. The learned representations outperform the state of the art in 3D recognition tasks and enable basic shape editing applications via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation. We also perform a thorough study of different generative models, including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMMs). Interestingly, GMMs trained in the latent space of our AEs produce samples of the best fidelity and diversity. To perform our quantitative evaluation of generative models, we propose simple measures of fidelity and diversity based on optimal matching between sets of point clouds.
workshop-papers
This paper compares autoencoder and GAN-based methods for 3D point cloud representation and generation, and introduces new (and welcome) metrics for quantitatively evaluating generative models. The experiments form a good but still a bit too incomplete exploration of this topic. More analysis is needed to calibrate the new metrics. Qualitative analysis would be very helpful here to complement and calibrate the quantitative ones. The writing also needs improvement, both for clarity and to reduce verbosity. The author replies and revisions are very helpful, but there is still some way to go on the issues above. Overall, the committee finds the work interesting and recommends this paper for the workshop track.
test
[ "SJyXoTtlG", "B1Mvg-qlM", "HJf1JQqez", "H1n5Uv6QG", "rJoOW2dfz", "SyYI6idfz", "S19u3suGf", "HJfG2jOzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper introduces a generative approach for 3D point clouds. More specifically, two Generative Adversarial approaches are introduced: Raw point cloud GAN, and Latent-space GAN (r-GAN and l-GAN as referred to in the paper). In addition, a GMM sampling + GAN decoder approach to generation is also among the experimented variations. \n\nThe results look convincing for the generation experiments in the paper, both from class-specific (Figure 1) and multi-class generators (Figure 6). The quantitative results also support the visuals. \n\nOne question that arises is whether the point cloud approaches to generation is any more valuable compared to voxel-grid based approaches. Especially Octree based approaches [1-below] show very convincing and high-resolution shape generation results, whereas the details seem to be washed out for the point cloud results presented in this paper. \n\nI would like to see comparison experiments with voxel based approaches in the next update for the paper. \n\n[1]\n@article{tatarchenko2017octree,\n title={Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs},\n author={Tatarchenko, Maxim and Dosovitskiy, Alexey and Brox, Thomas},\n journal={arXiv preprint arXiv:1703.09438},\n year={2017}\n}\n\nIn light of the authors' octree updates score is updated. I expect these updates to be reflected in the final version of the paper itself as well. ", "3D data processing is very important topic nowadays, since it has a lot of applications: robotics, AR/VR, etc.\n\nCurrent approaches to 2D image processing based on Deep Neural Networks provide very accurate results and a wide variety of different architectures for image modelling, generation, classification, retrieval.\n\nThe lack of DL architectures for 3D data is due to complexity of representation of 3D data, especially when using 3D point clouds.\n\nConsidered paper is one of the first approaches to learn GAN-type generative models.\nUsing PointNet architecture and latent-space GAN, the authors obtained rather accurate generative model.\n\nThe paper is well written, results of experiments are convincing, the authors provided the code on the github, realizing their architectures. \n\nThus I think that the paper should be published.", "Summary:\n\nThis paper proposes generative models for point clouds. First, they train an auto-encoder for 3D point clouds, somewhat similar to PointNet (by Qi et al.). Then, they train generative models over the auto-encoder's latent space, both using a \"latent-space GAN\" (l-GAN) that outputs latent codes, and a Gaussian Mixture Model. To generate point clouds, they sample a latent code and pass it to the decoder. They also introduce a \"raw point cloud GAN\" (r-GAN) that, instead of generating a latent code, directly produces a point cloud.\n\nThey evaluate the methods on several metrics. First, they show that the autoencoder's latent space is a good representation for classification problems, using the ModelNet dataset. Second, they evaluate the generative model on several metrics (such as Jensen-Shannon Divergence) and study the benefits and drawbacks of these metrics, and suggest that one-to-one mapping metrics such as earth mover's distance are desirable over Chamfer distance. Methods such as the r-GAN score well on the latter by over-representing parts of an object that are likely to be filled.\n\nPros:\n\n- It is interesting that the latent space models are most successful, including the relatively simple GMM-based model. 
Is there a reason that these models have not been as successful in other domains?\n\n- The comparison of the evaluation metrics could be useful for future work on evaluating point cloud GANs. Due to the simplicity of the method, this paper could be a useful baseline for future work.\n\n- The part-editing and shape analogies results are interesting, and it would be nice to see these expanded in the main paper.\n\nCons:\n\n- How does a model that simply memorizes (and randomly samples) the training set compare to the auto-encoder-based models on the proposed metrics? How does the diversity of these two models differ?\n\n- The paper simultaneously proposes methods for generating point clouds, and for evaluating them. The paper could therefore be improved by expanding the section comparing to prior, voxel-based 3D methods, particularly in terms of the diversity of the outputs. Although the performance on automated metrics is encouraging, it is hard to conclude much about under what circumstances one representation or model is better than another.\n\n- The technical approach is not particularly novel. The auto-encoder performs fairly well, but it is just a series of MLP layers that output a Nx3 matrix representing the point cloud, trained to optimize EMD or Chamfer distance. The most successful generative models are based on sampling values in the auto-encoder's latent space using simple models (a two-layer MLP or a GMM).\n\n- While it is interesting that the latent space models seem to outperform the r-GAN, this may be due to the relatively poor performance of r-GAN than to good performance of the latent space models, and directly training a GAN on point clouds remains an important problem.\n\n- The paper could possibly be clearer by integrating more of the \"background\" section into later sections. Some of the GAN figures could also benefit from having captions.\n\nOverall, I think that this paper could serve as a useful baseline for generating point clouds, but I am not sure that the contribution is significant enough for acceptance.\n", "Dear reviewers,\n\nIn the uploaded revision we have incorporated your suggestions and did our best to address your concerns. In the main paper we improved the syntax/language in a handful places and added some missing citations.\n\nImportant additions occurred only in the supplementary section; at your suggestion, we will incorporate any of them in the main paper. \n\nConcretely, in the supplementary:\n\n\t1.\tWe added extensive details of our training and architecture parameters to facilitate reproducibility.
\n\n\t2.\tWe included the optimal parameters of our SVMs classifiers along with a confusion matrix. By expanding the search space of the SVM parameters we improved the classification scores in ModelNet10 by .1 and .4  in each structural loss.
\n\n\t3.\tWe added more comparisons with Wu et al. [Sec. I] and the random-memorization baseline suggested by reviewer-1 [Sec. H].
\n\n\t4.\tWe added a section with a new, voxel-based, comparison study [Sec. G].
\n\nWe appreciate your feedback; it has been invaluable in improving our work.", "We thank all reviewers for their feedback and comments, which we have addressed in the messages below. We look forward to any additional suggestions. Pending reviewers’ approval, we would incorporate all changes below into the appendix of a the next paper revision. \n", "Thank you for your comments and feedback. We start by pointing out that the primary goal of our work is not to evaluate different 3D modalities. Our stated goal and main focus has been to learn meaningful representations and generative models for point clouds. As a testament to the power of our learned representations, we also reported that they can be used to improve the state of the art in classification (over existing voxel based methods). A full evaluation of different 3D modalities is highly task dependent; we can definitely envision certain tasks where voxels are a good choice. We chose point clouds because we believe that they are relatively unexplored, concise and natural input modality which appears as the direct output of most 3D range sensing pipelines.\n\nWith reference to oct-trees specifically, we agree that the oct-tree-based approach is a powerful one, with the potential to achieve high-resolution results (e.g. Tatarchenko et al.). When occupancy grids are the modality of choice, opting for oct-tree cells as opposed to uniformly sized voxels offers an obvious advantage in terms of the resolution that can be captured within a given memory/space bandwidth. Point clouds are a completely distinct, surface-based representation; it only describes the visible part of a shape, which typically is the only relevant part. As such, compared to volumetric representations (occupancy cells of any kind), point clouds are typically much more compact. \n\nFrom a quantitative perspective, the average oct-tree from the Shapenet car category (as per Tatarchenko et al.) requires 389200 bytes to be stored - within this space bandwidth, we could represent the same shape with a point cloud of 32433 points. This would achieve a very high on-surface resolution (15x what is currently shown in our paper), exceeding what 128^3 cells can achieve. We will add such insights in the paper revision and demonstrate visually. Point clouds can be made even more compact by utilizing data-structures such as kd-trees (https://arxiv.org/abs/1704.01222), which could be considered the equivalent, for point clouds, of what oct-trees are for volumetric representations. Exploring this direction for this modality remains an interesting avenue for future work. \n\nThat being said, we strongly agree that further exploring the volumetric modality for generation is an interesting direction for study, and we thank you for the pointer. To address this, we have added new comparisons against standard voxel-based methods; we report related observations in a separate message above. Please let us know if these experiments suffice, or of any additional experiments you might have in mind that would better address this question. \n", "Many thanks for your review -- we appreciate your positive feedback. We have since added a number of voxel-based generation experiments and comparisons that you might find interesting - please feel free to refer to the corresponding message above.", "Thank you for your comments and suggestions. 
We will incorporate your exposition/text restructuring suggestions in the next revision - below we address the comments pertaining to the technical part of the paper.\n\nA) Evaluation of metrics on models that memorize/randomly sample\nTo answer this, we randomly sampled the training set, creating sample sets of 3 different sizes, and evaluated our metrics between these ”memorized” sets and our test set; see https://www.dropbox.com/s/gouvyw1vccqxdkb/memorization-table.png?dl=0 . The coverage/fidelity obtained by our generative models is slightly lower than the equivalent in size (case (b) ), as expected: memorizing the training set produces good coverage/fidelity with respect to the test set when they are both drawn from the same population. This speaks for the validity of our metrics. Naturally, the advantage of using a learned representation lies in learning the structure of the underlying space instead of individual samples, which enables compactly representing the data and generating novel shapes as demonstrated by our interpolations.\n\nB) Comparison to voxel-based approaches\nSince comparing to voxel-based methods was a shared concern, we performed extensive comparisons, which we report in a separate message above. Please let us know if these experiments suffice, or of any additional experiments you might have in mind that would better address this question. Please note that fully studying latent representations on the voxel modality remains beyond the scope of our work, since point clouds are a distinct representation from voxel grids, with its own set of merits. More details on this can be found in our reply to Rev. #3.\n\nC) rGAN performance\nIndeed, designing significantly better raw GANs directly on point clouds requires further study - we do not claim to have shown that building a point-cloud rGAN with performance en par with (or better than) an lGAN is infeasible. Nevertheless, the fact that our latent representations lead to powerful generation is an interesting and novel result on its own.\n\nD) Simplicity of network architectures\nWhile this is true, we do not believe is necessarily constitutes a disadvantage of our networks, especially when considering ease of training and reproducibility. Architectures of similar spirit have been shown to work well with point data in the recent literature (PointNet etc.). Our simple models provide a competitive baseline for point cloud learning that establishes the state of the art.\n\nE) Success of latent-space models (including GMMs) in other domains\nThis is very much an open question and a great research problem. We cannot assert that latent space models are the way to achieve state-of-the-art results in other problems; follow-up work that explores when this might be the case would be very interesting. Arguably a big challenge on generative models currently lies in evaluating quality and diversity of their produced samples. Our fidelity and coverage metrics contribute to this evaluation discussion.\n" ]
[ 6, 8, 5, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJInEZsTb", "iclr_2018_BJInEZsTb", "iclr_2018_BJInEZsTb", "iclr_2018_BJInEZsTb", "iclr_2018_BJInEZsTb", "SJyXoTtlG", "B1Mvg-qlM", "HJf1JQqez" ]
iclr_2018_BJubPWZRW
Cross-View Training for Semi-Supervised Learning
We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets. The model then learns from these soft targets (acting as a "student"). We deviate from prior work by adding multiple auxiliary student prediction layers to the model. The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data. When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data. On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.
workshop-papers
This paper combines ideas from student-teacher training and multi-view learning in a simple but clever way. There is not much novelty in the methods, but promising results are given across several tasks, including realistic NLP tasks. The improvements are not huge but are consistent. Considering the limited novelty, the paper should include some more convincing analysis and insight on why/when the approach works. Given the interesting results, the committee recommends this paper for the workshop track.
train
[ "Bkp-xJ5xf", "HJhFVtqez", "SkDHZacef", "HkwzPvbmf", "By4SmTOMz", "S1b4QXkzf", "ByvGWXyGf", "BJdP17kzG", "B12yJQkMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper presents a so-called cross-view training for semi-supervised deep models. Experiments were conducted on various data sets and experimental results were reported.\n\nPros:\n* Studying semi-supervised learning techniques for deep models is of practical significance.\n\nCons:\n* The novelty of this paper is marginal. The use of unlabeled data is in fact a self-training process. Leveraging the sub-regions of the image to improve performance is not new and has been widely-studied in image classification and retrieval. \n* The proposed approach suffers from a technical weakness or flaw. For the self-labeled data, the prediction of each view is enforced to be same as the assigned self-labeling. However, since each view related to a sub-region of the image (especially when the model is not so deep), it is less likely for this region to contain the representation of the concepts (e.g., some local region of an image with a horse may exhibit only grass); enforcing the prediction of this view to be the same self-labeled concepts (e.g,“horse”) may drive the prediction away from what it should be ( e..g, it will make the network to predict grass as horse). Such a flaw may affect the final performance of the proposed approach.\n* The word “view” in this paper is misleading. The “view” in this paper is corresponding to actually sub-regions in the images\n* The experimental results indicate that the proposed approach fails to perform better than the compared baselines in table 2, which reduces the practical significance of the proposed approach. \n", "The paper proposes a ’Cross View training’ approach to semi-supervised learning. In the teacher-student framework for semi-supervised learning, it introduces a new cross view consistency loss that includes auxiliary softmax layers (linear layers followed by softmax) on lower levels of the student model. The auxiliary softmax layers take different views of the input for prediction.\n\nPros:\n1. A simple approach to encourage better representations learned from unlabeled examples. \n\n2. Experiments are comprehensive.\n\nCons:\n\n0. The whole paper just presented strategies and empirical results. There are no discussions of insights and why the proposed strategy work, for what cases it will work, and for what cases it will not work? Why? \n\n1. The addition of auxiliary layers improves Sequence Tagging results marginally. \n\n2. The claim of cross-view for sequence tagging setting is problematic. Because the task is per-position tagging, those added signals are essentially not part of the examples, but the signals of its neighbors. \n\n3. Adding n^2 linear layers for image classification essentially makes the model much larger. It is unfair to compare to the baseline models with much fewer parameters. \n\n4. The \"CVT, no noise\" should be compared to \"CVT, random noise\", then to \"CVT, adversarial noise\". The current results show that the improvements are mostly from VAT, instead of CVT. \n\n\n", "This paper proposes a multi-view semi-supervised method. For the unlabelled data, a single input (e.g., a picture) is partitioned into k new inputs permitting overlap. Then a new objective is to obtain k predictions as close as possible to the prediction from the model learned from mere labeled data.\n\nTo be more precise, as seen from the last formula in section 3.1, the most important factor is the D function (or KL distance used here). 
As the author said, we could set the noisy parameter in the first part to zero, but have to leave this parameter non-zero in the second term. Otherwise, the model can't learn anything.\n\nMy understanding is that the key factor is not the so called k views (as in the first sight, this method resembles conventional ensemble learning very much), but the smoothing distribution around some input x (consistency related loss). In another word, we set the k for unlabeled data as 1, but use unlabeled data k times in the scale (assuming no duplicate unlabeled data), keeping the same training (consistency objective) method, would this new method obtain a similar performance? If my understanding is correct, the authors should further discuss the key novelty compared to the previous work stated in the second paragraph of section 1. One obvious merit is that the unlabeled data is utilized more efficiently, k times better.\n\n\n", "We have updated our paper with \"CVT, random noise\" results for the vision tasks as the reviewer suggested. CVT with random input noise works almost as well as VAT, suggesting the improvements from CVT are close to the improvements from VAT. However, the additional computation cost for CVT is much smaller than the additional computation cost for VAT.", "We have updated our paper to include the new dependency parsing results. ", "Thank you for the comments! We would like to address the cons you listed in order:\n\n1. “The novelty of this paper is marginal.”:\nTo the best of our knowledge the contribution to NLP is completely novel. We actually consider our NLP results to be more important than our image recognition ones because (1) they use external unlabeled data instead artificially making the dataset semi-supervised (2) they are on more widely-used tasks and (3) although the past few years of development on consistency-cost-based and GAN-based semi-supervised learning methods have yielded gains in accuracy for image classification, they are not effective for sequence tagging (whereas our method is).\n\n“Leveraging the sub-regions of the image to improve performance is not new and has been widely-studied in image classification and retrieval.”\nWe believe leveraging sub-regions of the image to improve semi-supervised learning is novel, even though leveraging sub-regions has been used in prior works on supervised learning\n\n2. “The proposed approach suffers from a technical weakness or flaw.”\nThe reviewer’s comment on the technical flaw applies to image recognition, but not NLP. We also note even the smallest views in our model see a 21x21 region of the 32x32 images, so it is unlikely for a view to contain no representative concepts. But even aside from these two points we disagree with the criticism. This same “technical flaw” exists (although to a less degree) for any CNN with global mean pooling (an extremely common architecture). Like with our method, a mean-pooled CNN will encourage the feature vectors extracted from all patches of the image to be representative of the target class, not just the ones from the most salient patches. However, we think in many cases this is a good thing rather than a bad one: on difficult examples it is beneficial for the model to leverage the context surrounding the main part of the image (e.g., that an animal is standing in a field of grass) to better classify it (e.g., as a horse rather than a cat).\n\n3. “The word “view” in this paper is misleading. 
The “view” in this paper is corresponding to actually sub-regions in the images”\nA view being a sub-region of the image is true in the case of image recognition, but obviously not for NLP. We use “view” as a general term for particular subset of the input features. This usage of “view” is from Blum and Mitchell’s very influential paper “Combining Labeled and Unlabeled Data with Co-Training,” so “view” is terminology that has been around since 1998.\n\n4. “The experimental results indicate that the proposed approach fails to perform better than the compared baselines in table 2”\nCVT significantly outperforms our baselines. If the reviewer is using “baselines” to refer to prior work, we note (as we mention in the paper) that the TagLM model has far more parameters than ours (LSTMS with up to 8 times as many hidden units) and thus is also many times slower than ours for training and inference. When using a model with only twice as many hidden units as ours, their results drop to significantly below our numbers (see Table 6 in their paper). Therefore we believe their results are close to ours because their models are much larger, not because their method is equally effective.", "Response: Thank you for the comments! We would like to address the cons you listed:\n\n0. “There are no discussions of insights and why the proposed strategy work” \nWe discuss in the abstract and introduction why CVT works. To reiterate, there is a mutually beneficial relationship between the teacher and the students. The students can learn from the teacher because the teacher has access to more of each input and thus produces more accurate labels. Meanwhile, as the students learn they improve the representations for the parts of the input they are exposed to. These better representations in turn improve the teacher. In Section 4.1 under “Model Analysis” we present further insights into why the method works by analyzing the behavior of the trained models. \n\n“...for what cases it will work, and for what cases it will not work”\nWe believe CVT will be less effective if the views are too restricted (e.g., seeing very small patches of an input image, in which case the auxiliary prediction layers will not be able to learn effectively) or the views are too unrestricted (e.g., seeing almost the entire image, in which case the auxiliary layers will be very similar to the teacher and thus not be able to benefit from the teacher’s predictions).\n\n1. “The addition of auxiliary layers improves Sequence Tagging results marginally. “\n Although in absolute terms the gains are small, performance in sequence tagging is quite saturated, making large gains difficult to achieve. Looking at improvements over baselines in prior work, Wu et al., (2017) report gains of 0.3 for CCG and 0.05 for POS; Liu et al. (2017) report gains of 0.16 for Chunking, 0.49 for NER, and 0.09 for POS; Hashimoto et al. (2017) report gains of 0.75 for Chunking and 0.10 for POS-tagging; Peters et al. (2017), report gains of 1.37 for Chunking and 1.06 for NER. Therefore our gains (comparing “Baseline” vs “CVT” in Table 2) of 0.51 for CCG, 1.07 for Chunking, 0.80 for NER, and 0.11 for POS are pretty large in the context of sequence tagging research. We also note that the large gains from Peters et al. come from using a model many times bigger than ours. When they apply their method to a model more comparable to ours in size, their gains are smaller (see Table 6 of their paper).\n\n2. 
“The claim of cross-view for sequence tagging setting is problematic.”\nWe are not quite sure what the reviewer means by “problematic.” It is completely normal to leverage a token’s context (i.e., surrounding tokens) when making predictions for sequence tagging. Our “future” and “past” auxiliary losses improve this contextual information (which gets passed to the primary softmax layer through the BiLSTMs), resulting in better accuracy. \n\n3. “It is unfair to compare to the baseline models with much fewer parameters”\nThe extra parameters are only used at training time, so we don’t think it’s an unfair comparison. The models have exactly the same expressive power because they have the same set of test-time parameters. We also note that the additional layers only contain about 15% of the model’s parameters for image classification and about 5% for sequence tagging. \n\n4. \"The \"CVT, no noise\" should be compared to \"CVT, random noise\"\"\nThis is a good point, and we will add that comparison! \n\n\"The current results show that the improvements are mostly from VAT, instead of CVT.\"\nAlthough the improvements for image recognition are larger for VAT than CVT, CVT still works well as a semi-supervised learning method on its own while training almost twice as fast as VAT (which requires two backward passes for each minibatch instead of just one). We also note that we were unable to get VAT working for sequence tagging, so we believe our method has the advantage of being more applicable to NLP tasks. \n\n", "Thank you for the comments! \n\n“...would this new method obtain a similar performance?”\nWe think the model definitely does benefit from using more than one view. For example, for sequence tagging, adding “forward” and “backward” views on top of the “future” and “past” views improved performance (see Table 2). We believe you could perhaps set k=1 and get good results if you sampled a different view for each example, but this would cause the model to train much slower than when learning from all views simultaneously. ", "We want to emphasize that CVT is applicable to multiple domains, achieving state-of-the-art results for NLP tasks as well as vision ones. We believe our results on NLP tasks are particularly important because:\n(1) They use external unlabeled data instead of artificially making the dataset semi-supervised (as in standard semi-supervised vision benchmarks).\n(2) Tasks like dependency parsing and NER have been studied for decades and are widely used in industry (whereas CIFAR-10 is a bit more of a \"toy\" task). \n(3) The discrete structure of language makes applying many recent semi-supervised learning methods difficult. Many prior works (e.g., all seven papers in Table 1) only evaluate on vision tasks. As we discuss in our paper, we were unable to apply these methods to NLP tasks successfully.\n\nTo further demonstrate the utility of our method, we recently applied CVT to dependency parsing and achieved excellent results. We use a graph-based dependency parser similar to the one from Dozat and Manning (ICLR 2017). CVT improves over a fully supervised system by 0.7 LAS points on the Penn Treebank (using Stanford Dependencies), and achieves a new state-of-the-art for graph-based dependency parsing. We will update the paper with these results soon." ]
[ 2, 5, 7, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJubPWZRW", "iclr_2018_BJubPWZRW", "iclr_2018_BJubPWZRW", "ByvGWXyGf", "iclr_2018_BJubPWZRW", "Bkp-xJ5xf", "HJhFVtqez", "SkDHZacef", "iclr_2018_BJubPWZRW" ]