Columns:
  source: sequence
  source_labels: sequence
  rouge_scores: sequence
  paper_id: string (lengths 9 to 11)
  ic: unknown
  target: sequence
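Each row pairs a paper's source sentences with per-sentence ROUGE scores, binary extractive labels, and a short target summary. Below is a minimal sketch of how such a dataset could be loaded and inspected with the Hugging Face `datasets` library; the repository path "username/extractive-summ" is a placeholder, not the actual name.

```python
# Minimal sketch: load one split and inspect a single row.
# The dataset path below is hypothetical -- substitute the real repository name.
from datasets import load_dataset

ds = load_dataset("username/extractive-summ", split="train")

row = ds[0]
print(row["paper_id"])            # e.g. "S1eUd64tDr"
print(len(row["source"]))         # number of source sentences
print(row["target"])              # reference summary sentence(s)
print(row["source_labels"][:10])  # binary extractive labels per sentence
print(row["rouge_scores"][:10])   # per-sentence ROUGE scores against the target
```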
[ "Due to a resource-constrained environment, network compression has become an important part of deep neural networks research.", "In this paper, we propose a new compression method, Inter-Layer Weight Prediction (ILWP) and quantization method which quantize the predicted residuals between the weights in all convolution layers based on an inter-frame prediction method in conventional video coding schemes.", "Furthermore, we found a phenomenon Smoothly Varying Weight Hypothesis (SVWH) which is that the weights in adjacent convolution layers share strong similarity in shapes and values, i.e., the weights tend to vary smoothly along with the layers.", "Based on SVWH, we propose a second ILWP and quantization method which quantize the predicted residuals between the weights in adjacent convolution layers.", "Since the predicted weight residuals tend to follow Laplace distributions with very low variance, the weight quantization can more effectively be applied, thus producing more zero weights and enhancing the weight compression ratio.", "In addition, we propose a new inter-layer loss for eliminating non-texture bits, which enabled us to more effectively store only texture bits.", "That is, the proposed loss regularizes the weights such that the collocated weights between the adjacent two layers have the same values.", "Finally, we propose an ILWP with an inter-layer loss and quantization method.", "Our comprehensive experiments show that the proposed method achieves a much higher weight compression rate at the same accuracy level compared with the previous quantization-based compression methods in deep neural networks.", "Deep neural networks have demonstrated great performance for various tasks in many fields, such as image classification (LeCun et al. 1990a; Krizhevsky et al. 2012; He et al. 2016) , object detection (Ren et al. 2015; He et al. 2017; Redmon & Farhadi 2018) , image captioning (Jia et al., 2015) , and speech recognition Xiong et al. 2018) .", "Wide and deep neural networks achieved great accuracy with the aid of the enormous number of weight parameters and high computational cost (Simonyan & Zisserman 2014; He et al. 2016; ).", "However, as demands toward constructing the neural networks in the resource-constrained environments have been increasing, making the resource-efficient neural network while maintaining its accuracy becomes an important research area of deep neural networks.", "Several studies have aimed to solve this problem.", "In LeCun et al. (1990b) , Hassibi & Stork (1993) , Han et al. (2015b) and Li et al. (2016) , network pruning methods were proposed for memory-efficient architecture, where unimportant weights were forced to have zero values in terms of accuracy.", "In Fiesler et al. (1990) , Gong et al. (2014) and Han et al. (2015a) , weights were quantized and stored in memory, enabling less memory usage of deep neural networks.", "On the other hand, some literature decomposed convolution operations into sub operations (e.g., depth-wise separable convolution) requiring less computation costs at similar accuracy levels (Howard et al. 2017; Sandler et al. 2018; Ma et al. 
2018) .", "In this paper, we show that the weights between the adjacent two convolution layers tend to share high similarity in shapes and values.", "We call this phenomenon Smoothly Varying Weight Hypothesis (SVWH).", "This paper explores an efficient neural network method that fully takes the advantage of SVWH.", "Specifically, inspired by the prediction techniques widely used in video compression field (Wiegand et al. 2003; Sullivan et al. 2012 ), we propose a new weight compression scheme based on an inter-layer weight prediction technique, which can be successfully incorporated into the depth-wise separable convolutional blocks (Howard et al. 2017; Sandler et al. 2018; Ma et al. 2018) .", "Contributions: Main contributions of this paper are listed below:", "• From comprehensive experiments, we find out that the weights between the adjacent layers tend to share strong similarities, which lead us to establishing SVWH.", "• Based on SVWH, we propose a simple and effective Inter-Layer Weight Prediction (ILWP) and quantization framework enabling a more compressed neural networks than only applying quantization on the weights of the neural networks.", "• To further enhance the effectiveness of the proposed ILWP, we devise a new regularization function, denoted as inter-layer loss, that aims to minimize the difference between collocated weight values in the adjacent layers, resulting in significant bit saving for non-texture bits (i.e., bits for indices of prediction).", "• Our comprehensive experiments demonstrate that, the proposed scheme achieves about 53% compression ratio on average in 8-bit quantization at the same accuracy level compared to the traditional quantization method (without prediction) in both MobileNetV1 (Howard et al., 2017) and MobileNetV2 (Sandler et al., 2018) .", "We propose a new inter-layer weight prediction with inter-layer loss for efficient deep neural networks.", "Motivated by our observation that the weights in the adjacent layers tend to vary smoothly, we successfully build a new weight compression framework combining the inter-layer weight prediction scheme, the inter-layer loss, quantization and Huffman coding under SVWH condition.", "Intuitively, our prediction scheme significantly decreases the entropy of the weights by making them much narrower Laplace distributions, thus leading remarkable compression ratio of the weight parameters in neural networks.", "Also, the proposed inter-layer loss effectively eliminates the nontexture bits for the best predictions.", "To the best of our knowledge, this work is the first to report the phenomenon of the weight similarities between the neighbor layers and to build a prediction-based weight compression scheme in modern deep neural network architectures." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.7457627058029175, 0.31578946113586426, 0.6666666865348816, 0.2745097875595093, 0.17777776718139648, 0.19999998807907104, 0.23529411852359772, 0.19607841968536377, 0.0615384578704834, 0.07843136787414551, 0.07843136787414551, 0, 0.10344827175140381, 0.1249999925494194, 0.0714285671710968, 0.31111109256744385, 0.1249999925494194, 0.10526315122842789, 0.2028985470533371, 0, 0.21739129722118378, 0.4000000059604645, 0.1515151411294937, 0.1904761791229248, 0.21621620655059814, 0.31578946113586426, 0.1599999964237213, 0.05714285373687744, 0.2641509473323822 ]
S1eUd64tDr
true
[ "We propose a new compression method, Inter-Layer Weight Prediction (ILWP) and quantization method which quantize the predicted residuals between the weights in convolution layers." ]
[ "There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference.", "However, to the best of our knowledge, none target a specific number of floating-point operations (FLOPs) as part of a single end-to-end optimization objective, despite reporting FLOPs as part of the results.", "Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression.", "In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.", "Neural networks are a class of parametric models that achieve the state of the art across a broad range of tasks, but their heavy computational requirements hinder practical deployment on resourceconstrained devices, such as mobile phones, Internet-of-things (IoT) devices, and offline embedded systems.", "Many recent works focus on alleviating these computational burdens, mainly falling under two non-mutually exclusive categories: manually designing resource-efficient models, and automatically compressing popular architectures.", "In the latter, increasingly sophisticated techniques have emerged BID3 BID4 BID5 , which have achieved respectable accuracy-efficiency operating points, some even Pareto-better than that of the original network; for example, network slimming BID3 reaches an error rate of 6.20% on CIFAR-10 using VGGNet BID9 with a 51% FLOPs reduction-an error decrease of 0.14% over the original.However, to the best of our knowledge, none of the methods impose a FLOPs constraint as part of a single end-to-end optimization objective.", "MorphNets BID0 apply an L 1 norm, shrinkage-based relaxation of a FLOPs objective, but for the purpose of searching and training multiple models to find good network architectures; in this work, we learn a sparse neural network in a single training run.", "Other papers directly target device-specific metrics, such as energy usage BID15 , but the pruning procedure does not explicitly include the metrics of interest as part of the optimization objective, instead using them as heuristics.", "Falling short of continuously deploying a model candidate and measuring actual inference time, as in time-consuming neural architectural search BID11 , we believe that the number of FLOPs is reasonable to use as a proxy measure for actual latency and energy usage; across variants of the same architecture, Tang et al. suggest that the number of FLOPs is a stronger predictor of energy usage and latency than the number of parameters BID12 .Indeed", ", there are compelling reasons to optimize for the number of FLOPs as part of the training objective: First, it would permit FLOPs-guided compression in a more principled manner. Second", ", practitioners can directly specify a desired target of FLOPs, which is important in deployment. Thus,", "our main contribution is to present a novel extension of the prior state of the art BID6 to incorporate the number of FLOPs as part of the optimization objective, furthermore allowing practitioners to set and meet a desired compression target." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1599999964237213, 0.3396226465702057, 0.1690140813589096, 0.8064516186714172, 0.21212120354175568, 0.037735845893621445, 0.17204301059246063, 0.28125, 0.24137930572032928, 0.22499999403953552, 0.2857142686843872, 0.1818181723356247, 0.4067796468734741 ]
HkG5JF6Do7
true
[ "We extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective, and we show that, given a desired FLOPs requirement, different neural networks are successfully trained." ]
[ "Unpaired image-to-image translation among category domains has achieved remarkable success in past decades.", "Recent studies mainly focus on two challenges.", "For one thing, such translation is inherently multimodal due to variations of domain-specific information (e.g., the domain of house cat has multiple fine-grained subcategories).", "For another, existing multimodal approaches have limitations in handling more than two domains, i.e. they have to independently build one model for every pair of domains.", "To address these problems, we propose the Hierarchical Image-to-image Translation (HIT) method which jointly formulates the multimodal and multi-domain problem in a semantic hierarchy structure, and can further control the uncertainty of multimodal.", "Specifically, we regard the domain-specific variations as the result of the multi-granularity property of domains, and one can control the granularity of the multimodal translation by dividing a domain with large variations into multiple subdomains which capture local and fine-grained variations.", "With the assumption of Gaussian prior, variations of domains are modeled in a common space such that translations can further be done among multiple domains within one model.", "To learn such complicated space, we propose to leverage the inclusion relation among domains to constrain distributions of parent and children to be nested.", "Experiments on several datasets validate the promising results and competitive performance against state-of-the-arts.", "Image-to-image translation is the process of mapping images from one domain to another, during which changing the domain-specific aspect and preserving the domain-irrelevant information.", "It has wide applications in computer vision and computer graphics Isola et al. (2017) ; Ledig et al. (2017) ; Zhu et al. (2017a) ; Liu et al. (2017) ; such as mapping photographs to edges/segments, colorization, super-resolution, inpainting, attribute and category transfer, style transfer, etc.", "In this work, we focus on the task of attribute and category transfer, i.e. a set of images sharing the same attribute or category label is defined as a domain 1 .", "Such task has achieved significant development and impressive results in terms of image quality in recent years, benefiting from the improvement of generative adversarial nets (GANs) Goodfellow et al. (2014) ; Mirza & Osindero (2014) .", "Representative methods include pix2pix Isola et al. (2017) , UNIT Liu et al. (2017) , CycleGAN Zhu et al. (2017a) , DiscoGAN Kim et al. (2017) , DualGAN Kim et al. (2017) and DTN Taigman et al. (2017) .", "More recently the study of this task mainly focus on two challenges.", "The first is the ability of involving translation among several domains into one model.", "It is quite a practical need for users.", "Using most methods, we have to train a separate model for each pair of domains, which is obviously inefficient.", "To deal with such problem, StarGAN Choi et al. (2018) leverages one generator to transform an image to any domain by taking both the image and the target domain label as conditional input supervised by an auxiliary domain classifier.", "Another challenge is the multimodal problem, which is early addressed by BicycleGAN Zhu et al. 
(2017b) .", "Most techniques including aforementioned StarGAN can only give a single determinate output in target domain given an image from source domain.", "However, for many translation task, the mapping is naturally multimodal.", "As shown in Fig.1 , a cat could have many possible appearances such as being a Husky, a Samoyed or other dogs when translated to the dog domain.", "To address Figure 1: An illustration of a hierarchy structure and the distribution relationship in a 2D space among categories in such hierarchy.", "Multi-domain translation is shown in the horizontal direction (blue dashed arrow) while multimodal translation is indicated in the vertical direction (red dashed arrow).", "Since one child category is a special case of its parent, in the distribution space it is a conditional distribution of its parent, leading to the nested relationship between them.", "this issue, recent works including BicycleGAN Zhu et al. (2017b) , MUNIT Huang et al. (2018) and DRIT Lee et al. (2018) model a continuous and multivariant distribution independently for each domain to represent the variations of domain-specific information, and they have achieved diverse and high-quality results for several two-domain translation tasks.", "In this paper , we aim at involving the abilities of both multi-domain and multimodal translation into one model.", "As shown in Fig.1 , it is noted that categories have the natural hierarchical relationships.", "For instance, the cat, dog and bird are three special children of the animal category since they share some common visual attributes.", "Furthermore, in the dog domain, some samples are named as husky and some of them are called samoyed due to the appearance variations of being the dog.", "Of course, one can continue to divide the husky to be finer-grained categories based on the variations of certain visual attributes.", "Such hierarchical relationships widely exist among categories in real world since it is a natural way for our human to understand objects according to our needs in that time.", "We go back to the image translation task, the multi-domain and multimodal issues can be understood from horizontal and vertical views respectively.", "From the horizontal view as the blue dashed arrow indicates, multi-domain translation is the transformation in a flat level among categories.", "From the vertical view as the red dashed arrow indicates, multimodal translation further considers variations within target category, i.e. 
the multimodal issue is actually due to the multi-granularity property of categories.", "In the extreme case, every instance is a variation mode of the domain-specific information.", "Inspired by these observations, we propose a Hierarchical Image-to-image Translation (HIT) method which translates object images among both multiple category domains in a same hierarchy level and their children domains.", "To this end, our method models the variations of all domains in forms of multiple continuous and multivariant Gaussian distributions in a common space.", "This is different from previous methods which model the same Gaussian distribution for two domains in independent spaces and thus can not work with only one generator.", "Due to the hierarchical relationships among domains, distribution of a child domain is the conditional one of its parent domain.", "Take such principle into consideration, distributions of domains should be nested between a parent and its children, as a 2D illustration shown in Fig.1 .", "To effectively supervise the learning of such distributions space, we further improve the traditional conditional GAN framework to possess the hierarchical discriminability via a hierarchical classifier.", "Experiments on several categories and attributes datasets validate the competitive performance of HIT against state-of-the-arts.", "In this paper we propose the Hierarchical Image-to-image Translation (HIT) method which incorporates multi-domain and multimodal translation into one model.", "Experiments on three datasets especially on CelebA show that the proposed method can well achieve such granularity controlled translation objectives, i.e. the variation modes of outputs can be specified owe to the nested distributions.", "However, current work has a limitation, i.e. the assumption of single Gaussian for each category domain.", "On one hand, though Gaussian distribution prior is a good approximation for many data, it may not be applicable when scale of available training data is small but variations within domain are large such as the used hierarchical data on ImageNet and ShapeNet in this paper.", "On the other hand, the parent distributions should be mixture of Gaussians given multiple single Gaussians of its children.", "This issue would lead to sparse sampling around the centers of parent distributions and poor nested results if samples are not enough to fulfill the whole space.", "We have made efforts to the idea of mixture of Gaussians and found that it is hard to compute the KL divergence between two mixture of Gaussians which does not have an analytical solution.", "Besides, the re-parameterize trick for distribution sampling during SGD optimization can not be transferred to the case of mixture of Gaussians.", "A better assumption to realize the nested relationships among parent-children distributions is a promising direction for our future research." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0, 0.1764705777168274, 0.11428570747375488, 0.21052631735801697, 0.1463414579629898, 0, 0.12903225421905518, 0.09090908616781235, 0.19354838132858276, 0.09756097197532654, 0.0555555522441864, 0.09756097197532654, 0.06896550953388214, 0, 0.08695651590824127, 0, 0.0714285671710968, 0.1463414579629898, 0.0833333283662796, 0.06896550953388214, 0.21052631735801697, 0.0555555522441864, 0.06896550953388214, 0.1599999964237213, 0.0624999962747097, 0.11538461595773697, 0.2857142686843872, 0, 0.06666666269302368, 0.13333332538604736, 0.0714285671710968, 0.05714285373687744, 0.41379308700561523, 0.1428571343421936, 0.1621621549129486, 0, 0.10810810327529907, 0.12903225421905518, 0.0555555522441864, 0.07692307233810425, 0.060606054961681366, 0.0624999962747097, 0.0833333283662796, 0.3448275923728943, 0.14999999105930328, 0, 0.037735845893621445, 0, 0.11764705181121826, 0.1111111044883728, 0.0714285671710968, 0.0714285671710968 ]
HkljrTEKvr
true
[ "Granularity controled multi-domain and multimodal image to image translation method" ]
[ "Recurrent Neural Networks (RNNs) are very successful at solving challenging problems with sequential data.", "However, this observed efficiency is not yet entirely explained by theory.", "It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN.", "Such networks, however, are not very often applied to real life tasks.", "In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit (ReLU), and show that they also benefit from properties of universality and depth efficiency.", "Our theoretical results are verified by a series of extensive computational experiments.", "Recurrent Neural Networks are firmly established to be one of the best deep learning techniques when the task at hand requires processing sequential data, such as text, audio, or video BID10 BID18 BID7 .", "The ability of these neural networks to efficiently represent a rich class of functions with a relatively small number of parameters is often referred to as depth efficiency, and the theory behind this phenomenon is not yet fully understood.", "A recent line of work BID12 BID5 focuses on comparing various deep learning architectures in terms of their expressive power.It was shown in that ConvNets with product pooling are exponentially more expressive than shallow networks, that is there exist functions realized by ConvNets which require an exponentially large number of parameters in order to be realized by shallow nets.", "A similar result also holds for RNNs with multiplicative recurrent cells BID12 .", "We aim to extend this analysis to RNNs with rectifier nonlinearities which are often used in practice.", "The main challenge of such analysis is that the tools used for analyzing multiplicative networks, namely, properties of standard tensor decompositions and ideas from algebraic geometry, can not be applied in this case, and thus some other approach is required.", "Our objective is to apply the machinery of generalized tensor decompositions, and show universality and existence of depth efficiency in such RNNs.", "In this paper, we sought a more complete picture of the connection between Recurrent Neural Networks and Tensor Train decomposition, one that involves various nonlinearities applied to hidden states.", "We showed how these nonlinearities could be incorporated into network architectures and provided complete theoretical analysis on the particular case of rectifier nonlinearity, elaborating on points of generality and expressive power.", "We believe our results will be useful to advance theoretical understanding of RNNs.", "In future work, we would like to extend the theoretical analysis to most competitive in practice architectures for processing sequential data such as LSTMs and attention mechanisms.A PROOFS Lemma 3.1.", "Under the notation introduced in eq. (9), the score function can be written as DISPLAYFORM0 Proof.", "DISPLAYFORM1 rt−1rt h(1) r1 DISPLAYFORM2 r1r2 h(1) r1 DISPLAYFORM3 = . . . DISPLAYFORM4 Proposition A.1.", "If we replace the generalized outer product ⊗ ξ in eq. (16) with the standard outer product ⊗, we can subsume matrices C (t) into tensors G (t) without loss of generality.Proof.", "Let us rewrite hidden state equation eq. 
(16) after transition from ⊗ ξ to ⊗: DISPLAYFORM5 We see that the obtained expression resembles those presented in eq. (10) with TT-cores G (t) replaced byG (t) and thus all the reasoning applied in the absence of matrices C (t) holds valid.Proposition A.2.", "Grid tensor of generalized shallow network has the following form (eq.", "(20)): DISPLAYFORM6 denote an arbitrary sequence of templates.", "Corresponding element of the grid tensor defined in eq. FORMULA1 has the following form: DISPLAYFORM7 Proposition A.3.", "Grid tensor of a generalized RNN has the following form: DISPLAYFORM8 Proof.", "Proof is similar to that of Proposition A.2 and uses eq. FORMULA0 to compute the elements of the grid tensor.Lemma 5.1.", "Given two generalized RNNs with grid tensors Γ A (X), Γ B (X), and arbitrary ξ-nonlinearity, there exists a generalized RNN with grid tensor Γ C (X) satisfying DISPLAYFORM9 Proof.", "Let these RNNs be defined by the weight parameters DISPLAYFORM10 and DISPLAYFORM11 We claim that the desired grid tensor is given by the RNN with the following weight settings.", "DISPLAYFORM12 It is straightforward to verify that the network defined by these weights possesses the following property: DISPLAYFORM13 , 0 < t < T, and h DISPLAYFORM14 B , concluding the proof.", "We also note that these formulas generalize the well-known formulas for addition of two tensors in the Tensor Train format (Oseledets, 2011).Proposition", "A.4. For any associative", "and commutative binary operator ξ, an arbitrary generalized rank 1 shallow network with ξ-nonlinearity can be represented in a form of generalized RNN with unit ranks (R 1 = · · · = R T −1 = 1) and ξ-nonlinearity. DISPLAYFORM15 be the", "parameters specifying the given generalized shallow network. Then the following weight", "settings provide the equivalent generalized RNN (with h (0) being the unity of the operator ξ). DISPLAYFORM16 Indeed, in", "the notation defined above, hidden states of generalized RNN have the following form:Theorem 5.3 (Expressivity 2). For every value of R there", "exists an open set (which thus has positive measure) of generalized RNNs with rectifier nonlinearity ξ(x, y) = max(x, y, 0), such that for each RNN in this open set the corresponding grid tensor can be realized by a rank 1 shallow network with rectifier nonlinearity.Proof. As before, let us denote by", "I (p,q) a matrix of size p × q such that I (p,q) ij = δ ij , and by a (p1,p2,...p d ) we denote a tensor of size p 1 × · · · × p d with each entry being a (sometimes we will omit the dimensions when they can be inferred from the context). Consider the following weight", "settings for a generalized RNN. DISPLAYFORM17 The RNN defined", "by these weights has the property that Γ (X) is a constant tensor with each entry being 2(M R) T −1 , which can be trivially represented by a rank 1 generalized shallow network. We will show that this property", "holds under a small perturbation of C (t) , G (t) and F. Let us denote each of these perturbation (and every tensor appearing size of which can be assumed indefinitely small) collectively by ε. Applying eq. FORMULA0 we obtain", "(with ξ(x, y) = max(x, y, 0)). where we have used a simple property", "connecting ⊗ ξ with ξ(x, y) = max(x, y, 0) and ordinary ⊗: if for tensors A and B each entry of A is greater than each entry of B, A ⊗ ξ B = A ⊗ 1. The obtained grid tensors can be represented", "using rank 1 generalized shallow networks with the following weight settings. 
λ = 1, DISPLAYFORM18 DISPLAYFORM19 ε (2(MR)", "T−1 + ε), t = 1, 0, t > 1, where F ε is the feature matrix of the corresponding perturbed network." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0714285671710968, 0, 0.04255318641662598, 0, 0.07547169178724289, 0.07692307233810425, 0.043478257954120636, 0.2083333283662796, 0.0634920597076416, 0.1538461446762085, 0.13333332538604736, 0.07843136787414551, 0.11764705181121826, 0.1395348757505417, 0.1904761791229248, 0.07407406717538834, 0.04444444179534912, 0, 0, 0.1428571343421936, 0.09836065024137497, 0.07999999821186066, 0.09090908616781235, 0.06451612710952759, 0.07692307233810425, 0.11428570747375488, 0.10526315122842789, 0.10526315122842789, 0.0476190447807312, 0.05714285373687744, 0, 0.12765957415103912, 0, 0.06451612710952759, 0.05714285373687744, 0.06557376682758331, 0.10344827175140381, 0, 0.03999999538064003, 0.07999999821186066, 0, 0.1304347813129425, 0.1875, 0.060606054961681366 ]
r1gNni0qtm
true
[ "Analysis of expressivity and generality of recurrent neural networks with ReLu nonlinearities using Tensor-Train decomposition." ]
[ "While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.", "In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image.", "In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step.", "We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.", "In a first observation in BID25 it was found that deep neural networks exhibit unstable behavior to small perturbations in the input.", "For the task of image classification this means that two visually indistinguishable images may have very different outputs, resulting in one of them being misclassified even if the other one is correctly classified with high confidence.", "Since then, a lot of research has been done to investigate this issue through the construction of adversarial examples: given a correctly classified image x, we look for an image y which is visually indistinguishable from x but is misclassified by the network.", "Typically, the image y is constructed as y = x + r, where r is an adversarial perturbation that is supposed to be small in a suitable sense (normally, with respect to an p norm).", "Several algorithms have been developed to construct adversarial perturbations, see BID9 BID18 ; BID14 ; BID16 ; BID5 and the review paper BID0 .Even", "though such pathological cases are very unlikely to occur in practice, their existence is relevant since malicious attackers may exploit this drawback to fool classifiers or other automatic systems. Further", ", adversarial perturbations may be constructed in a black-box setting (i.e., without knowing the architecture of the DNN but only its outputs) BID19 BID17 and also in the physical world BID14 BID1 BID3 BID24 . This has", "motivated the investigation of defenses, i.e., how to make the network invulnerable to such attacks, see BID13 ; BID4 ; BID16 ; BID27 ; BID28 ; BID20 ; BID2 ; BID12 . In most", "cases, adversarial examples are artificially created and then used to retrain the network, which becomes more stable under these types of perturbations.Most of the work on the construction of adversarial examples and on the design of defense strategies has been conducted in the context of small perturbations r measured in the ∞ norm. However", ", this is not necessarily a good measure of image similarity: e.g., for two translated images x and y, the norm of x−y is not small in general, even though x and y will look indistinguishable if the translation is small. Several", "papers have investigated the construction of adversarial perturbations not designed for norm proximity BID21 BID24 BID3 BID6 BID29 .In this", "work, we build up on these ideas and investigate the construction of adversarial deformations. In other", "words, the misclassified image y is not constructed as an additive perturbation y = x + r, but as a deformation y = x • (id + τ ), where τ is a vector field defining the transformation. In this", "case, the similarity is not measured through a norm of y − x, but instead through a norm of τ , which quantifies the deformation between y and x.We develop an efficient algorithm for the construction of adversarial deformations, which we call ADef. 
It is based", "on the main ideas of DeepFool BID18 , and iteratively constructs the smallest deformation to misclassify the image. We test the", "procedure on MNIST (LeCun) (with convolutional neural networks) and on ImageNet (Russakovsky et al., 2015) (with Inception-v3 BID26 and ResNet-101 BID10 ). The results", "show that ADef can succesfully fool the classifiers in the vast majority of cases (around 99%) by using very small and imperceptible deformations. We also test", "our adversarial attacks on adversarially trained networks for MNIST. Our implementation", "of the algorithm can be found at https://gitlab.math. ethz.ch/tandrig/ADef.The", "results of this work have initially appeared in the master's thesis BID8 , to which we refer for additional details on the mathematical aspects of this construction. While writing this paper", ", we have come across BID29 , in which a similar problem is considered and solved with a different algorithm. Whereas in BID29 the authors", "use a second order solver to find a deforming vector field, we show how a first order method can be formulated efficiently and justify a smoothing operation, independent of the optimization step. We report, for the first time", ", success rates for adversarial attacks with deformations on ImageNet. The topic of deformations has", "also come up in BID11 , in which the authors introduce a class of learnable modules that deform inputs in order to increase the performance of existing DNNs, and BID7 , in which the authors introduce a method to measure the invariance of classifiers to geometric transformations.", "In this work, we proposed a new efficient algorithm, ADef, to construct a new type of adversarial attacks for DNN image classifiers.", "The procedure is iterative and in each iteration takes a gradient descent step to deform the previous iterate in order to push to a decision boundary.We demonstrated that with almost imperceptible deformations, state-of-the art classifiers can be fooled to misclassify with a high success rate of ADef.", "This suggests that networks are vulnerable to different types of attacks and that simply training the network on a specific class of adversarial examples might not form a sufficient defense strategy.", "Given this vulnerability of neural networks to deformations, we wish to study in future work how ADef can help for designing possible defense strategies.", "Furthermore, we also showed initial results on fooling adversarially trained networks.", "Remarkably, PGD trained networks on MNIST are more resistant to adversarial deformations than ADef trained networks.", "However, for this result to be more conclusive, similar tests on ImageNet will have to be conducted.", "We wish to study this in future work.", "T from the MNIST experiments.", "Deformations that fall to the left of the vertical line at ε = 3 are considered successful.", "The networks in the first column were trained using the original MNIST data, and the networks in the second and third columns were adversarially trained using ADef and PGD, respectively." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.14999999105930328, 0.3478260934352875, 0.0624999962747097, 0.1538461446762085, 0.07843136787414551, 0.178571417927742, 0.1249999925494194, 0.14999999105930328, 0.04255318641662598, 0.1538461446762085, 0.08888888359069824, 0.17241379618644714, 0.07547169178724289, 0.15789473056793213, 0.11764705181121826, 0.0833333283662796, 0.2545454502105713, 0.1666666567325592, 0, 0.1395348757505417, 0.06896550953388214, 0.13333332538604736, 0.08888888359069824, 0.10256409645080566, 0.1599999964237213, 0.1249999925494194, 0.11999999731779099, 0.31578946113586426, 0.16949151456356049, 0.21739129722118378, 0.1463414579629898, 0, 0.1875, 0.060606054961681366, 0.1538461446762085, 0, 0.11764705181121826, 0 ]
Hk4dFjR5K7
true
[ "We propose a new, efficient algorithm to construct adversarial examples by means of deformations, rather than additive perturbations." ]
[ "Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable.", "Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients.", "In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck.", "By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients.", "We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms.", "Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running.", "We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods.", "The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings.", "Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.", "Adversarial learning methods provide a promising approach to modeling distributions over highdimensional data with complex internal correlation structures.", "These methods generally use a discriminator to supervise the training of a generator in order to produce samples that are indistinguishable from the data.", "A particular instantiation is generative adversarial networks, which can be used for high-fidelity generation of images BID21 and other highdimensional data BID45 BID46 BID9 .", "Adversarial methods can also be used to learn reward functions in the framework of inverse reinforcement learning BID10 BID12 , or to directly imitate demonstrations BID19 .", "However, they suffer from major optimization challenges, one of which is balancing the performance of the generator and discriminator.", "A discriminator that achieves very high accuracy can produce relatively uninformative gradients, but a weak discriminator can also hamper the generator's ability to learn.", "These challenges have led to widespread interest in a variety of stabilization methods for adversarial learning algorithms BID24 BID4 .In", "this work, we propose a simple regularization technique for adversarial learning, which constrains the information flow from the inputs to the discriminator using a variational approximation to the information bottleneck. By", "enforcing a constraint on the mutual information between the input observations and the discriminator's internal representation, we can encourage the discriminator to learn a representation that has heavy overlap between the data and the generator's distribution, thereby effectively modulating the discriminator's accuracy and maintaining useful and informative gradients for the generator. Our", "approach to stabilizing adversarial learning can be viewed as an adaptive variant of instance noise BID39 . However", ", we show that the adaptive nature of this method is critical. 
Constraining", "the mutual information between the discriminator's internal representation and the input allows the regularizer to directly limit the discriminator's accuracy, which automates the choice of noise magnitude and applies this noise to a compressed representation of the input that is specifically optimized to model the most discerning differences between the generator and data distributions.The main contribution of this work is the variational discriminator bottleneck (VDB), an adaptive stochastic regularization method for adversarial learning that substantially improves performance across a range of different application domains, examples of which are available in FIG0 . Our method can", "be easily applied to a variety of tasks and architectures. First, we evaluate", "our method on a suite of challenging imitation tasks, including learning highly acrobatic skills from mocap data with a simulated humanoid character. Our method also enables", "characters to learn dynamic continuous control skills directly from raw video demonstrations, and drastically improves upon previous work that uses adversarial imitation learning. We further evaluate the", "effectiveness of the technique for inverse reinforcement learning, which recovers a reward function from demonstrations in order to train future policies. Finally, we apply our framework", "to image generation using generative adversarial networks, where employing VDB improves the performance in many cases.", "To interpret the effects of the VDB, we consider the results presented by , which show that for two distributions with disjoint support, the optimal discriminator can perfectly classify all samples and its gradients will be zero almost everywhere.", "Thus, as the discriminator converges to the optimum, the gradients for the generator vanishes accordingly.", "To address this issue, proposed applying continuous noise to the discriminator inputs, thereby ensuring that the distributions have continuous support everywhere.", "In practice, if the original distributions are sufficiently distant from each other, the added noise will have negligible effects.", "As shown by , the optimal choice for the variance of the noise to ensure convergence can be quite delicate.", "In our method, by first using a learned encoder to map the inputs to an embedding and then applying an information bottleneck on the embedding, we can dynamically adjust the variance of the noise such that the distributions not only share support in the embedding space, but also have significant overlap.", "Since the minimum amount of information required for binary classification is 1 bit, by selecting an information constraint I c < 1, the discriminator is prevented from from perfectly differentiating between the distributions.", "To illustrate the effects of the VDB, we consider a simple task of training a discriminator to differentiate between two Gaussian distributions.", "FIG1 visualizes the decision boundaries learned with different bounds I c on the mutual information.", "Without a VDB, the discriminator learns a sharp decision boundary, resulting in vanishing gradients for much of the space.", "But as I c decreases and the bound tightens, the decision boundary is smoothed, providing more informative gradients that can be leveraged by the generator.Taking this analysis further, we can extend Theorem 3.2 from to analyze the VDB, and show that the gradient of the generator will be non-degenerate for a small enough constraint I c , under some additional simplifying 
assumptions.", "The result in states that the gradient consists of vectors that point toward samples on the data manifold, multiplied by coefficients that depend on the noise.", "However, these coefficients may be arbitrarily small if the generated samples are far from real samples, and the noise is not large enough.", "This can still cause the generator gradient to vanish.", "In the case of the VDB, the constraint ensures that these coefficients are always bounded below.", "Due to space constraints, this result is presented in Appendix A.", "We present the variational discriminator bottleneck, a general regularization technique for adversarial learning.", "Our experiments show that the VDB is broadly applicable to a variety of domains, and yields significant improvements over previous techniques on a number of challenging tasks.", "While our experiments have produced promising results for video imitation, the results have been primarily with videos of synthetic scenes.", "We believe that extending the technique to imitating realworld videos is an exciting direction.", "Another exciting direction for future work is a more in-depth theoretical analysis of the method, to derive convergence and stability results or conditions." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.052631575614213943, 0.21052631735801697, 0.10256409645080566, 0.15789473056793213, 0.1764705777168274, 0.1621621549129486, 0.3589743673801422, 0, 0.1764705777168274, 0.05405404791235924, 0.14999999105930328, 0.19512194395065308, 0.060606054961681366, 0.052631575614213943, 0.1666666567325592, 0.19512194395065308, 0.1090909019112587, 0.24242423474788666, 0, 0.14457830786705017, 0.20689654350280762, 0.14999999105930328, 0.23255813121795654, 0.1904761791229248, 0.1875, 0.07692307233810425, 0.0714285671710968, 0.05714285373687744, 0, 0.05882352590560913, 0.1355932205915451, 0.09090908616781235, 0.05714285373687744, 0.13333332538604736, 0, 0.05882352590560913, 0, 0.052631575614213943, 0.07999999821186066, 0, 0.07407406717538834, 0.20689654350280762, 0.09756097197532654, 0.05882352590560913, 0.13333332538604736, 0.10256409645080566 ]
HyxPx3R9tm
true
[ "Regularizing adversarial learning with an information bottleneck, applied to imitation learning, inverse reinforcement learning, and generative adversarial networks." ]
[ "Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions.", "While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. ", "Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy.", "We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. ", "Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data.", "We find that this augmentation leads to reduced sensitivity to high frequency noise (similar to Gaussian) while retaining the ability to take advantage of relevant high frequency information in the image (similar to Cutout).", "We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. ", "Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training.", "Patch Gaussian augmentation overcomes the accuracy/robustness tradeoff observed in other augmentation strategies.", "Larger σ of Patch Gaussian (→) improves mean corruption error (mCE) and maintains clean accuracy, whereas larger σ of Gaussian (→) and patch size of Cutout (→) hurt accuracy or robustness.", "More robust and accurate models are down and to the right.", "Modern deep neural networks can achieve impressive performance at classifying images in curated datasets (Karpathy, 2011; Krizhevsky et al., 2012; Tan & Le, 2019 ).", "Yet, they lack robustness to various forms of distribution shift that typically occur in real-world settings.", "For example, neural networks are sensitive to small translations and changes in scale (Azulay & Weiss, 2018) , blurring and additive noise (Dodge & Karam, 2017) , small objects placed in images (Rosenfeld et al., 2018) , and even different images from a distribution similar to the training set (Recht et al., 2019; .", "For models to be useful in the real world, they need to be both accurate on a high-quality held-out set of images, which we refer to as \"clean accuracy,\" and robust on corrupted images, which we refer to as \"robustness.\"", "Most of the literature in machine learning has focused on architectural changes (Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016; Szegedy et al., 2017; Han et al., 2017; Hu et al., 2017; Liu et al., 2018) to improve clean accuracy but interest has recently shifted toward robustness as well.", "Research in neural network robustness has tried to quantify the problem by establishing benchmarks that directly measure it (Hendrycks & Dietterich, 2018; Gu et al., 2019) and comparing the performance of humans and neural networks (Geirhos et al., 2018b; Elsayed et al., 2018) .", "Others have tried to understand robustness by highlighting systemic failure modes of current methods.", "For instance, networks exhibit excessive invariance to visual features (Jacobsen et al., 2018) , texture bias (Geirhos et al., 2018a) , sensitivity to 
worst-case (adversarial) perturbations (Goodfellow et al., 2014) , and a propensity to rely on non-robust, but highly predictive features for classification (Doersch et al., 2015; Ilyas et al., 2019) .", "Of particular relevance, Ford et al. (2019) has established connections between popular notions of adversarial robustness and some measures of distribution shift considered here.", "Another line of work has attempted to increase model robustness performance, either by projecting out superficial statistics (Wang et al., 2019) , via architectural improvements (Cubuk et al., 2017) , pretraining schemes (Hendrycks et al., 2019) , or with the use of data augmentations.", "Data augmentation increases the size and diversity of the training set, and provides a simple way to learn invariances that are challenging to encode architecturally (Cubuk et al., 2017) .", "Recent work in this area includes learning better transformations (DeVries & Taylor, 2017; Zhang et al., 2017; Zhong et al., 2017) , inferring combinations of them (Cubuk et al., 2018) , unsupervised methods (Xie et al., 2019) , theory of data augmentation (Dao et al., 2018) , and applications for one-shot learning (Asano et al., 2019) .", "Despite these advances, individual data augmentation methods that improve robustness do so at the expense of reduced clean accuracy.", "Further, achieving robustness on par with the human visual system is thought to require major changes in training procedures and datasets: the current state of the art in robustness benchmarks involves creating a custom dataset with styled-transferred images before training (Geirhos et al., 2018a) , and still incurs a significant drop in clean accuracy.", "The ubiquity of reported robustness/accuracy trade-offs in the literature have even led to the hypothesis that these trade-offs may be inevitable (Tsipras et al., 2018) .", "Because of this, many recent works focus on improving either one or the other (Madry et al., 2017; Geirhos et al., 2018a) .", "In this work we propose a simple data augmentation method that overcomes this trade-off, achieving improved robustness while maintaining clean accuracy.", "Our contributions are as follows:", "• We characterize a trade-off between robustness and accuracy in standard data augmentations Cutout and Gaussian (Section 2.1).", "• We describe a simple data augmentation method (which we term Patch Gaussian) that allows us to interpolate between the two augmentations above (Section 3.1).", "Despite its simplicity, Patch Gaussian achieves a new state of the art in the Common Corruptions robustness benchmark (Hendrycks & Dietterich, 2018) , while maintaining clean accuracy, indicating current methods have not reached this fundamental trade-off (Section 4.1).", "• We demonstrate that Patch Gaussian can be combined with other regularization strategies (Section 4.2) and data augmentation policies (Section 4.3).", "• We perform a frequency-based analysis (Yin et al., 2019) of models trained with Patch Gaussian and find that they can better leverage high-frequency information in lower layers, while not being too sensitive to them at later ones (Section 5.1).", "• We show a similar method can be used in adversarial training, suggesting under-explored questions about training distributions' effect on out-of-distribution robustness (Section 5.2).", "In an attempt to understand Patch Gaussian's performance, we perform a frequency-based analysis of models trained with various augmentations using the method introduced in Yin et al. 
(2019) .", "First, we perturb each image in the dataset with noise sampled at each orientation and frequency in Fourier space.", "Then, we measure changes in the network activations and test error when evaluated with these Fourier-noise-corrupted images: we measure the change in 2 norm of the tensor directly after the first convolution, as well as the absolute test error.", "This procedure yields a heatmap, which indicates model sensitivity to different frequency and orientation perturbations in the Fourier domain.", "Each image in Fig 4 shows first layer (or test error) sensitivity as a function of frequency and orientation of the sampled noise, with the middle of the image containing the lowest frequencies, and the edges of the image containing highest frequencies.", "For CIFAR-10 models, we present this analysis for the entire Fourier domain, with noise sampled with norm 4.", "For ImageNet, we focus our analysis on lower frequencies that are more visually salient add noise with norm 15.7.", "Note that for Cutout and Gaussian, we chose larger patch sizes and σs than those selected with the method in Section 3.2 in order to highlight the effect of these augmentations on sensitivity.", "Heatmaps of other models can be found in the Appendix (Figure 11 ).", "In this work, we introduced a simple data augmentation operation, Patch Gaussian, which improves robustness to common corruptions without incurring a drop in clean accuracy.", "For models that are large relative to the dataset size (like ResNet-200 on ImageNet and all models on CIFAR-10), Patch Gaussian improves clean accuracy and robustness concurrently.", "We showed that Patch Gaussian achieves this by interpolating between two standard data augmentation operations Cutout and Gaussian.", "Finally, we analyzed the sensitivity to noise in different frequencies of models trained with Cutout and Gaussian, and showed that Patch Gaussian combines their strengths without inheriting their weaknesses.", "Our method is much simpler than previous state of the art, and can be used in conjunction with other regularization and data augmentation strategies, indicating it is generally useful.", "We end by showing that applying perturbations in patches can be a powerful method to vary training distributions in the adversarial setting.", "Our results indicate current methods have not reached a fundamental robustness/accuracy trade-off, and that future work is needed to understand the effect of training distributions in o.o.d. robustness." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.1395348757505417, 0.1090909019112587, 0.09756097197532654, 0.1818181723356247, 0.1702127605676651, 0.1463414579629898, 0.1599999964237213, 0.3125, 0.08888888359069824, 0.12903225421905518, 0.04255318641662598, 0.1621621549129486, 0.16129031777381897, 0.19230768084526062, 0.1666666567325592, 0.13793103396892548, 0.05714285373687744, 0.06666666269302368, 0.13636362552642822, 0.07017543166875839, 0.2083333283662796, 0.1355932205915451, 0.14999999105930328, 0.1818181723356247, 0.2222222238779068, 0.1428571343421936, 0.1463414579629898, 0, 0.1538461446762085, 0.12765957415103912, 0.1355932205915451, 0.0952380895614624, 0.0952380895614624, 0.3478260934352875, 0.16326530277729034, 0.15789473056793213, 0.1599999964237213, 0.14999999105930328, 0.15686273574829102, 0.052631575614213943, 0.04878048226237297, 0.26923075318336487, 0.1764705777168274, 0.08888888359069824, 0.13333332538604736, 0.10526315122842789, 0.1666666567325592, 0.25, 0.1904761791229248, 0.2800000011920929 ]
HkxWXkStDB
true
[ "Simple augmentation method overcomes robustness/accuracy trade-off observed in literature and opens questions about the effect of training distribution on out-of-distribution generalization." ]
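The row above describes Patch Gaussian, an augmentation that adds Gaussian noise only inside a randomly placed square patch, interpolating between Cutout and full-image Gaussian noise. Below is a minimal NumPy sketch of that idea; the patch size and noise level are illustrative placeholders, not the values selected by the paper's search procedure.

```python
import numpy as np

def patch_gaussian(image, patch_size=25, sigma=0.3, rng=None):
    """Illustrative sketch of Patch Gaussian: add Gaussian noise only inside a
    randomly centred square patch, then clip back to the valid pixel range.
    `image` is assumed to be a float array in [0, 1] of shape (H, W, C);
    patch_size and sigma are placeholder hyperparameters."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]

    # Sample the patch centre uniformly; the patch may be clipped at the borders.
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    half = patch_size // 2
    y0, y1 = max(0, cy - half), min(h, cy + half + 1)
    x0, x1 = max(0, cx - half), min(w, cx + half + 1)

    # Binary mask: 1 inside the patch, 0 elsewhere.
    mask = np.zeros((h, w, 1), dtype=image.dtype)
    mask[y0:y1, x0:x1] = 1.0

    # Additive Gaussian noise, applied only where the mask is on.
    noise = rng.normal(0.0, sigma, size=image.shape).astype(image.dtype)
    return np.clip(image + mask * noise, 0.0, 1.0)

# Example usage on a random stand-in "image".
augmented = patch_gaussian(np.random.rand(32, 32, 3).astype(np.float32))
```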
[ "Offset regression is a standard method for spatial localization in many vision tasks, including human pose estimation, object detection, and instance segmentation.", "However, if high localization accuracy is crucial for a task, convolutional neural networks with offset regression usually struggle to deliver.", "This can be attributed to the locality of the convolution operation, exacerbated by variance in scale, clutter, and viewpoint.", "An even more fundamental issue is the multi-modality of real-world images.", "As a consequence, they cannot be approximated adequately using a single-mode model.", "Instead, we propose to use mixture density networks (MDN) for offset regression, allowing the model to manage the various modes efficiently and to learn to predict the full conditional density of the outputs given the input.", "On 2D human pose estimation in the wild, which requires accurate localization of body keypoints, we show that this yields a significant improvement in localization accuracy.", "In particular, our experiments reveal viewpoint variation as the dominant multi-modal factor.", "Further, by carefully initializing MDN parameters, we do not face any instabilities in training, which is known to be a big obstacle to widespread deployment of MDNs.", "The method can be readily applied to any task with a spatial regression component.", "Our findings highlight the multi-modal nature of real-world vision, and the significance of explicitly accounting for viewpoint variation, at least when spatial localization is concerned.", "Training deep neural networks is a non-trivial task in many ways.", "Properly initializing the weights, carefully tuning the learning rate, normalizing weights or targets, or using the right activation function can all be vital for getting a network to converge at all.", "From another perspective, it is crucial to carefully formulate the prediction task and loss on top of a rich representation to efficiently leverage all the features learned.", "For example, combining representations at various network depths has been shown to be important to deal with objects at different scales (Newell et al., 2016; Lin et al., 2017; Liu et al., 2016).", "For some issues, it is relatively straightforward to come up with a network architecture or loss formulation to address them - see, e.g.,
techniques used for multi-scale training and inference.", "In other cases it is not easy to manually devise a solution.", "For example, offset regression is extensively used in human pose estimation and instance segmentation, but it lacks high spatial precision.", "Fundamental limitations imposed by the convolution operation and downsampling in networks, as well as various other factors contribute to this -think of scale variation, variation in appearance, clutter, occlusion, and viewpoint.", "When analyzing a standard convolutional neural network (CNN) with offset regression, it seems the network knows roughly where a spatial target is located and moves towards it, but cannot get precise enough.", "How can we make them more accurate?", "That's the question we address in this paper, in the context of human pose estimation.", "Mixture density models offer a versatile framework to tackle such challenging, multi-modal settings.", "They allow for the data to speak for itself, revealing the most important modes and disentangling them.", "To the best of our knowledge, mixture density models have not been successfully integrated in 2D human pose estimation to date.", "In fact, our work has only become possible thanks to recent work of Zhou et al. (2019a) proposing an offset based method to do dense human pose estimation, object detection, depth estimation, and orientation estimation in a single forward pass.", "Essentially, in a dense fashion they classify some central region of an instance to decide if it belongs to a particular category, and then from that central location regress offsets to spatial points of interest belonging to the instance.", "In human pose estimation this would be keypoints; in instance segmentation it could be extreme points; and in tracking moving objects in a video this could be used to localize an object in a future frame Zhou et al. (2019b) ; Neven et al. (2019) ; Novotny et al. (2018) ; Cui et al. (2019) .", "This eliminates the need for a two stage top-down model or for an ad hoc post processing step in bottom-up models.", "The former would make it very slow to integrate a density estimation method, while for the latter it is unclear how to do so -if possible at all.", "In particular, we propose to use mixture density networks (MDN) to help a network disentangle the underlying modes that, when taken together, force it to converge to an average regression of a target.", "We conduct experiments on the MS COCO human pose estimation task Lin et al. 
(2014), because its metric is very sensitive to spatial localization: if the ground truth labels are displaced by just a few pixels, the scores already drop significantly, as shown in the top three rows of Table 4.", "This makes the dataset suitable for analyzing how well different models perform on high precision localization.", "Any application demanding high precision localization can benefit from our approach.", "For example, spotting extremely small broken elements on an electronic board or identifying surface defects on a steel sheet using computer vision are among such applications.", "In summary, our contributions are as follows:", "• We propose a new solution for offset regression problems in 2D using MDNs.", "To the best of our knowledge, this is the first work to propose a full conditional density estimation model for 2D human pose estimation on a large unconstrained dataset.", "The method is general and we expect it to yield significant gains in any dense spatial prediction task.", "• We show that using MDNs we can gain a deeper understanding of which modes actually make a dataset challenging.", "Here we observe that viewpoint is the most challenging mode and forces a single-mode model to settle for a sub-optimal solution.", "We have shown that mixture density models significantly improve spatial offset regression accuracy.", "Further, we have demonstrated that MDNs can be deployed on real-world data for conditional density estimation without facing mode collapse.", "Analyzing the ground truth data and the revealed modes, we have observed that the MDN in fact picks up on a mode that significantly contributes to achieving higher accuracy and that cannot be captured by a single-mode model.", "In the case of human pose estimation, it is surprising that viewpoint is the dominant factor, and not the pose variation.", "This stresses the fact that real-world data is multi-modal, but not necessarily in the way we expect.", "Without a principled approach like MDNs, it is difficult to determine the most dominant factors in a data distribution.", "A stark difference between our work and others who have used mixture models is the training data.", "Most of the works reporting mode collapse rely on small and controlled datasets for training.", "But here we show that when there is a large and diverse dataset, just by careful initialization of parameters, MDNs can be trained without any major instability issues.", "We have made it clear that one can actually use a fully standalone multi-hypothesis model in a real-world scenario without the need to rely on an oracle or to postpone model selection to a downstream task.", "We think there is potential to learn finer modes from the dataset, perhaps on the pose variance, but this needs further research.", "In particular, it would be very helpful if the role of training data diversity could be analysed theoretically.", "At the same time, the sparsity of the revealed modes also reminds us of the sparsity of latent representations in generative models (Xu et al., 2019).", "We attribute this to the fact that deep models, even without advanced special prediction mechanisms, are powerful enough to deliver fairly high quality results on the current datasets.", "Perhaps a much-needed future direction is applying density estimation models to fundamentally more challenging tasks like the very recent large vocabulary instance segmentation task (Gupta et al., 2019)." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2790697515010834, 0.2380952388048172, 0.1538461446762085, 0.0624999962747097, 0, 0.44897958636283875, 0.17777776718139648, 0.060606054961681366, 0.12765957415103912, 0.22857142984867096, 0.17777776718139648, 0.1249999925494194, 0.12244897335767746, 0.21739129722118378, 0.0416666604578495, 0.1599999964237213, 0.12121211737394333, 0.39024388790130615, 0.12244897335767746, 0.19607841968536377, 0, 0.23529411852359772, 0.11764705181121826, 0.2222222238779068, 0.3333333134651184, 0.24137930572032928, 0.18867923319339752, 0.20338982343673706, 0.09756097197532654, 0.2978723347187042, 0.3199999928474426, 0.22857142984867096, 0.10810810327529907, 0, 0, 0, 0.22857142984867096, 0.3829787075519562, 0.25641024112701416, 0.04999999329447746, 0.1463414579629898, 0.3636363446712494, 0.1904761791229248, 0.145454540848732, 0.2631579041481018, 0.052631575614213943, 0.1538461446762085, 0.15789473056793213, 0.1666666567325592, 0.04081632196903229, 0.23076923191547394, 0.1860465109348297, 0.10810810327529907, 0.04878048226237297, 0.12765957415103912, 0.19999998807907104 ]
ByeYOerFvr
true
[ "We use mixture density networks to do full conditional density estimation for spatial offset regression and apply it to the human pose estimation task." ]
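The row above proposes mixture density networks for 2D offset regression: a head that outputs mixture weights, 2D means, and variances, trained by negative log-likelihood of the target offsets. The PyTorch sketch below illustrates such a head under simple assumptions (isotropic components, made-up feature size and component count); it is not the authors' exact architecture.

```python
import math
import torch
import torch.nn as nn

class OffsetMDNHead(nn.Module):
    """Illustrative MDN head: for each input feature vector it predicts K mixture
    weights, K 2D offset means, and K isotropic standard deviations."""

    def __init__(self, in_dim=256, n_components=3):
        super().__init__()
        self.k = n_components
        self.pi = nn.Linear(in_dim, n_components)          # mixture logits
        self.mu = nn.Linear(in_dim, n_components * 2)      # 2D offset means
        self.log_sigma = nn.Linear(in_dim, n_components)   # log std per component

    def forward(self, feats):
        log_pi = torch.log_softmax(self.pi(feats), dim=-1)       # (B, K)
        mu = self.mu(feats).view(-1, self.k, 2)                  # (B, K, 2)
        sigma = self.log_sigma(feats).exp().clamp(min=1e-4)      # (B, K)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of a 2D isotropic Gaussian mixture."""
    diff = target.unsqueeze(1) - mu                               # (B, K, 2)
    log_prob = (-0.5 * (diff ** 2).sum(-1) / sigma ** 2
                - 2 * torch.log(sigma) - math.log(2 * math.pi))   # per-component
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Example: random features standing in for backbone outputs, random target offsets.
head = OffsetMDNHead()
log_pi, mu, sigma = head(torch.randn(8, 256))
loss = mdn_nll(log_pi, mu, sigma, torch.randn(8, 2))
loss.backward()
```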
[ "Like language, music can be represented as a sequence of discrete symbols that form a hierarchical syntax, with notes being roughly like characters and motifs of notes like words.", "Unlike text, however, music relies heavily on repetition on multiple timescales to build structure and meaning.", "The Music Transformer has shown compelling results in generating music with structure (Huang et al., 2018).", "In this paper, we introduce a tool for visualizing self-attention on polyphonic music with an interactive pianoroll.", "We use the Music Transformer as both a descriptive tool and a generative model.", "For the former, we use it to analyze existing music to see if the resulting self-attention structure corroborates the musical structure known from music theory.", "For the latter, we inspect the model's self-attention during generation, in order to understand how past notes affect future ones.", "We also compare and contrast the attention structure of regular attention to that of relative attention (Shaw et al., 2018; Huang et al., 2018), and examine its impact on the resulting generated music.", "For example, for the JSB Chorales dataset, a model trained with relative attention is more consistent in attending to all the voices in the preceding timestep and the chords before, and at cadences to the beginning of a phrase, allowing it to create an arc.", "We hope that our analyses will offer more evidence for relative self-attention as a powerful inductive bias for modeling music.", "We invite the reader to explore our video animations of music attention and to interact with the visualizations at https://storage.googleapis.com/nips-workshop-visualization/index.html.", "Attention is a cornerstone in neural network architectures.", "It can be the primary mechanism for constructing a network, such as in the self-attention-based Transformer, or serve as a secondary mechanism for connecting parts of a model that would otherwise be far apart or different modalities of varying dimensionalities.", "Attention also offers us an avenue for visualizing the inner workings of a model, often to illustrate alignments BID3 .", "For example, in machine translation, the Transformer uses attention to build up both context and alignment, while in LSTM-based seq2seq models, attention eases the word alignment between source and target sentences.", "For both types, attention points us to where a model is looking when translating BID6 BID0 .", "For example, in speech recognition, attention aligns different modalities from spectrograms to phonemes BID1 . In", "contrast to the above domains, there is less \"ground truth\" for what should be attended to in a creative domain such as music. Moreover", ", in contrast to encoder-decoder models where attention serves as alignment, in language modeling self-attention serves to build context, to retrieve relevant information from the past to predict the future. Music theory", "gives us some insight into the motivic, harmonic, and temporal dependencies across a piece, and attention could be a lens for showing their relevance in a generative setting, i.e. does the model have to pay attention to this previous motif to generate the new note? Music Transformer", ", based on self-attention BID6 , has been shown to be effective in modeling music, being able to generate sequences with repetition on multiple timescales (motifs and phrases) with long-term coherence BID2 . 
In particular, the", "use of relative attention improved sample quality and allowed the model to generalize beyond lengths observed during training time. Why does relative", "attention help? More generally, what", "does the attention structure look like in these models? In this paper, we introduce", "a tool for visualizing self-attention on music with an interactive pianoroll. We use the Music Transformer as", "both a descriptive tool and a generative model. For the former, we use it to", "analyze existing music to see if the resulting self-attention structure corroborates the musical structure known from music theory. For the latter, we inspect the", "model's self-attention during generation, in order to understand how past notes affect future ones. We explore music attention on", "two music datasets, JSB Chorales and Piano-e-Competition. The former are Chorale harmonizations", ", and we see attention keeping track of the harmonic progression and also the voice-leading. The latter are virtuosic classical piano", "music, and attention looks back on previous motifs and gestures. We show that for JSB Chorales the heads in multi-head attention", "distribute and focus on different temporal regions. Moreover, we compare and contrast the attention structure of regular attention to that of relative attention, and examine its impact on the resulting generated music. For example, for the JSB Chorales dataset, a model trained", "with relative attention is more consistent in attending to all the voices in the preceding timestep and the many chords before, and at cadences to the beginning of a phrase, allowing it to create an arc. In contrast, regular attention often becomes a \"local\" model", "only attending to the most recent history, resulting in certain voices repeating the same note for a long duration, perhaps due to overconfidence.", "We presented a visualization tool for seeing and exploring music self-attention in the context of music sequences.", "We have shown some preliminary observations, and we hope this is the beginning of furthering our understanding of how these models learn to generate music." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.054054051637649536, 0.07692307233810425, 0.13793103396892548, 0.06896550953388214, 0.0833333283662796, 0.060606054961681366, 0.06666666269302368, 0.25641024112701416, 0.2083333283662796, 0.12903225421905518, 0.1875, 0, 0.09302325546741486, 0.13333332538604736, 0.2702702581882477, 0.0714285671710968, 0.07692307233810425, 0.0624999962747097, 0.1621621549129486, 0.20000000298023224, 0.0952380895614624, 0.25806450843811035, 0.1249999925494194, 0.1666666567325592, 0.2222222238779068, 0.1599999964237213, 0.06451612710952759, 0.06666666269302368, 0.08695651590824127, 0.19999998807907104, 0.27586206793785095, 0.260869562625885, 0.21276594698429108, 0.1249999925494194, 0.1538461446762085, 0.11428570747375488 ]
ryfxVNEajm
true
[ "Visualizing the differences between regular and relative attention for Music Transformer." ]
[ "We study the statistical properties of the endpoint of stochastic gradient descent (SGD).", "We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients..", "Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it’s invariant under a simultaneous rescaling of each by the same amount. \nWe experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima, both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.", "Despite being massively over-parameterized BID13 , deep neural networks (DNNs) have demonstrated good generalization ability and achieved state-of-the-art performances in many application domains such as image BID13 and speech recognition BID1 .", "The reason for this success has been a focus of research recently but still remains an open question.", "Our work provides new theoretical insights and useful suggestions for deep learning practitioners.The standard way of training DNNs involves minimizing a loss function using SGD and its variants BID4 .", "In SGD, parameters are updated by taking a small discrete step depending on the learning rate in the direction of the negative loss gradient, which is approximated based on a small subset of training examples (called a mini-batch).", "Since the loss functions of DNNs are highly non-convex functions of the parameters, with complex structure and potentially multiple minima and saddle points, SGD generally converges to different regions of parameter space depending on optimization hyper-parameters and initialization.Recently, several works BID2 BID0 BID28 have investigated how SGD impacts generalization in DNNs.", "It has been argued that wide minima tend to generalize better than sharp minima BID15 BID28 .", "This is entirely compatible with a Bayesian viewpoint that emphasizes targeting the probability mass associated with a solution, rather than the density value at a solution BID21 .", "Specifically, BID28 find that larger batch sizes correlate with sharper minima.", "In contrast, we find that it is the ratio of learning rate to batch size which is correlated with sharpness of minima, not just batch size alone.", "In this vein, while BID9 discuss the existence of sharp minima which behave similarly in terms of predictions compared with wide minima, we argue that SGD naturally tends to find 
wider minima at higher noise levels in gradients, and such wider minima seem to correlate with better generalization.In order to achieve our goal, we approximate SGD as a continuous stochastic differential equation BID3 BID22 BID19 .", "Assuming isotropic gradient noise, we derive the Boltzmann-Gibbs equilibrium distribution of this stochastic process, and further derive the relative probability of landing in one local minima as compared to another in terms of their depth and width.", "Our main finding is that the ratio of learning rate to batch-size along with the gradient's covariances influence the trade-off between the depth and sharpness of the final minima found by SGD, with a high ratio of learning rate to batch size favouring flatter minima.", "In addition, our analysis provides a theoretical justification for the empirical observation that scaling the learning rate linearly with batch size (up to a limit) leads to identical performance in DNNs BID18 BID12 .We", "verify our theoretical insights experimentally on different models and datasets. In", "particular, we demonstrate that high learning rate to batch size ratio (due to either high learning rate or low batchsize) leads to wider minima and correlates well with better validation performance. We", "also show that a high learning rate to batch size ratio helps prevent memorization. Furthermore", ", we observe that multiplying each of the learning rate and the batch size by the same scaling factor results in similar training dynamics. Extending", "this observation, we validate experimentally that one can exchange learning rate and batch size for the recently proposed cyclic learning rate (CLR) schedule BID31 , where the learning rate oscillates between two levels. Finally,", "we discuss the limitations of our theory in practice.", "In the theoretical section of this work we treat the learning rate as fixed throughout training.", "However, in practical applications, the learning rate is annealed to a lower value, either gradually or in discrete jumps.", "When viewed within our framework, at the beginning with high noise, SGD favors width over depth of a region, then as the noise decreases, SGD prioritizes the depth more stronglythis can be seen from Theorem 3 and the comments that follow.In the theoretical section we made the additional assumption that the covariance of the gradients is isotropic, in order to be able to derive a closed form solution for the equilibrium distribution.", "We do not expect this assumption to hold in practice, but speculate that there may be mechanisms which drive the covariance towards isotropy, for example one may be able to tune learning rates on a per-parameter basis in such a way that the combination of learning rate and covariance matrix is approximately isotropic -this may lead to improvements in optimization.", "Perhaps some existing mechanisms such as batch normalization or careful initialization give rise to more equalized covariance -we leave study of this for future work.We note further that our theoretical analysis considered an equilibrium distribution, which was independent of the intermediate dynamics.", "However, this may not be the case in practice.", "Without the isotropic covariance, the system of partial differential equations in the late time limit will in general have a solution which will depend on the path through which optimization occurs, unless other restrictive assumptions are made to force this path dependence to disappear .", "Despite this simplifying assumption, our 
empirical results are consistent with the developed theory.", "We leave study of path dependence and dynamics to future work.In experiments investigating memorization we explored how the noise level changes the preference of wide minima over sharp ones.", "BID2 argues that SGD first learns true labels, before focusing on random labels.", "Our insight is that in the second phase the high level of noise maintains generalization.", "This illustrates the trade-off between width of minima and depth in practice.", "When the noise level is lower, DNNs are more likely to fit random labels better, at the expense of generalizing less well on true ones.", "We shed light on the role of noise in SGD optimization of DNNs and argue that three factors (batch size, learning rate and gradient variance) strongly influence the properties (loss and width) of the final minima at which SGD converges.", "The learning rate and batch size of SGD can be viewed as one effective hyper-parameter acting as a noise factor n = η/S.", "This, together with the gradient covariance influences the trade-off between the loss and width of the final minima.", "Specifically, higher noise favors wider minima, which in turn correlates with better generalization.Further, we experimentally verify that the noise n = η/S determines the width and height of the minima towards which SGD converges.", "We also show the impact of this noise on the memorization phenomenon.", "We discuss the limitations of the theory in practice, exemplified by when the learning rate gets too large.", "We also experimentally verify that η and S can be simultaneously rescaled as long as the noise η/S remains the same." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.1702127605676651, 0.16296295821666718, 0.039215680211782455, 0.04999999329447746, 0.15686273574829102, 0.18867923319339752, 0.1515151411294937, 0.05405404791235924, 0.04444443807005882, 0.060606054961681366, 0.13333332538604736, 0.12987013161182404, 0.18867923319339752, 0.2181818187236786, 0.11320754140615463, 0, 0.08163265138864517, 0.05405404791235924, 0.2222222238779068, 0.07843136787414551, 0.19354838132858276, 0.1621621549129486, 0.14999999105930328, 0.09999999403953552, 0.1428571343421936, 0.0624999962747097, 0.12903225421905518, 0.10344827175140381, 0.05714285373687744, 0.11999999731779099, 0.05714285373687744, 0.1666666567325592, 0.23529411852359772, 0.08695651590824127, 0.4000000059604645, 0.13636362552642822, 0.21621620655059814, 0.18867923319339752, 0.12121211737394333, 0.2631579041481018, 0.04878048226237297 ]
rJma2bZCW
true
[ "Three factors (batch size, learning rate, gradient noise) change in predictable way the properties (e.g. sharpness) of minima found by SGD." ]
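The central quantity in the row above is the SGD noise scale n = η/S, the ratio of learning rate to batch size. The toy simulation below illustrates the claim on a quadratic loss: configurations that rescale η and S together settle into the same stationary spread of the iterates, while a larger ratio produces a wider one. All constants here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
h, sigma = 1.0, 1.0  # curvature of the toy loss L(w) = h*w^2/2 and per-example gradient noise

def stationary_spread(eta, S, steps=200_000):
    """Run SGD on the toy quadratic with mini-batch gradient noise of variance
    sigma^2 / S and return the standard deviation of the late-time iterates."""
    w, samples = 0.0, []
    for t in range(steps):
        g = h * w + rng.normal(0.0, sigma / np.sqrt(S))  # noisy mini-batch gradient
        w -= eta * g
        if t > steps // 2:                               # discard burn-in
            samples.append(w)
    return np.std(samples)

for eta, S in [(0.1, 100), (0.05, 50), (0.4, 100)]:
    print(f"eta={eta:<5} S={S:<4} eta/S={eta / S:.4f}  stationary std ~ {stationary_spread(eta, S):.3f}")
```

The first two settings share the same η/S and print nearly identical spreads; the third, with a larger ratio, prints a clearly wider one.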
[ "Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems.", "In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns.", "My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted.", "Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.\n", "Word vectors constructed using Word2vec BID6 , BID8 ) and Glove BID9 ) are central to the success of several state of the art models in natural language processing BID1 , BID2 , BID7 , BID11 ).", "These vectors are low dimensional vector representations of words that accurately capture the semantic and syntactic information about the word in a document.The ability of these vectors to encode language is best illustrated by their efficiency at solving word analogy problems.", "The problem involves predicting a word, D, which completes analogies of the form 'A:B :: C:D'.", "For example, if the phrase is ''King:Queen :: Man:D', then the appropriate value of D is Woman.", "Word2vec solves these problems by observing that the word vectors for A, B, C and D satisfy the equation V ec(D) ≈ V ec(C) + V ec(B) − V ec(A) in several cases.Although this equation accurately resolves the word analogy for a wide variety of semantic and syntactic problems, the precise dynamics underlying this equation are largely unknown.", "Part of the difficulty in understanding the dynamics is that word vectors are essentially 'black boxes' which lack interpretability.", "This difficulty has been overcome in large part due to the systematic analyses of Levy, Goldberg and colleagues, who have derived connections between word vectors and the more human-interpretable count based approach of representing words.", "They show that", "1) there are mathematical equivalences between Word2vec and the count based approach BID4 ),", "2) that the count based approach can produce results comparable to Word2vec on word analogy problems BID3 ) and more generally,", "3) that the count based approach can perform as well as Word2vec on most NLP tasks when the hyper-parameters in the model are properly tuned BID5 .", "Their results (see section 9 in BID3 ) demonstrate that V ec(B) − V ec(A) is likely capturing the 'common information' between A and B, and this information is somehow being 'transferred' on to C to compute D.Still the question remains, how is this transference process taking place?", "The answer would provide insight into the topology of word vectors and would help us to identify gaps in our understanding of word vectors.", "In this paper, I attempt to gain insights into the transference process by building a simple generative algorithm for solving semantic word analogy problems in the case where A, B, C and D are nouns.", "My algorithm works in two steps: In the first step, I compute a list of nouns that likely represent the information that is common to both A and B. In the second step, I impose the information about the nouns obtained in the first step on to C to compute D. 
Both steps of the algorithm work only on word counts; therefore, it is possible to precisely understand how and why D is generated in every word analogy question.Despite the simplicity of my approach, the algorithm is able to produce results comparable to Word2vec on the semantic word analogy questions, even using a very small dataset.", "My study reveals insights into why word vectors solve certain classes of word analogy problems much better than others.", "I show that there is no universal interpretation of the information contained in V ec(B) − V ec(A) because the 'common information' between A and B is strongly dependent on the training dataset.", "My results reveal that a machine may not be 'learning' the relationship between a pair of words the way a human does, even when it accurately solves an analogy problem." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.30188679695129395, 0.23255813121795654, 0.25, 0.12244897335767746, 0.21052631735801697, 0.17142856121063232, 0.05882352590560913, 0.1515151411294937, 0.1621621549129486, 0.19607841968536377, 0, 0.1818181723356247, 0.3499999940395355, 0.0952380895614624, 0.09836065024137497, 0.25641024112701416, 0.3396226465702057, 0.1149425283074379, 0.3243243098258972, 0.0833333283662796, 0.1304347813129425 ]
ryA-jdlA-
true
[ "Simple generative approach to solve the word analogy problem which yields insights into word relationships, and the problems with estimating them" ]
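The row above builds on the standard vector-arithmetic solution to word analogies, Vec(D) ≈ Vec(C) + Vec(B) - Vec(A). The snippet below sketches how that baseline is usually evaluated: rank candidates by cosine similarity to the query vector while excluding the three query words. The toy vectors are random stand-ins; only real embeddings would make the answer meaningful.

```python
import numpy as np

def solve_analogy(vocab_vectors, a, b, c):
    """Return the word D maximizing cosine similarity to vec(C) + vec(B) - vec(A),
    excluding the query words themselves (the usual evaluation protocol).
    `vocab_vectors` maps word -> 1D numpy array."""
    query = vocab_vectors[c] + vocab_vectors[b] - vocab_vectors[a]
    query = query / np.linalg.norm(query)
    best_word, best_sim = None, -np.inf
    for word, vec in vocab_vectors.items():
        if word in (a, b, c):
            continue
        sim = float(np.dot(query, vec / np.linalg.norm(vec)))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# Toy vocabulary with random vectors; with trained embeddings this would return "woman".
toy = {w: np.random.rand(50) for w in ["king", "queen", "man", "woman"]}
print(solve_analogy(toy, "king", "queen", "man"))
```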
[ "Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks.", "Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyper-parameters and keeping them fixed to values normally used for training from scratch.", "This paper re-examines several common practices of setting hyper-parameters for fine-tuning.", "Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks.", "(1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter.", "We find that picking the right value for momentum is critical for fine-tuning performance and connect it with previous theoretical findings.", "(2) Optimal hyper-parameters for fine-tuning in particular the effective learning rate are not only dataset dependent but also sensitive to the similarity between the source domain and target domain.", "This is in contrast to hyper-parameters for training from scratch.", "(3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for \"dissimilar\" datasets.", "Our findings challenge common practices of fine- tuning and encourages deep learning practitioners to rethink the hyper-parameters for fine-tuning.", "Many real-world applications often have limited number of training instances, which makes directly training deep neural networks hard and prone to overfitting.", "Transfer learning with the knowledge of models learned on a similar task can help to avoid overfitting.", "Fine-tuning is a simple and effective approach of transfer learning and has become popular for solving new tasks in which pre-trained models are fine-tuned with the target dataset.", "Specifically, fine-tuning on pre-trained ImageNet classification models (Simonyan & Zisserman, 2015; He et al., 2016b) has achieved impressive results for tasks such as object detection (Ren et al., 2015) and segmentation (He et al., 2017; Chen et al., 2017) and is becoming the de-facto standard of solving computer vision problems.", "It is believed that the weights learned on the source dataset with a large number of instances provide better initialization for the target task than random initialization.", "Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019) .", "The common practice of fine-tuning is to adopt the default hyperparameters for training large models while using smaller initial learning rate and shorter learning rate schedule.", "It is believed that adhering to the original hyperparameters for fine-tuning with small learning rate prevents destroying the originally learned knowledge or features.", "For instance, many studies conduct fine-tuning of ResNets (He et al., 2016b) with these default hyperparameters: learning rate 0.01, momentum 0.9 and weight decay 0.0001.", "However, the default setting is not necessarily optimal for fine-tuning on other tasks.", "While few studies have performed extensive hyperparameter search for learning rate and weight decay (Mahajan et al., 2018; Kornblith et al., 2018) , the momentum coefficient is rarely changed.", "Though the effectiveness of the hyperparameters has been studied extensively for training a model from scratch, how to set the hyperparameters for fine-tuning is not yet fully understood.", "In addition to using ad-hoc hyperparameters, commonly held beliefs for 
fine-tuning also include:", "• Fine-tuning pre-trained networks outperforms training from scratch; recent work (He et al., 2019) has already revisited this.", "• Fine-tuning from similar domains and tasks works better (Ge & Yu, 2017; Cui et al., 2018; Achille et al., 2019; Ngiam et al., 2018) .", "• Explicit regularization with initial models matters for transfer learning performance (Li et al., 2018; 2019) .", "Are these practices or beliefs always valid?", "From an optimization perspective, the difference between fine-tuning and training from scratch is all about the initialization.", "However, the loss landscape of the pre-trained model and the fine-tuned solution could be much different, so as their optimization strategies and hyperparameters.", "Would the hyperparameters for training from scratch still be useful for fine-tuning?", "In addition, most of the hyperparameters (e.g., batch size, momentum, weight decay) are frozen; will the conclusion differ when some of them are changed?", "With these questions in mind, we re-examined the common practices for fine-tuning.", "We conducted extensive hyperparameter search for fine-tuning on various transfer learning benchmarks with different source models.", "The goal of our work is not to obtain state-of-the-art performance on each fine-tuning task, but to understand the effectiveness of each hyperparameter for fine-tuning, avoiding unnecessary computations.", "We explain why certain hyperparameters work so well on certain datasets while fail on others, which can guide future hyperparameter search for fine-tuning.", "Our main findings are as follows:", "• Optimal hyperparameters for fine-tuning are not only dataset dependent, but also depend on the similarity between the source and target domains, which is different from training from scratch.", "Therefore, the common practice of using optimization schedules derived from ImageNet training cannot guarantee good performance.", "It explains why some tasks are not achieving satisfactory results after fine-tuning because of inappropriate hyperparameter selection.", "Specifically, as opposed to the common practice of rarely tuning the momentum value beyond 0.9, we verified that zero momentum could work better for fine-tuning on tasks that are similar with the source domain, while nonzero momentum works better for target domains that are different from the source domain.", "• Hyperparameters are coupled together and it is the effective learning rate-which encapsulates the learning rate, momentum and batch size-that matters for fine-tuning performance.", "While effective learning rate has been studied for training from scratch, to the best of our knowledge, no previous work investigates effective learning rate for fine-tuning and is less used in practice.", "Our observation of momentum can be explained as small momentum actually decreases the effective learning rate, which is more suitable for fine-tuning on similar tasks.", "We show that the optimal effective learning rate actually depends on the similarity between the source and target domains.", "• We find regularization methods that were designed to keep models close to the initial model does not apply for \"dissimilar\" datasets, especially for nets with Batch Normalization.", "Simple weight decay can result in as good performance as the reference based regularization methods for fine-tuning with better search space.", "The two extreme ways for selecting hyperparameters-performing exhaustive hyperparameter search or taking ad-hoc 
hyperparameters from training from scratch - could either be too computationally expensive or yield inferior performance.", "Unlike training from scratch, where the default hyperparameter setting may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but is also influenced by the similarity between the target domain and the source domains.", "The rarely tuned momentum value could impede performance when the target domain and source domain are close.", "These observations connect with previous theoretical work on decreasing momentum at the end of training and on the effective learning rate.", "We further identify that the optimal effective learning rate depends on the similarity of the source domain and the target domain.", "With this understanding, one can significantly reduce the hyperparameter search space.", "We hope these findings can be one step towards better hyperparameter selection strategies for fine-tuning." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07692307233810425, 0.29411762952804565, 1, 0.1599999964237213, 0.12903225421905518, 0.12903225421905518, 0.1621621549129486, 0.2857142686843872, 0.06896550953388214, 0.4000000059604645, 0.0624999962747097, 0.0714285671710968, 0.10526315122842789, 0.1090909093618393, 0.11428570747375488, 0.0624999962747097, 0.22857142984867096, 0.12121211737394333, 0.10526315122842789, 0.25, 0.052631575614213943, 0.17142856121063232, 0.1666666567325592, 0, 0, 0.0714285671710968, 0.1111111044883728, 0.07407406717538834, 0.06451612710952759, 0.09090908616781235, 0.05882352590560913, 0.3478260934352875, 0.14814814925193787, 0.1666666567325592, 0.1249999925494194, 0, 0.10526315122842789, 0.14814814925193787, 0.1428571343421936, 0.1599999964237213, 0.1249999925494194, 0.1538461446762085, 0.17142856121063232, 0, 0.054054051637649536, 0.12903225421905518, 0.0555555522441864, 0.17391304671764374, 0, 0.06666666269302368, 0.07407406717538834, 0, 0.1538461446762085 ]
B1g8VkHFPH
true
[ "This paper re-examines several common practices of setting hyper-parameters for fine-tuning." ]
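A recurring quantity in the row above is the effective learning rate, which folds momentum into the step size (and, together with batch size, into the SGD noise scale). A common definition under heavy-ball momentum is η' = η / (1 - m); whether the paper uses exactly this form is an assumption here, but the tiny sketch below shows why lowering momentum acts like lowering the effective step size during fine-tuning.

```python
def effective_learning_rate(lr, momentum):
    """Common heavy-ball notion of the effective learning rate: eta' = eta / (1 - m).
    This exact form is an assumption for illustration, not necessarily the paper's."""
    return lr / (1.0 - momentum)

for lr, m in [(0.01, 0.9), (0.01, 0.0), (0.1, 0.9)]:
    print(f"lr={lr}, momentum={m} -> effective lr {effective_learning_rate(lr, m):.3f}")
```

With the default momentum of 0.9, a nominal learning rate of 0.01 behaves like a step size of 0.1, which is one way to read the observation that zero momentum can help when the source and target domains are similar.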
[ "Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training, while the accuracy is frequently bottlenecked by the limited dataset size of the new target task.", "To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied.", "In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention.", "Instead of constraining the weights of the neural network, DELTA aims to preserve the outer layer outputs of the target network.", "Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of the two networks by constraining a subset of feature maps that are precisely selected by attention that has been learned in a supervised learning manner.", "We evaluate DELTA against the state-of-the-art algorithms, including L2 and L2-SP.", "The experiment results show that our proposed method outperforms these baselines with higher accuracy on new tasks.", "In many real-world applications, deep learning practitioners often have a limited number of training instances.", "Directly training a deep neural network with a small training data set usually results in the so-called over-fitting problem, and the quality of the obtained model is low.", "A simple yet effective approach to obtaining high-quality deep learning models is to perform weight fine-tuning.", "In such practices, a deep neural network is first trained using a large (and possibly irrelevant) source dataset (e.g. ImageNet).", "The weights of such a network are then fine-tuned using the data from the target application domain. Fine-tuning is a specific approach to performing transfer learning in deep learning.", "The weights pretrained on the source dataset with a sufficiently large number of instances usually provide a better initialization for the target task than random initializations.", "In a typical fine-tuning approach, weights in lower convolution layers are fixed and weights in upper layers are re-trained using data from the target domain.", "In this approach, parameters of the target model may be driven far away from their initial values, which also causes over-fitting in transfer learning scenarios. Approaches using regularization with the starting point as the reference (SPAR) were recently proposed to solve the over-fitting problem.", "For example, Li et al. BID10 proposed L 2 -SP, which incorporates the Euclidean distance between the target weights and the starting point (i.e., the weights of the source network) as part of the loss.", "Minimizing this loss function, L 2 -SP aims to minimize the empirical loss of deep learning while reducing the distance of the weights between the source and target networks.", "They achieved a significant improvement compared with the standard practice of using weight decay (L 2 normalization). However", "such a regularization method may not deliver an optimal solution for transfer learning. On one", "hand, if the regularization is not strong, even with fine-tuning, the weights may still be driven far away from the initial position, leading to the loss of useful knowledge, i.e. catastrophic memory loss. On the", "other hand, if the regularization is too strong, the newly obtained model is constrained to a local neighborhood of the original model, which may be suboptimal for the target data set. 
Although", "aforementioned methods demonstrated the power of regularization in deep transfer learning, we argue that we need to perform research on at least the following two aspects in order to further improve current regularization methods.Behavior vs. Mechanisms. The practice", "of weight regularization for CNN is motivated by a simple intuition -the network (layers) with similar weights should produce similar outputs. However, due", "to the complex structures of deep neural network with strong redundancies, regulating the model parameters directly seems an over-killing of the problem. We argue that", "we should regularize the \"Behavior\", or in our case, the outer layer outputs (e.g. the feature maps) produced by each layer, rather than model parameters. With constrained", "feature maps, the generalization capacity could be improved through aligning the behaviors of the outer layers of the target network to the source one, which has been pre-trained using an extremely large dataset. In Convolutional", "Neural Networks, which we focus on exclusively in this paper, an outer layer is a convolution layer and the output of an outer layer is its feature map.Syntax vs Semantics. While regularizing", "the feature maps might improve the transfer of generalization capacity, it is still difficult to design such regularizers. It is challenging", "to measure the similarity/distance between the feature maps without understanding its semantics or representations. For example for image", "classification, some of the convolution kernels may be corresponding to features that are shared between the two learning tasks and hence should be preserved in transfer learning while others are specific to the source task and hence could be eliminated in transfer learning.In this paper, we propose a novel regularization approach DELTA to address the two issues. Specifically, DELTA selects", "the discriminative features from outer layer outputs through re-weighting the feature maps with a novel supervised attention mechanism. Through paying attention to", "discriminative parts of feature maps, DELTA characterizes the distance between source/target networks using their outer layer outputs, and incorporates such distance as the regularization term of the loss function. With the back-propagation,", "such regularization finally affects the optimization for weights of deep neural network and awards the target network generalization capacity inherited from the source network.In summary, our key insight is what we call \"unactivated channel re-usage\". Specifically our approach", "identifies those transferable channels and preserves such filters through regularization and identify those untransferable channels and reuse them, using an attention mechanism with feature map regularization.We have conducted extensive experiments using a wide range of source/target datasets and compared DELTA to the existing deep transfer learning algorithms that are in pursuit of weight similarity. 
The experiment results show", "that DELTA significantly outperformed the state-of-the-art regularization algorithms including L 2 and L 2 -SP with higher accuracy on a wide group of image classification data sets.The rest of the paper is organized as follows: in Section 2 related works are summarized, in Section 3 our feature map based regularization method is introduced, in Section 4 experimental results are presented and discussed, and finally in Section 5 the paper is concluded.", "To better understand the performance gain of DELTA we performed an experiment where we analyzed how parameters of the convolution filters change after fine-tuning.", "Towards that purpose we randomly sampled images from the testing set of Stanford Dogs 120.", "For ResNet-101, which we use exclusively in this paper, we grouped filters into stages as described in (he et al., 2016) .", "These stages are conv2 x, conv3 x, conv4 x, conv5 x.", "Each stage contains a few stacked blocks and a block is a basic inception unit having 3 conv2d layers.", "One conv2d layer consists of a number of output filters.", "We flatten each filter into a one dimension parameter vector for convenience.", "The Euclidian distance between the parameter vectors before and after fine-tuning is calculated.", "All distances are sorted as shown in FIG3 .We", "observed a sharp difference between the two distance distributions. Our", "hypothesis of possible cause of the difference is that simply using L 2 -SP regularization all convolution filters are forced to be similar to the original ones. Using", "attention, we allow \"unactivated\" convolution filters to be reused for better image classification. About", "90% parameter vectors of DELTA have larger distance than L 2 -SP . We also", "observe that a small number of filters is driven very far away from the initial value (as shown at the left end of the curves in FIG3 . We call", "such an effect as \"unactivated channel re-usage\".To further", "understand the effect of attention and the implication of \"unactivated channel re-usage\", we \"attributed\" the attention to the original image to identify the set of pixels having high contributions in the activated feature maps. We select", "some convolution filters on which the source model (the initialization before fine-tuning) has low activation. For the convenience", "of analyzing the effect of regularization methods, each element a i of the original activation map is normalized with DISPLAYFORM0 where the min and max terms in the formula represent for the minimum and maximum value of the whole activation map respectively. Activation maps of", "these convolution filter for various regularization method are presented on each row.As shown in FIG4 , our first observation is that without attention, the activation maps from DELTA in different images are more or less the same activation maps from other regularization methods. This partially explains", "the fact that we do not observe significant improvement of DELTA without attention.Using attention, however, changes the activation map significantly. Regularization of DELTA", "with attention show obviously improved concentration. With attention (the right-most", "column in FIG4 ), we observed a large set of pixels that have high activation at important regions around the head of the animals. We believe this phenomenon provides", "additional evidence to support our intuition of \"unactivated channel re-usage\" as discussed in previous paragraphs. 
In addition, we included new statistical", "results of activations on part locations of CUB-200-2011 supporting the above qualitative cases. The CUB-200-2011 datasets defined 15 discriminative", "parts of birds, e.g. the forehead, tail, beak and so on. Each part is annotated with a pixel location representing", "for its center position if it is visible. So for each image, we got several key points which are very", "important to discriminate its category. Using all testing examples of CUB-200-2011, we calculated normalized", "activations on these key points of these different regularization methods. As shown in TAB2 , DELTA got the highest average activations on those", "key points, demonstrating that DELTA focused on more discriminate features for bird recognition.", "In this paper, we studied a regularization technique that transfers the behaviors and semantics of the source network to the target one through constraining the difference between the feature maps generated by the convolution layers of source/target networks with attentions.", "Specifically, we designed a regularized learning algorithm DELTA that models the difference of feature maps with attentions between networks, where the attention models are obtained through supervised learning.", "Moreover, we further accelerate the optimization for regularization using start point as reference (SPAR).", "Our extensive experiments evaluated DELTA using several real-world datasets based on commonly used convolutional neural networks.", "The experiment results show that DELTA is able to significantly outperform the state-of-the-art transfer learning methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.08888888359069824, 0.11764705181121826, 0.25, 0, 0.1702127605676651, 0.09090908616781235, 0.0714285671710968, 0.1599999964237213, 0.11428570747375488, 0.1538461446762085, 0.12903225421905518, 0.21621620655059814, 0.05714285373687744, 0.0624999962747097, 0.1599999964237213, 0, 0.11428570747375488, 0.1428571343421936, 0.25, 0.09302325546741486, 0.052631575614213943, 0.13636362552642822, 0.12121211737394333, 0.12121211737394333, 0.054054051637649536, 0.09756097197532654, 0.05128204822540283, 0.19999998807907104, 0.1428571343421936, 0.1071428507566452, 0.25806450843811035, 0.15789473056793213, 0.08888888359069824, 0.25806450843811035, 0.125, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.1111111044883728, 0, 0, 0, 0, 0.15789473056793213, 0, 0.13636362552642822, 0.07843136787414551, 0.0624999962747097, 0.19999998807907104, 0, 0, 0, 0.0624999962747097, 0, 0, 0.06451612710952759, 0, 0.17777776718139648, 0.277777761220932, 0.1599999964237213, 0.14814814925193787, 0.14814814925193787 ]
rkgbwsAcYm
true
[ "improving deep transfer learning with regularization using attention based feature maps" ]
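The row above describes DELTA's behaviour regularizer: instead of pulling the target weights toward the source weights, it penalizes the distance between the outer-layer feature maps of the target and source networks, re-weighted by per-channel attention. The PyTorch sketch below shows one plausible form of such a penalty; the attention weights are passed in as a placeholder rather than learned with the supervised procedure described above.

```python
import torch

def feature_map_regularizer(target_fmaps, source_fmaps, channel_attention):
    """Illustrative behaviour regularizer in the spirit of DELTA: penalize the gap
    between target and (frozen) source feature maps, re-weighted per channel.
    target_fmaps, source_fmaps: (B, C, H, W); channel_attention: (C,) non-negative weights."""
    diff = (target_fmaps - source_fmaps.detach()) ** 2   # no gradient flows to the source net
    per_channel = diff.mean(dim=(0, 2, 3))               # (C,) mean squared gap per channel
    return (channel_attention * per_channel).sum()

# Example with random tensors standing in for outer-layer feature maps.
tgt = torch.randn(4, 64, 14, 14, requires_grad=True)
src = torch.randn(4, 64, 14, 14)
attn = torch.softmax(torch.randn(64), dim=0) * 64        # placeholder attention weights
loss = feature_map_regularizer(tgt, src, attn)
loss.backward()
```

Channels with high attention are kept close to the source behaviour, while channels with low attention are left free, which matches the "unactivated channel re-usage" idea discussed in the row.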
[ "Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost.", "In some domains, automated predictions without justifications have limited applicability.", "Recently, progress has been made regarding single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal.", "In this context, a justification, or mask, consists of (long) word sequences from the input text, which suffice to make the prediction.", "Existing models cannot handle more than one aspect in one training and induce binary masks that might be ambiguous.", "In our work, we propose a neural model for predicting multi-aspect sentiments for reviews and generates a probabilistic multi-dimensional mask (one per aspect) simultaneously, in an unsupervised and multi-task learning manner.", "Our evaluation shows that on three datasets, in the beer and hotel domain, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable.", "Neural networks have become the standard for many natural language processing tasks.", "Despite the significant performance gains achieved by these complex models, they offer little transparency concerning their inner workings.", "Thus, they come at the cost of interpretability (Jain & Wallace, 2019).", "In many domains, automated predictions have a real impact on the final decision, such as treatment options in the field of medicine.", "Therefore, it is important to provide the underlying reasons for such a decision.", "We claim that integrating interpretability in a (neural) model should supply the reason of the prediction and should yield better performance.", "However, justifying a prediction might be ambiguous and challenging.", "Prior work includes various methods that find the justification in an input text -also called rationale or mask of a target variable.", "The mask is defined as one or multiple pieces of text fragments from the input text.", "1 Each should contain words that altogether are short, coherent, and alone sufficient for the prediction as a substitute of the input (Lei et al., 2016) .", "Many works have been applied to single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal.", "In this case, we define an aspect as an attribute of a product or service (Giannakopoulos et al., 2017) , such as Location or Cleanliness for the hotel domain.", "There are three different methods to generate masks: using reinforcement learning with a trained model (Li et al., 2016b) , generating rationales in an unsupervised manner and jointly with the objective function (Lei et al., 2016) , or including annotations during training (Bao et al., 2018; Zhang et al., 2016) .", "However, these models generate justifications that are", "1) only tailored for one aspect, and", "2) expressed as a hard (binary) selection of words.", "A review text reflects opinions about multiple topics a user cares about (Musat et al., 2013) .", "It appears reasonable to analyze multiple aspects with a multi-task learning setting, but a model must be trained as many times as the number of aspects.", "A hard assignment of words to aspects might lead to ambiguities that are difficult to capture with a binary mask: in the text \"The room was large, clean and close to the beach.\", the word \"room\" refers to the aspects Room, Cleanliness and Location.", "Finally, collecting human-provided rationales 
at scale is expensive and thus impractical.", "In this work, we study interpretable multi-aspect sentiment classification.", "We describe an architecture for predicting the sentiment of multiple aspects while generating a probabilistic (soft) multi-dimensional mask (one dimension per aspect) jointly, in an unsupervised and multi-task learning manner.", "We show that the induced mask is beneficial for identifying simultaneously what parts of the review relate to what aspect, and capturing ambiguities of words belonging to multiple aspects.", "Thus, the induced mask provides fine-grained interpretability and improves the final performance.", "Traditionally interpretability came at a cost of reduced accuracy.", "In contrast, our evaluation shows that on three datasets, in the beer and hotel domain, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable compared to attention-based methods and a single-aspect masker.", "We show that it can be a benefit to", "1) guide the model to focus on different parts of the input text, and", "2) further improve the sentiment prediction for all aspects.", "Therefore, interpretabilty does not come at a cost anymore.", "The contributions of this work can be summarized as follow:", "• We propose a Multi-Aspect Masker (MAM), an end-to-end neural model for multi-aspect sentiment classification that provides fine-grained interpretability in the same training.", "Given a text review as input, the model generates a probabilistic multi-dimensional mask, with one dimension per aspect.", "It predicts the sentiments of multiple aspects, and highlights long sequences of words justifying the current rating prediction for each aspect; • We show that interpretability does not come at a cost: our final model significantly outperforms strong baselines and attention models, both in terms of performance and mask coherence.", "Furthermore, the level of interpretability is controllable using two regularizers; • Finally, we release a new dataset for multi-aspect sentiment classification, which contains 140k reviews from TripAdvisor with five aspects, each with its corresponding rating.", "Developing interpretable models is of considerable interest to the broader research community, even more pronounced with neural models (Kim et al., 2015; Doshi-Velez & Kim, 2017) .", "Many works analyzed and visualized state activation (Karpathy et al., 2015; Li et al., 2016a; Montavon et al., 2018) , learned sparse and interpretable word vectors (Faruqui et al., 2015b; a; Herbelot & Vecchi, 2015) or analyzed attention (Clark et al., 2019; Jain & Wallace, 2019) .", "Our work differs from these in terms of what is meant by an explanation.", "Our system identifies one or multiple short and coherent text fragments that -as a substitute of the input text -are sufficient for the prediction.", "In this work, we propose Multi-Aspect Masker, an end-to-end neural network architecture to perform multi-aspect sentiment classification for reviews.", "Our model predicts aspect sentiments while generating a probabilistic (soft) multi-dimensional mask (one dimension per aspect) simultaneously, in an unsupervised and multi-task learning manner.", "We showed that the induced mask is beneficial to guide the model to focus on different parts of the input text and to further improve the sentiment prediction for all aspects.", "Our evaluation shows that on three datasets, in the beer and hotel domain, our model outperforms strong baselines and 
generates masks that are: strong feature predictors, meaningful, and interpretable compared to attention-based methods and a single-aspect masker." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1304347813129425, 0, 0.04651162400841713, 0.04444443807005882, 0.1428571343421936, 0.38461539149284363, 0.5306122303009033, 0.0555555522441864, 0, 0, 0.04444443807005882, 0.05405404791235924, 0.1860465109348297, 0.12121211737394333, 0.1304347813129425, 0.05128204822540283, 0.11999999731779099, 0.04651162400841713, 0.03999999538064003, 0.1230769157409668, 0.06451612710952759, 0.06451612710952759, 0.060606054961681366, 0.04999999701976776, 0.08510638028383255, 0.09999999403953552, 0.05714285373687744, 0.12121211737394333, 0.2641509473323822, 0.16326530277729034, 0.11428570747375488, 0.060606054961681366, 0.4912280738353729, 0.12121211737394333, 0.10810810327529907, 0, 0.060606054961681366, 0, 0.1702127605676651, 0.24390242993831635, 0.260869562625885, 0.06896550953388214, 0.03999999538064003, 0.06779660284519196, 0, 0.1304347813129425, 0.04651162400841713, 0.3333333134651184, 0.1599999964237213, 0.5 ]
B1lyZpEYvH
true
[ "Neural model predicting multi-aspect sentiments and generating a probabilistic multi-dimensional mask simultaneously. Model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable." ]
[ "Neural sequence generation is commonly approached by using maximum- likelihood (ML) estimation or reinforcement learning (RL).", "However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency.", "We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL.", "In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL.", "We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1).", "We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α.", "Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.", "Neural sequence models have been successfully applied to various types of machine learning tasks, such as neural machine translation Sutskever et al., 2014; , caption generation (Xu et al., 2015; BID6 , conversation (Vinyals & Le, 2015) , and speech recognition BID8 BID2 .", "Therefore, developing more effective and sophisticated learning algorithms can be beneficial.", "Popular objective functions for training neural sequence models include the maximum-likelihood (ML) and reinforcement learning (RL) objective functions.", "However, both have limitations, i.e., training/testing discrepancy and sample inefficiency, respectively.", "indicated that optimizing the ML objective is not equal to optimizing the evaluation metric.", "For example, in machine translation, maximizing likelihood is different from optimizing the BLEU score BID19 , which is a popular metric for machine translation tasks.", "In addition, during training, ground-truth tokens are used for the predicting the next token; however, during testing, no ground-truth tokens are available and the tokens predicted by the model are used instead.", "On the contrary, although the RL-based approach does not suffer from this training/testing discrepancy, it does suffer from sample inefficiency.", "Samples generated by the model do not necessarily yield high evaluation scores (i.e., rewards), especially in the early stage of training.", "Consequently, RL-based methods are not self-contained, i.e., they require pre-training via ML-based methods.", "As discussed in Section 2, since these problems depend on the sampling distributions, it is difficult to resolve them simultaneously.Our solution to these problems is to integrate these two objective functions.", "We propose a new objective function α-DM (α-divergence minimization) for a neural sequence generation, and we demonstrate that it generalizes ML-and RL-based objective functions, i.e., α-DM can represent both functions as its special cases (α → 0 and α → 1).", "We also show that, for α ∈ (0, 1), the gradient of the α-DM objective is a combinations of the ML-and RL-based objective gradients.", "We apply the same optimization strategy as BID18 , who useed importance sampling, to optimize this proposed objective function.", "Consequently, we avoid on-policy RL sampling which suffers from sample inefficiency, and optimize the objective function closer to the desired RL-based 
objective than the ML-based objective. The experimental results for a machine translation task indicate that the proposed α-DM objective outperforms the ML baseline and the reward augmented ML method (RAML; BID18), upon which we build the proposed method.", "We compare our results to those reported by BID3 , who proposed an on-policy RL-based method.", "We also confirm that α-DM can provide a comparable BLEU score without pre-training. The contributions of this paper are summarized as follows. •", "We propose the α-DM objective function using α-divergence and demonstrate that it can be considered a generalization of the ML- and RL-based objective functions (Section 4). •", "We prove that the α-DM objective function becomes closer to the desired RL-based objectives as α increases, in the sense that the upper bound of the maximum discrepancy between the ML- and RL-based objective functions monotonically decreases as α increases. •", "The results of machine translation experiments demonstrate that the proposed α-DM objective outperforms the ML baseline and RAML (Section 7).", "In this study, we have proposed a new objective function, α-divergence minimization, for neural sequence model training that unifies the ML- and RL-based objective functions.", "In addition, we proved that the gradient of the objective function is the weighted sum of the gradients of negative log-likelihoods, and that the weights are represented as a mixture of the sampling distributions of the ML- and RL-based objective functions.", "We demonstrated that the proposed approach outperforms the ML baseline and RAML in the IWSLT'14 machine translation task. In this study, we focus our attention on the neural sequence generation problem, but we expect our framework may be useful to a broader area of reinforcement learning.", "Sample inefficiency is one of the major problems in reinforcement learning, and people try to mitigate this problem by using several types of supervised learning frameworks such as imitation learning or apprenticeship learning.", "These alternative approaches bring another problem, similar to the neural sequence generation problem, which originates from the fact that the objective function for training is different from the one for testing.", "Since our framework is general and independent of the task, our approach may be useful to combine these approaches.", "A GRADIENT OF THE α-DM OBJECTIVE. The gradient of α-DM can be obtained as follows: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 where DISPLAYFORM3 In Eq. FORMULA0 , we used the so-called log-trick: ∇_θ p_θ(y|x) = p_θ(y|x) ∇_θ log p_θ(y|x)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0, 0.05714285373687744, 0.3255814015865326, 0.19512194395065308, 0.1764705777168274, 0.2857142686843872, 0.1538461446762085, 0.07999999821186066, 0.4000000059604645, 0.07407406717538834, 0.07692307233810425, 0.10810810327529907, 0.1111111044883728, 0.06666666269302368, 0, 0.1428571343421936, 0.09999999403953552, 0.3461538553237915, 0.1764705777168274, 0.12121211737394333, 0.23728813230991364, 0.06666666269302368, 0, 0.2631579041481018, 0.1860465109348297, 0.1249999925494194, 0.4324324131011963, 0.2380952388048172, 0.14814814925193787, 0.045454539358615875, 0.2631579041481018, 0.0624999962747097, 0 ]
H1Nyf7W0Z
true
[ "Propose new objective function for neural sequence generation which integrates ML-based and RL-based objective functions." ]
[ "Capsule Networks have shown encouraging results on \\textit{defacto} benchmark computer vision datasets such as MNIST, CIFAR and smallNORB.", "Although, they are yet to be tested on tasks where (1) the entities detected inherently have more complex internal representations and (2) there are very few instances per class to learn from and (3) where point-wise classification is not suitable.", "Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points.", "In doing so we introduce \\textit{Siamese Capsule Networks}, a new variant that can be used for pairwise learning tasks.", "We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples. \n", "We find that \\textit{Siamese Capsule Networks} perform well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with $\\ell_2$-normalized capsule encoded pose features, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.", "Convolutional Neural networks (CNNs) have been a mainstay model for a wide variety of tasks in computer vision.", "CNNs are effective at detecting local features in the receptive field, although the spatial relationship between features is lost when crude routing operations are performed to achieve translation invariance, as is the case with max and average pooling.", "Essentially, pooling results in viewpoint invariance so that small perturbations in the input do not effect the output.", "This leads to a significant loss of information about the internal properties of present entities (e.g location, orientation, shape and pose) in an image and relationships between them.", "The issue is usually combated by having large amounts of annotated data from a wide variety of viewpoints, albeit redundant and less efficient in many cases.", "As noted by hinton1985shape, from a psychology perspective of human shape perception, pooling does not account for the coordinate frames imposed on objects when performing mental rotation to identify handedness BID23 ; BID19 BID12 .", "Hence, the scalar output activities from local kernel regions that summarize sets of local inputs are not sufficient for preserving reference frames that are used in human perception, since viewpoint information is discarded.", "Spatial Transformer Networks (STN) BID13 have acknowledged the issue by using dynamic spatial transformations on feature mappings to enhance the geometric invariance of the model, although this approach addresses changes in viewpoint by learning to remove rotational and scale variance, as opposed to viewpoint variance being reflected in the model activations.", "Instead of addressing translation invariance using pooling operations, BID8 have worked on achieving translation equivariance.The recently proposed Capsule Networks BID24 ; BID7 have shown encouraging results to address these challenges.", "Thus far, Capsule Networks have only been tested on datasets that have (1) a relatively sufficient number of instances per class to learn from and (2) utilized on tasks in the standard classification setup.", "This paper extends Capsule Networks to the pairwise learning setting to learn relationships between whole entity encodings, while also demonstrating their ability to learn from little data that can perform few-shot learning 
where instances from new classes arise during testing (i.e., zero-shot prediction).", "The Siamese Capsule Network is trained using a contrastive loss with $\ell_2$-normalized encoded features and demonstrated on face verification tasks.", "BID8 first introduced the idea of using whole vectors to represent internal properties (referred to as instantiation parameters that include pose) of an entity with an associated activation probability, where each capsule represents a single instance of an entity within an image.", "This differs from the single scalar outputs in conventional neural networks where pooling is used as a crude routing operation over filters.", "Pooling performs sub-sampling so that neurons are invariant to viewpoint change; instead, capsules look to preserve the information to achieve equivariance, akin to perceptual systems.", "Hence, pooling is replaced with a dynamic routing scheme to send lower-level capsule (e.g., nose, mouth, ears) outputs as input to parent capsules (e.g., face) that represent part-whole relationships, to achieve translation equivariance and untangle the coordinate frame of an entity through linear transformations.", "The idea has its roots in computer graphics, where images are rendered given an internal hierarchical representation; for this reason, the brain is hypothesized to solve an inverse graphics problem where, given an image, the cortex deconstructs it to its latent hierarchical properties.", "The original paper by BID24 describes a dynamic routing scheme that represents these internal representations as vectors given a group of designated neurons called capsules, which consist of a pose vector u ∈ R^d and an activation α ∈ [0, 1].", "The architecture consists of two convolutional layers that are used as the initial input representations for the first capsule layer, which are then routed to a final class capsule layer.", "The initial convolutional layers allow learned knowledge from local feature representations to be reused and replicated in other parts of the receptive field.", "The capsule inputs are determined using an Iterative Dynamic Routing scheme.", "A transformation W_ij is applied to the output vector u_i of capsule C^L_i.", "The length of the vector u_i represents the probability that this lower-level capsule detected a given object, and the direction corresponds to the state of the object (e.g., orientation, position or relationship to an upper capsule).", "The output vector u_i is transformed into a prediction vector û_{j|i}, where û_{j|i} = W_ij u_i.", "Then, û_{j|i} is weighted by a coupling coefficient c_ij to obtain s_j = Σ_i c_ij û_{j|i}, where the coupling coefficients for each capsule satisfy Σ_j c_ij = 1 and c_ij is obtained from the log prior probabilities b_ij through a sigmoid function followed by the softmax c_ij = exp(b_ij) / Σ_k exp(b_ik).", "If û^L_{j|i} has a high scalar magnitude when multiplied by u^{L+1}_j, then the coupling coefficient c_ij is increased and the coupling coefficients of the remaining potential parent capsules are decreased.", "Routing By Agreement is then performed using coincidence filtering to find tight clusters of nearby predictions.", "The entity's output vector length represents the probability of an entity being present, using the nonlinear normalization shown in Equation 1, where the vote v_j is the output computed from the total input s_j, which is then used to compute the agreement a_ij = v_j · û_{j|i} that is added to the log prior b_ij.", "This paper has introduced the Siamese Capsule Network, a novel architecture that extends Capsule Networks to the pairwise learning setting with a
contrastive loss over $\ell_2$-normalized features that maximizes inter-class variance and minimizes intra-class variance.", "The results indicate that Capsule Networks perform better at learning from only a few examples and converge faster when a contrastive loss is used that takes face embeddings in the form of encoded capsule pose vectors.", "We find Siamese Capsule Networks to perform particularly well on the AT&T dataset in the few-shot learning setting, where testing involves unseen classes (i.e., subjects), while remaining competitive against baselines on the larger Labeled Faces In The Wild dataset." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.07692307233810425, 0.2222222238779068, 0.17142856121063232, 0.1463414579629898, 0.16949151456356049, 0.060606054961681366, 0, 0.0624999962747097, 0, 0.04878048226237297, 0.03999999538064003, 0.043478257954120636, 0.03389830142259598, 0.04444443807005882, 0.1249999925494194, 0.1090909019112587, 0.21621620655059814, 0.07692307233810425, 0, 0.10526315122842789, 0.06779660284519196, 0.03999999538064003, 0.07547169178724289, 0.09756097197532654, 0.05128204822540283, 0.07407406717538834, 0.12903225421905518, 0.1304347813129425, 0, 0.035087715834379196, 0, 0, 0.0317460261285305, 0.08695651590824127, 0.11999999731779099, 0.07407406717538834 ]
H1xylj04_V
true
[ "A pairwise learned capsule network that performs well on face verification tasks given limited labeled data " ]
[ "In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016).", "Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting.", "The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones.", "Next, we introduce the β-leaveone-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline.", "Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization.", "Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance.", "Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.", "Model-free deep reinforcement learning has achieved several remarkable successes in domains ranging from super-human-level control in video games (Mnih et al., 2015) and the game of Go BID10 , to continuous motor control tasks (Lillicrap et al., 2015; Schulman et al., 2015) .Much", "of the recent work can be divided into two categories. First", ", those of which that, often building on the DQN framework, act -greedily according to an action-value function and train using minibatches of transitions sampled from an experience replay buffer BID10 BID13 BID5 BID0 . These", "value-function agents benefit from improved sample complexity, but tend to suffer from long runtimes (e.g. DQN requires approximately a week to train on Atari). The second", "category are the actor-critic agents, which includes the asynchronous advantage actor-critic (A3C) algorithm, introduced by Mnih et al. (2016) . These agents", "train on transitions collected by multiple actors running, and often training, in parallel (Schulman et al., 2017; BID12 . The deep actor-critic", "agents train on each trajectory only once, and thus tend to have worse sample complexity. However, their distributed", "nature allows significantly faster training in terms of wall-clock time. Still, not all existing algorithms", "can be put in the above two categories and various hybrid approaches do exist BID17 O'Donoghue et al., 2017; BID4 BID14 .", "In this work we presented a new off-policy agent based on Retrace actor-critic architecture and show that it achieves similar performance as the current state-of-the-art while giving significant real-time performance gains.", "We demonstrate the benefits of each of the suggested algorithmic improvements, including Distributional Retrace, beta-LOO policy gradient and contextual priority tree.", "DISPLAYFORM0 Proof.", "The bias ofĜ β-LOO is DISPLAYFORM1" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6865671873092651, 0.04255318641662598, 0.04347825422883034, 0.04347825422883034, 0.04255318641662598, 0.17777776718139648, 0.1860465109348297, 0.06666666269302368, 0, 0.14035087823867798, 0.08163265138864517, 0, 0.08695651590824127, 0.0952380895614624, 0, 0.04347825422883034, 0.18518517911434174, 0.09302324801683426, 0 ]
rkHVZWZAZ
true
[ "Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C." ]
[ "Hierarchical planning, in particular, Hierarchical Task Networks, was proposed as a method to describe plans by decomposition of tasks to sub-tasks until primitive tasks, actions, are obtained.", "Plan verification assumes a complete plan as input, and the objective is finding a task that decomposes to this plan.", "In plan recognition, a prefix of the plan is given and the objective is finding a task that decomposes to the (shortest) plan with the given prefix.", "This paper describes how to verify and recognize plans using a common method known from formal grammars, by parsing.", "Hierarchical planning is a practically important approach to automated planning based on encoding abstract plans as hierarchical task networks (HTNs) BID3 .", "The network describes how compound tasks are decomposed, via decomposition methods, to sub-tasks and eventually to actions forming a plan.", "The decomposition methods may specify additional constraints among the subtasks such as partial ordering and causal links.There exist only two systems for verifying if a given plan complies with the HTN model (a given sequence of actions can be obtained by decomposing some task).", "One system is based on transforming the verification problem to SAT BID2 and the other system is using parsing of attribute grammars BID1 .", "Only the parsing-based system supports HTN fully (the SAT-based system does not support the decomposition constraints).Parsing", "became popular in solving the plan recognition problem BID5 as researchers realized soon the similarity between hierarchical plans and formal grammars, specifically context-free grammars with parsing trees close to decomposition trees of HTNs. The plan", "recognition problem can be formulated as the problem of adding a sequence of actions after some observed partial plan such that the joint sequence of actions forms a complete plan generated from some task (more general formulations also exist). Hence plan", "recognition can be seen as a generalization of plan verification. There exist", "numerous approaches to plan recognition using parsing or string rewriting (Avrahami-Zilberbrand and Kaminka 2005; BID5 BID4 BID5 ), but they use hierarchical models that are weaker than HTNs. The languages", "defined by HTN planning problems (with partial-order, preconditions and effects) lie somewhere between context-free (CF) and context-sensitive (CS) languages BID5 so to model HTNs one needs to go beyond the CF grammars. Currently, the", "only grammar-based model covering HTNs fully uses attribute grammars BID0 . Moreover, the", "expressivity of HTNs makes the plan recognition problem undecidable BID2 . Currently, there", "exists only one approach for HTN plan recognition. This approach relies", "on translating the plan recognition problem to a planning problem BID5 , which is a method invented in BID5 .In this paper we focus", "on verification and recognition of HTN plans using parsing. The uniqueness of the", "proposed methods is that they cover full HTNs including task interleaving, partial order of sub-tasks, and other decomposition constraints (prevailing constraints, specifically). The methods are derived", "from the plan verification technique proposed in BID1 .There are two novel contributions", "of this paper. First, we will simplify the above", "mentioned verification technique by exploiting information about actions and states to improve practical efficiency of plan verification. Second, we will extend that technique", "to solve the plan (task) recognition problem. 
For plan verification, only the method", "in BID1 supports HTN fully. We will show that the verification algorithm", "can be much simpler and, hence, it is expected to be more efficient. For plan recognition, the method proposed in", "BID5 can in principle support HTN fully, if a full HTN planner is used (which is not the case yet, as prevailing conditions are not supported). However, like other plan recognition techniques", "it requires the top task (the goal) and the initial state to be specified as input. A practical difference of our methods is that they", "do not require information about possible top (root) tasks and an initial state as their input. This is particularly interesting for plan/task recognition", ", where existing methods require a set of candidate tasks (goals) to select from (in principle, they may use all tasks as candidates, but this makes them inefficient).", "In this paper, we proposed two versions of the parsing technique for verification of HTN plans and for recognition of HTN plans.", "As far as we know, these are the only approaches that currently cover HTN fully, including all decomposition constraints.", "Both versions can be applied to solve both the verification and recognition problems, but as we demonstrated using an example, each of them has some deficiencies when applied to the other problem. The next obvious step is implementation and empirical evaluation of both techniques.", "There is no doubt that the novel verification algorithm is faster than the previous approaches BID2 and BID1 .", "The open question is how much faster it will be, in particular for large plans.", "The efficiency of the novel plan recognition technique in comparison to the existing compilation technique (BID5) is less clear, as the two techniques use different approaches, bottom-up vs. top-down.", "The disadvantage of the compilation technique is that it needs to re-generate the known plan prefix, but it can exploit heuristics to remove some overhead there.", "In contrast, the parsing technique looks more like generate-and-test, but controlled by the hierarchical structure.", "It also guarantees finding the shortest extension of the plan prefix." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.12121211737394333, 0.1818181723356247, 0.5294117331504822, 0.11428570747375488, 0.23529411852359772, 0.20689654350280762, 0.34285715222358704, 0.06666666269302368, 0.2978723347187042, 0.043478257954120636, 0.07407406717538834, 0.1818181723356247, 0.21739129722118378, 0.14814814925193787, 0.07407406717538834, 0.07999999821186066, 0.1111111044883728, 0.4444444477558136, 0.19999998807907104, 0, 0.1666666567325592, 0.2222222238779068, 0.07692307233810425, 0.07407406717538834, 0.05882352590560913, 0.045454539358615875, 0.20512820780277252, 0.052631575614213943, 0.1428571343421936, 0.32258063554763794, 0.05882352590560913, 0.15094339847564697, 0.06451612710952759, 0.13333332538604736, 0.1428571343421936, 0.15789473056793213, 0.1428571343421936, 0.07999999821186066 ]
HJgRIrHWt4
true
[ "The paper describes methods to verify and recognize HTN plans by parsing of attribute grammars." ]
[ "Neural architecture search (NAS), the task of finding neural architectures automatically, has recently emerged as a promising approach for unveiling better models over human-designed ones.", "However, most success stories are for vision tasks and have been quite limited for text, except for a small language modeling setup.", "In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension.", "From a standard sequence-to-sequence models for translation, we conduct extensive searches over the recurrent cells and attention similarity functions across two translation tasks, IWSLT English-Vietnamese and WMT German-English.", "We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines.", "In addition, we show that results on attention searches are transferable to reading comprehension on the SQuAD dataset.", "There has been vast literature on finding neural architectures automatically dated back to the 1980s with genetic algorithms BID18 to recent approaches that use random weights BID17 , Bayesian optimization BID23 , reinforcement learning BID1 BID28 , evolution BID16 , and hyper networks BID3 .", "Among these, the approach of neural architecture search (NAS) using reinforcement learning by , barring computational cost, has been most promising, yielding stateof-the-art performances on several popular vision benchmarks such as CIFAR-10 and ImageNet .", "Building on NAS, others have found better optimizers BID2 and activation functions BID15 than human-designed ones.", "Despite these success stories, most of the work mainly focuses on vision tasks, with little attention to language ones, except for a small language modeling task on the Penn Tree Bank dataset (PTB) in .This", "work aims to bridge that gap by exploring neural architecture search for language tasks. We start", "by applying the approach of to neural machine translation (NMT) with sequence-to-sequence BID25 as an underlying model. Our goal", "is to find new recurrent cells that can work better than Long Short-term Memory (LSTM) BID6 . We then", "introduce a novel \"stack\" search space as an alternative to the fixed-structure tree search space defined in . We use", "this new search space to find similarity functions for the attention mechanism in NMT BID0 BID9 . Through", "our extensive searches across two translation benchmarks, small IWSLT English-Vietnamse and large WMT German-English, we report challenges in performing cell searches for NMT and demonstrate initial success on attention searches with translation improvements over strong baselines.Lastly, we show that the attention similarity functions found for NMT are transferable to the reading comprehension task on the Stanford Question Answering Dataset (SQuAD) BID14 , yielding non-trivial improvements over the standard dot-product function. Directly", "running NAS attention search on SQuAD boosts the performance even further.Figure 1: Tree search space for recurrent cells -shown is an illustration of a tree search space specifically designed for searching over LSTM-inspired cells. The figure", "was obtained from with permission. Left: the", "tree that defines the computation steps to be predicted by controller. Center: an", "example set of predictions made by the controller for each computation step in the tree. 
Right: the", "computation graph of the recurrent cell constructed from example predictions of the controller.", "In this paper, we have made a contribution towards extending the success of neural architecture search (NAS) from vision to another domain: languages.", "Specifically, we are the first to apply NAS to the tasks of machine translation and reading comprehension at scale.", "Our newly found recurrent cells perform better on translation than the previously discovered NASCell.", "Furthermore, we propose a novel stack-based search space as a more flexible alternative to the fixed-structure tree search space used for recurrent cell search.", "With this search space, we find new attention functions that outperform strong translation baselines.", "In addition, we demonstrate that the attention search results are transferable to the SQuAD reading comprehension task, yielding non-trivial improvements over dot-product attention.", "Directly running NAS attention search on SQuAD boosts the performance even further.", "We hope that our extensive experiments will pave the way for future research in NAS for languages." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19607841968536377, 0.1304347813129425, 0.3461538553237915, 0.11320754140615463, 0.21739129722118378, 0.2790697515010834, 0.09090908616781235, 0.1666666567325592, 0.0476190410554409, 0.20689654350280762, 0.380952388048172, 0.17777776718139648, 0.13636362552642822, 0.1395348757505417, 0.23255813121795654, 0.2142857164144516, 0.24137930572032928, 0, 0.05128204822540283, 0.09756097197532654, 0.10810810327529907, 0.20408162474632263, 0.2790697515010834, 0.10810810327529907, 0.17391303181648254, 0.14999999105930328, 0.25531914830207825, 0.15789473056793213, 0.09756097197532654 ]
r1Zi2Mb0-
true
[ "We explore neural architecture search for language tasks. Recurrent cell search is challenging for NMT, but attention mechanism search works. The result of attention search on translation is transferable to reading comprehension." ]
[ "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code.", "In some cases, autoencoders can \"interpolate\": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints.", "In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data.", "We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting.", "We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.", "One goal of unsupervised learning is to uncover the underlying structure of a dataset without using explicit labels.", "A common architecture used for this purpose is the autoencoder, which learns to map datapoints to a latent code from which the data can be recovered with minimal information loss.", "Typically, the latent code is lower dimensional than the data, which indicates that autoencoders can perform some form of dimensionality reduction.", "For certain architectures, the latent codes have been shown to disentangle important factors of variation in the dataset which makes such models useful for representation learning BID7 BID15 .", "In the past, they were also used for pre-training other networks by being trained on unlabeled data and then being stacked to initialize a deep network BID1 BID44 .", "More recently, it was shown that imposing a prior on the latent space allows autoencoders to be used for probabilistic or generative modeling BID18 BID34 BID27 .In", "some cases, autoencoders have shown the ability to interpolate. Specifically", ", by mixing codes in latent space and decoding the result, the autoencoder can produce a semantically meaningful combination of the corresponding datapoints. Interpolation", "has been frequently reported as a qualitative experimental result in studies about autoencoders BID5 BID35 BID30 BID29 BID14 and latent-variable generative models in general BID10 BID33 BID41 . The ability to", "interpolate can be useful in its own right e.g. for creative applications (Carter & Nielsen, 2017) . However, it also", "indicates that the autoencoder can \"extrapolate\" beyond the training data and has learned a latent space with a particular structure. Specifically, if", "interpolating between two points in latent space produces a smooth semantic warping in data space, this suggests that nearby points in latent space are semantically similar. A visualization", "of this idea is shown in FIG0 , where a smooth A critic network is fed interpolants and reconstructions and tries to predict the interpolation coefficient α corresponding to its input (with α = 0 for reconstructions). The autoencoder", "is trained to fool the critic into outputting α = 0 for interpolants. interpolation between", "a \"2\" and a \"9\" suggests that the 2 is surrounded by semantically similar points, i.e. other 2s. 
This property may suggest", "that an autoencoder which interpolates well could also provide a good learned representation for downstream tasks because similar points are clustered. If the interpolation is not", "smooth, there may be \"discontinuities\" in latent space which could result in the representation being less useful as a learned feature. This connection between interpolation", "and a \"flat\" data manifold has been explored in the context of unsupervised representation learning BID3 and regularization BID43 .Given the widespread use of interpolation", "as a qualitative measure of autoencoder performance, we believe additional investigation into the connection between interpolation and representation learning is warranted. Our goal in this paper is threefold: First", ", we introduce a regularization strategy with the specific goal of encouraging improved interpolations in autoencoders (section 2); second, we develop a synthetic benchmark where the slippery concept of a \"semantically meaningful interpolation\" is quantitatively measurable (section 3.1) and evaluate common autoencoders on this task (section 3.2); and third, we confirm the intuition that good interpolation can result in a useful representation by showing that the improved interpolation ability produced by our regularizer elicits improved representation learning performance on downstream tasks (section 4). We also make our codebase available 1 which", "provides a unified implementation of many common autoencoders including our proposed regularizer.", "In this paper, we have provided an in-depth perspective on interpolation in autoencoders.", "We proposed Adversarially Constrained Autoencoder Interpolation (ACAI), which uses a critic to encourage interpolated datapoints to be more realistic.", "To make interpolation a quantifiable concept, we proposed a synthetic benchmark and showed that ACAI substantially outperformed common autoencoder models.", "This task also yielded unexpected insights, such as that a VAE which has effectively learned the data distribution might not interpolate.", "We also studied the effect of improved interpolation on downstream tasks, and showed that ACAI led to improved performance for feature learning and unsupervised clustering.", "These findings confirm our intuition that improving the interpolation abilities of a baseline autoencoder can also produce a better learned representation for downstream tasks.", "However, we emphasize that we do not claim that good interpolation always implies a good representation -for example, the AAE produced smooth and realistic interpolations but fared poorly in our representations learning experiments and the denoising autoencoder had low-quality interpolations but provided a useful representation.In future work, we are interested in investigating whether our regularizer improves the performance of autoencoders other than the standard \"vanilla\" autoencoder we applied it to.", "In this paper, we primarily focused on image datasets due to the ease of visualizing interpolations, but we are also interested in applying these ideas to non-image datasets.", "A LINE BENCHMARK" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1463414579629898, 0.13333332538604736, 0.12765957415103912, 0.4166666567325592, 0.3478260934352875, 0.11428570747375488, 0.13333332538604736, 0.15789473056793213, 0.13333332538604736, 0.2222222238779068, 0.2666666507720947, 0.1428571343421936, 0.14999999105930328, 0.12765957415103912, 0.15789473056793213, 0.2631579041481018, 0.09756097197532654, 0.18867924809455872, 0.1818181723356247, 0.19999998807907104, 0.45454543828964233, 0.2380952388048172, 0.25, 0.21739129722118378, 0.2790697515010834, 0.19999998807907104, 0.12903225421905518, 0.1111111044883728, 0.21621620655059814, 0.25641024112701416, 0.39024388790130615, 0.4878048598766327, 0.27397260069847107, 0.09302324801683426, 0 ]
S1fQSiCcYm
true
[ "We propose a regularizer that improves interpolation and autoencoders and show that it also improves the learned representation for downstream tasks." ]
[ "We consider the problem of generating plausible and diverse video sequences, when we are only given a start and an end frame.", "This task is also known as inbetweening, and it belongs to the broader area of stochastic video generation, which is generally approached by means of recurrent neural networks (RNN).", "In this paper, we propose instead a fully convolutional model to generate video sequences directly in the pixel domain.", "We first obtain a latent video representation using a stochastic fusion mechanism that learns how to incorporate information from the start and end frames.", "Our model learns to produce such latent representation by progressively increasing the temporal resolution, and then decode in the spatiotemporal domain using 3D convolutions.", "The model is trained end-to-end by minimizing an adversarial loss.", "Experiments on several widely-used benchmark datasets show that it is able to generate meaningful and diverse in-between video sequences, according to both quantitative and qualitative evaluations.", "Imagine if we could teach an intelligent system to automatically turn comic books into animations.", "Being able to do so would undoubtedly revolutionize the animation industry.", "Although such an immensely labor-saving capability is still beyond the current state-of-the-art, advances in computer vision and machine learning are making it an increasingly more tangible goal.", "Situated at the heart of this challenge is video inbetweening, that is, the process of creating intermediate frames between two given key frames.", "Recent development in artificial neural network architectures (Simonyan & Zisserman, 2015; He et al., 2016) and the emergence of generative adversarial networks (GAN) (Goodfellow et al., 2014) have led to rapid advancement in image and video synthesis (Aigner & Körner, 2018; Tulyakov et al., 2017) .", "At the same time, the problem of inbetweening has received much less attention.", "The majority of the existing works focus on two different tasks:", "i) unconditional video generation, where the model learns the input data distribution during training and generates new plausible videos without receiving further input (Srivastava et al., 2015; Finn et al., 2016; Lotter et al., 2016) ; and", "ii) video prediction, where the model is given a certain number of past frames and it learns to predict how the video evolves thereafter (Vondrick et al., 2016; Saito et al., 2017; Tulyakov et al., 2017; Denton & Fergus, 2018) .", "In most cases, the generative process is modeled as a recurrent neural network (RNN) using either long-short term memory (LSTM) cells (Hochreiter & Schmidhuber, 1997) or gated recurrent units (GRU) (Cho et al., 2014) .", "Indeed, it is generally assumed that some form of a recurrent model is necessary to capture long-term dependencies, when the goal is to generate videos over a length that cannot be handled by pure frame-interpolation methods based on optical flow.", "In this paper, we show that it is in fact possible to address the problem of video inbetweening using a stateless, fully convolutional model.", "A major advantage of this approach is its simplicity.", "The absence of recurrent components implies shorter gradient paths, hence allowing for deeper networks and more stable training.", "The model is also more easily parallelizable, due to the lack of sequential states.", "Moreover, in a convolutional model, it is straightforward to enforce temporal consistency with the start and end frames given as inputs.", "Motivated 
by these observations, we make the following contributions in this paper:", "• We propose a fully convolutional model to address the task of video inbetweening.", "The proposed model consists of three main components:", "i) a 2D-convolutional image encoder, which maps the input key frames to a latent space;", "ii) a 3D-convolutional latent representation generator, which learns how to incorporate the information contained in the input frames with progressively increasing temporal resolution; and", "iii) a video generator, which uses transposed 3D-convolutions to decode the latent representation into video frames.", "• Our key finding is that separating the generation of the latent representation from video decoding is of crucial importance to successfully address video inbetweening.", "Indeed, attempting to generate the final video directly from the encoded representations of the start and end frames tends to perform poorly, as further demonstrated in Section 4.", "To this end, we carefully design the latent representation generator to stochastically fuse the key frame representations and progressively increase the temporal resolution of the generated video.", "• We carried out extensive experiments on several widely used benchmark datasets, and demonstrate that the model is able to produce realistic video sequences, considering key frames that are well over a half second apart from each other.", "In addition, we show that it is possible to generate diverse sequences given the same start and end frames, by simply varying the input noise vector driving the generative process.", "The rest of the paper is organized as follows: We review the outstanding literature related to our work in Section 2.", "Section 3 describes our proposed model in details.", "Experimental results, both quantitative and qualitative, are presented in Section 4, followed by our conclusions in Section 5.", "We presented a method for video inbetweening using only direct 3D convolutions.", "Despite having no recurrent components, our model produces good performance on most widely-used benchmark datasets.", "The key to success for this approach is a dedicated component that learns a latent video representation, decoupled from the final video decoding phase.", "A stochastic gating mechanism is used to progressively fuse the information of the given key frames.", "The rather surprising fact that video inbetweening can be achieved over such a long time base without sophisticated recurrent models may provide a useful alternative perspective for future research on video generation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0.08888888359069824, 0.05405404791235924, 0.19512194395065308, 0.1463414579629898, 0, 0.0952380895614624, 0, 0, 0, 0.21052631735801697, 0.035087715834379196, 0, 0, 0.04081632196903229, 0.11538460850715637, 0.038461532443761826, 0, 0.0952380895614624, 0, 0.0555555522441864, 0, 0.10256409645080566, 0, 0.0624999962747097, 0, 0.1249999925494194, 0.04878048226237297, 0.12121211737394333, 0.1538461446762085, 0.1395348757505417, 0.1428571343421936, 0.145454540848732, 0.08695651590824127, 0.052631575614213943, 0, 0, 0.46666666865348816, 0, 0.19999998807907104, 0.1818181723356247, 0.0833333283662796 ]
S1epu2EtPB
true
[ "This paper presents method for stochastically generating in-between video frames from given key frames, using direct 3D convolutions." ]
[ "Aligning knowledge graphs from different sources or languages, which aims to align both the entity and relation, is critical to a variety of applications such as knowledge graph construction and question answering.", "Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplets to train effective models.", "However, these aligned triplets may not be available or are expensive to obtain for many domains.", "Therefore, in this paper we study how to design fully-unsupervised methods or weakly-supervised methods, i.e., to align knowledge graphs without or with only a few aligned triplets.", "We propose an unsupervised framework based on adversarial training, which is able to map the entities and relations in a source knowledge graph to those in a target knowledge graph.", "This framework can be further seamlessly integrated with existing supervised methods, where only a limited number of aligned triplets are utilized as guidance.", "Experiments on real-world datasets prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings.", "Knowledge graphs represent a collection of knowledge facts and are quite popular in the real world.", "Each fact is represented as a triplet (h, r, t), meaning that the head entity h has the relation r with the tail entity t.", "Examples of real-world knowledge graphs include instances which contain knowledge facts from general domain in different languages (Freebase 1 , DBPedia BID2 , Yago BID19 , WordNet 2 ) or facts from specific domains such as biomedical ontology (UMLS 3 ).", "Knowledge graphs are critical to a variety of applications such as question answering BID4 ) and semantic search BID13 ), which are attracting growing interest recently in both academia and industry communities.In practice, each knowledge graph is usually constructed from a single source or language, the coverage of which is limited.", "To enlarge the coverage and construct more unified knowledge graphs, a natural idea is to integrate multiple knowledge graphs from different sources or languages BID0 ).", "However, different knowledge graphs use distinct symbol systems to represent entities and relations, which are not compatible.", "As a result, it is necessary to align entities and relations across different knowledge graphs (a.k.a., knowledge graph alignment) before integrating them.Indeed, there are some recent studies focusing on aligning entities and relations from a source knowledge graph to a target knowledge graph ( BID23 ; BID6 ; BID7 ).", "These methods typically represent entities and relations in a low-dimensional space, and meanwhile learn a mapping function to align entities and relations from the source knowledge graph to the target one.", "However, these methods usually rely on a large number of aligned triplets as labeled data to train effective alignment models.", "In reality, the aligned triplets may not be available or can be expensive to obtain, making existing methods fail to achieve satisfactory results.", "Therefore, we are seeking for an unsupervised or weakly-supervised approach, which is able to align knowledge graphs with a few or even without labeled data.In this paper, we propose an unsupervised approach for knowledge graph alignment with the adversarial training framework BID11 .", "Our proposed approach aims to learn alignment functions, i.e., P e (e tgt |e src ) and P r (r tgt |r src ), to map the entities and relations (e src and r src ) from the source knowledge 
graph to those (e_tgt and r_tgt) in the target graph, without any labeled data.", "Towards this goal, we notice that we can align each triplet in the source knowledge graph with one in the target knowledge graph by aligning the head/tail entities and relation, respectively.", "Ideally, the optimal alignment functions would align all the source triplets to some valid triplets (i.e., triplets expressing true facts).", "Therefore, we can enhance the alignment functions by improving the plausibility of the aligned triplets.", "With this intuition, we train a triplet discriminator to distinguish between the real triplets in the target knowledge graph and those aligned from the source graph, which provides a reward function to measure the plausibility of a triplet.", "Meanwhile, the alignment functions are optimized to maximize the reward.", "The above process naturally forms an adversarial training procedure (BID11).", "By alternately optimizing the alignment functions and the discriminator, the discriminator can consistently enhance the alignment functions. However, the above approach may suffer from the problem of mode collapse (BID17).", "Specifically, many entities in the source knowledge graph may be aligned to only a few entities in the target knowledge graph.", "This problem can be addressed if the aggregated posterior entity distribution Σ_{e_src} P_e(e_tgt|e_src) P(e_src) derived by the alignment functions matches the prior entity distribution P(e_tgt) in the target knowledge graph.", "Therefore, we match them with another adversarial training framework, which shares a similar idea with adversarial auto-encoders (BID16).", "The whole framework can also be seamlessly integrated with existing supervised methods, in which we can use a few aligned entities or relations as guidance, yielding a weakly-supervised approach. Our", "approach can be effectively optimized with stochastic gradient descent, where the gradient for the alignment functions is calculated by the REINFORCE algorithm (Williams, 1992). We", "conduct extensive experiments on several real-world knowledge graphs. Experimental", "results prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.20689654350280762, 0, 0.21052631735801697, 0.1666666567325592, 0.11764705181121826, 0.0714285671710968, 0.07407406717538834, 0.060606054961681366, 0.043478257954120636, 0.07017543911933899, 0.0555555522441864, 0.0714285671710968, 0.11538460850715637, 0.11428570747375488, 0.06451612710952759, 0, 0.2978723347187042, 0.12244897335767746, 0.1666666567325592, 0.06666666269302368, 0.0833333283662796, 0.0952380895614624, 0.09999999403953552, 0.1818181723356247, 0.05714285373687744, 0.14814814925193787, 0.20000000298023224, 0.2142857164144516, 0.10526315122842789, 0.12121211737394333, 0.09999999403953552, 0.07999999821186066 ]
S14h9sCqYm
true
[ "This paper studies weakly-supervised knowledge graph alignment with adversarial training frameworks." ]
[ "Multi-view learning can provide self-supervision when different views are available of the same data.", "Distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora.", "Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion.", "One framework uses a generative objective and the other a discriminative one.", "In both frameworks, the final representation is an ensemble of two views, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model.", "We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learnt counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks.", "Multi-view learning methods provide the ability to extract information from different views of the data and enable self-supervised learning of useful features for future prediction when annotated data is not available BID16 .", "Minimising the disagreement among multiple views helps the model to learn rich feature representations of the data and, also after training, the ensemble of the feature vectors from multiple views can provide an even stronger generalisation ability.Distributional hypothesis BID22 noted that words that occur in similar contexts tend to have similar meaning BID51 , and distributional similarity BID19 consolidated this idea by stating that the meaning of a word can be determined by the company it has.", "The hypothesis has been widely used in machine learning community to learn vector representations of human languages.", "Models built upon distributional similarity don't explicitly require human-annotated training data; the supervision comes from the semantic continuity of the language data.Large quantities of annotated data are usually hard and costly to obtain, thus it is important to study unsupervised and self-supervised learning.", "Our goal is to propose learning algorithms built upon the ideas of multi-view learning and distributional hypothesis to learn from unlabelled data.", "We draw inspiration from the lateralisation and asymmetry in information processing of the two hemispheres of the human brain where, for most adults, sequential processing dominates the left hemisphere, and the right hemisphere has a focus on parallel processing BID9 , but both hemispheres have been shown to have roles in literal and non-literal language comprehension BID14 BID15 .Our", "proposed multi-view frameworks aim to leverage the functionality of both RNN-based models, which have been widely applied in sentiment analysis tasks BID57 , and the linear/log-linear models, which have excelled at capturing attributional similarities of words and sentences BID5 BID6 BID24 BID51 for learning sentence representations. Previous", "work on unsupervised sentence representation learning based on distributional hypothesis can be roughly categorised into two types:Generative objective: These models generally follow the encoder-decoder structure. 
The encoder", "learns to produce a vector representation for the current input, and the decoder learns to generate sentences in the adjacent context given the produced vector BID28 BID24 BID20 BID50 . The idea is", "straightforward, yet its scalability for very large corpora is hindered by the slow decoding process that dominates training time, and also the decoder in each model is discarded after learning as the quality of generated sequences is not the main concern, which is a waste of parameters and learning effort.Our first multi-view framework has a generative objective and uses an RNN as the encoder and an invertible linear projection as the decoder. The training", "time is drastically reduced as the decoder is simple, and the decoder is also utilised after learning. A regularisation", "is applied on the linear decoder to enforce invertibility, so that after learning, the inverse of the decoder can be applied as a linear encoder in addition to the RNN encoder.Discriminative Objective: In these models, a classifier is learnt on top of the encoders to distinguish adjacent sentences from those that are not BID31 BID26 BID40 BID33 ; these models make a prediction using a predefined differential similarity function on the representations of the input sentence pairs or triplets.Our second multi-view framework has a discriminative objective and uses an RNN encoder and a linear encoder; it learns to maximise agreement among adjacent sentences. Compared to earlier", "work on multi-view learning BID16 BID17 BID52 that takes data from various sources or splits data into disjoint populations, our framework processes the exact same data in two distinctive ways. The two distinctive", "information processing views tend to encode different aspects of an input sentence; forcing agreement/alignment between these views encourages each view to be a better representation, and is beneficial to the future use of the learnt representations.Our contribution is threefold:• Two multi-view frameworks for learning sentence representations are proposed, in which one framework uses a generative objective and the other one adopts a discriminative objective. Two encoding functions", ", an RNN and a linear model, are learnt in both frameworks.• The results show that", "in both frameworks, aligning representations from two views gives improved performance of each individual view on all evaluation tasks compared to their single-view trained counterparts, and furthermore ensures that the ensemble of two views provides even better results than each improved view alone.• Models trained under our", "proposed frameworks achieve good performance on the unsupervised tasks, and overall outperform existing unsupervised learning models, and armed with various pooling functions, they also show solid results on supervised tasks, which are either comparable to or better than those of the best unsupervised transfer model. It is shown BID24 that the", "consistency between supervised and unsupervised evaluation tasks is much lower than that within either supervised or unsupervised tasks alone and that a model that performs well on supervised tasks may fail on unsupervised tasks. BID13 subsequently showed", "that, with a labelled training corpus, such as SNLI BID8 and MultiNLI BID56 , the resulting representations of the sentences from the trained model excel in both supervised and unsupervised tasks. Multi-task learning BID48", "also gives impressive performance on downstream tasks while labelled data is costly. 
Our model is able to achieve", "good results on both groups of tasks without labelled information.", "In both frameworks, RNN encoder and linear encoder perform well on all tasks, and generative objective and discriminative objective give similar performance.", "We proposed multi-view sentence representation learning frameworks with generative and discriminative objectives; each framework combines an RNN-based encoder and an average-on-word-vectors linear encoder and can be efficiently trained within a few hours on a large unlabelled corpus.", "The experiments were conducted on three large unlabelled corpora, and meaningful comparisons were made to demonstrate the generalisation ability and transferability of our learning frameworks and consolidate our claim.", "The produced sentence representations outperform existing unsupervised transfer methods on unsupervised evaluation tasks, and match the performance of the best unsupervised model on supervised evaluation tasks.Our experimental results support the finding BID24 that linear/log-linear models (g in our frameworks) tend to work better on the unsupervised tasks, while RNN-based models (f in our frameworks) generally perform better on the supervised tasks.", "As presented in our experiments, multi-view learning helps align f and g to produce better individual representations than when they are learned separately.", "In addition, the ensemble of both views leveraged the advantages of both, and provides rich semantic information of the input sentence.", "Future work should explore the impact of having various encoding architectures and learning under the multi-view framework.Our multi-view learning frameworks were inspired by the asymmetric information processing in the two hemispheres of the human brain, in which the left hemisphere is thought to emphasise sequential processing and the right one more parallel processing BID9 .", "Our experimental results raise an intriguing hypothesis about how these two types of information processing may complementarily help learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20000000298023224, 0, 0.15789473056793213, 0, 0.10810810327529907, 0, 0.11764705926179886, 0, 0.08695651590824127, 0.09090908616781235, 0.07692307233810425, 0, 0.08510638028383255, 0.25, 0.0624999962747097, 0.032258063554763794, 0.0952380895614624, 0.024390242993831635, 0.05714285373687744, 0.06666666269302368, 0, 0, 0.07999999821186066, 0.0624999962747097, 0.10810810327529907, 0, 0, 0, 0.15789473056793213, 0.06451612710952759, 0.08163265138864517, 0.06896551698446274, 0.08695651590824127, 0.0416666641831398, 0.07999999821186066 ]
HyxaHGcij7
true
[ "Multi-view learning improves unsupervised sentence representation learning" ]
[ "There are myriad kinds of segmentation, and ultimately the `\"right\" segmentation of a given scene is in the eye of the annotator.", "Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation.", "As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally (within an image) and non-locally (across images).", "We propose guided networks, which extract a latent task representation---guidance---from variable amounts and classes (categories, instances, etc.) of pixel supervision and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning.", "To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes.", "To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances.", "Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision.", "Many tasks of scientific and practical interest require grouping pixels, such as cellular microscopy, medical imaging, and graphic design.", "Furthermore, a single image might need to be segmented in several ways, for instance to first segment all people, then focus on a single person, and finally pick out their face.", "Learning a particular type of segmentation, or even extending an existing model to a new task like a new semantic class, generally requires collecting and annotating a large amount of data and (re-)training a large model for many iterations.", "Interactive segmentation with a supervisor in-the-loop can cope with less supervision, but requires at least a little annotation for each image, entailing significant effort over image collections or videos.", "Faced with endless varieties of segmentation and countless images, yet only so much expertise and time, a segmentor should be able to learn from varying amounts of supervision and propagate that supervision to unlabeled pixels and images.We frame these needs as the problem of guided segmentation: given supervision from few or many images and pixels, collect and propagate this supervision to segment any given images, and do so quickly and with generality across tasks.", "The amount of supervision may vary widely, from a lone annotated pixel, millions of pixels in a fully annotated image, or even more across a collection of images as in conventional supervised learning for segmentation.", "The number of classes to be segmented may also vary depending on the task, such as when segmenting categories like cats vs. 
dogs, or when segmenting instances to group individual people.", "Guided segmentation extends fewshot learning to the structured output setting, and the non-episodic accumulation of supervision as data is progressively annotated.", "Guided segmentation broadens the scope of interactive segmentation by integrating supervision across images and segmenting unannotated images.As a first step towards solving this novel problem, we propose guided networks to extract guidance, a latent task representation, from variable amounts of supervision (see Figure 1 ).", "To do so we meta-learn how to extract and follow guidance by training episodically on tasks synthesized from a large, fully annotated dataset.", "Once trained, our model can quickly and cumulatively incorporate annotations to perform new tasks not seen during training.", "Guided networks reconcile static and interactive modes of inference: a guided model is both able to make predictions on its own, like a fully supervised model, and to incorporate expert supervision for defining new tasks or correcting errors, Figure 1 : A guide g extracts a latent task representation z from an annotated image (red) for inference by f θ (x, z) on a different, unannotated image (blue).", "like an interactive model.", "Guidance, unlike static model parameters, does not require optimization to update: it can be quickly extended or corrected during inference.", "Unlike annotations, guidance is latent and low-dimensional: it can be collected and propagated across images and episodes for inference without the supervisor in-the-loop as needed by interactive models.We evaluate our method on a variety of challenging segmentation problems in Section 5: interactive image segmentation, semantic segmentation, video object segmentation, and real-time interactive video segmentation, as shown in 2.", "We further perform novel exploratory experiments aimed at understanding the characteristics and limits of guidance.", "We compare guidance with standard supervised learning across the few-shot and many-shot extremes of support size to identify the boundary between few-shot and many-shot learning for segmentation.", "We demonstrate that in some cases, our model can generalize to guide tasks at a different level of granularity, such as meta-learning from instance supervision and then guiding semantic segmentation of categories.", "Guided segmentation unifies annotation-bound segmentation problems.", "Guided networks reconcile task-driven and interactive inference by extracting guidance, a latent task representation, from any amount of supervision given.", "With guidance our segmentor revolver can learn and infer tasks without optimization, improve its accuracy near-instantly with more supervision, and once-guided can segment new images without the supervisor in the loop." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1818181723356247, 0.19999998807907104, 0.23999999463558197, 0.3829787075519562, 0.045454539358615875, 0.17142856121063232, 0.1395348757505417, 0.12121211737394333, 0.09302324801683426, 0.13333332538604736, 0.1428571343421936, 0.260869562625885, 0.27272728085517883, 0.04651162400841713, 0.17142856121063232, 0.25, 0.15789473056793213, 0.060606054961681366, 0.1599999964237213, 0, 0, 0.158730149269104, 0.13333332538604736, 0.21621620655059814, 0.3913043439388275, 0.09999999403953552, 0.22857142984867096, 0.0476190447807312 ]
HJej6jR5Fm
true
[ "We propose a meta-learning approach for guiding visual segmentation tasks from varying amounts of supervision." ]
[ "Training generative adversarial networks requires balancing of delicate adversarial dynamics.", "Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes.", "In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator.", "We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation.", "Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128 x 128) dataset.", "Our model achieves an Inception Score (IS) of 148 and an Frechet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.", "Generative Adversarial Nets (GANs) are implicit generative models that can be trained to match a given data distribution.", "GANs were originally proposed and demonstrated for images by Goodfellow et al. (2014) .", "As the field of generative modelling has advanced, GANs have remained at the frontier, generating high-fidelity images at large scale (Brock et al., 2018) .", "However, despite growing insights into the dynamics of GAN training, most recent advances in large-scale image generation come from architectural improvements (Radford et al., 2015; Zhang et al., 2019) , or regularisation focusing on particular parts of the model (Miyato et al., 2018; Miyato & Koyama, 2018) .", "Inspired by the compressed sensing GAN (CS-GAN; Wu et al., 2019) , we further exploit the benefit of latent optimisation in adversarial games using natural gradient descent to optimise the latent variable z at each step of training, presenting a scalable and easy to implement approach to improve the dynamical interaction between the discriminator and the generator.", "For clarity, we unify these approaches as latent optimised GANs (LOGAN).", "To summarise our contributions:", "1. We present a novel analysis of latent optimisation in GANs from the perspective of differentiable games and stochastic approximation (Balduzzi et al., 2018; Heusel et al., 2017) , arguing that latent optimisation can improve the dynamics of adversarial training.", "2. Motivated by this analysis, we improve latent optimisation by taking advantage of efficient second-order updates.", "3. 
Our algorithm improves the state-of-the-art BigGAN-deep model (Brock et al., 2018) by a significant margin, without introducing any architectural change or additional parameters, resulting in higher quality images and more diverse samples (Figure 1 and 2).", "In this work we present the LOGAN model which significantly improves the state-of-the-art on large scale GAN training for image generation by online optimising the latent source z.", "Our results illustrate improvements in quantitative evaluation and samples with higher quality and diversity.", "Moreover, our analysis suggests that LOGAN fundamentally improves adversarial training dynamics.", "We therefore expect our method to be useful in other tasks that involve adversarial training, including representation learning and inference (Donahue et al., 2017; Dumoulin et al., 2017), text generation (Zhang et al., 2019), style learning (Zhu et al., 2017; Karras et al., 2019), audio generation and video generation (Vondrick et al., 2016; Clark et al., 2019). Figures 6 and 7 (Appendix A, additional samples and results) provide additional samples, organised similarly as in Figure 1 and 2.", "Figure 8 shows additional truncation curves." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.14814814925193787, 0.11764705181121826, 0.2222222238779068, 0.25, 0.1538461446762085, 0.08163265138864517, 0, 0.06451612710952759, 0, 0.10169491171836853, 0.0923076868057251, 0, 0, 0.31372547149658203, 0.060606054961681366, 0.1090909019112587, 0.27272728085517883, 0.12903225421905518, 0.3448275923728943, 0.10666666179895401, 0 ]
rJeU_1SFvr
true
[ "Latent optimisation improves adversarial training dynamics. We present both theoretical analysis and state-of-the-art image generation with ImageNet 128x128." ]
[ "In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset.", "We look at this problem in the setting where the number of parameters is greater than the number of sampled points.", "We show that for a wide class of differentiable activation functions (this class involves most nonlinear functions and excludes piecewise linear functions), we have that arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular.", "We essentially show that these non-singular hidden layer matrix satisfy a ``\"good\" property for these big class of activation functions.", "Techniques involved in proving this result inspire us to look at a new algorithmic, where in between two gradient step of hidden layer, we add a stochastic gradient descent (SGD) step of the output layer.", "In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the``good\" property mentioned earlier therefore partially explaining success of noisy gradient methods and addressing the issue of data independency of our earlier result.", "Both of these results are easily extended to hidden layers given by a flat matrix from that of a square matrix.", "Results are applicable even if network has more than one hidden layer provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions and optimization is only with respect to the outermost hidden layer.", "Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply.", "We use smoothness properties to guarantee asymptotic convergence of $O(1/\\text{number of iterations})$ to a first-order optimal solution.", "Neural networks architecture has recently emerged as a powerful tool for a wide variety of applications.", "In fact, they have led to breakthrough performance in many problems such as visual object classification BID13 , natural language processing BID5 and speech recognition BID17 .", "Despite the wide variety of applications using neural networks with empirical success, mathematical understanding behind these methods remains a puzzle.", "Even though there is good understanding of the representation power of neural networks BID1 , training these networks is hard.", "In fact, training neural networks was shown to be NP-complete for single hidden layer, two node and sgn(·) activation function BID2 .", "The main bottleneck in the optimization problem comes from non-convexity of the problem.", "Hence it is not clear how to train them to global optimality with provable guarantees.", "Neural networks have been around for decades now.", "A sudden resurgence in the use of these methods is because of the following: Despite the worst case result by BID2 , first-order methods such as gradient descent and stochastic gradient descent have been surprisingly successful in training these networks to global optimality.", "For example, Zhang et al. 
(2016) empirically showed that sufficiently over-parametrized networks can be trained to global optimality with stochastic gradient descent.", "Neural networks with zero hidden layers are relatively well understood in theory.", "In fact, several authors have shown that for such neural networks with monotone activations, gradient based methods will converge to the global optimum for different assumptions and settings BID16 BID10 BID11 BID12 ).Despite", "the hardness of training the single hidden layer (or two-layer) problem, enough literature is available which tries to reduce the hardness by making different assumptions. E.g., BID4", "made a few assumptions to show that every local minimum of the simplified objective is close to the global minimum. They also", "require some independent activations assumption which may not be satisfied in practice. For the same", "shallow networks with (leaky) ReLU activations, it was shown in Soudry & Carmon (2016) that all local minimum are global minimum of the modified loss function, instead of the original objective function. Under the same", "setting, Xie et al. (2016) showed that critical points with large \"diversity\" are near global optimal. But ensuring such", "conditions algorithmically is difficult. All the theoretical", "studies have been largely focussed on ReLU activation but other activations have been mostly ignored. In our understanding", ", this is the first time a theoretical result will be presented which shows that for almost all nonlinear activation functions including softplus, an arbitrary first-order optimal solution is also the global optimal provided certain \"simple\" properties of hidden layer. Moreover, we show that", "a stochastic gradient descent type algorithm will give us those required properties for free for all finite number of iterations hence even if the hidden layer variables are data dependent, we still get required properties. Our assumption on data", "distribution is very general and can be reasonable for practitioners. This comes at two costs", ": First is that the hidden layer of our network can not be wider than the dimension of the input data, say d. Since we also look at", "this problem in over-parametrized setting (where there is hope to achieve global optimality), this constraint on width puts a direct upper-bound of d 2 on the number of data points that can be trained. Even though this is a", "strong upper bound, recent results from margin bounds BID19 show that if optimal network is closer to origin then we can get an upper bound on number of samples independent of dimension of the problem which will ensure closeness of population objective and training objective. Second drawback of this", "general setting is that we can prove good properties of the optimization variables (hidden layer weights) for only finite iterations of the SGD type algorithm. But as it is commonly known", ", stochastic gradient descent converges to first order point asymptotically so ideally we would like to prove these properties for infinitely many iterations. We compare our results to", "some of the prior work of Xie et al. (2016) and Soudry & Carmon (2016) . Both of these papers use", "similar ideas to examine first order conditions but give quite different results from ours. They give results for ReLU", "or Leaky ReLU activations. We, on the other hand, give", "results for most other nonlinear activations, which can be more challenging. We discuss this in section", "3 in more detail. 
We also formally show that", "even though the objective function for training neural networks is nonconvex, it is Lipschitz smooth meaning that gradient of the objective function does not change a lot with small changes in underlying variable. To the best of our knowledge", ", there is no such result formally stated in the literature. Soltanolkotabi et al. (2017", ") discuss similar results, but there constant itself depends locally on w max , a hidden layer matrix element, which is variable of the the optimization function. Moreover, there result is", "probabilistic. Our result is deterministic", ", global and computable. This allows us to show convergence", "results for the gradient descent algorithm, enabling us to establish an upper bound on the number of iterations for finding an ε-approximate first-order optimal solution ( ∇f () ≤ ε). Therefore our algorithm will generate", "an ε-approximate first-order optimal solution which satisfies aforementioned properties of the hidden layer. Note that this does not mean that the", "algorithm will reach the global optimal point asymptotically. As mentioned before, when number of iterations", "tend to infinity, we could not establish \"good\" properties. We discuss technical difficulties to prove such", "a conjecture in more detail in section 5 which details our convergence results. At this point we would also like to point that", "there is good amount of work happening on shallow neural networks. In this literature, we see variety of modelling", "assumptions, different objective functions and local convergence results. BID15 focuses on a class of neural networks which", "have special structure called \"Identity mapping\". They show that if the input follows from Gaussian", "distribution then SGD will converge to global optimal for population objective of the \"identity mapping\" network. BID3 show that for isotropic Gaussian inputs, with", "one hidden layer ReLU network and single non-overlapping convolutional filter, all local minimizers are global hence gradient descent will reach global optimal in polynomial time for the population objective. For the same problem, after relaxing the constraint", "of isotropic Gaussian inputs, they show that the problem is NP-complete via reduction from a variant of set splitting problem. In both of these studies, the objective function is", "a population objective which is significantly different from training objective in over parametrized domain. In over-parametrized regime, Soltanolkotabi et al.", "(2017) shows that for the training objective with data coming from isotropic Gaussian distribution, provided that we start close to the true solution and know maximum singular value of optimal hidden layer then corresponding gradient descent will converge to the optimal solution. This is one of its kind of result where local convergence", "properties of the neural network training objective function have studied in great detail. Our result differ from available current literature in variety", "of ways. First of all, we study the training problem in the over-parametrized", "regime. In that regime, the training objective can be significantly different", "from population objective. Moreover, we study the optimization problem for many general non-linear", "activation functions. Our result can be extended to deeper networks when considering the optimization", "problem with respect to outermost hidden layer. We also prove that stochastic noise helps in keeping the aforementioned properties", "of hidden layer. 
This result, in essence, provides justification for using stochastic gradient descent", ". Another line of study looks at the effect of over-parametrization in the training of", "neural networks BID9 Nguyen & Hein, 2017) . These result are not for the same problem as they require huge amount of over-parametrization", ". In essence, they require the width of the hidden layer to be greater than number of data points", "which is unreasonable in many settings. These result work for fairly general activations as do our results but we require a moderate over-parametrization", ", width × dimension ≥ number of data population, much more reasonable in practice as pointed before from margin bound results. They also work for deeper neural network as do our results when optimization is with respect to outermost hidden", "layer (and aforementioned technical properties are satisfied for all hidden layers)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.11764705181121826, 0.1538461446762085, 0.1111111044883728, 0.1702127605676651, 0.07547169178724289, 0.05714285373687744, 0.11538460850715637, 0.09090908616781235, 0.25, 0.0624999962747097, 0.04651162400841713, 0.10810810327529907, 0.11764705181121826, 0.10526315122842789, 0.1428571343421936, 0, 0, 0.1538461446762085, 0.05128204822540283, 0.06896550953388214, 0.03999999538064003, 0.09302324801683426, 0.0555555522441864, 0.0624999962747097, 0.0833333283662796, 0.0555555522441864, 0.0833333283662796, 0, 0.21052631735801697, 0.11538460850715637, 0.1249999925494194, 0.1428571343421936, 0.11999999731779099, 0.09999999403953552, 0.13333332538604736, 0.09302324801683426, 0.05882352590560913, 0, 0, 0.060606054961681366, 0.07692307233810425, 0.11999999731779099, 0.0624999962747097, 0.08888888359069824, 0, 0.07407406717538834, 0.12244897335767746, 0.277777761220932, 0.1875, 0.0624999962747097, 0.10526315122842789, 0.11428570747375488, 0.11764705181121826, 0, 0.14999999105930328, 0.15686273574829102, 0.04878048226237297, 0.1111111044883728, 0.12903225421905518, 0.2631579041481018, 0.2142857164144516, 0, 0, 0, 0.1666666567325592, 0.25806450843811035, 0.1428571343421936, 0.10256409645080566, 0.12121211737394333, 0.04999999329447746, 0.145454540848732, 0.1428571343421936 ]
BkIkkseAZ
true
[ "This paper talks about theoretical properties of first-order optimal point of two layer neural network in over-parametrized case" ]
[ "We introduce the concept of channel aggregation in ConvNet architecture, a novel compact representation of CNN features useful for explicitly modeling the nonlinear channels encoding especially when the new unit is embedded inside of deep architectures for action recognition.", "The channel aggregation is based on multiple-channels features of ConvNet and aims to be at the spot finding the optical convergence path at fast speed.", "We name our proposed convolutional architecture “nonlinear channels aggregation networks (NCAN)” and its new layer “nonlinear channels aggregation layer (NCAL)”.", "We theoretically motivate channels aggregation functions and empirically study their effect on convergence speed and classification accuracy.", "Another contribution in this work is an efficient and effective implementation of the NCAL, speeding it up orders of magnitude.", "We evaluate its performance on standard benchmarks UCF101 and HMDB51, and experimental results demonstrate that this formulation not only obtains a fast convergence but stronger generalization capability without sacrificing performance.", "With modern learnable representations such as deep convolutional neural networks (CNNs) matured in many image understanding tasks BID7 , human action recognition has received a significant amount of attentions BID11 BID2 BID9 BID3 BID12 .", "Due to the fact that video itself provides an additional temporal clue and that the parameters and the calculations of CNNs grow exponentially, training CNNs with such large-scale parameters in video domain is time-consuming.", "However, it remains unclear how the effective convergence accelerators could be conducted for the optimal path by formulizing the handcrafted rules.", "Since videos consist of still images, training tricks and methods, such as Relu, BN, have been shown to transfer to videos directly.", "Recent theoretical and empirical works have demonstrated the importance of quickly training deep architectures successfully, and the effective convergence accelerators advanced in the 2D image, such as relu BID4 and batch normalization BID5 , have been developed for fast convergence.", "This is in part inspired by observations of the limited GPU memory and computing power, especially when confronting the large-scale video dataset which may introduce a large majority of parameters.", "Another pipeline of algorithms focuses on the training optimizer of CNNs, for examples, sgd, momentum, nesterov, adagrad and adadelta.", "However, training CNNs utilizing the large-scale video datasets is still nontrivial in video task, particularly if one seeks a compact but fast long termporal dynamic representation that can be processed efficiently.Our current work reconsiders the means of facilitating convergence of ConvNets to increase the understanding of how to embed some hand-crafted rules inside of CNNs for fast convergence in a more thorough fashion.", "In addition to the accelerators and effective optimizers, we tend to explore a thorough method causing the value of the loss function to descend rapidly.", "Intuitively, we argue that CNNs will accelerate training process once the complex relationship across convolutional features channels is modeled, explicitly, by the hand-crafted rules.", "In the existing units 3D convolution implements a linear partial sum of channels BID6 , 3D max-pooling takes the maximum feature by channels and 3D average-pooling make a spatial-channel average of features.", "Unfortunately, all the 3D units conduct a linear channels 
aggregation, implicitly and locally.", "Despite that the implicit linear aggregation has been applied to broad fields, there seems to be less works explicitly taking modeling the complex nonlinear relationship across channels into account.", "In fact, either one-stream or two-stream algorithms ignore the channel-level encoding.", "For video recognition task, a very tricky problem is how to train the CNN architectures for the sake of making a lower loss rapidly in the scarcity of videos.", "We conjecture that there is complex nonlinear relationship among the channels of CNN features.", "Once this implicit relationship is explicitly modeled, such accomplishment will facilitate converging with faster search to the optimal trajectory.In this paper, we proposed a nonlinear channels aggregation layer (NCAL), which explicitly models the complex nonlinear relationships across channels.", "Since a standard CNN provides a whole hierarchy of video representations, the first question worthy exploring is where the NACL should take place.", "For example, we can aggregate the output of the fully-connected layers of CNN architecture pre-trained on videos.", "A drawback of such implementation is that the convolutional features channels of CNN itself are still implicitly encoded and are unaware of the lower level channels relationship.", "The alternative is to model the nonlinear channels aggregation of some intermediate network layer.", "In this case, the lower layers fail to extract the representative features from video sequences, but the upper layers can reason about the overall dynamics in the video.", "The former is prone to sacrificing the recognition performance while the latter is thus thought of as the appropriate convolutional features for the compact aggregation.", "Here we build our methods on top of the successful Inception V1 architecture.", "More specifically, three main contributions are provided in this work.", "Our first contribution is to introduce the concept of nonlinear channels aggregation for fast convergence.", "We also show that, in this manner, it is possible to apply the concept of nonlinear channels aggregation to the intermediate layers of a CNN representation by constructing an efficient nonlinear channels aggregation layer (NCAL).Here", "we build our methods on top of the successful Inception V1 architecture. More", "specifically, three main contributions are provided in this work. Our", "first contribution is to introduce the concept of nonlinear channels aggregation for fast convergence. We", "also show that, in this manner, it is possible to construct an efficient nonlinear channels aggregation by applying the concept of nonlinear channels aggregation to the intermediate layers of the standard CNN. More", "importantly, it is explicitly and globally that the nonlinear channels relationship is modeled compared to the traditional local and implicit units.Our second contribution is to simplify the process of nonlinear channels aggregation layer (NCAL) and make a fast yet accurate implementation of it. Notably", ", the proposed NCAL can be embodied inside of any standard CNN architectures, and not break the rest components of structures. More broadly", ", the proposed NCAL is not limited to action recognition, that is, it can be applied to any task with CNNs. 
Here we introduce", "it into action recognition, and leave the explorations of it on the other domains in the future.Our third contribution is to leverage these ideas to construct a novel nonlinear channels aggregation network, perform the training process end-to-end. We show that such", "nonlinear channels encoding results in a fast decline in the value of the loss function of CNNs while obtains efficient and accurate classification of actions in videos.The rest of the paper is organized as follows: Section 2 describes the related works, and section 3 represents the principle of the nonlinear channels aggregation networks (NCAN) and the backward propagation of NCAN. This is followed", "by the experiments in section 4. Finally, we conclude", "this paper in Section 6.", "We present nonlinear channels aggregation, a powerful and new, yet simple concept in the context of deep learning that captures the global channels relationship.", "We introduce a novel nonlinear channels aggregation layer (NCAL) and make a fast yet accurate implementation of NCAL, which allows us to embed the principle of complex channels encoding to the mainstream CNN architectures and back-propagate the gradients through NCALs.", "Experiments on video sequences demonstrate the effective power of nonlinear channels aggregation on facilitating training CNNs.In this paper we fit the complex channels relationships by capturing the global channels aggregation.", "Still, there seems to be some possible research directions that can be further expanded, modeling the nonlinear functions across channels.", "In the future it is beneficial to explore multiple-scale channel-levels by pyramid coding across channels.", "In sublimation, we can embed any hand-crafted rules, channels aggregation in the mainstream architectures, to making CNN working as we expect." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08888888359069824, 0.11764705181121826, 0.07407406717538834, 0.07407406717538834, 0.06666666269302368, 0.05128204822540283, 0, 0.10526315122842789, 0.06666666269302368, 0, 0.04444444179534912, 0.10256409645080566, 0.13793103396892548, 0.0634920597076416, 0.1249999925494194, 0.05882352590560913, 0.054054051637649536, 0.0833333283662796, 0.052631575614213943, 0.09090908616781235, 0.2222222238779068, 0.1599999964237213, 0.08888888359069824, 0.1875, 0.307692289352417, 0.12121211737394333, 0.07999999821186066, 0.12121211737394333, 0.0624999962747097, 0.25, 0, 0.07692307233810425, 0.09756097197532654, 0.25, 0, 0.07692307233810425, 0.10810810327529907, 0.04444444179534912, 0.1249999925494194, 0.05882352590560913, 0.0833333283662796, 0.035087715834379196, 0.09999999403953552, 0, 0.060606054961681366, 0.09090908616781235, 0.2222222238779068, 0.06666666269302368, 0.07692307233810425, 0.12903225421905518 ]
rJgdHs05FQ
true
[ "An architecture enables CNN trained on the video sequences converging rapidly " ]
[ "We present a new method for black-box adversarial attack.", "Unlike previous methods that combined transfer-based and scored-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network.", "The method produces adversarial perturbations with high level semantic patterns that are easily transferable.", "We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures.", "We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction on the number of queries.", "We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.", "The wide adoption of neural network models in modern applications has caused major security concerns, as such models are known to be vulnerable to adversarial examples that can fool neural networks to make wrong predictions (Szegedy et al., 2014) .", "Methods to attack neural networks can be divided into two categories based on whether the parameters of the neural network are assumed to be known to the attacker: white-box attack and black-box attack.", "There are several approaches to find adversarial examples for black-box neural networks.", "The transfer-based attack methods first pretrain a source model and then generate adversarial examples using a standard white-box attack method on the source model to attack an unknown target network (Goodfellow et al., 2015; Madry et al., 2018; Carlini & Wagner, 2017; Papernot et al., 2016a) .", "The score-based attack requires a loss-oracle, which enables the attacker to query the target network at multiple points to approximate its gradient.", "The attacker can then apply the white-box attack techniques with the approximated gradient (Chen et al., 2017; Ilyas et al., 2018a; Tu et al., 2018) .", "A major problem of the transfer-based attack is that it can not achieve very high success rate.", "And transfer-based attack is weak in targeted attack.", "On the contrary, the success rate of score-based attack has only small gap to the white-box attack but it requires many queries.", "Thus, it is natural to combine the two black-box attack approaches, so that we can take advantage of a pretrained white-box source neural network to perform more efficient search to attack an unknown target black-box model.", "In fact, in the recent NeurIPS 2018 Adversarial Vision Challenge (Brendel et al., 2018) , many teams transferred adversarial examples from a source network as the starting point to carry out black-box boundary attack (Brendel et al., 2017) .", "N Attack also used a regression network as initialization in the score-based attack (Li et al., 2019a) .", "The transferred adversarial example could be a good starting point that lies close to the decision boundary for the target network and accelerate further optimization.", "P-RGF (Cheng et al., 2019) used the gradient information from the source model to accelerate searching process.", "However, gradient information is localized and sometimes it is misleading.", "In this paper, we push the idea of using a pretrained white-box source network to guide black-box attack significantly further, by proposing a method called TRansferable EMbedding based 
Black-box Attack (TREMBA).", "TREMBA contains two stages: (1) train an encoder-decoder that can effectively generate adversarial perturbations for the source network with a low-dimensional embedding space; (2) apply NES (Natural Evolution Strategy) of (Wierstra et al., 2014) to the low-dimensional embedding space of the pretrained generator to search adversarial examples for the target network.", "TREMBA uses global information of the source model, capturing high level semantic adversarial features that are insensitive to different models.", "Unlike noise-like perturbations, such perturbations would have much higher transferablity across different models.", "Therefore we could gain query efficiency by performing queries in the embedding space.", "We note that there have been a number of earlier works on using generators to produce adversarial perturbations in the white-box setting (Baluja & Fischer, 2018; Xiao et al., 2018; Wang & Yu, 2019) .", "While black-box attacks were also considered there, they focused on training generators with dynamic distillation.", "These early approaches required many queries to fine-tune the classifier for different target networks, which may not be practical for real applications.", "While our approach also relies on a generator, we train it as an encoder-decoder that produces a low-dimensional embedding space.", "By applying a standard black-box attack method such as NES on the embedding space, adversarial perturbations can be found efficiently for a target model.", "It is worth noting that the embedding approach has also been used in AutoZOOM (Tu et al., 2018) .", "However, it only trained the autoencoder to reconstruct the input, and it did not take advantage of the information of a pretrained network.", "Although it also produces structural perturbations, these perturbations are usually not suitable for attacking regular networks and sometimes its performance is even worse than directly applying NES to the images (Cheng et al., 2019; Guo et al., 2019) .", "TREMBA, on the other hand, tries to learn an embedding space that can efficiently generate adversarial perturbations for a pretrained source network.", "Compared to AutoZOOM, our new method produces adversarial perturbation with high level semantic features that could hugely affect arbitrary target networks, resulting in significantly lower number of queries.", "We summarize our contributions as follows:", "1. We propose TREMBA, an attack method that explores a novel way to utilize the information of a pretrained source network to improve the query efficiency of black-box attack on a target network.", "2. We show that TREMBA can produce adversarial perturbations with high level semantic patterns, which are effective across different networks, resulting in much lower queries on MNIST and ImageNet especially for the targeted attack that has low transferablity.", "3. 
We demonstrate that TREMBA can be applied to SOTA defended models (Madry et al., 2018; Xie et al., 2018) .", "Compared with other black-box attacks, TREMBA increases success rate by approximately 10% while reduces the number of queries by more than 50%.", "We propose a novel method, TREMBA, to generate likely adversarial patterns for an unknown network.", "The method contains two stages: (1) training an encoder-decoder to generate adversarial perturbations for the source network; (2) search adversarial perturbations on the low-dimensional embedding space of the generator for any unknown target network.", "Compared with SOTA methods, TREMBA learns an embedding space that is more transferable across different network architectures.", "It achieves two to six times improvements in black-box adversarial attacks on MNIST and ImageNet and it is especially efficient in performing targeted attack.", "Furthermore, TREMBA demonstrates great capability in attacking defended networks, resulting in a nearly 10% improvement on the attack success rate, with two to six times of reductions in the number of queries.", "TREMBA opens up new ways to combine transfer-based and score-based attack methods to achieve higher efficiency in searching adversarial examples.", "For targeted attack, TREMBA requires different generators to attack different classes.", "We believe methods from conditional image generation (Mirza & Osindero, 2014 ) may be combined with TREMBA to form a single generator that could attack multiple targeted classes.", "We leave it as a future work.", "A EXPERIMENT RESULT A.1", "TARGETED ATTACK ON IMAGENET Figure 9 shows result of the targeted attack on dipper, American chameleon, night snake, ruffed grouse and black swan.", "TREMBA achieves much higher success rate than other methods at almost all queries level." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.47058823704719543, 0.3125, 0.1538461446762085, 0.5777777433395386, 0.21739129722118378, 0.3265306055545807, 0.13114753365516663, 0.23999999463558197, 0.10810810327529907, 0.2539682388305664, 0.2222222238779068, 0.08695651590824127, 0.3333333432674408, 0.125, 0.22727271914482117, 0.24561403691768646, 0.20338982343673706, 0.1860465109348297, 0.2448979616165161, 0.0476190410554409, 0.05882352590560913, 0.2545454502105713, 0.17910447716712952, 0.2222222238779068, 0.10526315122842789, 0.15789473056793213, 0.20689654350280762, 0.04999999701976776, 0.08695651590824127, 0.09090908616781235, 0.25, 0.09090908616781235, 0.22727271914482117, 0.06451612710952759, 0.21276594698429108, 0.18867923319339752, 0.06451612710952759, 0.4313725531101227, 0.25806450843811035, 0.09090908616781235, 0.21739129722118378, 0.19999998807907104, 0.18518517911434174, 0.2380952388048172, 0.1702127605676651, 0.18867923319339752, 0.27272728085517883, 0.17142856121063232, 0.15094339847564697, 0.125, 0, 0.1666666567325592, 0.10256409645080566 ]
SJxhNTNYwB
true
[ "We present a new method that combines transfer-based and scored black-box adversarial attack, improving the success rate and query efficiency of black-box adversarial attack across different network architectures." ]
[ "Deep neural networks (DNNs) are inspired from the human brain and the interconnection between the two has been widely studied in the literature. ", "However, it is still an open question whether DNNs are able to make decisions like the brain.", "Previous work has demonstrated that DNNs, trained by matching the neural responses from inferior temporal (IT) cortex in monkey's brain, is able to achieve human-level performance on the image object recognition tasks.", "This indicates that neural dynamics can provide informative knowledge to help DNNs accomplish specific tasks.", "In this paper, we introduce the concept of a neuro-AI interface, which aims to use human's neural responses as supervised information for helping AI systems solve a task that is difficult when using traditional machine learning strategies.", "In order to deliver the idea of neuro-AI interfaces, we focus on deploying it to one of the fundamental problems in generative adversarial networks (GANs): designing a proper evaluation metric to evaluate the quality of images produced by GANs. ", "Deep neural networks (DNNs) have successfully been applied to a number of different areas such as computer vision and natural language processing where they have demonstrated state-of-the-art results, often matching and even sometimes surpassing a human's ability.", "Moreover, DNNs have been studied with respect to how similar processing is carried out in the human brain, where identifying these overlaps and interconnections has been a focus of study and investigation in the literature BID5 BID4 BID11 BID18 BID30 BID1 BID35 BID17 BID15 .", "In this research area, convolutional neural networks (CNNs) are widely studied to be compared with the visual system in human's brain because of following reasons: (1) CNNs and human's visual system are both hierarchical system; (2) Steps of processing input between CNNs and human's visual system are similar to each other e.g., in a object recognition task, both CNNs and human recognize a object based on their its shape, edge, color etc..", "Work BID35 outlines the use of CNNs approach for delving even more deeply into understanding the development and organization of sensory cortical processing.", "It has been demonstrated that CNNs are able to reflect the spatio-temporal neural dynamics in human's brain visual area BID5 BID30 BID18 .", "Despite lots of work is carried out to reveal the similarity between CNNs and brain system, research on interacting between CNNs and neural dynamics is less discussed in the literature as understanding of neural dynamics in the neuroscience area is still limited.There is a growing interest in studying generative adversarial networks (GANs) in the deep learning community BID10 .", "Specifically, GANs have been widely applied to various domains such as computer vision BID14 , natural language processing BID7 and speech synthesis BID6 .", "Compared with other deep generative models (e.g. 
variational autoencoders (VAEs)), GANs are favored for effectively handling sharp estimated density functions, efficiently generating desired samples and eliminating deterministic bias.", "Due to these properties GANs have successfully contributed to plausible image generation BID14 , image to image translation BID38 , image super-resolution BID19 , image completion BID37 etc..", "However, three main challenges still exist currently in the research of GANs: (1) Mode collapse -the model cannot learn the distribution of the full dataset well, which leads to poor generalization ability; (2) Difficult to trainit is non-trivial for discriminator and generator to achieve Nash equilibrium during the training; (3) Hard to evaluate -the evaluation of GANs can be considered as an effort to measure the dissimilarity between real distribution p r and generated distribution p g .", "Unfortunately, the accurate estimation of p r is intractable.", "Thus, it is challenging to have a good estimation of the correspondence between p r and p g .", "Aspects (1) and (2) are more concerned with computational aspects where much research has been carried out to mitigate these issues BID20 Salimans et al., 2016; BID0 .", "Aspect (3) is similarly fundamental, however, limited literature is available and most of the current metrics only focus on measuring the dissimilarity between training and generated images.", "A more meaning-ful GANs evaluation metric that is consistent with human perceptions is paramount in helping researchers to further refine and design better GANs.Although some evaluation metrics, e.g., Inception Score (IS), Kernel Maximum Mean Discrepancy (MMD) and Fréchet Inception Distance (FID), have already been proposed (Salimans et al., 2016; BID13 BID2 , their limitations are obvious: (1) These metrics do not agree with human perceptual judgments and human rankings of GAN models.", "A small artefact on images can have a large effect on the decision made by a machine learning system BID16 , whilst the intrinsic image content does not change.", "In this aspect, we consider human perception to be more robust to adversarial images samples when compared to a machine learning system; (2) These metrics require large sample sizes for evaluation Salimans et al., 2016) .", "Large-scale samples for evaluation sometimes are not realistic in real-world applications since it is time-consuming; and (3) They are not able to rank individual GAN-generated images by their quality i.e., the metrics are generated on a collection of images rather than on a single image basis.", "The within GAN variances are crucial because it can provide the insight on the variability of that GAN.Work BID36 demonstrates that CNN matched with neural data recorded from inferior temporal cortex BID3 has high performance in object recognition tasks.", "Given the evidence above that a CNN is able to predict the neural response in the brain, we describe a neuro-AI interface system, where human being's neural response is used as supervised information to help the AI system (CNNs used in this work) solve more difficult problems in real-world.", "As a starting point for exploiting the idea of neuro-AI interface, we focus on utilizing it to solve one of the fundamental problems in GANs: designing a proper evaluation metric.", "In this paper, we introduce a neuro-AI interface that interacts CNNs with neural signals.", "We demonstrate the use of neuro-AI interface by introducing a challenge in the area of GANs i.e., evaluate the quality of images 
produced by GANs.", "Three deep network architectures are explored and the results demonstrate that including neural responses during the training phase of the neuro-AI interface improves its accuracy even when neural measurements are absent when evaluating on the test set.", "More details of the performance of Neuroscore can be referred in Appendix.", "FIG1 shows the averaged reconstructed P300 signal across all participants (using LDA beamformer) in the RSVP experiment.", "It should be noted here that the averaged reconstructed P300 signal is calculated as the difference between averaged target trials and averaged standard trials after applying the LDA beamformer method.", "The solid lines in FIG1 are the means of the averaged reconstructed P300 signals for each image category (across 12 participants) while the shaded areas represent the standard deviations (across participants).", "It can be seen that the averaged reconstructed P300 (across participants) clearly distinguishes between different image categories.", "In order to statistically measure this correlative relationship, we calculated the Pearson correlation coefficient and p-value (two-tailed) between Neuroscore and BE accuracy and found (r(48) = −0.767, p = 2.089e − 10).", "We also did the Pearson statistical test and bootstrap on the correlation between Neuroscore and BE accuracy (human judgment performance) only for GANs i.e., DCGAN, BEGAN and PROGAN.", "Pearson statistic is (r(36)=-0.827, p=4.766e-10) and the bootstrapped p ≤ 0.0001." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06451612710952759, 0.07407406717538834, 0.04878048226237297, 0.07999999821186066, 0.1304347813129425, 0.3181818127632141, 0.13636362552642822, 0.07999999821186066, 0.09090908616781235, 0, 0.0624999962747097, 0.18518517911434174, 0.060606054961681366, 0.05128204822540283, 0.06666666269302368, 0.0555555522441864, 0, 0.14814814925193787, 0.052631575614213943, 0, 0.026666663587093353, 0.0555555522441864, 0.13636362552642822, 0.07692307233810425, 0, 0.1666666567325592, 0.1621621549129486, 0.25, 0.25806450843811035, 0.09756097197532654, 0, 0, 0, 0, 0, 0.04878048226237297, 0, 0 ]
H1xKj1n9a4
true
[ "Describe a neuro-AI interface technique to evaluate generative adversarial networks" ]
[ "While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing.", "Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims.", "We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms.", "Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior.", "We demonstrate our framework on a highway scenario.", "Several fatal accidents involving autonomous vehicles (AVs) underscore the importance of testing whether AV perception and control pipelines-when considered as a whole systemcan safely interact with other human traffic participants.", "Unfortunately, testing AVs in real environments, the most straightforward validation framework for system-level inputoutput behavior, requires prohibitive amounts of time due to the rare nature of serious accidents BID22 .", "Concretely, a recent study BID8 argues that AVs need to drive \"hundreds of millions of miles, and, under some scenarios, hundreds of billions of miles to create enough data to clearly demonstrate their safety.\"", "On the other hand, formally verifying an AV algorithm's \"correctness\" BID11 BID0 BID21 BID13 is inherently difficult because all driving policies are subject to crashes caused by other drivers BID22 .", "Ruling out scenarios where the AV should not be blamed for such accidents is a task subject to logical inconsistency and subjective assignment of fault.Motivated by the challenges underlying real-world testing and formal verification, we consider a probabilistic paradigmwhich we describe as a risk-based framework BID14 -where the goal is to evaluate the probability of an accident under a base distribution representing standard traffic behavior.", "By assigning learned probabilities to environmental states and agent behaviors, our risk-based framework considers performance of the AV policy under a data-driven model of the world.", "A fundamental tradeoff emerges when comparing the requirements of our risk-based framework to other testing paradigms.", "Real-world testing endangers the public but is still in some sense a gold standard.", "Verified subsystems provide evidence that the AV should drive safely in all specified scenarios; they are limited by computational intractability and require both white-box models and a complete specifications for assigning blame (e.g. 
BID22 ).", "In turn, our risk-based framework is most useful when the base distribution P 0 is accurate.", "Although an estimate of p γ is not informative when P 0 is misspecified, our adaptive sampling techniques still efficiently identify dangerous scenarios in this case; such dangerous scenarios are independent of potentially subjective assignments of blame.", "Principled techniques for building and validating the model of the environment P 0 represent an open research question.Rigorous safety evaluation of AVs necessitates benchmarks based on adaptive adversarial conditions rather than standard nominal conditions.", "Importantly, our framework only requires black-box access to the driving policy and simulation environment.", "Our approach offers significant speedups over realworld testing and allows efficient evaluation of black-box AV input/output behavior, providing a powerful tool to aid in the design of safe AVs.", "DISPLAYFORM0 Evaluate and sort f (X i ) in decreasing order BID5 : DISPLAYFORM1 Discard X (1) , . . . , X (δN ) and reinitialize by resampling with replacement from X (δN +1) , . . . , X (N )" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04651162400841713, 0.11764705181121826, 0.0833333283662796, 0.978723406791687, 0.0624999962747097, 0.14814814925193787, 0.11764705181121826, 0.14814814925193787, 0.11320754140615463, 0.3544303774833679, 0.2083333283662796, 0.14999999105930328, 0.15789473056793213, 0.06779660284519196, 0.1538461446762085, 0.178571417927742, 0.178571417927742, 0.10526315122842789, 0.1538461446762085, 0.038461532443761826 ]
SJxc4FZoTV
true
[ "Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. " ]
[ "Many tasks in natural language understanding require learning relationships between two sequences for various tasks such as natural language inference, paraphrasing and entailment.", "These aforementioned tasks are similar in nature, yet they are often modeled individually.", "Knowledge transfer can be effective for closely related tasks, which is usually carried out using parameter transfer in neural networks.", "However, transferring all parameters, some of which irrelevant for a target task, can lead to sub-optimal results and can have a negative effect on performance, referred to as \\textit{negative} transfer. \n\n", "Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method in the context of few-shot learning.\n\n", "Our main contribution is a method for mitigating negative transfer across tasks when using neural networks, which involves dynamically bagging small recurrent neural networks trained on different subsets of the source task/s.", "We present a straightforward yet novel approach for incorporating these networks to a target task for few-shot learning by using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training.\n\n", "Our proposed method show improvements over hard and soft parameter sharing transfer methods in the few-shot learning case and shows competitive performance against models that are trained given full supervision on the target task, from only few examples.", "Learning relationships between sentences is a fundamental task in natural language understanding (NLU).", "Given that there is gradience between words alone, the task of scoring or categorizing sentence pairs is made even more challenging, particularly when either sentence is less grounded and more conceptually abstract e.g sentence-level semantic textual similarity and textual inference.The area of pairwise-based sentence classification/regression has been active since research on distributional compositional semantics that use distributed word representations (word or sub-word vectors) coupled with neural networks for supervised learning e.g pairwise neural networks for textual entailment, paraphrasing and relatedness scoring BID15 .Many", "of these tasks are closely related and can benefit from transferred knowledge. However", ", for tasks that are less similar in nature, the likelihood of negative transfer is increased and therefore hinders the predictive capability of a model on the target task. However", ", challenges associated with transfer learning, such as negative transfer, are relatively less explored explored with few exceptions BID23 ; BID5 and even fewer in the context of natural language tasks BID18 . More specifically", ", there is only few methods for addressing negative transfer in deep neural networks BID9 .Therefore, we propose", "a transfer learning method to address negative transfer and describe a simple way to transfer models learned from subsets of data from a source task (or set of source tasks) to a target task. The relevance of each", "subset per task is weighted based on the respective models validation performance on the target task. Hence, models within", "the ensemble trained on subsets of a source task which are irrelevant to the target task are assigned a lower weight in the overall ensemble prediction on the target task. 
We gradually transition", "from using the source task ensemble models for prediction on the target task to making predictions solely using the single model trained on few examples from the target task. The transition is made", "using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training. The idea is that early", "in training the target task benefits more from knowledge learned from other tasks than later in training and hence the influence of past knowledge is annealed. We refer to our method", "as Dropping Networks as the approach involves using a combination of Dropout and Bagging in neural networks for effective regularization in neural networks, combined with a way to weight the models within the ensembles.For our experiments we focus on two Natural Language Inference (NLI) tasks and one Question Matching (QM) dataset. NLI deals with inferring", "whether a hypothesis is true given a premise. Such examples are seen in", "entailment and contradiction. QM is a relatively new pairwise", "learning task in NLU for semantic relatedness that aims to identify pairs of questions that have the same intent. We purposefully restrict the analysis", "to no more than three datasets as the number of combinations of transfer grows combinatorially. Moreover, this allows us to analyze how", "the method performs when transferring between two closely related tasks (two NLI tasks where negative transfer is less apparent) to less related tasks (between NLI and QM). We show the model averaging properties", "of our negative transfer method show significant benefits over Bagging neural networks or a single neural network with Dropout, particularly when dropout is high (p=0.5). Additionally, we find that distant tasks", "that have some knowledge transfer can be overlooked if possible effects of negative transfer are not addressed. The proposed weighting scheme takes this", "issue into account, improving over alternative approaches as we will discuss.", "Our proposed method combines neural network-based bagging with dynamic cubic spline error curve fitting to transition between source models and a single target model trained on only few target samples.", "We find our proposed method overcomes limitations in transfer learning such as avoiding negative transfer when attempting to transfer from more distant task, which arises during few-shot learning setting.", "This paper has empirically demonstrated this for learning complex semantic relationships between sentence pairs for pairwise learning tasks.", "Additionally, we find the co-attention network and the ensemble GRU network to perform comparably for single-task learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.11764705181121826, 0.07692307233810425, 0.1818181723356247, 0.0952380895614624, 0.1904761791229248, 0.13333332538604736, 0.16326530277729034, 0.20000000298023224, 0.07407406717538834, 0.04878048598766327, 0, 0.09756097197532654, 0.08695651590824127, 0.25, 0.14999999105930328, 0, 0.10810810327529907, 0.052631575614213943, 0.052631575614213943, 0.09999999403953552, 0.12903225421905518, 0.07692307233810425, 0, 0.1666666567325592, 0.11764705181121826, 0.09756097197532654, 0.13333332538604736, 0.0555555522441864, 0, 0.1860465109348297, 0.29999998211860657, 0.06666666269302368, 0.20689654350280762 ]
HyeggoCN_4
true
[ "A dynamic bagging methods approach to avoiding negatve transfer in neural network few-shot transfer learning" ]
[ "Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment.", "Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray).", "In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression.", "Most of existing methods for estimating trajectories do not account for events in-between observations, what dramatically decreases their adequacy for clinical practice.", "In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events.", "CSI is guaranteed to converge to the global minimum of the corresponding optimization problem.", "Experimental results also demonstrates the effectiveness of CSI using both simulated and real dataset.", "The course of disease progression in individual patients is one of the biggest uncertainties in medical practice.", "In an ideal world, accurate, continuous assessment of a patient's condition helps with prevention and treatment.", "However, many medical tests are either harmful or inconvenient to perform frequently, and practitioners have to infer the development of disease from sparse, noisy observations.", "In its simplest form, the problem of modeling disease progressions is to fit the curve of y(t), t ∈ [t min , t max ] for each patient, given sparse observations y := (ỹ(t 1 ), . . . ,ỹ(t n )).", "Due to the highdimensional nature of longitudinal data, existing results usually restrict solutions to subspace of functions and utilize similarities between patients via enforcing low-rank structures.", "One popular approach is the mixed effect models, including Gaussian process approaches (Verbeke, 1997; Zeger et al., 1988) and functional principal components (James et al., 2000) .", "While generative models are commonly used and have nice theoretical properties, their result could be sensitive to the underlying distributional assumptions of observed data and hard to adapt to different applications.", "Another line of research is to pose the problem of disease progression estimation as an optimization problem.", "Kidzinski and Hastie.", "Kidziński & Hastie (2018) proposed a framework which formulates the problem as a matrix completion problem and solve it using matrix factorization techniques.", "This method is distribution-free and flexible to possible extensions.", "Meanwhile, both types of solutions model the natural progression of disease using observations of the targeted variables only.", "They fail to incorporate the existence and effect of human interference: medications, therapies, surgeries, etc.", "Two patients with similar symptoms initially may have different futures if they choose different treatments.", "Without that information, predictions can be way-off.", "To the best of our knowledge, existing literature talks little about modeling treatment effect on disease progression.", "In Kidziński & Hastie (2018) , authors use concurrent observations of auxillary variables (e.g. 
oxygen consumption to motor functions) to help estimate the target one, under the assumption that both variables reflect the intrinsic latent feature of the disease and are thus correlated.", "Treatments of various types, however, rely on human decisions and are, to some extent, an exogenous variable to the development of disease.", "Thus they need to be modeled differently.", "In this work, we propose a model for tracking disease progression that includes the effects of treatments.", "We introduce the Coordinatewise-Soft-Impute (CSI) algorithm for fitting the model and investigate its theoretical and practical properties.", "The contribution of our work is threefold: First, we propose a model and an algorithm, CSI, to estimate the progression of disease which incorporates the effect of treatment events.", "The framework is flexible, distribution-free, simple to implement and generalizable.", "Second, we prove that CSI converges to the global solution regardless of the initialization.", "Third, we compare the performance of CSI with various other existing methods on both simulated data and a dataset of Gillette Children's Hospital with patients diagnosed with Cerebral Palsy, and demonstrate the superior performances of CSI.", "The rest of the paper is organized as follows.", "In Section 2 we state the problem and review existing methods.", "Next, in Section 3 we describe the model and the algorithm.", "Theoretical properties of the algorithm are derived in Section 4.", "Finally, in Sections 5 and 6 we provide empirical results of CSI on the simulated and the real datasets respectively.", "We discuss some future directions in Section 7.", "In this paper, we propose a new framework for modeling the effect of treatment events in disease progression and prove a corresponding algorithm CSI.", "To the best of our knowledge, it is the first comprehensive model that explicitly incorporates the effect of treatment events.", "We would also like to mention that, although we focus on the case of disease progression in this paper, our framework is quite general and can be used to analyze data in any discipline with sparse observations as well as external effects.", "There are several potential extensions to our current framework.", "Firstly, our framework could be extended to more complicated settings.", "In our model, treatments have been characterized as the binary matrix I_S with a single parameter µ.", "In practice, each individual may undergo different types of surgeries one or multiple times.", "Secondly, the treatment effect may be correlated with the latent variables of disease type, and can be estimated together with the random effect w_i .", "Finally, our framework could be used to evaluate the true effect of a surgery.", "A natural question is: does surgery really help?", "CSI provides an estimate of the surgery effect µ; it would be interesting to design a statistical hypothesis testing/causal inference procedure to answer the proposed question.", "Though we are convinced that our work will not be the last word in estimating the disease progression, we hope our idea is useful for further research and we hope the readers could help to take it further." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2142857164144516, 0.05128204822540283, 0.1428571343421936, 0.0624999962747097, 0.1621621549129486, 0.0833333283662796, 0, 0.14814814925193787, 0.0714285671710968, 0.1111111044883728, 0.08163265138864517, 0.0555555522441864, 0, 0.04999999701976776, 0.2222222238779068, 0, 0.1249999925494194, 0.0952380895614624, 0.2222222238779068, 0.07407406717538834, 0.07692307233810425, 0, 0.13793103396892548, 0.07999999821186066, 0.12903225421905518, 0.1111111044883728, 0.20689654350280762, 0.14814814925193787, 0.31578946113586426, 0.09090908616781235, 0.07999999821186066, 0.04878048226237297, 0, 0, 0.1818181723356247, 0.09090908616781235, 0, 0, 0.23529411852359772, 0.1428571343421936, 0.15686273574829102, 0.0952380895614624, 0.09090908616781235, 0.13333332538604736, 0, 0.1249999925494194, 0.07692307233810425, 0.09999999403953552, 0.05714285373687744, 0.09302325546741486 ]
B1gm-a4tDH
true
[ "A novel matrix completion based algorithm to model disease progression with events" ]
[ "Multilingual Neural Machine Translation (NMT) systems are capable of translating between multiple source and target languages within a single system.", "An important indicator of generalization within these systems is the quality of zero-shot translation - translating between language pairs that the system has never seen during training.", "However, until now, the zero-shot performance of multilingual models has lagged far behind the quality that can be achieved by using a two step translation process that pivots through an intermediate language (usually English).", "In this work, we diagnose why multilingual models under-perform in zero shot settings.", "We propose explicit language invariance losses that guide an NMT encoder towards learning language agnostic representations.", "Our proposed strategies significantly improve zero-shot translation performance on WMT English-French-German and on the IWSLT 2017 shared task, and for the first time, match the performance of pivoting approaches while maintaining performance on supervised directions.", "In recent years, the emergence of sequence to sequence models has revolutionized machine translation.", "Neural models have reduced the need for pipelined components, in addition to significantly improving translation quality compared to their phrase based counterparts BID35 .", "These models naturally decompose into an encoder and a decoder with a presumed separation of roles: The encoder encodes text in the source language into an intermediate latent representation, and the decoder generates the target language text conditioned on the encoder representation.", "This framework allows us to easily extend translation to a multilingual setting, wherein a single system is able to translate between multiple languages BID11 BID28 .Multilingual", "NMT models have often been shown to improve translation quality over bilingual models, especially when evaluated on low resource language pairs BID14 BID20 . Most strategies", "for training multilingual NMT models rely on some form of parameter sharing, and often differ only in terms of the architecture and the specific weights that are tied. They allow specialization", "in either the encoder or the decoder, but tend to share parameters at their interface. An underlying assumption", "of these parameter sharing strategies is that the model will automatically learn some kind of shared universally useful representation, or interlingua, resulting in a single model that can translate between multiple languages.The existence of such a universal shared representation should naturally entail reasonable performance on zero-shot translation, where a model is evaluated on language pairs it has never seen together during training. Apart from potential practical", "benefits like reduced latency costs, zero-shot translation performance is a strong indicator of generalization. Enabling zero-shot translation", "with sufficient quality can significantly simplify translation systems, and pave the way towards a single multilingual model capable of translating between any two languages directly. However, despite being a problem", "of interest for a lot of recent research, the quality of zero-shot translation has lagged behind pivoting through a common language by 8-10 BLEU points BID15 BID24 BID21 BID27 . In this paper we ask the question", ", What is the missing ingredient that will allow us to bridge this gap? Figure 1 : The proposed multilingual", "NMT model along with the two training objectives. 
CE stands for the cross-entropy loss", "associated with maximum likelihood estimation for translation between English and other languages. Align represents the source language", "invariance loss that we impose on the representations of the encoder. While training on the translation objective", ", training samples (x, y) are drawn from the set of parallel sentences", ", D x,y . For the invariance losses, (x, y) could be drawn", "from D x,y for the cosine loss", ", or independent data distributions for the adversarial loss. Both losses are minimized simultaneously. Since", "we have supervised data only to and from", "English, one of x or y is always in English.In BID24 , it was hinted that the extent of separation between language representations was negatively correlated with zero-shot translation performance. This is supported by theoretical and empirical", "observations in domain adaptation literature, where the extent of subspace alignment between the source and target domains is strongly associated with transfer performance BID7 BID8 BID17 . Zero-shot translation is a special case of domain", "adaptation in multilingual models, where English is the source domain and other languages collectively form the target domain. Following this thread of domain adaptation and subspace", "alignment, we hypothesize that aligning encoder representations of different languages with that of English might be the missing ingredient to improving zero-shot translation performance.In this work, we develop auxiliary losses that can be applied to multilingual translation models during training, or as a fine-tuning step on a pre-trained model, to force encoder representations of different languages to align with English in a shared subspace. Our experiments demonstrate significant improvements on", "zero-shot translation performance and, for the first time, match the performance of pivoting approaches on WMT English-French-German (en-fr-de) and the IWSLT 2017 shared task, in all zero shot directions, without any meaningful regression in the supervised directions.We further analyze the model's representations in order to understand the effect of our explicit alignment losses. Our analysis reveals that tying weights in the encoder,", "by itself, is not sufficient to ensure shared representations. As a result, standard multilingual models overfit to the", "supervised directions, and enter a failure mode when translating between zero-shot languages. Explicit alignment losses incentivize the model to use shared", "representations, resulting in better generalization.2 ALIGNMENT OF LATENT REPRESENTATIONS 2.1 MULTILINGUAL NEURAL MACHINE", "TRANSLATION Let x = (x 1 , x 2 ...x m ) be a sentence in the source language and y = (y 1 , y 2 , ...y n ) be its translation in the target language. For machine translation, our objective is to learn a model, p(y|x;", "θ). In modern NMT, we use sequence-to-sequence models supplemented with", "an attention mechanism BID5 to learn this distribution. These sequence-to-sequence models consist of an encoder, Enc(x) = z", "= (z 1 , z 2 , ...z m ) parameterized with θ enc , and a decoder", "that learns to map from the latent representation z to y by modeling p(y|z; θ dec ), again parameterized with θ dec . This model is trained to maximize the likelihood of the available parallel", "data, D x,y . DISPLAYFORM0 In multilingual training we jointly train a single model BID26", "to translate from many possible source languages to many potential target languages. 
When only the decoder is informed about the desired target language, a special", "token to indicate the target language, < tl >, is input to the first step of the decoder. In this case, D x,y is the union of all the parallel data for each of the supervised", "translation directions. Note that either the source or the target is always English.", "In this work we propose explicit alignment losses, as an additional constraint for multilingual NMT models, with the goal of improving zero-shot translation.", "We view the zero-shot NMT problem in the light of subspace alignment for domain adaptation, and propose simple approaches to achieve this.", "Our experiments demonstrate significantly improved zero-shot translation performance that are, for the first time, comparable to strong pivoting based approaches.", "Through careful analyses we show how our proposed alignment losses result in better representations, and thereby better zeroshot performance, while still maintaining performance on the supervised directions.", "Our proposed methods have been shown to work reliably on two public benchmarks datasets: WMT EnglishFrench-German and the IWSLT 2017 shared task." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.31111109256744385, 0.23076923191547394, 0.060606054961681366, 0.11428570747375488, 0.25, 0.1818181723356247, 0.1904761791229248, 0.1599999964237213, 0.1395348757505417, 0.2666666507720947, 0.25, 0.05405404791235924, 0.15584415197372437, 0.11428570747375488, 0.2448979616165161, 0.23076923191547394, 0.09999999403953552, 0.1818181723356247, 0.2702702581882477, 0.23529411852359772, 0.12121211737394333, 0.0624999962747097, 0.14814814925193787, 0.11428570747375488, 0, 0.1818181723356247, 0.1599999964237213, 0.1463414579629898, 0.138888880610466, 0.1666666567325592, 0.10810810327529907, 0.09756097197532654, 0, 0.1111111044883728, 0, 0.05405404791235924, 0, 0.0833333283662796, 0.05882352590560913, 0.04999999329447746, 0.1702127605676651, 0.1249999925494194, 0.2790697515010834, 0.19512194395065308, 0.19999998807907104, 0.08695651590824127, 0.0952380895614624 ]
ByWMz305FQ
true
[ "Simple similarity constraints on top of multilingual NMT enables high quality translation between unseen language pairs for the first time." ]
[ "We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network.", "The standard deviation is exponential in the ratio of network depth to width.", "Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity.", "Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width.", "This is sharp contrast to the regime where depth is fixed and network width is very large.", "Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime.", "Modern neural networks are typically overparameterized: they have many more parameters than the size of the datasets on which they are trained.", "That some setting of parameters in such networks can interpolate the data is therefore not surprising.", "But it is a priori unexpected that not only can such interpolating parameter values can be found by stochastic gradient descent (SGD) on the highly non-convex empirical risk but also that the resulting network function generalizes to unseen data.", "In an overparameterized neural network N (x) the individual parameters can be difficult to interpret, and one way to understand training is to rewrite the SGD updates ∆θ p = − λ ∂L ∂θ p , p = 1, . . . , P of trainable parameters θ = {θ p } P p=1 with a loss L and learning rate λ as kernel gradient descent updates for the values N (x) of the function computed by the network:", "Here B = {(x 1 , y 1 ), . . . , (x |B| , y |B| )} is the current batch, the inner product is the empirical 2 inner product over B, and K N is the neural tangent kernel (NTK):", "Relation (1) is valid to first order in λ.", "It translates between two ways of thinking about the difficulty of neural network optimization:", "(i) The parameter space view where the loss L, a complicated function of θ ∈ R #parameters , is minimized using gradient descent with respect to a simple (Euclidean) metric;", "(ii) The function space view where the loss L, which is a simple function of the network mapping x → N (x), is minimized over the manifold M N of all functions representable by the architecture of N using gradient descent with respect to a potentially complicated Riemannian metric K N on M N .", "A remarkable observation of Jacot et al. (2018) is that K N simplifies dramatically when the network depth d is fixed and its width n tends to infinity.", "In this setting, by the universal approximation theorem (Cybenko, 1989; Hornik et al., 1989) , the manifold M N fills out any (reasonable) ambient linear space of functions.", "The results in Jacot et al. 
(2018) then show that the kernel K N in this limit is frozen throughout training to the infinite width limit of its average E[K N ] at initialization, which depends on the depth and non-linearity of N but not on the dataset.", "This mapping between parameter space SGD and kernel gradient descent for a fixed kernel can be viewed as two separate statements.", "First, at initialization, the distribution of K N converges in the infinite width limit to the delta function on the infinite width limit of its mean E[K N ].", "Second, the infinite width limit of SGD dynamics in function space is kernel gradient descent for this limiting mean kernel for any fixed number of SGD iterations.", "As long as the loss L is well-behaved with respect to the network outputs N (x) and E[K N ] is non-degenerate in the subspace of function space given by values on inputs from the dataset, SGD for infinitely wide networks will converge with probability 1 to a minimum of the loss.", "Further, kernel method-based theorems show that even in this infinitely overparameterized regime neural networks will have non-vacuous guarantees on generalization (Wei et al., 2018) .", "But replacing neural network training by gradient descent for a fixed kernel in function space is also not completely satisfactory for several reasons.", "First, it suggests that no feature learning occurs during training for infinitely wide networks in the sense that the kernel E[K N ] (and hence its associated feature map) is data-independent.", "In fact, empirically, networks with finite but large width trained with initially large learning rates often outperform NTK predictions at infinite width.", "One interpretation is that, at finite width, K N evolves through training, learning data-dependent features not captured by the infinite width limit of its mean at initialization.", "In part for such reasons, it is important to study both empirically and theoretically finite width corrections to K N .", "Another interpretation is that the specific NTK scaling of weights at initialization (Chizat & Bach, 2018b; a; Mei et al., 2019; 2018; Rotskoff & Vanden-Eijnden, 2018a; b) and the implicit small learning rate limit (Li et al., 2019) obscure important aspects of SGD dynamics.", "Second, even in the infinite width limit, although K N is deterministic, it has no simple analytical formula for deep networks, since it is defined via a layer by layer recursion.", "In particular, the exact dependence, even in the infinite width limit, of K N on network depth is not well understood.", "Moreover, the joint statistical effects of depth and width on K N in finite size networks remain unclear, and the purpose of this article is to shed light on the simultaneous effects of depth and width on K N for finite but large widths n and any depth d.", "Our results apply to fully connected ReLU networks at initialization for which our main contributions are:", "1. In contrast to the regime in which the depth d is fixed but the width n is large, K N is not approximately deterministic at initialization so long as d/n is bounded away from 0.", "Specifically, for a fixed input x the normalized on-diagonal second moment of K N satisfies", "Thus, when d/n is bounded away from 0, even when both n, d are large, the standard deviation of K N (x, x) is at least as large as its mean, showing that its distribution at initialization is not close to a delta function.", "See Theorem 1.", "2. 
Moreover, when L is the square loss, the average of the SGD update ∆K N (x, x) to K N (x, x) from a batch of size one containing x satisfies", "where n 0 is the input dimension.", "Therefore, if d 2 /nn 0 > 0, the NTK will have the potential to evolve in a data-dependent way.", "Moreover, if n 0 is comparable to n and d/n > 0 then it is possible that this evolution will have a well-defined expansion in d/n.", "See Theorem 2.", "In both statements above, means is bounded above and below by universal constants.", "We emphasize that our results hold at finite d, n and the implicit constants in both and in the error terms Under review as a conference paper at ICLR 2020", "2 ) are independent of d, n.", "Moreover, our precise results, stated in §2 below, hold for networks with variable layer widths.", "We have denoted network width by n only for the sake of exposition.", "The appropriate generalization of d/n to networks with varying layer widths is the parameter", "which in light of the estimates in (1) and (2) plays the role of an inverse temperature.", "Taken together Theorems 1 and 2 show that in fully connected ReLU nets that are both deep and wide the neural tangent kernel K N is genuinely stochastic and enjoys a non-trivial evolution during training.", "This suggests that in the overparameterized limit n, d → ∞ with d/n ∈ (0, ∞), the kernel K N may learn data-dependent features.", "Moreover, our results show that the fluctuations of both K N and its time derivative are exponential in the inverse temperature β = d/n.", "It would be interesting to obtain an exact description of its statistics at initialization and to describe the law of its trajectory during training.", "Assuming this trajectory turns out to be data-dependent, our results suggest that the double descent curve Belkin et al. (2018; 2019); Spigler et al. (2018) that trades off complexity vs. generalization error may display significantly different behaviors depending on the mode of network overparameterization.", "However, it is also important to point out that the results in Hanin (2018); Hanin & Nica (2018); Hanin & Rolnick (2018) show that, at least for fully connected ReLU nets, gradient-based training is not numerically stable unless d/n is relatively small (but not necessarily zero).", "Thus, we conjecture that there may exist a \"weak feature learning\" NTK regime in which network depth and width are both large but 0 < d/n 1.", "In such a regime, the network will be stable enough to train but flexible enough to learn data-dependent features.", "In the language of Chizat & Bach (2018b) one might say this regime displays weak lazy training in which the model can still be described by a stochastic positive definite kernel whose fluctuations can interact with data.", "Finally, it is an interesting question to what extent our results hold for non-linearities other than ReLU and for network architectures other than fully connected (e.g. 
convolutional and residual).", "Typical ConvNets, for instance, are significantly wider than they are deep, and we leave it to future work to adapt the techniques from the present article to these more general settings.", ".", "Since the number of V in Γ 2 ( n) with specified V (0) equals", ", we find that so that for each", "and similarly,", "Here, E x is the expectation with respect to the probability measure on V = (v 1 , v 2 ) ∈ Γ 2 obtained by taking v 1 , v 2 independent, each drawn from the products of the measure", "We are now in a position to complete the proof of Theorems 1 and 2.", "To do this, we will evaluate the expectations E x above to leading order in i 1/n i with the help of the following elementary result which is proven as Lemma 18 in Hanin & Nica (2018).", "Proposition 10.", "Let A 0 , A 1 , . . . , A d be independent events with probabilities p 0 , . . . , p d and B 0 , . . . , B d be independent events with probabilities q 0 , . . . , q d such that", "Denote by X i the indicator that the event A i happens, X i := 1 {Ai} , and by Y i the indicator that B i happens,", "Then, if γ i ≥ 1 for every i, we have:", "where by convention α 0 = γ 0 = 1.", "In contrast, if γ i ≤ 1 for every i, we have:", "We first apply Proposition 10 to the estimates above for", "we find that", "Since the contribution for each layer in the product is bounded above and below by constants, we have that", "2 is bounded below by a constant times", "and above by a constant times", "Here, note that the initial condition given by x and the terminal condition that all paths end at one neuron in the final layer are irrelevant.", "The expression (45) is there precisely", "3 ≤ 1, and K i = 1.", "Thus, since for i = 1, . . . , d − 1, the probability of X i is 1/n i + O(1/n 2 i ), we find that", "where in the last inequality we used that 1 + x ≥ e", "When combined with (23) this gives the lower bound in Proposition 3.", "The matching upper bound is obtained from (46) in the same way using the opposite inequality from Proposition 10.", "To complete the proof of Proposition 3, we prove the analogous bounds for E[∆ ww ] in a similar fashion.", "Namely, we fix 1 ≤ i 1 < i 2 ≤ d and write", "The set A is the event that the first collision between layers i 1 , i 2 occurs at layer .", "We then have", "On the event A , notice that F * (V ) only depends on the layers 1 ≤ i ≤ i 1 and layers < i ≤ d because the event A fixes what happens in layers i 1 < i ≤ .", "Mimicking the estimates (45), (46) and the application of Proposition 10 and using independence, we get that:", "Finally, we compute:", "Under review as a conference paper at ICLR 2020", "Combining this we obtain that E[∆ ww ]/ x 4 2 is bounded above and below by constants times", "This completes the proof of Proposition 3, modulo the proofs of Lemmas 6-9, which we supply below." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.47826087474823, 0.34285715222358704, 0.2857142686843872, 0.28070175647735596, 0.2702702581882477, 0.2083333283662796, 0.1463414579629898, 0.15789473056793213, 0.10344827175140381, 0.202531635761261, 0.2745097875595093, 0.12903225421905518, 0.11428570747375488, 0.15686273574829102, 0.1269841194152832, 0.20408162474632263, 0.04081632196903229, 0.26229506731033325, 0.1904761791229248, 0.1395348757505417, 0.2222222238779068, 0.21875, 0.12765957415103912, 0.22727271914482117, 0.1599999964237213, 0.04878048226237297, 0.1249999925494194, 0.1463414579629898, 0.09677419066429138, 0.19999998807907104, 0.2380952388048172, 0.2181818187236786, 0.052631575614213943, 0.2641509473323822, 0.10810810327529907, 0.1666666567325592, 0, 0.1249999925494194, 0.13793103396892548, 0.1463414579629898, 0.22727271914482117, 0, 0.11428570747375488, 0.2083333283662796, 0.06896551698446274, 0.05405404791235924, 0.11428570747375488, 0.1666666567325592, 0.1666666567325592, 0.40740740299224854, 0.13333332538604736, 0.2222222238779068, 0.09302324801683426, 0.032258059829473495, 0.13114753365516663, 0.2448979616165161, 0.10256409645080566, 0.17543859779834747, 0.1249999925494194, 0.12244897335767746, 0.1111111044883728, 0, 0.07547169178724289, 0.2702702581882477, 0.145454540848732, 0.09999999403953552, 0.10256409645080566, 0, 0, 0, 0.0624999962747097, 0, 0.19999998807907104, 0.13333332538604736, 0.1428571343421936, 0.1818181723356247, 0.1428571343421936, 0.06666666269302368, 0.13333332538604736, 0.11428570747375488, 0.11764705181121826, 0.20512819290161133, 0.1463414579629898, 0.060606054961681366, 0.14999999105930328, 0, 0.1249999925494194, 0.10810810327529907, 0, 0.12903225421905518, 0.09756097197532654, 0.05405404791235924 ]
SJgndT4KwB
true
[ "The neural tangent kernel in a randomly initialized ReLU net is non-trivial fluctuations as long as the depth and width are comparable. " ]
[ "Most algorithms for representation learning and link prediction in relational data have been designed for static data.", "However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems.", "This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time.", "For the problem of link prediction under temporal constraints, i.e., answering queries of the form (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4.\n", "We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance.", "Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.", "Link prediction in relational data has been the subject of interest, given the widespread availability of such data and the breadth of its use in bioinformatics (Zitnik et al., 2018) , recommender systems (Koren et al., 2009) or Knowledge Base completion (Nickel et al., 2016a) .", "Relational data is often temporal, for example, the action of buying an item or watching a movie is associated to a timestamp.", "Some medicines might not have the same adverse side effects depending on the subject's age.", "The task of temporal link prediction is to find missing links in graphs at precise points in time.", "In this work, we study temporal link prediction through the lens of temporal knowledge base completion, which provides varied benchmarks both in terms of the underlying data they represent, but also in terms of scale.", "A knowledge base is a set of facts (subject, predicate, object) about the world that are known to be true.", "Link prediction in a knowledge base amounts to answer incomplete queries of the form (subject, predicate, ?) 
by providing an accurate ranking of potential objects.", "In temporal knowledge bases, these facts have some temporal metadata attached.", "For example, facts might only hold for a certain time interval, in which case they will be annotated as such.", "Other facts might be event that happened at a certain point in time.", "Temporal link prediction amounts to answering queries of the form (subject, predicate, ?, timestamp) .", "For example, we expect the ranking of queries (USA, president, ?, timestamp) to vary with the timestamps.", "As tensor factorization methods have proved successful for Knowledge Base Completion (Nickel et al., 2016a; Trouillon et al., 2016; Lacroix et al., 2018) , we express our Temporal Knowledge Base Completion problem as an order 4 tensor completion problem.", "That is, timestamps are discretized and used to index a 4-th mode in the binary tensor holding (subject, predicate, object, timestamps) facts.", "First, we introduce a ComplEx (Trouillon et al., 2016) decomposition of this order 4 tensor, and link it with previous work on temporal Knowledge Base completion.", "This decomposition yields embeddings for each timestamps.", "A natural prior is for these timestamps representation to evolve slowly over time.", "We are able to introduce this prior as a regularizer for which the optimum is a variation on the nuclear p-norm.", "In order to deal with heterogeneous temporal knowledge bases where a significant amount of relations might be non-temporal, we add a non-temporal component to our decomposition.", "Experiments on available benchmarks show that our method outperforms the state of the art for similar number of parameters.", "We run additional experiments for larger, regularized models and obtain improvements of up to 0.07 absolute Mean Reciprocal Rank (MRR).", "Finally, we propose a dataset of 400k entities, based on Wikidata, with 7M train triples, of which 10% contain temporal validity information.", "This dataset is larger than usual benchmarks in the Knowledge Base completion community and could help bridge the gap between the method designed and the envisaged web-scale applications.", "Tensor methods have been successful for Knowledge Base completion.", "In this work, we suggest an extension of these methods to Temporal Knowledge Bases.", "Our methodology adapts well to the various form of these datasets : point-in-time, beginning and endings or intervals.", "We show that our methods reach higher performances than the state of the art for similar number of parameters.", "For several datasets, we also provide performances for higher dimensions.", "We hope that the gap between low-dimensional and high-dimensional models can motivate further research in models that have increased expressivity at lower number of parameters per entity.", "Finally, we propose a large scale temporal dataset which we believe represents the challenges of large scale temporal completion in knowledge bases.", "We give performances of our methods for low-ranks on this dataset.", "We believe that, given its scale, this dataset could also be an interesting addition to non-temporal knowledge base completion.", "Then according to Kolda & Bader (2009) , unfolding along modes 3 and 4 leads to an order three tensor of decompositionX", "Where • is the Khatri-Rao product (Smilde et al., 2005) , which is the column-wise Kronecker product :", "Note that for a fourth mode of size L:", "This justifies the regularizers used in Section 3.2." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05714285373687744, 0.09090908616781235, 0.0833333283662796, 0.1538461446762085, 0.22857142984867096, 0.31372547149658203, 0.145454540848732, 0.19999998807907104, 0.11764705181121826, 0.1621621549129486, 0.20408162474632263, 0.25, 0.22727271914482117, 0.13333332538604736, 0, 0, 0.1764705777168274, 0.1666666567325592, 0.07999999821186066, 0.1904761791229248, 0.21276594698429108, 0, 0.060606054961681366, 0.20512819290161133, 0.1818181723356247, 0.2702702581882477, 0.24390242993831635, 0.19512194395065308, 0.13636362552642822, 0.06896550953388214, 0.11764705181121826, 0.21052631735801697, 0.3243243098258972, 0.06666666269302368, 0.17777776718139648, 0.31578946113586426, 0.25806450843811035, 0.25641024112701416, 0.19512194395065308, 0.05714285373687744, 0.06896550953388214, 0.13793103396892548 ]
rke2P1BFwS
true
[ "We propose new tensor decompositions and associated regularizers to obtain state of the art performances on temporal knowledge base completion." ]
[ "The conventional approach to solving the recommendation problem greedily ranks\n", "individual document candidates by prediction scores.", "However, this method fails to\n", "optimize the slate as a whole, and hence, often struggles to capture biases caused\n", "by the page layout and document interdepedencies.", "The slate recommendation\n", "problem aims to directly find the optimally ordered subset of documents (i.e.\n", "slates) that best serve users’ interests.", "Solving this problem is hard due to the\n", "combinatorial explosion of document candidates and their display positions on the\n", "page.", "Therefore we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework.", "In this paper, we introduce List Conditional Variational Auto-Encoders (ListCVAE),\n", "which learn the joint distribution of documents on the slate conditioned\n", "on user responses, and directly generate full slates.", "Experiments on simulated\n", "and real-world data show that List-CVAE outperforms greedy ranking methods\n", "consistently on various scales of documents corpora.", "Recommender systems modeling is an important machine learning area in the IT industry, powering online advertisement, social networks and various content recommendation services BID0 Lu et al., 2015) .", "In the context of document recommendation, its aim is to generate and display an ordered list of \"documents\" to users (called a \"slate\" in BID2 ; BID3 ), based on both user preferences and documents content.", "For large scale recommender systems, a common scalable approach at inference time is to first select a small subset of candidate documents S out of the entire document pool D. This step is called \"candidate generation\".", "Then a function approximator such as a neural network (e.g., a Multi-Layer Perceptron (MLP)) called the \"ranking model\" is used to predict probabilities of user engagements for each document in the small subset S and greedily generates a slate by sorting the top documents from S based on estimated prediction scores BID4 .", "This two-step process is widely popular to solve large scale recommendation problems due to its scalability and fast inference at serving time.", "The candidate generation step can decrease the number of candidates from millions to hundreds or less, effectively dealing with scalability when faced with a large corpus of documents D. Since |S| is much smaller than |D|, the ranking model can be reasonably complicated without increasing latency.However, there are two main problems with this approach.", "First the candidate generation and the ranking models are not trained jointly, which can lead to having candidates in S that are not the highest scoring documents of the ranking model.", "Second and most importantly, the greedy ranking method suffers from numerous biases that come with the visual presentation of the slate and context in which documents are presented, both at training and serving time.", "For example, there exists positional biases caused by users paying more attention to prominent slate positions BID5 , and contextual biases, due to interactions between documents presented together in the same slate, such as competition and complementarity, relative attractiveness, etc.", ".In this paper, we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework. 
We consider a slate \"optimal\" when it maximizes some type of user engagement feedback, a typical desired scenario in recommender systems. For example, given a database of song tracks, the optimal slate can be an ordered list (in time or space) of k songs such that the user ideally likes every song in that list. Another example considers news articles, the optimal slate has k ordered articles such that every article is read by the user. In general, optimality can be defined as a desired user response vector on the slate and the proposed model should be agnostic to these problem-specific definitions. Solving the slate recommendation problem by direct slate generation differs from ranking in that first, the entire slate is used as a training example instead of single documents, preserving numerous biases encoded into the slate that might influence user responses. Secondly, it does not assume that more relevant documents should necessarily be put in earlier positions in the slate at serving time. Our model directly generates slates, taking into account all the relevant biases learned through training.In this paper, we apply Conditional Variational Auto-Encoders (CVAEs) BID7 BID8 to model the distributions of all documents in the same slate conditioned on the user response. All documents in a slate along with their positional, contextual biases are jointly encoded into the latent space, which is then sampled and combined with desired conditioning for direct slate generation, i.e. sampling from the learned conditional joint distribution. Therefore, the model first learns which slates give which type of responses and then directly generates similar slates given a desired response vector as the conditioning at inference time. We call our proposed model List-CVAE. The key contributions of our work are:1. To the best of our knowledge, this is the first model that provides a conditional generative modeling framework for slate recommendation by direct generation. It does not necessarily require a candidate generator at inference time and is flexible enough to work with any visual presentation of the slate as long as the ordering of display positions is fixed throughout training and inference times.2. To deal with the problem at scale, we introduce an architecture that uses pretrained document embeddings combined with a negatively downsampled k-head softmax layer within the List-CVAE model, where k is the slate size.The structure of this paper is the following. First we introduce related work using various CVAE-type models as well as other approaches to solve the slate generation problem. Next we introduce our List-CVAE modeling approach. 
The last part of the paper is devoted to experiments on both simulated and the real-world datasets.2", "RELATED WORK Traditional matrix factorization techniques have been applied to recommender systems with success in modeling competitions such as the Netflix Prize BID10 .", "Later research emerged on using autoencoders to improve on the results of matrix factorization BID11 (CDAE, CDL).", "More recently several works use Boltzmann Machines BID13 and variants of VAE models in the Collaborative Filtering (CF) paradigm to model recommender systems BID14 BID15 BID16 ) (Collaborative VAE, JMVAE, CVAE-CF, JVAE-CF).", "See FIG0 for model structure comparisons.", "In this paper, unless specified otherwise, the user features and any context are routinely considered part of the conditioning variables (in Appendix A Personalization Test, we test List-CVAE generating personalized slates for different users).", "These models have primarily focused on modeling individual document or pairs of documents in the slate and applying greedy ordering at inference time.Our model is also using a VAE type structure and in particular, is closely related to the Joint Multimodel Variational Auto-Encoder (JMVAE) architecture FIG0 ).", "However, we use whole slates as input instead of single documents, and directly generate slates instead of using greedy ranking by prediction scores.Other relevant work from the Information Retrieval (IR) literature are listwise ranking methods BID17 BID18 BID19 BID20 BID21 .", "These methods use listwise loss functions that take the contexts and positions of training examples into account.", "However, they eventually assign a prediction score for each document and greedily rank them at inference time.In the Reinforcement Learning (RL) literature, BID3 view the whole slates as actions and use a deterministic policy gradient update to learn a policy that generates these actions, given concatenated document features as input.Finally, the framework proposed by BID22 predicts user engagement for document and position pairs.", "It optimizes whole page layouts at inference time but may suffer from poor scalability due to the combinatorial explosion of all possible document position pairs.", "The List-CVAE model moves away from the conventional greedy ranking paradigm, and provides the first conditional generative modeling framework that approaches slate recommendation problem using direct slate generation.", "By modeling the conditional probability distribution of documents in a slate directly, this approach not only automatically picks up the positional and contextual biases between documents at both training and inference time, but also gracefully avoids the problem of combinatorial explosion of possible slates when the candidate set is large.", "The framework is flexible and can incorporate different types of conditional generative models.", "In this paper we showed its superior performance over popular greedy and auto-regressive baseline models with a conditional VAE model.In addition, the List-CVAE model has good scalability.", "We designed an architecture that uses pretrained document embeddings combined with a negatively downsampled k-head softmax layer that greatly speeds up the training, scaling easily to millions of documents." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0, 0.0952380895614624, 0.13333332538604736, 0, 0.10526315867900848, 0.13333332538604736, 0, 0.0833333283662796, 0, 0.11428570747375488, 0, 0.07692307233810425, 0.1666666567325592, 0, 0, 0, 0.08888888359069824, 0.12244897335767746, 0.08163265138864517, 0.1249999925494194, 0.10810810327529907, 0.09090908616781235, 0.09756097197532654, 0, 0.03703703358769417, 0.08695652335882187, 0.10256409645080566, 0.0624999962747097, 0.1249999925494194, 0.27272728085517883, 0.04081632196903229, 0.1666666567325592, 0.07547169178724289, 0, 0.11428570747375488, 0.04878048226237297, 0.0952380895614624, 0.03389830142259598, 0, 0.0952380895614624, 0.13636362552642822 ]
r1xX42R5Fm
true
[ "We used a CVAE type model structure to learn to directly generate slates/whole pages for recommendation systems." ]
[ "Neural networks for structured data like graphs have been studied extensively in recent years.\n", "To date, the bulk of research activity has focused mainly on static graphs.\n", "However, most real-world networks are dynamic since their topology tends to change over time.\n", "Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining.\n", "Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature.\n", "In this paper, we propose a model that predicts the evolution of dynamic graphs.\n", "Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs.\n", "Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology.\n", "We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets.\n", "Results demonstrate the effectiveness of the proposed model.", "Graph neural networks (GNNs) have emerged in recent years as an effective tool for analyzing graph-structured data (Scarselli et al., 2008; Gilmer et al., 2017; Zhou et al., 2018; Wu et al., 2019) .", "These architectures bring the expressive power of deep learning into non-Euclidean data such as graphs, and have demonstrated convincing performance in several graph mining tasks, including graph classification (Morris et al., 2019) , link prediction (Zhang & Chen, 2018) , and community detection Chen et al., 2017) .", "So far, GNNs have been mainly applied to tasks that involve static graphs.", "However, most real-world networks are dynamic, i.", "e.", ", nodes and edges are added and removed over time.", "Despite the success of GNNs in various applications, it is still not clear if these models are useful for learning from dynamic graphs.", "Although some models have been applied to this type of data, most studies have focused on predicting a low-dimensional representation (i. 
e., embedding) of the graph for the next time step (Li et al., 2016; Nguyen et al., 2018; Goyal et al., 2018; Seo et al., 2018; Pareja et al., 2019) .", "These representations can then be used in downstream tasks (Li et al., 2016; Goyal et al., 2018; Meng et al., 2018; Pareja et al., 2019) .", "Predicting the topology of the graph is a task that has not been properly addressed yet.", "Graph generation, another important task in graph mining, has attracted a lot of attention from the deep learning community in recent years.", "The objective of this task is to generate graphs that exhibit specific properties, e.", "g.", ", degree distribution, node triangle participation, community structure etc.", "Traditionally, graphs are generated based on some network generation model such as the Erdős-Rényi model.", "These models focus on modeling one or more network properties, and neglect the others.", "Neural network approaches, on the other hand, can better capture the properties of graphs since they follow a supervised approach (You et al., 2018; Bojchevski et al., 2018; Grover et al., 2018) .", "These architectures minimize a loss function such as the reconstruction error of the adjacency matrix or the value of a graph comparison algorithm.", "Capitalizing on recent developments in neural networks for graph-structured data and graph generation, we propose in this paper, to the best of our knowledge, the first framework for predicting the evolution of the topology of networks in time.", "The proposed framework can be viewed as an encoderdecoder architecture.", "The \"encoder\" network takes a sequence of graphs as input and uses a GNN to produce a low-dimensional representation for each one of these graphs.", "These representations capture structural information about the input graphs.", "Then, it employs a recurrent architecture which predicts a representation for the future instance of the graph.", "The \"decoder\" network corresponds to a graph generation model which utilizes the predicted representation, and generates the topology of the graph for the next time step.", "The proposed model is evaluated over a series of experiments on synthetic and real-world datasets.", "To measure its effectiveness, the generated graphs need to be compared with the corresponding ground-truth graph instances.", "To this end, we use the Weisfeiler-Lehman subtree kernel which scales to very large graphs and has achieved state-of-the-art results on many graph datasets (Shervashidze et al., 2011) .", "The proposed model is compared against several baseline methods.", "Results show that the proposed model is very competitive, and in most cases, outperforms the competing methods.", "The rest of this paper is organized as follows.", "Section 2 provides an overview of the related work and elaborates our contribution.", "Section 3 introduces some preliminary concepts and definitions related to the graph generation problem, followed by a detailed presentation of the components of the proposed model.", "Section 4 evaluates the proposed model on several tasks.", "Finally, Section 5 concludes.", "In this paper, we proposed EvoNet, a model that predicts the evolution of dynamic graphs, following an encoder-decoder framework.", "We also proposed an evaluation methodology for this task which capitalizes on the well-established family of graph kernels.", "Experiments show that the proposed model outperforms traditional random graph methods on both synthetic and real-world datasets." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.13636362552642822, 0.13333332538604736, 0.30434781312942505, 0.07999999821186066, 0.35555556416511536, 0.3921568691730499, 0.37735849618911743, 0.0833333283662796, 0.10810810327529907, 0.10344827175140381, 0.1111111044883728, 0.1395348757505417, 0.054054051637649536, 0.05128204822540283, 0.22641508281230927, 0.17391303181648254, 0, 0.31111109256744385, 0.19607841968536377, 0.22727271914482117, 0, 0.09090908616781235, 0.09090908616781235, 0.14035087823867798, 0.16326530277729034, 0.4067796468734741, 0.04999999701976776, 0.2745097875595093, 0.10256409645080566, 0.31111109256744385, 0.307692289352417, 0.17777776718139648, 0.17391303181648254, 0.20338982343673706, 0.05128204822540283, 0.17391303181648254, 0.10256409645080566, 0.1395348757505417, 0.22641508281230927, 0.05128204822540283, 0, 0.2448979616165161, 0.1666666567325592, 0.1702127605676651 ]
Byg5flHFDr
true
[ "Combining graph neural networks and the RNN graph generative model, we propose a novel architecture that is able to learn from a sequence of evolving graphs and predict the graph topology evolution for the future timesteps" ]
[ "As for knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions.", "Traditional approaches firstly detect topic entity mentioned in questions, then traverse the knowledge graph to find relations as a multi-hop path to answers, while we propose a novel approach to leverage simple-question answerers to answer compound questions.", "Our model consists of two parts:", "(i) a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and", "(ii) three independent simple-question answerers that classify the corresponding relations for each simple question.", "Experiments demonstrate that our model learns complex rules of compositionality as stochastic policy, which benefits simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA.", "We analyze the interpretable decomposition process as well as generated partitions.", "Knowledge-Based Question Answering (KBQA) is one of the most interesting approaches of answering a question, which bridges a curated knowledge base of tremendous facts to answerable questions.", "With question answering as a user-friendly interface, users can easily query a knowledge base through natural language, i.e., in their own words.", "In the past few years, many systems BID5 BID2 Yih et al., 2015; BID11 BID13 have achieved remarkable improvements in various datasets, such as WebQuestions BID5 , SimpleQuestions BID6 and MetaQA .However", ", most of them BID31 BID6 BID10 BID34 BID36 assume that only simple questions are answerable. Simple", "questions are questions that have only one relation from the topic entity to unknown tail entities (answers, usually substituted by an interrogative word) while compound questions are questions that have multiple 1 relations. For example", ", \"Who are the daughters of Barack Obama?\" is a simple question and \"Who is the mother of the daughters of Barack Obama?\" is a compound question which can be decomposed into two simple questions.In this paper, we aim to relax the assumption of answerable questions from simple questions to compound questions. Figure 1 illustrates", "the process of answering compound questions. Intuitively, to answer", "a compound question, traditional approaches firstly detect topic entity mentioned in the question, as the starting point for traversing the knowledge graph, then find a chain of multiple (≤ 3) relations as a multi-hop 2 path to golden answers.We propose a learning-to-decompose agent which assists simple-question answerers to solve compound questions directly. Our agent learns a policy", "for decomposing compound question into simple ones in a meaningful way, guided by the feedback from the downstream simple-question answerers.The goal of the agent is to produce partitions and compute the compositional structure of questions 1 We assume that the number of corresponding relations is at most three. 2 We are aware of the term", "multi-hop question in the literature. We argue that compound question", "is a better fit for the context of KBQA since multi-hop characterizes a path, not a question. As for document-based QA, multi-hop", "also refers to routing over multiple evidence to answers.Figure 1: An example of answering compound questions. Given a question Q, we first identify", "the topic entity e with entity linking. 
By relation detection, a movie-to-actor", "relation f 1 , an actor-tomovie relation f 2 and a movie-to-writer relation f 3 forms a path to the answers W i . Note that each relation f i corresponds", "to a part of the question. If we decomposes the question in a different", "way, we may find a movie-to-movie relation g as a shortcut, and g(e) = f 2 (f 1 (e)) = (f 2 • f 1 )(e) holds. Our model discovered such composite rules. See", "section 4 for further discussion.with maximum", "information utilization. The intuition is that encouraging the model to learn", "structural compositions of compound questions will bias the model toward better generalizations about how the meaning of a question is encoded in terms of compositional structures on sequences of words, leading to better performance on downstream question answering tasks.We demonstrate that our agent captures the semantics of compound questions and generate interpretable decomposition. Experimental results show that our novel approach achieves", "state-of-the-art performance in two challenging datasets (WebQuestions and MetaQA), without re-designing complex neural networks to answer compound questions.", "Understanding compound questions, in terms of The Principle of Semantic Compositionality BID20 , require one to decompose the meaning of a whole into the meaning of parts.", "While previous works focus on leveraging knowledge graph for generating a feasible path to answers, we Figure 4 : A continuous example of figure 1.", "The hollow circle indicates the corresponding action the agent takes for each time step.", "The upper half is the actual prediction while the lower half is a potential partition.", "Since we do not allow a word to join two partitions, the agent learns to separate \"share\" and \"actors\" into different partitions to maximize information utilization.propose a novel approach making full use of question semantics efficiently, in terms of the Principle of Semantic Compositionality.In other words, it is counterintuitive that compressing the whole meaning of a variable-length sentence to a fixed-length vector, which leaves the burden to the downstream relation classifier.", "In contrast, we assume that a compound question can be decomposed into three simple questions at most.", "Our model generates partitions by a learned policy given a question.", "The vector representations of each partition are then fed into the downstream relation classifier.While previous works focus on leveraging knowledge graph for generating a feasible path to answers, we propose a novel approach making full use of question semantics efficiently, in terms of the Principle of Semantic Compositionality.Our learning-to-decompose agent can also serve as a plug-and-play module for other question answering task that requires to understand compound questions.", "This paper is an example of how to help the simple-question answerers to understand compound questions.", "The answerable question assumption must be relaxed in order to generalize question answering." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.22857142984867096, 0.36734694242477417, 0, 0.4375, 0.2666666507720947, 0.0952380895614624, 0.07692307233810425, 0.14999999105930328, 0.1538461446762085, 0, 0.060606054961681366, 0.13333332538604736, 0.15686273574829102, 0.23999999463558197, 0.3174603283405304, 0.29032257199287415, 0.3199999928474426, 0.11764705181121826, 0.25641024112701416, 0.07407406717538834, 0.15789473056793213, 0.2222222238779068, 0.045454539358615875, 0, 0.14814814925193787, 0.2153846174478531, 0.1764705777168274, 0.15789473056793213, 0.19512194395065308, 0.06896550953388214, 0.0714285671710968, 0.1599999964237213, 0.24242423474788666, 0.1538461446762085, 0.25974026322364807, 0.25806450843811035, 0.1428571343421936 ]
SJl2ps0qKQ
true
[ "We propose a learning-to-decompose agent that helps simple-question answerers to answer compound question over knowledge graph." ]
[ "Energy based models outputs unmormalized log-probability values given datasamples. ", "Such a estimation is essential in a variety of application problems suchas sample generation, denoising, sample restoration, outlier detection, Bayesianreasoning, and many more. ", "However, standard maximum likelihood training iscomputationally expensive due to the requirement of sampling model distribution.", "Score matching potentially alleviates this problem, and denoising score matching(Vincent, 2011) is a particular convenient version. ", "However, previous attemptsfailed to produce models capable of high quality sample synthesis. ", "We believethat it is because they only performed denoising score matching over a singlenoise scale.", "To overcome this limitation, here we instead learn an energy functionusing all noise scales. ", "When sampled using Annealed Langevin dynamics andsingle step denoising jump, our model produced high-quality samples comparableto state-of-the-art techniques such as GANs, in addition to assigning likelihood totest data comparable to previous likelihood models. ", "Our model set a new sam-ple quality baseline in likelihood-based models. ", "We further demonstrate that our model learns sample distribution and generalize well on an image inpainting tasks.", "Treating data as stochastic samples from a probability distribution and developing models that can learn such distributions is at the core for solving a large variety of application problems, such as error correction/denoising (Vincent et al., 2010) , outlier/novelty detection (Zhai et al., 2016; Choi and Jang, 2018) , sample generation (Nijkamp et al., 2019; Du and Mordatch, 2019) , invariant pattern recognition, Bayesian reasoning (Welling and Teh, 2011) which relies on good data priors, and many others.", "Energy-Based Models (EBMs) (LeCun et al., 2006; Ngiam et al., 2011 ) assign an energy E(x x x) to each data point x x x which implicitly defines a probability by the Boltzmann distribution p m (x x x) = e −E(x x x) /Z.", "Sampling from this distribution can be used as a generative process that yield plausible samples of x x x.", "Compared to other generative models, like GANs (Goodfellow et al., 2014) , flowbased models (Dinh et al., 2015; Kingma and Dhariwal, 2018) , or auto-regressive models (van den Oord et al., 2016; Ostrovski et al., 2018) , energy-based models have significant advantages.", "First, they provide explicit (unnormalized) density information, compositionality (Hinton, 1999; Haarnoja et al., 2017) , better mode coverage (Kumar et al., 2019) and flexibility (Du and Mordatch, 2019) .", "Further, they do not require special model architecture, unlike auto-regressive and flow-based models.", "Recently, Energy-based models has been successfully trained with maximum likelihood (Nijkamp et al., 2019; Du and Mordatch, 2019) , but training can be very computationally demanding due to the need of sampling model distribution.", "Variants with a truncated sampling procedure have been proposed, such as contrastive divergence (Hinton, 2002) .", "Such models learn much faster with the draw back of not exploring the state space thoroughly (Tieleman, 2008) .", "Score matching (SM) (Hyvärinen, 2005) circumvents the requirement of sampling the model distribution.", "In score matching, the score function is defined to be the gradient of log-density or the negative energy function.", "The expected L2 norm of difference between the model score function and the data score function are minimized.", "One convenient way 
of using score matching is learning the energy function corresponding to a Gaussian kernel Parzen density estimator (Parzen, 1962) of the data: $p_{\sigma_0}(\tilde{x}) = \int q_{\sigma_0}(\tilde{x} \mid x)\, p(x)\, dx$.", "Though hard to evaluate, the data score is well defined: $s_d(\tilde{x}) = \nabla_{\tilde{x}} \log p_{\sigma_0}(\tilde{x})$, and the corresponding objective is:", "$L_{SM}(\theta) = \mathbb{E}_{p_{\sigma_0}(\tilde{x})} \left\| \nabla_{\tilde{x}} \log p_{\sigma_0}(\tilde{x}) + \nabla_{\tilde{x}} E(\tilde{x}; \theta) \right\|^2$", "In this work we provided analyses and empirical results for understanding the limitations of learning the structure of high-dimensional data with denoising score matching.", "We found that the objective function confines learning to a small set due to the measure concentration phenomenon in random vectors.", "Therefore, sampling the learned distribution outside the set where the gradient is learned does not produce good results.", "One remedy to learn meaningful gradients in the entire space is to use samples during learning that are corrupted by different amounts of noise.", "Indeed, Song and Ermon (2019) applied this strategy very successfully.", "The central contribution of our paper is to investigate how to use a similar learning strategy in EBMs.", "Specifically, we proposed a novel EBM model, the Multiscale Denoising Score Matching (MDSM) model.", "The new model is capable of denoising, producing high-quality samples from random noise, and performing image inpainting.", "While also providing density information, our model learns an order of magnitude faster than models based on maximum likelihood.", "Our approach is conceptually similar to the idea of combining denoising autoencoders and annealing (Geras and Sutton, 2015; Chandra and Sharma, 2014; Zhang and Zhang, 2018), though this idea was proposed in the context of pre-training neural networks for classification applications.", "Previous efforts of learning energy-based models with score matching (Kingma and LeCun, 2010) were either computationally intensive or unable to produce high-quality samples comparable to those obtained by other generative models such as GANs.", "Saremi et al. (2018) and Saremi and Hyvarinen (2019) trained energy-based models with the denoising score matching objective, but the resulting models cannot perform sample synthesis from random noise initialization.", "Recently, Song and Ermon (2019) proposed the NCSN model, capable of high-quality sample synthesis.", "This model approximates the score of a family of distributions obtained by smoothing the data by kernels of different widths.", "The sampling in the NCSN model starts with sampling the distribution obtained with the coarsest kernel and successively switches to distributions obtained with finer kernels.", "Unlike NCSN, our method learns an energy-based model corresponding to $p_{\sigma_0}(\tilde{x})$ for a fixed $\sigma_0$.", "This method improves score matching in high-dimensional space by matching the gradient of an energy function to the score of $p_{\sigma_0}(\tilde{x})$ in a set that avoids the measure concentration issue.", "All told, we offer a novel EBM model that achieves high-quality sample synthesis, which among other EBM approaches provides a new state-of-the-art.", "Compared to the NCSN model, our model is more parsimonious and can support single-step denoising without prior knowledge of the noise magnitude.", "But our model performs slightly worse than the NCSN model, which could have several reasons.", "First, the derivation of Equation 6 requires an approximation to keep the training procedure tractable, which could reduce the performance.", "Second, the NCSN's output is a vector that, at least during optimization, does not always have to be the derivative of a scalar function.", "In contrast, in our model the network output is a scalar function.", "Thus it is possible that the NCSN model performs better because it explores a larger set of functions during optimization." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0, 0.09090908616781235, 0.1666666567325592, 0, 0.1818181723356247, 0.09090908616781235, 0.05128204822540283, 0.10526315122842789, 0.0833333283662796, 0, 0.045454543083906174, 0, 0, 0, 0.09999999403953552, 0.09756097197532654, 0.09090908616781235, 0.0833333283662796, 0.21052631735801697, 0.1818181723356247, 0.1818181723356247, 0.14999999105930328, 0.06451612710952759, 0, 0.20689654350280762, 0, 0, 0, 0, 0, 0.0952380895614624, 0.0833333283662796, 0.1538461446762085, 0, 0.1538461446762085, 0.23529411852359772, 0, 0.17391303181648254, 0.1538461446762085, 0.07407406717538834, 0.17142856121063232, 0.0714285671710968, 0.06666666269302368, 0.09090908616781235, 0, 0, 0.10526315122842789, 0.07692307233810425 ]
HJeFmkBtvB
true
[ "Learned energy based model with score matching" ]
[ "A restricted Boltzmann machine (RBM) learns a probabilistic distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling.", "Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input.", "Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by construction.", "This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power.", "A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed." ]
[ 0, 0, 0, 1, 0 ]
[ 0.04999999329447746, 0.05882352590560913, 0, 0.12765957415103912, 0 ]
rye9F3Vvo7
false
[ "Propose a general tensor-based RBM model which can compress the model greatly at the same keep a strong model expression capacity" ]
[ "Autonomous driving is still considered as an “unsolved problem” given its inherent important variability and that many processes associated with its development like vehicle control and scenes recognition remain open issues.", "Despite reinforcement learning algorithms have achieved notable results in games and some robotic manipulations, this technique has not been widely scaled up to the more challenging real world applications like autonomous driving.", "In this work, we propose a deep reinforcement learning (RL) algorithm embedding an actor critic architecture with multi-step returns to achieve a better robustness of the agent learning strategies when acting in complex and unstable environments.", "The experiment is conducted with Carla simulator offering a customizable and realistic urban driving conditions.", "The developed deep actor RL guided by a policy-evaluator critic distinctly surpasses the performance of a standard deep RL agent.", "An important approach for goal-oriented optimization is reinforcement learning (RL) inspired from behaviorist psychology BID25 .", "The frame of RL is an agent learning through interaction with its environment driven by an impact (reward) signal.", "The environment return reinforces the agent to select new actions improving learning process, hence the name of reinforcement learning BID10 .", "RL algorithms have achieved notable results in many domains as games BID16 and advanced robotic manipulations BID13 beating human performance.", "However, standard RL strategies that randomly explore and learn faced problems lose efficiency and become computationally intractable when dealing with high-dimensional and complex environments BID26 .Autonomous", "driving is one of the current highly challenging tasks that is still an \"unsolved problem\" more than one decade after the promising 2007 DARPA Urban Challenge BID4 ). The origin", "of its difficulty lies in the important variability inherent to the driving task (e.g. uncertainty of human behavior, diversity of driving styles, complexity of scene perception...).In this work", ", we propose to implement an advantage actor-critic approach with multi-step returns for autonomous driving. This type of", "RL has demonstrated good convergence performance and faster learning in several applications which make it among the preferred RL algorithms BID7 . Actor-critic", "RL consolidates the robustness of the agent learning strategy by using a temporal difference (T D) update to control returns and guide exploration. The training", "and evaluation of the approach are conducted with the recent CARLA simulator BID6 . Designed as", "a server-client system, where the server runs the simulation commands and renders the scene readings in return, CARLA is an interesting tool since physical autonomous urban driving generates major infrastructure costs and logistical difficulties. It particularly", "offers a realistic driving environment with challenging properties variability as weather conditions, illumination, and density of cars and pedestrians.The next sections review previous work on actor-critic RL and provide a detailed description of the proposed method. 
After presenting", "CARLA simulator and related application advantages, we evaluate our model using this environment and discuss experimental results.", "In this paper we addressed the limits of RL algorithms in solving high-dimensional and complex tasks.", "Combining both actor and critic methods advantages, the proposed approach implemented a continuous process of policy assessment and improvement using multi-step T D learning.", "Evaluated on the challenging problem of autonomous driving using CARLA simulator, our deep actor-critic algorithm demonstrated higher performance and faster learning capabilities than a standard deep RL.", "Furthermore, the results showed a certain vulnerability of the approach when facing unseen testing conditions.", "Considering this paper as a preliminary attempt to scale up RL approaches to highdimensional real world applications like autonomous driving, we plan in future work to examine the performance of other RL methods such as deep Q-learning and Trust Region Policy Optimization BID22 on similar complex tasks.", "Furthermore, we propose to tackle the issue of non-stationary environments impact on RL methods robustness as a multi-task learning problem BID5 .", "In such context, we will explore recently applied concepts and methodologies such as novel adaptive dynamic programming (ADP) approaches, context-aware and meta-learning strategies.", "The latter are currently attracting a keen research interest and particularly achieving promising advances in designing generalizable and fast adapting RL algorithms BID21 BID20 .", "Subsequently, we will be able to increase driving tasks complexity and operate conclusive comparisons with the few available state-of-the-art experiments on CARLA simulator." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.21739129722118378, 0.25, 0.27586206793785095, 0, 0.27586206793785095, 0.1249999925494194, 0.1875, 0, 0.052631575614213943, 0.04878048226237297, 0.09999999403953552, 0.5, 0.05714285373687744, 0.15789473056793213, 0.2142857164144516, 0.08510638028383255, 0.12244897335767746, 0.06666666269302368, 0, 0.1621621549129486, 0.19999998807907104, 0.0714285671710968, 0.07017543166875839, 0.11428570747375488, 0.05714285373687744, 0, 0.21621620655059814 ]
Bke03G85DN
true
[ "An actor-critic reinforcement learning approach with multi-step returns applied to autonomous driving with Carla simulator." ]
[ "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.", "The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability.", "In this paper, we propose new techniques that employ classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data.", "These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets.", "They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset.", "In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.", "Generative Adversarial Networks (GANs) BID6 have garnered a significant amount of attention due to their ability to learn generative models of multiple natural image datasets BID11 BID3 BID7 Zhu et al., 2017) .", "Since their conception, a fundamental question regarding GANs is to what extent they truly learn the underlying data distribution.", "This is a key issue for multiple reasons.", "From a scientific perspective, understanding the capabilities of common GANs can shed light on what precisely the adversarial training setup allows the GAN to learn.", "From an engineering standpoint, it is important to grasp the power and limitations of the GAN framework when applying it in concrete applications.", "Due to the broad potential applicability of GANs, researchers have investigated this question in a variety of ways.When we evaluate the quality of a GAN, an obvious first check is to establish that the generated samples lie in the support of the true distribution.", "In the case of images, this corresponds to checking if the generated samples look realistic.", "Indeed, visual inspection of generated images is currently the most common way of assessing the quality of a given GAN.", "Individual humans can performs this task quickly and reliably, and various GANs have achieved impressive results for generating realistic-looking images of faces and indoor scenes BID13 BID3 .Once", "we have established that GANs produce realistic-looking images, the next concern is that the GAN might simply be memorizing the training dataset. While", "this hypothesis cannot be ruled out entirely, there is evidence that GANs perform at least some non-trivial modeling of the unknown distribution. Previous", "studies show that interpolations in the latent space of the generator produce novel and meaningful image variations BID11 , and that there is a clear disparity between generated samples and their nearest neighbors in the true dataset BID1 .Taken together", ", these results provide evidence that GANs could constitute successful distribution learning algorithms, which motivates studying their distributions in more detail. The direct approach", "is to compare the probability density assigned by the generator with estimates of the true distribution BID16 . However, in the context", "of GANs and high-dimensional image distributions, this is complicated by two factors. First, GANs do not naturally", "provide probability estimates for their samples. 
Second, estimating the probability", "density of the true distribution is a challenging problem itself (the adversarial training framework specifically avoids this issue). Hence prior work has only investigated", "the probability density of GANs on simple datasets such as MNIST BID16 .Since reliably computing probability densities", "in high dimensions is challenging, we can instead study the behavior of GANs in low-dimensional problems such as two-dimensional Gaussian mixtures. Here, a common failure of GANs is mode collapse", ", wherein the generator assigns a disproportionately large mass to a subset of modes from the true distribution BID5 . This raises concerns about a lack of diversity", "in the synthetic GAN distributions, and recent work shows that the learned distributions of two common GANs indeed have (moderately) low support size for the CelebA dataset BID1 . However, the approach of BID1 heavily relies on", "a human annotator in order to identify duplicates. Hence it does not easily scale to comparing many", "variants of GANs or asking more fine-grained questions than collision statistics. Overall, our understanding of synthetic GAN distributions", "remains blurry, largely due to the lack of versatile tools for a quantitative evaluation of GANs in realistic settings. The focus of this work is precisly to address this question", ":Can we develop principled and quantitative approaches to study synthetic GAN distributions?To this end, we propose two new evaluation techniques for synthetic", "GAN distributions. Our methods are inspired by the idea of comparing moments of distributions", ", which is at the heart of many methods in classical statistics. Although simple moments of high-dimensional distributions are often not semantically", "meaningful, we can extend this idea to distributions of realistic images by leveraging image statistics identified using convolutional neural networks. In particular, we train image classifiers in order to construct test functions corresponding", "to semantically meaningful properties of the distributions. An important feature of our approach is that it requires only light human supervision and can", "easily be scaled to evaluating many GANs and large synthetic datasets.Using our new evaluation techniques, we study five state-of-the-art GANs on the CelebA and LSUN datasets, arguably the two most common testbeds for advanced GANs. We find that most of the GANs significantly distort the relative frequency of even basic image", "attributes, such as the hair style of a person or the type of room in an indoor scene. This clearly indicates a mismatch between the true and synthetic distributions. Moreover, we conduct", "experiments to explore the diversity of GAN distributions. We use synthetic GAN", "data to train image classifiers and find that these have significantly lower accuracy", "than classifiers trained on the true data set. This points towards a lack of diversity in the GAN data, and again towards a discrepancy between the true", "and synthetic distributions. 
In fact, our additional examinations show that the diversity in GANs is only comparable to a subset of the", "true data that is 100× smaller.", "In this paper, we put forth techniques for examining the ability of GANs to capture key characteristics of the training data, through the lens of classification.", "Our tools are scalable, quantitative and automatic (no need for visual inspection of images).", "They thus are capable of studying state-ofthe-art GANs on realistic, large-scale image datasets.", "Further, they serve as a mean to perform a nuanced comparison of GANs and to identify their relative merits, including properties that cannot be discerned from mere visual inspection.We then use the developed techniques to perform empirical studies on popular GANs on the CelebA and LSUN datasets.", "Our examination shows that mode collapse is indeed a prevalent issue for GANs.", "Also, we observe that synthetic GAN-generated datasets have significantly reduced diversity, at least when examined from a classification perspective.", "In fact, the diversity of such synthetic data is often few orders of magnitude smaller than that of the true data.", "Furthermore, this gap in diversity does not seem to be bridged by simply producing much larger datasets by oversampling GANs.", "Finally, we also notice that good perceptual quality of samples does not necessarily correlate -and might sometime even anti-correlate -with distribution diversity.", "These findings suggest that we need to go beyond the visual inspection-based evaluations and look for more quantitative tools for assessing quality of GANs, such as the ones presented in this paper.", "To assess GAN performance from the perspective of classification, we construct a set of classification tasks on the CelebA and LSUN datasets.", "In the case of the LSUN dataset, images are annotated with scene category labels, which makes it straightforward to use this data for binary and multiclass classification.", "On the other hand, each image in the CelebA dataset is labeled with 40 binary attributes.", "As a result, a single image has multiple associated attribute labels.", "Here, we construct classification tasks can by considering binary combinations of an attribute(s) (examples are shown in FIG1 ).", "Attributes used in our experiments were chosen such that the resulting dataset was large, and classifiers trained on true data got high-accuracy so as to be good annotators for the synthetic data.", "Details on datasets used in our classification tasks, such as training set size (N ), number of classes (C), and accuracy of the annotator, i.e., a classifier pre-trained on true data which is used to label the synthetic GAN-generated data, are provided in Table 2 .", "Table 2 : Details of CelebA and LSUN subsets used for the studies in Section 3.3.", "Here, we use a classifier trained on true data as an annotator that let's us infer label distribution for the synthetic, GAN-generated data.", "N is the size of the training set and C is the number of classes in the true and synthetic datasets.", "Annotator's accuracy refers to the accuracy of the classifier on a test set of true data.", "For CelebA, we use a combination of attribute-wise binary classifiers as annotators due their higher accuracy compared to a single classifier trained jointly on all the four classes." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12765957415103912, 0.09302324801683426, 0.4000000059604645, 0.0952380895614624, 0.11428570747375488, 0.1818181723356247, 0.04081632196903229, 0.05405404791235924, 0.07692307233810425, 0.1463414579629898, 0.20512819290161133, 0.11320754140615463, 0.1249999925494194, 0.22857142984867096, 0.13636362552642822, 0.10526315122842789, 0.09756097197532654, 0.11538460850715637, 0.0476190410554409, 0.1111111044883728, 0.11764705181121826, 0.14814814925193787, 0.0952380895614624, 0.11764705181121826, 0.09090908616781235, 0.1463414579629898, 0.2857142686843872, 0, 0.22857142984867096, 0.13636362552642822, 0.31578946113586426, 0.3333333432674408, 0.20512819290161133, 0.0833333283662796, 0.19999998807907104, 0.26229506731033325, 0.21739129722118378, 0.41379308700561523, 0.06451612710952759, 0.19999998807907104, 0.25641024112701416, 0, 0.19999998807907104, 0.1875, 0.06451612710952759, 0.17241379618644714, 0.06451612710952759, 0.21621620655059814, 0.17142856121063232, 0, 0.09999999403953552, 0.2083333283662796, 0.42105263471603394, 0.22727271914482117, 0.060606054961681366, 0, 0.1621621549129486, 0.1666666567325592, 0.1666666567325592, 0.23529411852359772, 0.09999999403953552, 0.24242423474788666, 0.12903225421905518, 0.08888888359069824 ]
S1FQEfZA-
true
[ "We propose new methods for evaluating and quantifying the quality of synthetic GAN distributions from the perspective of classification tasks" ]
[ "The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to $K$ clusters ranging from low-risk to high-risk.", "Existing survival methods assume the presence of clear \\textit{end-of-life} signals or introduce them artificially using a pre-defined timeout.", "In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic.", "We learn a deep neural network by optimizing this loss, that performs a soft clustering of users into survival groups.", "We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.", "Free online subscription services (e.g., Facebook, Pandora) use survival models to predict the relationship between observed subscriber covariates (e.g. usage patterns, session duration, gender, location, etc.) and how long a subscriber remains with an active account BID26 BID11 .", "Using the same tools, healthcare providers make extensive use of survival models to predict the relationship between patient covariates (e.g. smoking, administering drug A or B) and the duration of a disease (e.g., herpes, cancer, etc.).", "In these scenarios, rarely there is an end-of-life signal: non-paying subscribers do not cancel their accounts, tests rarely declare a patient cancer-free.", "We want to assign subjects into K clusters, ranging from short-lived to long-lived subscribers (diseases).Despite", "the recent community interest in survival models BID1 BID33 , existing survival analysis approaches require an unmistakable end-of-life signal (e.g., the subscriber deletes his or her account, the patient is declared disease-free), or a pre-defined endof-life \"timeout\" (e.g., the patient is declared disease-free after 5 years, the subscriber is declared permanently inactive after 100 days of inactivity). Methods", "that require end-of-life signals also include BID23 BID8 BID3 BID14 BID24 BID29 BID31 BID47 BID9 BID19 BID41 BID40 BID17 BID48 BID26 BID0 BID4 BID5 BID35 BID46 BID30 .In this", "work, we propose to address the lifetime clustering problem without end-of-life signals for the first time, to the best of our knowledge. We begin", "by describing two possible datasets where such a clustering approach could be applied.• Social", "Network Dataset : Users join the social network at different times and participate in activities defined by the social network (login, send/receive comments). The covariates", "are the various attributes of a user like age, gender, number of friends, etc., and the inter-event time is the time between user's two consecutive activities. In this case,", "censoring is due to a fixed point of data collection that we denote t m , the time of measurement. Thus, time till", "censoring for a particular user is the time from her last activity to t m . Lifetime of a user", "is defined as the time from her joining till she permanently deletes her account.• Medical Dataset :", "Subjects join the medical study at the same time and are checked for the presence of a particular disease. The covariates are", "the attributes of the disease-causing cell in subject, inter-event time is the time between two consecutive observations of the presence of disease. The time to censoring", "is the difference between the time of last observation when the disease was present and the time of final observation. 
If the final observation", "for a subject indicates presence of the disease, then time to censoring is zero. Lifetime of the disease", "is defined as the time between the first observation of the disease and the time until it is permanently cured.We use a deep neural network and a new loss function, with a corresponding backpropagation modification, for clustering subjects without end-of-life signals. We are able to overcome", "the technical challenges of this problem, in part, thanks to the ability of deep neural networks to generalize while overfitting the training data BID49 . The task is challenging", "for the following reasons:• The problem is fully unsupervised, as there is no pre-defined end-of-life timeout. While semisupervised clustering", "approaches exist BID0 BID4 BID5 BID35 BID46 , they assume that end-of-life signals appearing before the observation time are observed; to the best of our knowledge, there are no fully unsupervised approach that can take complex input variables.• There is no hazard function that", "can be used to define the \"cure\" rate, as we cannot determine whether the disease is cured, or whether the subscriber will never return to the website, without observing for an infinitely long time.• Cluster assignments may depend on", "highly complex interactions between the observed covariates and the observed events. The unobserved lifetime distributions", "may not be smooth functions.Contributions. Using the ability of deep neural networks", "to model complex nonlinear relationships in the input data, our contribution is a loss function (using the p-value from a modified Kuiper nonparametric two-sample test BID28 ) and a backpropagation algorithm that can perform model-free (nonparametric) unsupervised clustering of subjects based on their latent lifetime distributions, even in the absence of end-of-life signals. The output of our algorithm is a trained", "deep neural network classifier that can (soft) assign test and training data subjects into K categories, from high-risk and to low-risk individuals. We apply our method to a large social network", "dataset and show that our approach is more robust than competing methods and obtains better clusters (higher C-index scores).Why deep neural networks. As with any optimization", "method that returns a point", "estimate (a set of neural network weights W in our case), our approach is subject to overfitting the training data. And because our loss function uses p-values, the optimization", "and overfitting have a rather negative name: p-hacking BID36 . That is, the optimization is looking for a W (hypothesis) that", "decreases the p-value. Deep neural networks, however, are known to both overfit the training", "data and generalize well BID49 . That is, the hypothesis (W ) tends to also have small p-values in the", "(unseen) test data, despite overfitting in the training data (p-hacking).Outline: In section 3, we describe the traditional survival analysis concepts", "that assume the presence of end-of-life signals. In section 4, we define a loss function that quantifies the divergence between", "empirical lifetime distributions of two clusters without assuming end-of-life signals. We also provide a neural network approach to optimize said loss function. We describe", "the dataset used in our experiments followed by results in section 5. 
In", "section 6, we describe a few methods in literature that are related to our work.", "Finally, we present our conclusions in section 7.", "In this work we introduced a Kuiper-based nonparametric loss function, and a corresponding backpropagation procedure (which backpropagates the loss over clusters rather than the loss per training example).", "These procedures are then used to train a feedforward neural network to inductively assign observed subject covariates into K survival-based clusters, from high-risk to low-risk subjects, without requiring an end-of-life signal.", "We showed that the resultant neural network produces clusters with better C-index values than other competing methods.", "We also presented the survival distributions of the clusters obtained from our procedure and concluded that there were only two groups of users in the Friendster dataset.Both parts", "(a) and", "(b) of our proof need definition 3 that translates the observed data D u for subject u into a stochastic process.Proof of", "(a): If the two clusters have distinct lifetime distributions, it means that the distributions of T 0 and T 1 in eq. (2) are different.", "Then, either the right-censoring δ in eq. (3) does not allow us to see the difference between T 0 and T 1 , and then there is no mappingsp andκ that can get the distribution of S 0 (t;κ,p) and S 1 (t;κ,p) to be distinct, implying an L(κ, p) → 0, as n → ∞ as the observations come from the same distribution, making the Kuiper score asymptotically equal to one; or δ does allow us to see the difference and then, clearlyp ≡ 0 with a mappingκ that assigns more than half of the subjects to their correct clusters, which would allow us to see the difference in H 0 and H 1 , would give Kuiper score asymptotically equal to zero.", "Thus, L(κ, p) → −∞, as n → ∞.Proof", "of (b):", "Because κ only take the subject covariates as input, and there are no dependencies between the subject covariates and the subject lifetime in eq. (2), any clustering based on the covariates will be a random assignment of users into clusters. Moreover", ", from eq. (3), the censoring time of subject u, S u , has the same distribution for both clusters because the RMPPs are the same. Thus, H", "0 d = H 1 , i.e., H 0 and H 1 have the same distributions, and the Kuiper p-value test returns zero, L(κ, p) → 0, as n → ∞. Table 4", ": C-index (%) over different learning rates and batch sizes for the proposed NN approach with Kuiper loss (with learnt exponential) and K = 2." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3928571343421936, 0.1666666567325592, 0.3333333432674408, 0.2857142686843872, 0.07843136787414551, 0.11594202369451523, 0.1538461446762085, 0.11764705181121826, 0.13333332538604736, 0.1599999964237213, 0.06779660284519196, 0.31372547149658203, 0.13333332538604736, 0.11538460850715637, 0.178571417927742, 0.23529411852359772, 0.21276594698429108, 0.08510638028383255, 0.16326530277729034, 0.20408162474632263, 0.13333332538604736, 0.21739129722118378, 0.29411762952804565, 0.29629629850387573, 0.2083333283662796, 0.17142856121063232, 0.1230769157409668, 0.09302324801683426, 0.09302324801683426, 0.29999998211860657, 0.13793103396892548, 0.0714285671710968, 0.05714285373687744, 0.2142857164144516, 0.12244897335767746, 0.09302324801683426, 0.0833333283662796, 0.11999999731779099, 0.2916666567325592, 0.2641509473323822, 0.0952380895614624, 0.13333332538604736, 0.052631575614213943, 0.2222222238779068, 0.1355932205915451, 0.08510638028383255, 0.1428571343421936, 0.15686273574829102, 0.11320754140615463, 0.1320754736661911, 0, 0.1904761791229248, 0.11320754140615463, 0.07017543166875839, 0.1111111044883728 ]
SJme6-ZR-
true
[ "The goal of survival clustering is to map subjects into clusters. Without end-of-life signals, this is a challenging task. To address this task we propose a new loss function by modifying the Kuiper statistics." ]
[ "Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions.", "Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets.", "In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics.", "The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks.", "We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior.", "We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy.", "Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.", "Tuning complex machine learning models such as deep neural networks can be a daunting task.", "Object detection or language understanding models often rely on deep neural networks with many tunable hyperparameters, and automatic hyperparameter optimization (HPO) techniques such as Bayesian optimization (BO) are critical to find the good hyperparameters in short time.", "BO addresses the black-box optimization problem by placing a probabilistic model on the function to minimize (e.g., the mapping of neural network hyperparameters to a validation loss), and determine which hyperparameters to evaluate next by trading off exploration and exploitation through an acquisition function.", "While traditional BO focuses on each problem in isolation, recent years have seen a surge of interest in transfer learning for HPO.", "The key idea is to exploit evaluations from previous, related tasks (e.g., the same neural network tuned on multiple datasets) to further speed up the hyperparameter search.", "A central challenge of hyperparameter transfer learning is that different tasks typically have different scales, varying noise levels, and possibly contain outliers, making it hard to learn a joint model.", "In this work, we show how a semi-parametric Gaussian Copula can be leveraged to learn a joint prior across datasets in such a way that scale issues vanish.", "We then demonstrate how such prior estimate can be used to transfer information across tasks and objectives.", "We propose two HPO strategies: a Copula Thompson Sampling and a Gaussian Copula Process.", "We show that these approaches can jointly model several objectives with potentially different scales, such as validation error and compute time, without requiring processing.", "We demonstrate significant speed-ups over a number of baselines in extensive experiments.", "The paper is organized as follows.", "Section 2 reviews related work on transfer learning for HPO.", "Section 3 introduces Copula regression, the building block for the HPO strategies we propose in Section 4.", "Specifically, we show how Copula regression can be applied to design two HPO strategies, one based on Thompson sampling and an alternative GP-based approach.", "Experimental results are given in Section 5 where we evaluate both approaches against state-of-the-art methods on three 
algorithms.", "Finally, Section 6 outlines conclusions and further developments.", "We introduced a new class of methods to accelerate hyperparameter optimization by exploiting evaluations from previous tasks.", "The key idea was to leverage a semi-parametric Gaussian Copula prior, using it to account for the different scale and noise levels across tasks.", "Experiments showed that we considerably outperform standard approaches to BO, and deal with heterogeneous tasks more robustly compared to a number of transfer learning approaches recently proposed in the literature.", "Finally, we showed that our approach can seamlessly combine multiple objectives, such as accuracy and runtime, further speeding up the search of good hyperparameter configurations.", "A number of directions for future work are open.", "First, we could combine our Copula-based HPO strategies with Hyperband-style optimizers (Li et al., 2016) .", "In addition, we could generalize our approach to deal with settings in which related problems are not limited to the same algorithm run over different datasets.", "This would allow for different hyperparameter dimensions across tasks, or perform transfer learning across different black-boxes." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1599999964237213, 0.1764705777168274, 0.08510638028383255, 0.15789473056793213, 0.2380952388048172, 0, 0.06451612710952759, 0.038461532443761826, 0.037735845893621445, 0.05405404791235924, 0.09302324801683426, 0.04444443807005882, 0.3333333432674408, 0.3636363446712494, 0.2142857164144516, 0.19999998807907104, 0.0714285671710968, 0, 0.07692307233810425, 0.06451612710952759, 0.25, 0, 0.0833333283662796, 0.060606054961681366, 0.20512819290161133, 0.045454539358615875, 0.1463414579629898, 0, 0.0624999962747097, 0.04878048226237297, 0.06666666269302368 ]
ryx4PJrtvS
true
[ "We show how using semi-parametric prior estimations can speed up HPO significantly across datasets and metrics." ]
[ "We propose Pure CapsNets (P-CapsNets) without routing procedures.", "Specifically, we make three modifications to CapsNets. ", "First, we remove routing procedures from CapsNets based on the observation that the coupling coefficients can be learned implicitly.", "Second, we replace the convolutional layers in CapsNets to improve efficiency.", "Third, we package the capsules into rank-3 tensors to further improve efficiency.", "The experiment shows that P-CapsNets achieve better performance than CapsNets with varied routine procedures by using significantly fewer parameters on MNIST&CIFAR10.", "The high efficiency of P-CapsNets is even comparable to some deep compressing models.", "For example, we achieve more than 99% percent accuracy on MNIST by using only 3888 parameters. ", "We visualize the capsules as well as the corresponding correlation matrix to show a possible way of initializing CapsNets in the future.", "We also explore the adversarial robustness of P-CapsNets compared to CNNs.", "Capsule Networks, or CapsNets, have been found to be more efficient for encoding the intrinsic spatial relationships among features (parts or a whole) than normal CNNs.", "For example, the CapsNet with dynamic routing (Sabour et al. (2017) ) can separate overlapping digits accurately, while the CapsNet with EM routing (Hinton et al. (2018) ) achieves lower error rate on smallNORB (LeCun et al. (2004) ).", "However, the routing procedures of CapsNets (including dynamic routing (Sabour et al. (2017) ) and EM routing (Hinton et al. (2018) )) are computationally expensive.", "Several modified routing procedures have been proposed to improve the efficiency ; Choi et al. (2019) ; Li et al. (2018) ), but they sometimes do not \"behave as expected and often produce results that are worse than simple baseline algorithms that assign the connection strengths uniformly or randomly\" (Paik et al. (2019) ).", "Even we can afford the computation cost of the routing procedures, we still do not know whether the routing numbers we set for each layer serve our optimization target.", "For example, in the work of Sabour et al. (2017) , the CapsNet models achieve the best performance when the routing number is set to 1 or 3, while other numbers cause performance degradation.", "For a 10-layer CapsNet, assuming we have to try three routing numbers for each layer, then 3", "10 combinations have to be tested to find the best routing number assignment.", "This problem could significantly limit the scalability and efficiency of CapsNets.", "Here we propose P-CapsNets, which resolve this issue by removing the routing procedures and instead learning the coupling coefficients implicitly during capsule transformation (see Section 3 for details).", "Moreover, another issue with current CapsNets is that it is common to use several convolutional layers before feeding these features into a capsule layer.", "We find that using convolutional layers in CapsNets is not efficient, so we replace them with capsule layers.", "Inspired by Hinton et al. 
(2018), we also explore how to package the input of a CapsNet into rank-3 tensors to make P-CapsNets more representative.", "The capsule convolution in P-CapsNets can be considered as a more general version of 3D convolution.", "At each step, 3D convolution uses a 3D kernel to map a 3D tensor into a scalar (as Figure 1 shows) while the capsule convolution in Figure 2 adopts a 5D kernel to map a 5D tensor into a 5D tensor.", "Figure 1: 3D convolution: tensor-to-scalar mapping.", "The shape of the input is 8×8×4.", "The shape of the 3D kernel is 4×4×3.", "As a result, the shape of the output is 5×5×3.", "Yellow area shows the current input area being convolved by the kernel and the corresponding output.", "Figure 2: Capsule convolution in P-CapsNets: tensor-to-tensor mapping.", "The input is a tensor of 1's which has a shape of 1×5×5×(3×3×3) (corresponding to the input channel, input height, input width, first capsule dimension, second capsule dimension, and third capsule dimension, respectively).", "The capsule kernel is also a tensor of 1's which has a shape of 4×4×1×1×(3×3×3) -- kernel height, kernel width, number of input channels, number of output channels, and the three dimensions of the 3D capsule.", "As a result, we get an output tensor of 48's which has a shape of 1×2×2×(3×3×3).", "Yellow areas show the current input area being convolved by the kernel and the corresponding output.", "We propose P-CapsNets by making three modifications based on CapsNets (Sabour et al. (2017)), 1) We replace all the convolutional layers with capsule layers,", "2) We remove routing procedures from the whole network, and", "3) We package capsules into rank-3 tensors to further improve the efficiency.", "The experiment shows that P-CapsNets can achieve better performance than multiple other CapsNets variants with different routing procedures, as well as deep compressing models, by using fewer parameters.", "We visualize the capsules in P-CapsNets and point out that the initializing methods of CNNs might not be appropriate for CapsNets.", "We conclude that the capsule layers in P-CapsNets can be considered as a general version of 3D convolutional layers.", "We conjecture that the ability of CapsNets to efficiently encode the intrinsic spatial relationship between a part and a whole comes from the tensor-to-tensor mapping between adjacent capsule layers.", "This mapping is presumably also the reason for P-CapsNets' good performance.", "CapsNets#0, CapsNets#1, CapsNets#2, and CapsNets#3 are all five-layer CapsNets.", "Take CapsNets#2 as an example: the input is a gray-scale image with a shape of 28×28, which we reshape into a 6D tensor, 1×28×28×(1×1×1), to fit our P-CapsNets.", "The first capsule layer (CapsConv#1, as Figure 7 shows) is a 7D tensor, 3×3×1×1×(1×1×16).", "Each dimension of the 7D tensor represents the kernel height, the kernel width, the number of input capsule feature maps, the number of output capsule feature maps, the capsule's first dimension, the capsule's second dimension, and the capsule's third dimension.", "All the following feature maps and filters can be interpreted in a similar way.", "Similarly, the five capsule layers of P-CapsNets#0 are 3×3×1×1×(1×1×32), 3×3×1×2×(1×8×8), 3×3×2×4×(1×8×8), 3×3×4×2×(1×8×8), and 3×3×2×10×(1×8×8), respectively.", "The strides for each layer are (2, 1, 2, 1, 1).", "The five capsule layers of P-CapsNets#1 are 3×3×1×1×(1×1×16), 3×3×1×1×(1×4×6), 3×3×1×1×(1×6×4), 3×3×1×1×(1×4×6), and 3×3×1×10×(1×6×4), respectively.", "The strides for each layer are (2, 1, 2, 1, 1).", "The five capsule layers of P-CapsNets#3 are 
3×3×1×1×(1×1×32), 3×3×1×4×(1×8×16), 3×3×4×8×(1×16×8), 3×3×8×4×(1×8×16), and 3×3×4×10×(1×16×16), respectively.", "The strides for each layer are (2, 1, 2, 1, 1).", "The five capsule layers of P-CapsNets#4 are 3×3×1×1×(1×3×32), 3×3×1×4×(1×8×16), 3×3×4×8×(1×16×8), 3×3×8×10×(1×8×16), and 3×3×10×10×(1×16×16), respectively.", "The strides for each layer are (2, 1, 1, 2, 1)." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.13333332538604736, 0.1599999964237213, 0.1111111044883728, 0, 0.1428571343421936, 0, 0, 0.07692307233810425, 0, 0.0624999962747097, 0, 0.2142857164144516, 0.11320754140615463, 0.12903225421905518, 0, 0.0833333283662796, 0, 0.1111111044883728, 0.11764705926179886, 0.06666666269302368, 0.1666666567325592, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06451612710952759, 0.11764705181121826, 0, 0.05882352590560913, 0.2222222238779068, 0, 0.06666666269302368, 0.1111111044883728, 0.23529411852359772, 0.0624999962747097, 0, 0, 0, 0.09090908616781235, 0.23529411852359772, 0.09999999403953552, 0.23529411852359772, 0.09999999403953552, 0.23529411852359772, 0.09999999403953552, 0.23529411852359772 ]
B1gNfkrYvS
true
[ "Routing procedures are not necessary for CapsNets" ]
[ "\tA recent line of work has studied the statistical properties of neural networks to great success from a {\\it mean field theory} perspective, making and verifying very precise predictions of neural network behavior and test time performance.\n\t", "In this paper, we build upon these works to explore two methods for taming the behaviors of random residual networks (with only fully connected layers and no batchnorm).\n\t", "The first method is {\\it width variation (WV)}, i.e. varying the widths of layers as a function of depth.\n\t", "We show that width decay reduces gradient explosion without affecting the mean forward dynamics of the random network.\n\t", "The second method is {\\it variance variation (VV)}, i.e. changing the initialization variances of weights and biases over depth.\n\t", "We show VV, used appropriately, can reduce gradient explosion of tanh and ReLU resnets from $\\exp(\\Theta(\\sqrt L))$ and $\\exp(\\Theta(L))$ respectively to constant $\\Theta(1)$.\n\t", "A complete phase-diagram is derived for how variance decay affects different dynamics, such as those of gradient and activation norms.\n\t", "In particular, we show the existence of many phase transitions where these dynamics switch between exponential, polynomial, logarithmic, and even constant behaviors.\n\t", "Using the obtained mean field theory, we are able to track surprisingly well how VV at initialization time affects training and test time performance on MNIST after a set number of epochs: the level sets of test/train set accuracies coincide with the level sets of the expectations of certain gradient norms or of metric expressivity (as defined in \\cite{yang_meanfield_2017}), a measure of expansion in a random neural network.\n\t", "Based on insights from past works in deep mean field theory and information geometry, we also provide a new perspective on the gradient explosion/vanishing problems: they lead to ill-conditioning of the Fisher information matrix, causing optimization troubles.", "Deep mean field theory studies how random neural networks behave with increasing depth, as the width goes to infinity.", "In this limit, several pieces of seminal work used statistical physics BID7 Sompolinsky et al., 1988) and Gaussian Processes (Neal, 2012) to show that neural networks exhibit remarkable regularity.", "Mean field theory also has a substantial history studying Boltzmann machines BID0 and sigmoid belief networks (Saul et al., 1996) .Recently", ", a number of results have revitalized the use of mean field theory in deep learning, with a focus on addressing practical design questions. In Poole", "et al. (2016) , mean field theory is combined with Riemannian geometry to quantify the expressivity of random neural networks. In Schoenholz", "et al. (2017) and Yang and Schoenholz (2017) , a study of the critical phenomena of mean field neural networks and residual networks 1 is leveraged to theoretically predict test time relative performance of differential initialization schemes. Additionally,", "BID5 and Pennington and Bahri (2017) have used related techniques to investigate properties of the loss landscape of deep networks. Together these", "results have helped a large number of experimental observations onto more rigorous footing (Montfar et al., 2014; BID9 BID3 . 
Finally, deep", "mean field theory has proven to be a necessary underpinning for studies using random matrix theory to 1 without batchnorm and with only fully connected layers understand dynamical isometry in random neural networks (Pennington et al., 2017; Pennington and Worah, 2017) . Overall, a program", "is emerging toward building a mean field theory for state-of-the-art neural architectures as used in the wild, so as to provide optimal initialization parameters quickly for any deep learning practitioner.In this paper, we contribute to this program by studying how width variation (WV), as practiced commonly, can change the behavior of quantities mentioned above, with gradient norm being of central concern. We find that WV can", "dramatically reduce gradient explosion without affecting the mean dynamics of forward computation, such as the activation norms, although possibly increasing deviation from the mean in the process (Section 6).We also study a second", "method, variance variation (VV), for manipulating the mean field dynamics of a random neural network (Section 7 and Appendix B). In this paper, we focus", "on its application to tanh and ReLU residual networks, where we show that VV can dramatically ameliorate gradient explosion, and in the case of ReLU resnet, activation explosion 2 . Affirming the results of", "Yang and Schoenholz (2017) and predicted by our theory, VV improves performances of tanh and ReLU resnets through these means.Previous works (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017) have focused on how network architecture and activation functions affect the dynamics of mean field quantities, subject to the constraint that initialization variances and widths are constant across layers. In each combination of (", "architecture, activation), the mean field dynamics have the same kinds of asymptotics regardless of the variances. For example, tanh feedforward", "networks have exp(Θ(l)) forward and backward dynamics, while tanh residual networks have poly(l) forward and exp(Θ( √ l)) backward dynamics. Such asymptotics were considered", "characteristics of the (architecture, activation) combination (Yang and Schoenholz, 2017) . We show by counterexample that this", "perception is erroneous. In fact, as discussed above, WV can", "control the gradient dynamics arbitrarily and VV can control forward and backward dynamics jointly, all without changing the network architecture or activation. To the best of our knowledge, this", "is the first time methods for reducing gradient explosion or vanishing have been proposed that vary initialization variance and/or width across layers.With regard to ReLU resnets, we find that gradient norms and \"metric expressivity\" (as introduced in Yang and Schoenholz (2017) , also defined in Defn 4.2), make surprisingly good predictors, respectively in two separate phases, of how VV at initialization affects performance after a fixed amount of training time (Section 7.1). However, in one of these phases, larger", "gradient explosion seems to cause better performance, with no alternative course of explanation. In this paper we have no answer for why", "this occurs but hope to elucidate it for future work. 
With regard to tanh resnets, we find that", ", just as in Yang and Schoenholz (2017) , the optimal initialization balances trainability and expressivity: Decaying the variance too little means we suffer from gradient explosion, but decaying the variance too much means we suffer from not enough metric expressivity.We want to stress that in this work, by \"performance\" we do not mean absolute performance but rather relative performance between different initialization schemes. For example, we do not claim to know what", "initialization scheme is needed to make a particular neural network architecture solve ImageNet, but rather, conditioned on the architecture, whether one initialization is better than another in terms of test set accuracy after the same amount of training iterations.Before we begin the mean field analysis, we present a perspective on gradient explosion/vanishing problem from a combination of mean field theory and information geometry, which posits that such problem manifests in the ill-conditioning of the Fisher information matrix.", "In this paper, we derived the mean field theory of width and variance variation and showed that they are powerful methods to control forward (VV) and backward (VV + WV) dynamics.", "We proved that even with a fixed architecture and activation function, the mean field dynamics of a residual neural network can still be manipulated at will by these two methods.", "Extraordinarily, the mean field theory we developed allowed us to accurately predict the performances of trained MNIST models relative to different initializations, but one puzzling aspect remains where test set accuracy seems to increase as gradient explosion worsens in one regime of random ReLU resnets.Open Problems.", "We solved a small part, width variation, of the program to construct mean field theories of state-of-the-art neural networks used in practice.", "Many open problems still remain, and the most important of them include but is not limited to", "1. batchnorm,", "2. convolution layers, and", "3. recurrent layers.", "In addition, more work is needed to mathematically justify our \"physical\" assumptions Axiom 1 and Axiom 2 to a \"math\" problem.", "We hope readers will take note and contribute toward deep mean field theory.", "Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli.Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice.In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4788-4798.", "Curran Associates, Inc., 2017.URL http://papers.nips.cc/paper/ 7064-resurrecting-the-sigmoid-in-deep-learning-through-dynamical-isometry-theory-a pdf.Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli.", "Exponential expressivity in deep neural networks through transient chaos.", "In Advances In Neural Information Processing Systems, pages 3360-3368, 2016.", "DISPLAYFORM0 DISPLAYFORM1 The two cases for χ/χ are resp.", "for a projection and a normal residual block, assuming σπ", "= 1. The V and W operators are defined in Defn C.1." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.18691588938236237, 0.2745097875595093, 0.12903225421905518, 0.1538461446762085, 0.12765957415103912, 0.125, 0.1702127605676651, 0.0833333283662796, 0.29921260476112366, 0.18691588938236237, 0.1304347813129425, 0.09708737581968307, 0.12631578743457794, 0.10309278219938278, 0.12631578743457794, 0.20952381193637848, 0.10752687603235245, 0.042105261236429214, 0.17699114978313446, 0.21212120354175568, 0.1553398072719574, 0.1428571343421936, 0.21568627655506134, 0.1538461446762085, 0.04444444179534912, 0.06521739065647125, 0.08988763391971588, 0.048192769289016724, 0.14432989060878754, 0.2916666567325592, 0.12765957415103912, 0.08888888359069824, 0.20472440123558044, 0.2706766724586487, 0.1764705777168274, 0.15686273574829102, 0.208695650100708, 0.1489361673593521, 0.17777778208255768, 0.025974024087190628, 0.026315787807106972, 0.08695651590824127, 0.04651162400841713, 0.07017543166875839, 0.021052628755569458, 0.04878048598766327, 0, 0, 0.0731707289814949, 0.047058820724487305 ]
rJGY8GbR-
true
[ "By setting the width or the initialization variance of each layer differently, we can actually subdue gradient explosion problems in residual networks (with fully connected layers and no batchnorm). A mathematical theory is developed that not only tells you how to do it, but also surprisingly is able to predict, after you apply such tricks, how fast your network trains to achieve a certain test set performance. This is some black magic stuff, and it's called \"Deep Mean Field Theory.\"" ]
[ "We explore the collaborative multi-agent setting where a team of deep reinforcement learning agents attempt to solve a shared task in partially observable environments.", "In this scenario, learning an effective communication protocol is key.", "We propose a communication protocol that allows for targeted communication, where agents learn \\emph{what} messages to send and \\emph{who} to send them to.", "Additionally, we introduce a multi-stage communication approach where the agents co-ordinate via several rounds of communication before taking an action in the environment.", "We evaluate our approach on several cooperative multi-agent tasks, of varying difficulties with varying number of agents, in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments.", "We demonstrate the benefits of targeted as well as multi-stage communication.", "Moreover, we show that the targeted communication strategies learned by the agents are quite interpretable and intuitive.", "Effective communication is a key ability for collaborative multi-agents systems.", "Indeed, intelligent agents (humans or artificial) in real-world scenarios can significantly benefit from exchanging information that enables them to coordinate, strategize, and utilize their combined sensory experiences to act in the physical world.", "The ability to communicate has wide-ranging applications for artificial agents -from multi-player gameplay in simulated games (e.g. DoTA, Quake, StarCraft) or physical worlds (e.g. robot soccer), to networks of self-driving cars communicating with each other to achieve safe and swift transport, to teams of robots on search-and-rescue missions deployed in hostile and fast-evolving environments.A salient property of human communication is the ability to hold targeted interactions.", "Rather than the 'one-size-fits-all' approach of broadcasting messages to all participating agents, as has been previously explored BID19 BID4 , it can be useful to direct certain messages to specific recipients.", "This enables a more flexible collaboration strategy in complex environments.", "For example, within a team of search-and-rescue robots with a diverse set of roles and goals, a message for a fire-fighter (\"smoke is coming from the kitchen\") is largely meaningless for a bomb-defuser.In this work we develop a collaborative multi-agent deep reinforcement learning approach that supports targeted communication.", "Crucially, each individual agent actively selects which other agents to send messages to.", "This targeted communication behavior is operationalized via a simple signaturebased soft attention mechanism: along with the message, the sender broadcasts a key which encodes properties of agents the message is intended for, and is used by receivers to gauge the relevance of the message.", "This communication mechanism is learned implicitly, without any attention supervision, as a result of end-to-end training using a downstream task-specific team reward.The inductive bias provided by soft attention in the communication architecture is sufficient to enable agents to", "1) communicate agent-goal-specific messages (e.g. guide fire-fighter towards fire, bomb-defuser towards bomb, etc.),", "2) be adaptive to variable team sizes (e.g. 
the size of the local neighborhood a self-driving car can communicate with changes as it moves), and", "3) be interpretable through predicted attention probabilities that allow for inspection of which agent is communicating what message and to whom.", "We introduced TarMAC, an architecture for multi-agent reinforcement learning which allows targeted interactions between agents and multiple stages of collaborative reasoning at every timestep.", "Evaluation on three diverse environments show that our model is able to learn intuitive attention behavior and improves performance, with downstream task-specific team reward as sole supervision.While multi-agent navigation experiments in House3D show promising performance, we aim to exhaustively benchmark TarMAC on more challenging 3D navigation tasks because we believe this is where decentralized targeted communication can have the most impact -as it allows scaling to a large number of agents with large observation spaces.", "Given that the 3D navigation problem is hard in and of itself, it would be particularly interesting to investigate combinations with recent advances orthogonal to our approach (e.g. spatial memory, planning networks) with the TarMAC framework." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.23529411852359772, 0.07407406717538834, 0.1428571343421936, 0.14999999105930328, 0.11764705181121826, 0.08695651590824127, 0.11764705181121826, 0.052631575614213943, 0.0615384578704834, 0, 0.11764705181121826, 0.1666666567325592, 0, 0.0476190447807312, 0.09756097197532654, 0, 0, 0, 0.19354838132858276, 0.08219178020954132, 0.04878048598766327 ]
H1e572A5tQ
true
[ "Targeted communication in multi-agent cooperative reinforcement learning" ]
[ "It is difficult for the beginners of etching latte art to make well-balanced patterns by using two fluids with different viscosities such as foamed milk and syrup.", "Even though making etching latte art while watching making videos which show the procedure, it is difficult to keep balance.", "Thus well-balanced etching latte art cannot be made easily. \n", "In this paper, we propose a system which supports the beginners to make well-balanced etching latte art by projecting a making procedure of etching latte art directly onto a cappuccino. \n", "The experiment results show the progress by using our system. ", "We also discuss about the similarity of the etching latte art and the design templates by using background subtraction.", "Etching latte art is the practice of literally drawing on a coffee with a thin rod, such as a toothpick, in order to create images in the coffee [2] .", "There are several kinds of making method of etching latte art depending on tools and toppings.", "A method which is often introduced as easy one for beginners is putting syrup directly onto milk foam and etching to make patterns as shown in Figure 1 .", "The color combination automatically makes the drink look impressive by using syrup, so baristas are under less pressure to create a difficult design [8] .", "However, it is difficult for beginners to imagine how they ought to put syrup and etch in order to make beautiful patterns since etching latte art offers two fluids with different viscosities.", "On top of this, even though they can watch videos which show a making procedure of etching latte art, etching latte art made by imitating hardly looks well-balanced.", "It is impossible to make well-balanced etching latte art without repeated practice.", "In this paper, we develop a support system which helps even beginners to make well-balanced latte art by directly projecting a making procedure of etching latte art using syrup, which has high needs, onto a cappuccino.", "Moreover, projecting a deformation of fluid with viscosity such as syrup which is difficult to imagine as animations in order to support beginners to understand the deformation of fluid with viscosity.", "We indicate the usefulness of this system through a questionnaire survey and the similarity to the design templates using background subtraction.", "We have developed the system which supports etching latte art beginners to practice and make etching late art and also help them to understand the syrup deformation by directly projecting a making procedure and animations of syrup deformation onto the cappuccino.", "The participants' evaluations verified the usefulness of our system.", "The results of the inexperienced people's questionnaire and the participants' questionnaire show that more than 80 percent of participants made better-balanced etching latte art with our system.", "However, each evaluation says that two participants made betterbalanced etching latte art by themselves and they are all different participants.", "From this result, we confirm there are some instances that human beings suppose the etching latte art is similar to the design template even though the result of the background subtraction says it is not similar to the design template, and vice versa.", "In our future work, we will improve the system with considering what kind of etching latte art human beings prefer and develop a system which creates animations of syrup deformation automatically.", "We also handle the development factors got in the survey.", "Table 4 : 
Experimental result.", "Group 1 makes etching latte art by themselves firstly.", "Whereas Group 2 makes etching latte art with our system firstly.", "Table 5 : Participants' questionnaire result.", "Table 6 : Results of background subtraction.", "Similarities are represented by a number in the range of 0.000 to 1.000 (1.000 indicates totally the same as the design template)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3199999928474426, 0.3333333432674408, 0.24242423474788666, 0.6530612111091614, 0.11764705181121826, 0.25, 0.21276594698429108, 0.21052631735801697, 0.2857142686843872, 0.12765957415103912, 0.22641508281230927, 0.3333333134651184, 0.34285715222358704, 0.5925925970077515, 0.260869562625885, 0.2380952388048172, 0.6909090876579285, 0.1249999925494194, 0.21276594698429108, 0.1428571343421936, 0.17241378128528595, 0.26923075318336487, 0.1249999925494194, 0, 0.1875, 0.23529411852359772, 0, 0, 0.13636362552642822 ]
Tu1NiBXxf0
true
[ "We have developed an etching latte art support system which projects the making procedure directly onto a cappuccino to help the beginners to make well-balanced etching latte art." ]
[ "We focus on temporal self-supervision for GAN-based video generation tasks.", "While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored.", "This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation.", "For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training.", "However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail.", "For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies.", "In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm.", "For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail.", "We also propose a novel Ping-Pong loss to improve the long-term temporal consistency.", "It effectively prevents recurrent networks from accumulating artifacts temporally without depressing detailed features.", "We also propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution.", "A series of user studies confirms the rankings computed with these metrics.", "Generative adversarial models (GANs) have been extremely successful at learning complex distributions such as natural images (Zhu et al., 2017; Isola et al., 2017) .", "However, for sequence generation, directly applying GANs without carefully engineered constraints typically results in strong artifacts over time due to the significant difficulties introduced by the temporal changes.", "In particular, conditional video generation tasks are very challenging learning problems where generators should not only learn to represent the data distribution of the target domain, but also learn to correlate the output distribution over time with conditional inputs.", "Their central objective is to faithfully reproduce the temporal dynamics of the target domain and not resort to trivial solutions such as features that arbitrarily appear and disappear over time.", "In our work, we propose a novel adversarial learning method for a recurrent training approach that supervises both spatial content as well as temporal relationships.", "We apply our approach to two video-related tasks that offer substantially different challenges: video super-resolution (VSR) and unpaired video translation (UVT).", "With no ground truth motion available, the spatio-temporal adversarial loss and the recurrent structure enable our model to generate realistic results while keeping the generated structures coherent over time.", "With the two learning tasks we demonstrate how spatio-temporal adversarial training can be employed in paired as well as unpaired data domains.", "In addition to the adversarial network which supervises the short-term temporal coherence, long-term consistency is self-supervised using a novel bi-directional loss formulation, which we refer to as \"Ping-Pong\" (PP) loss in the following.", "The PP loss effectively avoids the temporal accumulation of artifacts, which can potentially benefit a variety of recurrent architectures.", "The central contributions of our work are: a spatio-temporal discriminator unit together with a careful analysis of training objectives for realistic and coherent video generation tasks, a novel 
PP loss supervising long-term consistency, in addition to a set of metrics for quantifying temporal coherence based on motion estimation and perceptual distance.", "Together, our contributions lead to models that outperform previous work in terms of temporally-coherent detail, which we quantify with a wide range of metrics and user studies.", "features, but collapses to essentially static outputs of Obama.", "It manages to transfer facial expressions back to Trump using tiny differences encoded in its Obama outputs, instead of learning a meaningful mapping.", "Being able to establish the correct temporal cycle-consistency between domains, ours and RecycleGAN can generate correct blinking motions.", "Our model outperforms the latter in terms of coherent detail that is generated.", "In paired as well as unpaired data domains, we have demonstrated that it is possible to learn stable temporal functions with GANs thanks to the proposed discriminator architecture and PP loss.", "We have shown that this yields coherent and sharp details for VSR problems that go beyond what can be achieved with direct supervision.", "In UVT, we have shown that our architecture guides the training process to successfully establish the spatio-temporal cycle consistency between two domains.", "These results are reflected in the proposed metrics and user studies.", "While our method generates very realistic results for a wide range of natural images, our method can generate temporally coherent yet sub-optimal details in certain cases such as under-resolved faces and text in VSR, or UVT tasks with strongly different motion between two domains.", "For the latter case, it would be interesting to apply both our method and motion translation from concurrent work (Chen et al., 2019) .", "This can make it easier for the generator to learn from our temporal self supervision.", "In our method, the interplay of the different loss terms in the non-linear training procedure does not provide a guarantee that all goals are fully reached every time.", "However, we found our method to be stable over a large number of training runs, and we anticipate that it will provide a very useful basis for wide range of generative models for temporal data sets." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.29999998211860657, 0.1249999925494194, 0.0833333283662796, 0, 0.07407406717538834, 0, 0.1599999964237213, 0.13333332538604736, 0.260869562625885, 0, 0.20689654350280762, 0.09090908616781235, 0.060606054961681366, 0.1621621549129486, 0.09302325546741486, 0.054054051637649536, 0.24242423474788666, 0.06666666269302368, 0, 0.06451612710952759, 0.052631575614213943, 0.0714285671710968, 0.1111111044883728, 0.0555555522441864, 0, 0.0624999962747097, 0.07407406717538834, 0, 0.25641024112701416, 0.1875, 0, 0, 0.07843136787414551, 0, 0.1599999964237213, 0, 0.1428571343421936 ]
r1ltgp4FwS
true
[ "We propose temporal self-supervisions for learning stable temporal functions with GANs." ]
[ "The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. ", "Cross-lingual understanding has made progress in this area using language universal representations.", "However, most current approaches focus on the problem as one of aligning language and do not address the natural domain drift across languages and cultures. ", "In this paper, We address the domain gap in the setting of semi-supervised cross-lingual document classification, where labeled data is available in a source language and only unlabeled data is available in the target language. ", "We combine a state-of-the-art unsupervised learning method, masked language modeling pre-training, with a recent method for semi-supervised learning, Unsupervised Data Augmentation (UDA), to simultaneously close the language and the domain gap. ", "We show that addressing the domain gap in cross-lingual tasks is crucial. ", "We improve over strong baselines and achieve a new state-of-the-art for cross-lingual document classification.", "Recent advances in Natural Language Processing have enabled us to train high-accuracy systems for many language tasks.", "However, training an accurate system still requires a large amount of training data.", "It is inefficient to collect data for a new task and it is virtually impossible to annotate a separate data set for each language.", "To go beyond English and a few popular languages, we need methods that can learn from data in one language and apply it to others.", "Cross-Lingual Understanding (XLU) has emerged as a field concerned with learning models on data in one language and applying it to others.", "Much of the work in XLU focuses on the zero-shot setting, which assumes that labeled data is available in one source language (usually English) and not in any of the target languages in which the model is evaluated.", "The labeled data can be used to train a high quality model in the source language.", "One then relies on general domain parallel corpora and monolingual corpora to learn to 'transfer' from the source language to the target language.", "Transfer methods can explicitly rely on machine translation models built from such parallel corpora.", "Alternatively, one can use such corpora to learn language universal representations to produce features to train a model in one language, which one can directly apply to other languages.", "Such representations can be in the form of cross-lingual word embeddings, contextual word embeddings, or sentence embeddings (Ruder et al. (2017) ; Lample & Conneau (2019) ; Schwenk & Douze (2017) ).", "Using such techniques, recent work has demonstrated reasonable zero-shot performance for crosslingual document classification (Schwenk & Li (2018) ) and natural language inference (Conneau et al. (2018) ).", "What we have so far described is a simplified view of XLU, which focuses solely on the problem of aligning languages.", "This view assumes that, if we had access to a perfect translation system, and translated our source training data into the target language, the resulting model would perform as well as if we had collected a similarly sized labeled dataset directly in our target language.", "Existing work in XLU to date also works under this assumption.", "However, in real world applications, we must also bridge the domain gap across different languages, as well as the language gap.", "No task is ever identical in two languages, even if we group them under the same label, e.g. 
'news document classification' or 'product reviews sentiment analysis'.", "A Chinese customer might express sentiment differently than his American counterpart.", "Or French news might simply cover different topics than English news.", "As a result, any approach which ignores this domain drift will fall short of native in-language performance in real world XLU.", "In this paper, we propose to jointly tackle both language and domain transfer.", "We consider the semi-supervised XLU task, where in addition to labeled data in a source language, we have access to unlabeled data in the target language.", "Using this unlabeled data, we combine the aforementioned cross-lingual methods with recently proposed unsupervised domain adaptation and weak supervision techniques on the task of cross-lingual document classification (XLDC).", "In particular, we focus on two approaches for domain adaptation.", "The first method is based on masked language model (MLM) pre-training (as in Devlin et al. (2018) ) using unlabeled target language corpora.", "Such methods have been shown to improve over general purpose pre-trained models such as BERT in the weakly supervised setting (Lee et al. (2019) ; Han & Eisenstein (2019) ).", "The second method is unsupervised data augmentation (UDA) (Xie et al. (2019) ), where synthetic paraphrases are generated from the unlabeled corpus, and the model is trained on a label consistency loss.", "While both of these techniques were proposed previously, in both cases there are some open questions when applying them on the cross-lingual problems.", "For instance when performing data augmentation, one could generate augmented paraphrases in either the source or the target language or both.", "We experiment with various approaches and provide guidelines with ablation studies.", "Furthermore, we find that the value of additional labeled data in the source language is limited due to the train-test discrepancy of XLDC tasks.", "We propose to alleviate this issue by using self-training technique to do the domain adaptation from the source language into the target language.", "By combining these methods, we are able to reduce error rates by an average 44% over a strong XLM baseline, setting a new state-of-the-art for cross-lingual document classification.", "In this paper, we tackled the domain mismatch challenge in cross-lingual document classification -an important, yet often overlooked problem in cross-lingual understanding.", "We provided evidence for the existence and importance of this problem, even when utilizing strong pre-trained cross-lingual representations.", "We proposed a framework combining cross-lingual transfer techniques with three domain adaptation methods; unsupervised data augmentation, masked language model pre-training and self-training, which can leverage unlabeled data in the target language to moderate the domain gap.", "Our results show that by removing the domain discrepancy, we can close the performance gap between crosslingual transfer and monolingual baselines almost completely for the document classification task.", "We are also able to improve the state-of-the-art in this area by a large margin.", "While document classification is by no means the most challenging task for XLU, we believe the strong gains that we demonstrated could be extended to other cross-lingual tasks, such as cross-lingual question answering and event detection.", "Developing cross-lingual methods which are competitive with in-language models for real world, semantically challenging 
NLP problems remains an open problem and a subject of future research.", "The experiments in this paper are based on the PyTorch (Paszke et al., 2017) and Pytext (Aly et al., 2018) packages.", "We use Adam (Kingma & Ba, 2014) as the optimizer.", "For all experiments, we grid search the learning rate in the set {5 × 10^-6, 1 × 10^-5, 2 × 10^-5}.", "When using the UDA method, we also try the three different annealing strategies introduced in the UDA paper (Xie et al., 2019), and the λ in (1) is always set to 1.", "The batch size in the Ft and UDA+Self methods is 128.", "In the UDA method, the batch size is 16 for the labeled data and 80 for the unlabeled data.", "Due to the limitation of GPU memory, in all experiments we set the sample length to 256 and cut the input tokens exceeding this length.", "Finally, we report the results with the best hyper-parameters.", "As for the augmentation process, we sweep the temperature which controls the diversity of beam search in translation.", "The best temperatures for \"en-de, en-fr, en-es\" and \"en-ru\" are 1.0 and 0.6, respectively, and the sampling space is the whole vocabulary.", "In the \"en-zh\" setting, the temperature is 1.0 and the sampling space is the top 100 tokens in the vocabulary.", "We note that this uses the Facebook production translation models, and results could vary when other translation systems are applied.", "For reproducibility, we will release the augmented datasets that we generated." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1249999925494194, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryeot3EYPB
true
[ "Semi-supervised Cross-lingual Document Classification" ]
[ "A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data.", "In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back propagation algorithm used for neural networks (Eisner (2016)). ", "Do these observations suggest that, despite their many apparent differences, HMMs are a special case of RNNs? ", "In this paper, we investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, to answer this question.", "In particular, we investigate three key design factors—independence assumptions between the hidden states and the observation, the placement of softmax, and the use of non-linearity—in order to pin down their empirical effects. ", "We present a comprehensive empirical study to provide insights on the interplay between expressivity and interpretability with respect to language modeling and parts-of-speech induction.", "Sequence is a common structure among many forms of naturally occurring data, including speech, text, video, and DNA.", "As such, sequence modeling has long been a core research problem across several fields of machine learning and AI.", "By far the most widely used approach for decades is the Hidden Markov Models of BID1 ; BID10 , which assumes a sequence of discrete latent variables to generate a sequence of observed variables.", "When the latent variables are unobserved, unsupervised training of HMMs can be performed via the Baum-Welch algorithm (which, in turn, is based on the forward-backward algorithm), as a special case of ExpectationMaximization (EM) BID4 ).", "Importantly, the discrete nature of the latent variables has the benefit of interpretability, as they recover contextual clustering of the output variables.In contrast, Recurrent Neural Networks (RNNs), introduced later in the form of BID11 and BID6 networks, assume continuous latent representations.", "Notably, unlike the hidden states of HMMs, there is no probabilistic interpretation of the hidden states of RNNs, regardless of their many different architectural variants (e.g. LSTMs of BID9 , GRUs of BID3 and RANs of BID13 ).Despite", "their many apparent differences, both HMMs and RNNs model hidden representations for sequential data. At the", "heart of both models are: a state at time t, a transition function f : h t−1 → h t in latent space, and an emission function g : h t → x t . In addition", ", it has been noted that the backward computation in the Baum-Welch algorithm is a special case of back-propagation for neural networks BID5 ). Therefore,", "a natural question arises as to the fundamental relationship between HMMs and RNNs. Might HMMs", "be a special case of RNNs?In this paper", ", we investigate a series of architectural transformations between HMMs and RNNsboth through theoretical derivations and empirical hybridization. In particular", ", we demonstrate that the forward marginal inference for an HMM-accumulating forward probabilities to compute the marginal emission and hidden state distributions at each time step-can be reformulated as equations for computing an RNN cell. In addition,", "we investigate three key design factors-independence assumptions between the hidden states and the observation, the placement of soft-max, and the use of non-linearity-in order to pin down their empirical effects. Above each of", "the models we indicate the type of transition and emission cells used. 
H for HMM, R", "for RNN/Elman and F is a novel Fusion defined in §3.3. It is particularly", "important to understanding this work to track when a vector is a distribution (resides in a simplex) versus in the unit cube (e.g. after a sigmoid non-linearity). These cases are indicated", "by c i and c i , respectively.Our work is supported by several earlier works such as BID23 and BID25 that have also noted the connection between RNNs and HMMs (see §7 for more detailed discussion). Our contribution is to provide", "the first thorough theoretical investigation into the model variants, carefully controlling for every design choices, along with comprehensive empirical analysis over the spectrum of possible hybridization between HMMs and RNNs.We find that the key elements to better performance of the HMMs are the use of a sigmoid instead of softmax linearity in the recurrent cell, and the use of an unnormalized output distribution matrix in the emission computation. On the other hand, multiplicative", "integration of the previous hidden state and input embedding, and intermediate normalizations in the cell computation are less consequential. We also find that HMM outperforms", "other RNNs variants for unsupervised prediction of the next POS tag, demonstrating the advantages of discrete bottlenecks for increased interpretability.The rest of the paper is structured as follows. First, we present in §2 the derivation", "of HMM marginal inference as a special case of RNN computation. Next in §3, we explore a gradual transformation", "of HMMs into RNNs. In §4, we present the reverse transformation of", "Elman RNNs back to HMMs. Finally, building on these continua, we provide", "empirical analysis in §5 and §6 to pin point the empirical effects of varying design choices over the possible hybridization between HMMs and RNNs. We discuss related work in §7 and conclude in §", "8.", "In this work, we presented a theoretical and empirical investigation into the model variants over the spectrum of possible hybridization between HMMs and RNNs.", "By carefully controlling for every design choices, we provide new insights into several factors including independence assumptions, the placement of softmax, and the use of nonliniarity and how these choices influence the interplay between expressiveness and interpretability.", "Comprehensive empirical results demonstrate that the key elements to better performance of the HMM are the use of a sigmoid instead of softmax linearity in the recurrent cell, and the use of an unnormalized output distribution matrix in the emission computation.", "Multiplicative integration of the previous hidden state and input embedding, and intermediate normalizations in the cell computation are less consequential.", "We also find that HMM outperforms other RNNs variants in a next POS tag prediction task, which demonstrates the advantages of models with discrete bottlenecks in increased interpretability." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19512194395065308, 0.19607841968536377, 0.2926829159259796, 0.6382978558540344, 0.19230768084526062, 0.30434781312942505, 0.1428571343421936, 0.1395348757505417, 0.07692307233810425, 0.178571417927742, 0.07017543166875839, 0.145454540848732, 0.14999999105930328, 0.1538461446762085, 0.1666666567325592, 0.21052631735801697, 0.25, 0.6511628031730652, 0.0357142798602581, 0.19607841968536377, 0.10256409645080566, 0.10526315122842789, 0.039215680211782455, 0.1355932205915451, 0.2222222238779068, 0.12765957415103912, 0.03703703358769417, 0.19512194395065308, 0.11428570747375488, 0.1111111044883728, 0.26923075318336487, 0.3478260934352875, 0.2142857164144516, 0.145454540848732, 0.0952380895614624, 0.11764705181121826 ]
HyesB2RqFQ
true
[ "Are HMMs a special case of RNNs? We investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization and provide new insights." ]
[ "We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference.", "This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate).", "We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier.", "However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable.", "Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.", "Deep learning has led to tremendous advances in supervised learning BID39 BID20 BID46 ; however, unsupervised learning remains a challenging area.", "Recent advances in variational inference (VI) BID23 BID33 , have led to an explosion of research in the area of deep latent-variable models and breakthroughs in our ability to model natural high-dimensional data.", "This class of models typically optimize a lower bound on the log-likelihood of the data known as the evidence lower bound (ELBO), and leverage the \"reparameterization trick\" to make large-scale training feasible.However, a number of papers have observed that VAEs trained with powerful decoders can learn to ignore the latent variables BID44 BID6 .", "We demonstrate this empirically and explain the issue theoretically by deriving the ELBO in terms of the mutual information between X, the data, and Z, the latent variables.", "Having done so, we show that the previously-described β-VAE objective (Higgins et al., 2017 ) has a theoretical justification in terms of a Legendre-transformation of a constrained optimization of the mutual information.", "This leads to the core point of this paper, which is that the optimal rate of information in a model is taskdependent, and optimizing the ELBO directly makes the selection of that rate purely a function of architectural choices, whereas by using β-VAE or other constrained optimization objectives, practitioners can learn models with optimal rates for their particular task without having to do extensive architectural search.Mutual information provides a reparameterization-independent measure of dependence between two random variables.", "Computing mutual information exactly in high dimensions is problematic BID29 Gao et al., 2017 ), so we turn to recently developed tools in variational inference to approximate it.", "We find that a natural lower and upper bound on the mutual information between the input and latent variable can be simply related to the ELBO, and understood in terms of two terms: (1) a lower bound that depends on the distortion, or how well an input can be reconstructed through the encoder and decoder, and (2) an upper bound that measures the rate, or how much information is retained about the input.", "Together these terms provide a unifying perspective on the set of optimal models given a dataset, and show that there exists a continuum of models that make very different trade-offs in terms of rate and distortion.By leveraging additional information about the amount of information 
contained in the latent variable, we show that we can recover the ground-truth generative model used to create the data in a toy model.", "We perform extensive experiments on MNIST and Omniglot using a variety of encoder, decoder, and prior architectures and demonstrate how our framework provides a simple and intuitive mechanism for understanding the trade-offs made by these models.", "We further show that we can control this tradeoff directly by optimizing the β-VAE objective, rather than the ELBO.", "By varying β, we can learn many models with the same architecture and comparable generative performance (in terms of marginal data log likelihood), but that exhibit qualitatively different behavior in terms of the usage of the latent variable and variability of the decoder.", "We have motivated the β-VAE objective on information theoretic grounds, and demonstrated that comparing model architectures in terms of the rate-distortion plot offers a much better look at their performance and tradeoffs than simply comparing their marginal log likelihoods.", "Additionally, we have shown a simple way to fix models that ignore the latent space due to the use of a powerful decoder: simply reduce β and retrain.", "This fix is much easier to implement than other solutions that have been proposed in the literature, and comes with a clear theoretical justification.", "We strongly encourage future work to report rate and distortion values independently, rather than just reporting the log likelihood.", "If future work proposes new architectural regularization techniques, we suggest the authors train their objective at various rate distortion tradeoffs to demonstrate and quantify the region of the RD plane where their method dominates.Through a large set of experiments we have demonstrated the performance at various rates and distortion tradeoffs for a set of representative architectures currently under study, confirming the power of autoregressive decoders, especially at low rates.", "We have also shown that current approaches seem to have a hard time achieving high rates at low distortion.", "This suggests a set of experiments with a simple encoder / decoder pair but a powerful autoregressive marginal posterior approximation, which should in principle be able to reach the autoencoding limit, with vanishing distortion and rates approaching the data entropy.Interpreting the β-VAE objective as a constrained optimization problem also hints at the possibility of applying more powerful constrained optimization techniques, which we hope will be able to advance the state of the art in unsupervised representation learning.", "A RESULTS ON OMNIGLOT FIG4 plots the RD curve for various models fit to the Omniglot dataset BID25 , in the same form as the MNIST results in FIG2 .", "Here we explored βs for the powerful decoder models ranging from 1.1 to 0.1, and βs of 0.9, 1.0, and 1.1 for the weaker decoder models.", "On Omniglot, the powerful decoder models dominate over the weaker decoder models.", "The powerful decoder models with their autoregressive form most naturally sit at very low rates.", "We were able to obtain finite rates by means of KL annealing.", "Further experiments will help to fill in the details especially as we explore differing β values for these architectures on the Omniglot dataset.", "Our best achieved ELBO was at 90.37 nats, set by the ++-model with β = 1.0 and KL annealing.", "This model obtains R = 0.77, D = 89.60, ELBO = 90.37 and is nearly auto-decoding.", "We found 14 models with ELBOs below 
91.2 nats, ranging in rates from 0.0074 nats to 10.92 nats. Similar to FIG3, in FIG5 we show sample reconstruction and generated images from the same \"-+v\" model family trained with KL annealing but at various βs.", "Just like in the MNIST case, this demonstrates that we can smoothly interpolate between auto-decoding and auto-encoding behavior in a single model family, simply by adjusting the β value." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.0624999962747097, 0.14999999105930328, 0.045454543083906174, 0.1538461446762085, 0, 0.19512194395065308, 0.0714285671710968, 0.22857142984867096, 0.09999999403953552, 0.0810810774564743, 0.10256409645080566, 0.16949152946472168, 0.13333332538604736, 0.13636362552642822, 0.06666666269302368, 0.08510638028383255, 0.21276594698429108, 0.10810810327529907, 0.0555555522441864, 0.12903225421905518, 0.0624999962747097, 0.06666666269302368, 0.056338027119636536, 0, 0.1249999925494194, 0, 0, 0.1666666567325592, 0, 0.060606054961681366, 0.06896550953388214, 0.07547169178724289, 0.05128204822540283 ]
H1rRWl-Cb
true
[ "We provide an information theoretic and experimental analysis of state-of-the-art variational autoencoders." ]
[ "Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs.", "GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes.", "Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks.", "However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations.", "Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures.", "Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures.", "We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test.", "We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.", "Learning with graph structured data, such as molecules, social, biological, and financial networks, requires effective representation of their graph structure BID14 .", "Recently, there has been a surge of interest in Graph Neural Network (GNN) approaches for representation learning of graphs BID23 BID13 BID21 BID34 BID37 .", "GNNs broadly follow a recursive neighborhood aggregation (or message passing) scheme, where each node aggregates feature vectors of its neighbors to compute its new feature vector BID37 BID12 .", "After k iterations of aggregation, a node is represented by its transformed feature vector, which captures the structural information within the node's k-hop neighborhood.", "The representation of an entire graph can then be obtained through pooling BID39 , for example, by summing the representation vectors of all nodes in the graph.Many GNN variants with different neighborhood aggregation and graph-level pooling schemes have been proposed BID31 BID3 BID6 BID8 BID13 BID19 BID21 BID23 BID34 BID28 BID37 BID29 BID35 BID39 .", "Empirically, these GNNs have achieved state-of-the-art performance in many tasks such as node classification, link prediction, and graph classification.", "However, the design of new GNNs is mostly based on empirical intuition, heuristics, and experimental trial-anderror.", "There is little theoretical understanding of the properties and limitations of GNNs, and formal analysis of GNNs' representational capacity is limited.Here, we present a theoretical framework for analyzing the representational power of GNNs.", "We formally characterize how expressive different GNN variants are in learning to represent and distinguish between different graph structures.", "Our framework is inspired by the close connection between GNNs and the Weisfeiler-Lehman (WL) graph isomorphism test BID36 , a powerful test known to distinguish a broad class of graphs BID2 .", "Similar to GNNs, the WL test iteratively updates a given node's feature vector by aggregating feature vectors of its network neighbors.", "What makes the WL test so powerful is its injective aggregation update that maps different node neighborhoods to different feature vectors.", "Our key insight is that a GNN can have as large discriminative power as the WL test if the 
GNN's aggregation scheme is highly expressive and can model injective functions.To mathematically formalize the above insight, our framework first represents the set of feature vectors of a given node's neighbors as a multiset, i.e., a set with possibly repeating elements.", "Then, the neighbor aggregation in GNNs can be thought of as an aggregation function over the multiset.", "Hence, to have strong representational power, a GNN must be able to aggregate different multisets into different representations.", "We rigorously study several variants of multiset functions and theoretically characterize their discriminative power, i.e., how well different aggregation functions can distinguish different multisets.", "The more discriminative the multiset function is, the more powerful the representational power of the underlying GNN.Our main results are summarized as follows:", "1) We show that GNNs are at most as powerful as the WL test in distinguishing graph structures.2) We establish conditions on the neighbor aggregation and graph readout functions under which the resulting GNN is as powerful as the WL test.3) We identify graph structures that cannot be distinguished by popular GNN variants, such as GCN BID21 and GraphSAGE (Hamilton et al., 2017a) , and we precisely characterize the kinds of graph structures such GNN-based models can capture.", "In this paper, we developed theoretical foundations for reasoning about the expressive power of GNNs, and proved tight bounds on the representational capacity of popular GNN variants.", "We also designed a provably maximally powerful GNN under the neighborhood aggregation framework.", "An interesting direction for future work is to go beyond neighborhood aggregation (or message passing) in order to pursue possibly even more powerful architectures for learning with graphs.", "To complete the picture, it would also be interesting to understand and improve the generalization properties of GNNs as well as better understand their optimization landscape.A PROOF FOR LEMMA 2Proof.", "Suppose after k iterations, a graph neural network A has A(G 1 ) = A(G 2 ) but the WL test cannot decide G 1 and G 2 are non-isomorphic.", "It follows that from iteration 0 to k in the WL test, G 1 and G 2 always have the same collection of node labels.", "In particular, because G 1 and G 2 have the same WL node labels for iteration i and i + 1 for any i = 0, ..., k − 1, G 1 and G 2 have the same collection, i.e. multiset, of WL node labels l DISPLAYFORM0 as well as the same collection of node neighborhoods l DISPLAYFORM1 .", "Otherwise, the WL test would have obtained different collections of node labels at iteration i + 1 for G 1 and G 2 as different multisets get unique new labels.The WL test always relabels different multisets of neighboring nodes into different new labels.", "We show that on the same graph G = G 1 or G 2 , if WL node labels l DISPLAYFORM2 u for any iteration i.", "This apparently holds for i = 0 because WL and GNN starts with the same node features.", "Suppose this holds for iteration j, if for any u, v, l DISPLAYFORM3 , then it must be the case that DISPLAYFORM4 By our assumption on iteration j, we must have DISPLAYFORM5 In the aggregation process of the GNN, the same AGGREGATE and COMBINE are applied.", "The same input, i.e. 
neighborhood features, generates the same output.", "Thus, h DISPLAYFORM6 .", "By induction, if WL node labels l DISPLAYFORM7 u , we always have GNN node features h DISPLAYFORM8 u for any iteration i.", "This creates a valid mapping φ such that h DISPLAYFORM9 v ) for any v ∈ G. It follows from the fact that G 1 and G 2 have the same multiset of WL neighborhood labels that G 1 and G 2 also have the same collection of GNN neighborhood features DISPLAYFORM10 are the same.", "In particular, we have the same collection of GNN node features DISPLAYFORM11 for G 1 and G 2 .", "Because the graph-level readout function is permutation invariant with respect to the collection of node features, A(G 1 ) = A(G 2 ).", "Hence we have reached a contradiction." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.25, 0.12121211737394333, 0.1764705777168274, 0.4571428596973419, 0.2222222238779068, 0.550000011920929, 0.2702702581882477, 0.10810810327529907, 0.14999999105930328, 0.1395348757505417, 0.14999999105930328, 0.1515151411294937, 0.1111111044883728, 0.3030303120613098, 0.3720930218696594, 0.22857142984867096, 0.2666666507720947, 0.1621621549129486, 0.10810810327529907, 0.20895521342754364, 0.1875, 0.12121211737394333, 0.1463414579629898, 0.2702702581882477, 0.21621620655059814, 0.4285714328289032, 0.4000000059604645, 0.09302324801683426, 0.17777776718139648, 0.1428571343421936, 0.14999999105930328, 0.1538461446762085, 0.16326530277729034, 0.1463414579629898, 0.23529411852359772, 0.1428571343421936, 0.07407406717538834, 0, 0.10526315122842789, 0.22641508281230927, 0.29411762952804565, 0.10526315122842789, 0.08695651590824127 ]
ryGs6iA5Km
true
[ "We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN." ]
[ "We introduce MTLAB, a new algorithm for learning multiple related tasks with strong theoretical guarantees.", "Its key idea is to perform learning sequentially over the data of all tasks, without interruptions or restarts at task boundaries.", "Predictors for individual tasks are derived from this process by an additional online-to-batch conversion step.\n\n", "By learning across task boundaries, MTLAB achieves a sublinear regret of true risks in the number of tasks.", "In the lifelong learning setting, this leads to an improved generalization bound that converges with the total number of samples across all observed tasks, instead of the number of examples per tasks or the number of tasks independently.", "At the same time, it is widely applicable: it can handle finite sets of tasks, as common in multi-task learning, as well as stochastic task sequences, as studied in lifelong learning.", "In recent years, machine learning has become a core technology in many commercially relevant applications.", "One observation in this context was that real-world learning tasks often do not occur in isolation, but rather as collections or temporal sequences of many, often highly related tasks.", "Examples include click-through rate prediction for online ads, personalized voice recognition for smart devices, or handwriting recognition of different languages.Multi-task learning BID3 has been developed exactly to handle such situations.", "It is based on an intuitive idea that sharing information between tasks should help the learning process and therefore lead to improved prediction quality.", "In practice, however, this is not guaranteed and multi-task learning can even lead to a reduction of prediction quality, so called negative transfer.", "The question when negative transfer occurs and how it can be avoided has triggered a surge of research interest to better understanding the theoretical properties of multi-task learning, as well as related research areas, such as lifelong learning BID1 BID9 , where more and more tasks occur sequentially, and task curriculum learning , where the order in which to learn tasks needs to be determined.In this work, we describe a new approach to multi-task learning that has strong theoretical guarantees, in particular improving the rate of convergence over some previous work.", "Our core idea is to decouple the process of predictor learning from the task structure.", "This is also the main difference of our approach to previous work, which typically learned one predictor for each task.", "We treat the available data for all tasks as parts of a single large online-learning problem, in which individual tasks simply correspond to subsets of the data stream that is processed.", "To obtain predictors for the individual tasks, we make use of online-to-batch conversion methods.", "We name the method MTLAB (multi-task learning across boundaries).Our", "main contribution is a sublinear bound on the task regret of MTLAB with true risks. As", "a corollary, we show that MTLAB improves the existing convergence rates in the case of lifelong learning. From", "the regret-type bounds, we derive high probability bounds on the expected risk of each task, which constitutes a second main contribution of our work.For real-world problems, not all tasks might be related to all previous ones. 
Our", "third contribution is a theoretically well-founded, yet practical, mechanism to avoid negative transfer in this case: we show that by splitting the set of tasks into homogeneous groups and using MTLAB to learn individual predictors on each of the resulting subsequences of samples, one obtains the same strong guarantees for each of the learned predictors while avoiding negative transfer.", "We introduced a new and widely applicable algorithm for sequentially learning of multiple tasks.", "By performing learning across tasks boundaries it is able to achieve a sublinear regret bound and improves the convergence rates in the lifelong learning scenario.", "MTLAB's way of not interrupting or restarting the learning process at task boundaries results in faster convergence rates than what can be achieved by learning individual predictors for each task: in particular, the generalization error decreases with the product of the number of tasks and the number of samples per task, instead of separately in each of these quantities.", "We also introduced a mechanism for the situation when the tasks to be learned are not all related to each other.", "We show that by constructing suitable subsequences of task, the convergence properties can hold even in this case." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.3333333432674408, 0.06451612710952759, 0.1875, 0.13636362552642822, 0.19512194395065308, 0.06666666269302368, 0.09756097197532654, 0.13636362552642822, 0.1538461446762085, 0.10526315122842789, 0.14457830786705017, 0.20689654350280762, 0.17142856121063232, 0.1428571343421936, 0.13793103396892548, 0.1599999964237213, 0.12903225421905518, 0.1875, 0.03999999538064003, 0.0952380895614624, 0.27586206793785095, 0.10526315122842789, 0.1666666567325592, 0.11764705181121826, 0.12121211737394333 ]
HkllV5Bs24
true
[ "A new algorithm for online multi-task learning that learns without restarts at the task borders" ]
[ "Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data.", "In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability.", "We study the impact of linguistic properties of the languages, the architecture of the model, and of the learning objectives.", "The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition.", "Among our key conclusions is the fact that lexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an important part of it", "Embeddings of natural language text via unsupervised learning, coupled with sufficient supervised training data, have been ubiquitous in NLP in recent years and have shown success in a wide range of monolingual NLP tasks, mostly in English.", "Training models for other languages have been shown more difficult, and recent approaches relied on bilingual embeddings that allowed the transfer of supervision in high resource languages like English to models in lower resource languages; however, inducing these bilingual embeddings required some level of supervision (Upadhyay et al., 2016) .", "Multilingual BERT 1 (M-BERT), a Transformer-based (Vaswani et al., 2017) language model trained on raw Wikipedia text of 104 languages suggests an entirely different approach.", "Not only the model is contextual, but its training also requires no supervision -no alignment between the languages is done.", "Nevertheless, and despite being trained with no explicit cross-lingual objective, M-BERT produces a representation that seems to generalize well across languages for a variety of downstream tasks (Wu & Dredze, 2019) .", "In this work, we attempt to develop an understanding of the success of M-BERT.", "We study a range of aspects on a couple of different NLP tasks, in order to identify the key components in the success of the model.", "Our study is done in the context of only two languages, source (typically English) and target (multiple, quite different languages).", "By involving only a pair of languages, we can study the performance on a given target language, ensuring that it is influenced only by the cross-lingual transfer from the source language, without having to worry about a third language interfering.", "We analyze the two-languages version of M-BERT (B-BERT, from now on) in three orthogonal dimensions:", "(i) Linguistics properties and similarities of target and source languages;", "(ii) Network Architecture, and", "(iii) Input and Learning Objective.", "One hypothesis that came up when the people thoughts about the success of M-BERT is due to some level of language similarity.", "This could be lexical similarity (shared words or word-parts) or structural similarities, or both.", "We, therefore, investigate the contribution of word-piece overlap -the extent to which the same word-pieces appear in both source and target languages -and distinguish it from other similarities, which we call structural similarity between the source and target languages.", "Surprisingly, as we show, B-BERT is cross-lingual even when there is absolutely no word-piece overlap.", "That is, other aspects of language similarity must be contributing to the 
cross-lingual capabilities of the model.", "This is contrary to Pires et al. (2019) hypothesis that M-BERT gains its power from shared word-pieces.", "Furthermore, we show that the amount of word-piece overlap in B-BERT's training data contributes little to performance improvements.", "Our study of the model architecture addresses the importance of", "(i) The network depth,", "(ii) the number of attention heads, and", "(iii) the total number of model parameters in B-BERT.", "Our results suggest that depth and the total number of parameters of B-BERT are crucial for both monolingual and cross-lingual performance, whereas multi-head attention is not a significant factor -a single attention head B-BERT can already give satisfactory results.", "To understand the role of the learning objective and the input representation, we study the effect of", "(i) the next sentence prediction objective,", "(ii) the language identifier in the training data, and", "(iii) the level of tokenization in the input representation (character, word-piece, or word tokenization).", "Our results indicate that the next sentence prediction objective actually hurts the performance of the model while identifying the language in the input does not affect B-BERT's performance crosslingually.", "Our experiments also show that character-level and word-level tokenization of the input results in significantly worse performance than word-piece level tokenization.", "Overall, we provide an extensive set of experiments on three source-target language pairs, EnglishSpanish, English-Russian, and English-Hindi.", "We chose these target languages since they vary in scripts and typological features.", "We evaluate the performance of B-BERT on two very different downstream tasks: cross-lingual Named Entity Recognition -a sequence prediction task the requires only local context -and cross-lingual Textual Entailment Dagan et al. (2013) that requires more global representation of the text.", "Ours is not the first study of M-BERT.", "(Wu & Dredze, 2019) and (Pires et al., 2019) identified the cross-lingual success of the model and tried to understand it.", "The former by considering M-BERT layerwise, relating cross-lingual performance with the amount of shared word-pieces and the latter by considering the model's ability to transfer between languages as a function of word order similarity in languages.", "However, both works treated M-BERT as a black box and compared M-BERT's performance on different languages.", "This work, on the other hand, examines how B-BERT performs cross-lingually by probing its components, along multiple aspects.", "We also note that some of the architectural conclusions have been observed earlier, if not investigated, in other contexts.", "; Yang et al. (2019) argued that the next Sentence prediction objective of BERT (the monolingual model) is not very useful; we show that this is the case in the cross-lingual setting.", "Voita et al. 
(2019) prunes attention heads for a transformer based machine translation model and argues that most attention heads are not important; in this work, we show that the number of attention heads is not important in the cross-lingual setting.", "Our contributions are threefold:", "(i) we provide the first extensive study of the aspects of the multilingual BERT that give rise to its cross-lingual ability.", "(ii) We develop a methodology that facilitates the analysis of similarities between languages and their impact on cross-lingual models; we do this by mapping English to a Fake-English language, that is identical in all aspects to English but shares not word-pieces with any target language.", "Finally,", "(iii) we develop a set of insights into B-BERT, along linguistics, architectural, and learning dimensions, that would contribute to further understanding and to the development of more advanced cross-lingual neural models.", "This paper provides a systematic empirical study addressing the cross-lingual ability of B-BERT.", "The analysis presented here covers three dimensions: (1) Linguistics properties and similarities of the source and target languages, (2) Neural Architecture, and (3) Input representation and Learning Objective.", "In order to gauge the language similarity aspect needed to make B-BERT successful, we created a new language -Fake-English -and this allows us to study the effect of word-piece overlap while maintaining all other properties of the source language.", "Our experiments reveal some interesting and surprising results like the fact that word-piece overlap on the one hand, and multi-head attention on the other, are both not significant, whereas structural similarity and the depth of B-BERT are crucial for its cross-lingual ability.", "While, in order to better control interference among languages, we studied the cross-lingual ability of B-BERT instead of those of M-BERT, it would be interesting now to extend this study, allowing for more interactions among languages.", "We leave it to future work to study these interactions.", "In particular, one important question is to understand the extent to which adding to M-BERT languages that are related to the target language, helps the model's cross-lingual ability.", "We introduced the term Structural Similarity, despite its obscure definition, and show its significance in cross-lingual ability.", "Another interesting future work could be to develop a better definition and, consequently, a finer set of experiments, to better understand the Structural similarity and study its individual components.", "Finally, we note an interesting observation made in Table 8 .", "We observe a drastic drop in the entailment performance of B-BERT when the premise and hypothesis are in different languages.", "(This data was created using XNLI when in the original form the languages contain same premise and hypothesis pair).", "One of the possible explanations could be that BERT is learning to make textual entailment decisions by matching words or phrases in the premise to those in the hypothesis.", "This question, too, is left as a future direction.", "In the main text, we defined structural similarity as all the properties of a language that is invariant to the script of the language, like morphology, word-ordering, word-frequency, etc..", "Here, we analyze 2 sub-components of structural similarity -word-ordering similarity and word-frequency (Unigram frequency) similarity to understand the concept of structural similarity better." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.060606058686971664, 0.07407406717538834, 0.0952380895614624, 0.0555555522441864, 0.05714285373687744, 0.05128204822540283, 0.03999999910593033, 0.11764705181121826, 0, 0.052631575614213943, 0.0952380895614624, 0.0714285671710968, 0.0714285671710968, 0.0476190447807312, 0.08695651590824127, 0.11764705181121826, 0, 0, 0.0714285671710968, 0, 0.04999999701976776, 0, 0.08695651590824127, 0, 0.07692307233810425, 0.1249999925494194, 0, 0.13333332538604736, 0.11764705181121826, 0.0476190447807312, 0.0952380895614624, 0, 0, 0.0952380895614624, 0.0624999962747097, 0.0714285671710968, 0.07999999821186066, 0, 0.045454543083906174, 0.1249999925494194, 0.07407406717538834, 0.052631575614213943, 0, 0, 0.07407406717538834, 0.0555555522441864, 0.04878048598766327, 0, 0.07692307233810425, 0.04081632196903229, 0.0555555522441864, 0.0952380895614624, 0.060606058686971664, 0.04999999701976776, 0.04651162400841713, 0.04999999701976776, 0, 0, 0, 0.05882352590560913, 0, 0.07692307233810425, 0, 0.060606058686971664, 0, 0.060606058686971664, 0.07692307233810425 ]
HJeT3yrtDr
true
[ "Cross-Lingual Ability of Multilingual BERT: An Empirical Study" ]
[ "We analyze the dynamics of training deep ReLU networks and their implications on generalization capability.", "Using a teacher-student setting, we discovered a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes for deep ReLU networks.", "With this relationship and the assumption of small overlapping teacher node activations, we prove that (1) student nodes whose weights are initialized to be close to teacher nodes converge to them at a faster rate, and (2) in over-parameterized regimes and 2-layer case, while a small set of lucky nodes do converge to the teacher nodes, the fan-out weights of other nodes converge to zero.", "This framework provides insight into multiple puzzling phenomena in deep learning like over-parameterization, implicit regularization, lottery tickets, etc.", "We verify our assumption by showing that the majority of BatchNorm biases of pre-trained VGG11/16 models are negative.", "Experiments on (1) random deep teacher networks with Gaussian inputs, (2) teacher network pre-trained on CIFAR-10 and (3) extensive ablation studies validate our multiple theoretical predictions.", "Although neural networks have made strong empirical progress in a diverse set of domains (e.g., computer vision (16; 32; 10), speech recognition (11; 1), natural language processing (22; 3), and games (30; 31; 35; 23)), a number of fundamental questions still remain unsolved.", "How can Stochastic Gradient Descent (SGD) find good solutions to a complicated non-convex optimization problem?", "Why do neural networks generalize?", "How can networks trained with SGD fit both random noise and structured data (38; 17; 24), but prioritize structured models, even in the presence of massive noise (27)?", "Why are flat minima related to good generalization?", "Why does overparameterization lead to better generalization (25; 39; 33; 26; 19)?", "Why do lottery tickets exist (6; 7)?In", "this paper, we propose a theoretical framework for multilayered ReLU networks. Based", "on this framework, we try to explain these puzzling empirical phenomena with a unified view. We adopt", "a teacher-student setting where the label provided to an over-parameterized deep student ReLU network is the output of a fixed teacher ReLU network of the same depth and unknown weights ( FIG0 ). In this", "perspective, hidden student nodes are randomly initialized with different activation regions. (Fig. 2(a", ") ).", "During", "optimization", ", student nodes compete with each other to explain teacher nodes. Theorem 4 shows that", "lucky student nodes which have greater overlap with teacher nodes converge to those teacher nodes at a fast rate, resulting in winner-takeall behavior. Furthermore, Theorem", "5 shows that if a subset of student nodes are close to the teacher nodes, they converge to them and the fan-out weights of other irrelevant nodes of the same layer vanishes.With this framework, we can explain various neural network behaviors as follows:Fitting both structured and random data. Under gradient descent", "dynamics, some student nodes, which happen to overlap substantially with teacher nodes, will move into the teacher node and cover them. This is true for both", "structured data that corresponds to small teacher networks with few intermediate nodes, or noisy/random data that correspond to large teachers with many intermediate nodes. This explains why the", "same network can fit both structured and random data ( Fig. 
2(a-b) ).Over-parameterization.", "In over-parameterization", ", lots of student nodes are initialized randomly at each layer. Any teacher node is more", "likely to have a substantial overlap with some student nodes, which leads to fast convergence ( Fig. 2(a) and (c), Thm. 4), consistent", "with (6", "; 7). This", "also explains that training", "models whose capacity just fit the data (or teacher) yields worse performance (19).Flat minima. Deep networks often", "converge to", "\"flat minima\" whose Hessian has a lot of small eigenvalues (28; 29; 21; 2). Furthermore, while controversial", "(4), flat minima seem to be associated with good generalization, while sharp minima often lead to poor generalization (12; 14; 36; 20). In our theory, when fitting with", "structured data, only a few lucky student nodes converge to the teacher, while for other nodes, their fan-out weights shrink towards zero, making them (and their fan-in weights) irrelevant to the final outcome (Thm. 5), yielding flat minima in which movement along most dimensions (\"unlucky nodes\") results in minimal change in output. On the other hand, sharp min- Figure", "2 . Explanation of implicit regularization", ". Blue are activation regions of teacher", "nodes, while orange are students'. (a) When the data labels are structured", ", the underlying teacher network is small and each layer has few nodes. Over-parameterization (lots of red regions", ") covers them all. Moreover, those student nodes that heavily", "overlap with the teacher nodes converge faster (Thm. 4), yield good generalization performance. (b) If a dataset contains random labels, the", "underlying teacher network that can fit to it has a lot of nodes. Over-parameterization can still handle them", "and achieves zero training error.(a) (b) (c) Figure 3 . Explanation of lottery", "ticket", "phenomenon", ". (a) A successful", "training with over-parameterization (2 filters", "in the teacher network and 4 filters in the student network). Node j3 and j4 are lucky draws with strong overlap with two teacher", "node j • 1 and j • 2 , and thus converges with high weight magnitude. (b) Lottery ticket phenomenon: initialize node j3 and j4 with the same", "initial weight, clamp the weight of j1 and j2 to zero, and retrain the model, the test performance becomes better since j3 and j4 still converge to their teacher node, respectively. (c) If we reinitialize node j3 and j4, it is highly likely that they are", "not overlapping with teacher node j ima is related to noisy data ( Fig. 2(d) ), in which more student nodes match with the teacher.Implicit regularization", ". On the other hand, the snapping behavior enforces winner-take-all: after optimization", ", a teacher node is fully covered (explained) by a few student nodes, rather than splitting amongst student nodes due to over-parameterization. This explains why the same network, once trained with structured data, can generalize", "to the test set.Lottery Tickets. Lottery Tickets (6; 7) is an interesting phenomenon: if we reset \"salient weights\" (trained", "weights with large magnitude) back to the values before optimization but after initialization, prune other weights (often > 90% of total weights) and retrain the model, the test performance is the same or better; if we reinitialize salient weights, the test performance is much worse. In our theory, the salient weights are those lucky regions (E j3 and E j4 in Fig. 
3 ) that", "happen to overlap with some teacher nodes after initialization and converge to them in optimization. Therefore, if we reset their weights and prune others away, they can still converge to the", "same set of teacher nodes, and potentially achieve better performance due to less interference with other irrelevant nodes. However, if we reinitialize them, they are likely to fall into unfavorable regions which can", "not cover teacher nodes, and therefore lead to poor performance ( Fig. 3(c) ), just like in the case of under-parameterization.", "We propose a novel mathematical framework for multilayered ReLU networks.", "This could tentatively explain many puzzling empirical phenomena in deep learning.", ". Correlationρ and mean rankr over training on GAUS.ρ steadily grows andr quickly improves over time. Layer-0 (the lowest layer that is closest to the input) shows best match with teacher nodes and best mean rank. BatchNorm helps achieve both better correlation and lowerr, in particular for the CNN case. [5] Simon S Du, Jason D Lee, Yuandong Tian, Barnabas Poczos, and Aarti Singh. Gradient descent learns onehidden-layer cnn: Don't be afraid of spurious local minima. ICML, 2018.[6] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Training pruned neural networks. ICLR, 2019.[7] Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. The lottery ticket hypothesis at scale. arXiv preprint arXiv:1903.01611, 2019.[8] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135-1143, 2015.[9] Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pages 293-299. IEEE, 1993.[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.[ 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 5. Appendix: Mathematical Framework Notation. Consider a student network and its associated teacher network ( FIG0 ). Denote the input as x. For each node j, denote f j (x) as the activation, f j (x) as the ReLU gating, and g j (x) as the backpropagated gradient, all as functions of x. We use the superscript• to represent a teacher node (e.g., j • ). Therefore, g j • never appears as teacher nodes are not updated. We use w jk to represent weight between node j and k in the student network. Similarly, w * j • k • represents the weight between node j• and k • in the teacher network.We focus on multi-layered ReLU networks. We use the following equality extensively: σ(x) = σ (x)x. For ReLU node j, we use E j ≡ {x : f j (x) > 0} as the activation region of node j.Objective. We assume that both the teacher and the student output probabilities over C classes. We use the output of teacher as the input of the student. At the top layer, each node c in the student corresponds to each node c • in the teacher. 
Therefore, the objective is: DISPLAYFORM0 By the backpropagation rule, we know that for each sample x, the (negative) gradient DISPLAYFORM1 The gradient gets backpropagated until the first layer is reached.Note that here, the gradient g c (x) sent to node c is correlated with the activation of the corresponding teacher node f c • (x) and other student nodes at the same layer. Intuitively, this means that the gradient \"pushes\" the student node c to align with class c• of the teacher. If so, then the student learns the corresponding class well. A natural question arises:Are student nodes at intermediate layers correlated with teacher nodes at the same layers?One might wonder this is hard since the student's intermediate layer receives no direct supervision from the corresponding teacher layer, but relies only on backpropagated gradient. Surprisingly, the following theorem shows that it is possible for every intermediate layer: DISPLAYFORM2 . If all nodes j at layer l satisfies Eqn. 4 DISPLAYFORM3 then all nodes k at layer l − 1 also satisfies Eqn. 4 with β * kk • (x) and β kk (x) defined as follows: DISPLAYFORM4 Note that this formulation allows different number of nodes for the teacher and student. In particular, we consider the over-parameterization setting: the number of nodes on the student side is much larger (e.g., 5-10x) than the number of nodes on the teacher side. Using Theorem 1, we discover a novel and concise form of gradient update rule: Assumption 1 (Separation of Expectations). DISPLAYFORM5 DISPLAYFORM6 Theorem 2. If Assumption 1 holds, the gradient dynamics of deep ReLU networks with objective (Eqn. 3) is: DISPLAYFORM7 Here we explain the notations. DISPLAYFORM8 We can define similar notations for W (which has n l columns/filters), β, D, H and L FIG4" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.13333332538604736, 0.030303025618195534, 0.6153846383094788, 0.052631575614213943, 0.17777776718139648, 0, 0.0555555522441864, 0, 0.04255318641662598, 0, 0, 0.0714285671710968, 0.24242423474788666, 0.10526315122842789, 0.11999999731779099, 0, 0.05714285373687744, 0, 0.09090908616781235, 0.04444443807005882, 0.09090908616781235, 0.11428570747375488, 0, 0, 0, 0.1599999964237213, 0, 0, 0, 0.02739725634455681, 0.07692307233810425, 0, 0, 0.051282044500112534, 0.06451612710952759, 0, 0.15789473056793213, 0.05882352590560913, 0.0833333283662796, 0, 0.04999999329447746, 0, 0.03333332762122154, 0, 0, 0.07407406717538834, 0, 0.02702702209353447, 0.0416666604578495, 0.037735845893621445, 0.0476190410554409, 0.19354838132858276, 0.1875, 0.0357142835855484 ]
S1xcwNr22E
true
[ "A theoretical framework for deep ReLU network that can explains multiple puzzling phenomena like over-parameterization, implicit regularization, lottery tickets, etc. " ]
[ "State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora.", "Recent studies showed that the need for parallel data supervision can be alleviated with character-level information.", "While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet.", "In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way.", "Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs.", "Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese.", "We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation.", "Our code, embeddings and dictionaries are publicly available.", "Most successful methods for learning distributed representations of words (e.g. BID32 a) ; BID34 ; ) rely on the distributional hypothesis of BID20 , which states that words occurring in similar contexts tend to have similar meanings.", "BID28 show that the skip-gram with negative sampling method of BID32 amounts to factorizing a word-context co-occurrence matrix, whose entries are the pointwise mutual information of the respective word and context pairs.", "Exploiting word cooccurrence statistics leads to word vectors that reflect the semantic similarities and dissimilarities: similar words are close in the embedding space and conversely.", "BID31 first noticed that continuous word embedding spaces exhibit similar structures across languages, even when considering distant language pairs like English and Vietnamese.", "They proposed to exploit this similarity by learning a linear mapping from a source to a target embedding space.", "They employed a parallel vocabulary of five thousand words as anchor points to learn this mapping and evaluated their approach on a word translation task.", "Since then, several studies aimed at improving these cross-lingual word embeddings BID12 ; BID47 ; ; BID0 ; BID1 ; BID43 ), but they all rely on bilingual word lexicons.Recent attempts at reducing the need for bilingual supervision BID43 employ identical character strings to form a parallel vocabulary.", "The iterative method of BID2 gradually aligns embedding spaces, starting from a parallel vocabulary of aligned digits.", "These methods are however limited to similar languages sharing a common alphabet, such as European languages.", "Some recent methods explored distribution-based approach BID7 or adversarial training BID50 to obtain cross-lingual word embeddings without any parallel data.", "While these approaches sound appealing, their performance is significantly below supervised methods.", "To sum up, current methods have either not reached competitive performance, or they still require parallel data, such as aligned corpora BID18 BID46 or a seed parallel lexicon BID11 .In", "this paper, we introduce a model that either is on par, or outperforms supervised state-of-the-art methods, without employing any cross-lingual annotated data. 
We", "only use two large monolingual corpora, one in the source and one in the target language. Our", "method leverages adversarial training to learn a linear mapping from a source to a target space and operates in two steps. First", ", in a twoplayer game, a discriminator is trained to distinguish between the mapped source embeddings and the target embeddings, while the mapping (which can be seen as a generator) is jointly trained to fool the discriminator. Second", ", we extract a synthetic dictionary from the resulting shared embedding space and fine-tune the mapping with the closed-form Procrustes solution from BID42 . Since", "the method is unsupervised, cross-lingual data can not be used to select the best model. To overcome", "this issue, we introduce an unsupervised selection metric that is highly correlated with the mapping quality and that we use both as a stopping criterion and to select the best hyper-parameters.In summary, this paper makes the following main contributions:• We present an unsupervised approach that reaches or outperforms state-of-the-art supervised approaches on several language pairs and on three different evaluation tasks, namely word translation, sentence translation retrieval, and cross-lingual word similarity. On a standard", "word translation retrieval benchmark, using 200k vocabularies, our method reaches 66.2% accuracy on English-Italian while the best supervised approach is at 63.7%.• We introduce", "a cross-domain similarity adaptation to mitigate the so-called hubness problem (points tending to be nearest neighbors of many points in high-dimensional spaces). It is inspired", "by the self-tuning method from BID48 , but adapted to our two-domain scenario in which we must consider a bi-partite graph for neighbors. This approach", "significantly improves the absolute performance, and outperforms the state of the art both in supervised and unsupervised setups on word-translation benchmarks.• We propose an", "unsupervised criterion that is highly correlated with the quality of the mapping, that can be used both as a stopping criterion and to select the best hyper-parameters.• We release high-quality", "dictionaries for 12 oriented languages pairs, as well as the corresponding supervised and unsupervised word embeddings.• We demonstrate the effectiveness", "of our method using an example of a low-resource language pair where parallel corpora are not available (English-Esperanto) for which our method is particularly suited.The paper is organized as follows. Section 2 describes our unsupervised", "approach with adversarial training and our refinement procedure. We then present our training procedure", "with unsupervised model selection in Section 3. We report in Section 4 our results on", "several cross-lingual tasks for several language pairs and compare our approach to supervised methods. 
Finally, we explain how our approach", "differs from recent related work on learning cross-lingual word embeddings.", "In what follows, we present the results on word translation retrieval using our bilingual dictionaries in Table 1 and our comparison to previous work in TAB1 where we significantly outperform previous approaches.", "We also present results on the sentence translation retrieval task in TAB3 and the cross-lingual word similarity task in Table 4 .", "Finally, we present results on word-by-word translation for English-Esperanto in Table 5 .Baselines", "In our experiments, we consider a supervised baseline that uses the solution of the Procrustes formula given in (2), and trained on a dictionary of 5,000 source words. This baseline", "can be combined with different similarity measures: NN for nearest neighbor similarity, ISF for Inverted SoftMax and the CSLS approach described in Section 2.2.Cross-domain similarity local scaling This approach has a single parameter K defining the size of the neighborhood. The performance", "is very stable and therefore K does not need cross-validation: the results are essentially the same for K = 5, 10 and 50, therefore we set K = 10 in all experiments.In Table 1 provides a strong and robust gain in performance across all language pairs, with up to 7.2% in eneo. We observe that", "Procrustes-CSLS is almost systematically better than Procrustes-ISF, while being computationally faster and not requiring hyper-parameter tuning. In TAB1 , we compare", "our Procrustes-CSLS approach to previous models presented in BID31 ; ; Smith et al. FORMULA0 ; BID2 on the English-Italian word translation task, on which state-of-the-art models have been already compared. We show that our Procrustes-CSLS", "approach obtains an accuracy of 44.9%, outperforming all previous approaches. In TAB3 , we also obtain a strong", "gain in accuracy in the Italian-English sentence retrieval task using CSLS, from 53.5% to 69.5%, outperforming previous approaches by an absolute gain of more than 20%.Impact of the monolingual embeddings", "For the word translation task, we obtained a significant boost in performance when considering fastText embeddings trained on Wikipedia, as opposed to previously used CBOW embeddings trained on the WaCky datasets BID3 ), as can been seen in TAB1 . Among the two factors of variation,", "we noticed that this boost in performance was mostly due to the change in corpora. The fastText embeddings, which incorporates", "more syntactic information about the words, obtained only two percent more accuracy compared to CBOW embeddings trained on the same corpus, out of the 18.8% gain. We hypothesize that this gain is due to the", "similar co-occurrence statistics of Wikipedia corpora. Figure 3 in the appendix shows results on the", "alignment of different monolingual embeddings and concurs with this hypothesis. We also obtained better results for monolingual", "evaluation tasks such as word similarities and word analogies when training our embeddings on the Wikipedia corpora.Adversarial approach Table 1 shows that the adversarial approach provides a strong system for learning cross-lingual embeddings without parallel data. On the es-en and en-fr language pairs, Adv-CSLS", "obtains a P@1 of 79.7% and 77.8%, which is only 3.2% and 3.3% below the supervised approach. 
Additionally, we observe that most systems still", "obtain decent results on distant languages that do not share a common alphabet (en-ru and en-zh), for which method exploiting identical character strings are just not applicable BID2 ). This method allows us to build a strong synthetic", "vocabulary using similarities obtained with CSLS. The gain in absolute accuracy observed with CSLS", "on the Procrustes method is even more important here, with differences between Adv-NN and Adv-CSLS of up to 8.4% on es-en. As a simple baseline, we tried to match the first", "two moments of the projected source and target embeddings, which amounts to solving DISPLAYFORM0 and solving the sign ambiguity BID45 . This attempt was not successful, which we explain", "by the fact that this method tries to align only the first two moments, while adversarial training matches all the moments and can learn to focus on specific areas of the distributions instead of considering global statistics.Refinement: closing the gap with supervised approaches The refinement step on the synthetic bilingual vocabulary constructed after adversarial training brings an additional and significant gain in performance, closing the gap between our approach and the supervised baseline. In Table 1 , we observe that our unsupervised method", "even outperforms our strong supervised baseline on en-it and en-es, and is able to retrieve the correct translation of a source word with up to 83% accuracy. The better performance of the unsupervised approach", "can be explained by the strong similarity of cooccurrence statistics between the languages, and by the limitation in the supervised approach that uses a pre-defined fixed-size vocabulary (of 5,000 unique source words): in our case the refinement step can potentially use more anchor points. In TAB3 , we also observe a strong gain in accuracy", "Table 4 : Cross-lingual wordsim task. NASARI (Camacho-Collados et al. FORMULA0 ) refers to", "the official SemEval2017 baseline. We report Pearson correlation.en-eo eo-en Dictionary", "-NN 6.1 11.9 Dictionary -CSLS 11.1 14.3 Table 5 : BLEU score on English-Esperanto. Although being a naive approach, word-byword translation", "is enough to get a rough idea of the input sentence. The quality of the generated dictionary has a significant", "impact on the BLEU score.(up to 15%) on sentence retrieval using bag-of-words embeddings", ", which is consistent with the gain observed on the word retrieval task.Application to a low-resource language pair and to machine translation Our method is particularly suited for low-resource languages for which there only exists a very limited amount of parallel data. We apply it to the English-Esperanto language pair. We use the", "fastText embeddings trained on Wikipedia, and create", "a dictionary based on an online lexicon. The performance of our unsupervised approach on English-Esperanto", "is of 28.2%, compared to 29.3% with the supervised method. On Esperanto-English, our unsupervised approach obtains 25.6%, which", "is 1.3% better than the supervised method. The dictionary we use for that language pair does not take into account", "the polysemy of words, which explains why the results are lower than on other language pairs. People commonly report the P@5 to alleviate this issue. In particular,", "the P@5 for English-Esperanto and Esperanto-English is", "of 46.5% and 43.9% respectively.To show the impact of such a dictionary on machine translation, we apply it to the English-Esperanto Tatoeba corpora BID44 . 
We remove all pairs containing sentences with unknown words, resulting", "in about 60k pairs. Then, we translate sentences in both directions by doing word-byword translation", ". In Table 5 , we report the BLEU score with this method, when using a dictionary", "generated using nearest neighbors, and CSLS. With CSLS, this naive approach obtains 11.1 and 14.3 BLEU on English-Esperanto", "and Esperanto-English respectively. Table 6 in the appendix shows some examples of sentences in Esperanto translated", "into English using word-by-word translation.As one can see, the meaning is mostly conveyed in the translated sentences, but the translations contain some simple errors. For instance, the \"mi\" is translated into \"sorry\" instead of \"i\", etc. The translations", "could easily be improved using a language model.", "In this work, we show for the first time that one can align word embedding spaces without any cross-lingual supervision, i.e., solely based on unaligned datasets of each language, while reaching or outperforming the quality of previous supervised approaches in several cases.", "Using adversarial training, we are able to initialize a linear mapping between a source and a target space, which we also use to produce a synthetic parallel dictionary.", "It is then possible to apply the same techniques proposed for supervised techniques, namely a Procrustean optimization.", "Two key ingredients contribute to the success of our approach: First we propose a simple criterion that is used as an effective unsupervised validation metric.", "Second we propose the similarity measure CSLS, which mitigates the hubness problem and drastically increases the word translation accuracy.", "As a result, our approach produces high-quality dictionaries between different pairs of languages, with up to 83.3% on the Spanish-English word translation task.", "This performance is on par with supervised approaches.", "Our method is also effective on the English-Esperanto pair, thereby showing that it works for lowresource language pairs, and can be used as a first step towards unsupervised machine translation." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.1860465109348297, 0.11538460850715637, 0.25, 0.08695651590824127, 0.04444443807005882, 0.10344827175140381, 0.11428570747375488, 0.06557376682758331, 0.1071428507566452, 0.08163265138864517, 0.03999999538064003, 0, 0.07843136787414551, 0.11764705181121826, 0.04651162400841713, 0.0476190447807312, 0.12765957415103912, 0, 0.072727270424366, 0.07999999821186066, 0.09756097197532654, 0.08695651590824127, 0.0714285671710968, 0.1666666567325592, 0.04651162400841713, 0.13636362552642822, 0.07407406717538834, 0.07843136787414551, 0.11538460850715637, 0.12244897335767746, 0.14814814925193787, 0.21739129722118378, 0.13793103396892548, 0.15789473056793213, 0.04999999701976776, 0.13636362552642822, 0, 0.2181818187236786, 0.08888888359069824, 0.09999999403953552, 0.11320754140615463, 0.1538461446762085, 0.13698630034923553, 0.0833333283662796, 0.035087715834379196, 0.08695651590824127, 0.1071428507566452, 0.0615384578704834, 0.08695651590824127, 0.035087715834379196, 0.04878048226237297, 0.1395348757505417, 0.1818181723356247, 0.11320754140615463, 0.09677419066429138, 0.10256409645080566, 0.1428571343421936, 0.11764705181121826, 0.1538461446762085, 0.10526315122842789, 0.0833333283662796, 0, 0.052631575614213943, 0, 0.045454539358615875, 0.09999999403953552, 0.1764705777168274, 0.05882352590560913, 0.04878048226237297, 0.0833333283662796, 0.1249999925494194, 0.038461532443761826, 0.1764705777168274, 0.12903225421905518, 0.04878048226237297, 0.1860465109348297, 0.08695651590824127, 0.0952380895614624, 0.06779660284519196, 0.05714285373687744, 0.11594202369451523, 0.19999998807907104, 0.09090908616781235, 0.1538461446762085, 0.13636362552642822, 0.11764705181121826, 0.05714285373687744, 0.10526315122842789 ]
H196sainb
true
[ "Aligning languages without the Rosetta Stone: with no parallel data, we construct bilingual dictionaries using adversarial training, cross-domain local scaling, and an accurate proxy criterion for cross-validation." ]
[ "Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA).", "The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image.", "In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count.", "Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections.", "A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image.", "Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting.", "Visual question answering (VQA) is an important benchmark to test for context-specific reasoning over complex images.", "While the field has seen substantial progress, counting-based questions have seen the least improvement .", "Intuitively, counting should involve finding the number of distinct scene elements or objects that meet some criteria, see Fig. 1 for an example.", "In contrast, the predominant approach to VQA involves representing the visual input with the final feature map of a convolutional neural network (CNN), attending to regions based on an encoding of the question, and classifying the answer from the attention-weighted image features BID32 BID31 Lu et al., 2016b; BID7 BID14 .", "Our intuition about counting seems at odds with the effects of attention, where a weighted average obscures any notion of distinct elements.", "As such, we are motivated to re-think the typical approach to counting in VQA and propose a method that embraces the discrete nature of the task.Our approach is partly inspired by recent work that represents images as a set of distinct objects, as identified by object detection , and making use of the relationships between these objects BID26 .", "We experiment with counting systems that build off of the vision module used for these two works, which represents each image as a set of detected objects.", "For training and evaluation, we create a new dataset, HowMany-QA.", "It is taken from the countingspecific union of VQA 2.0 BID10 and Visual Genome QA (Krishna et al., 2016) .We", "introduce the Interpretable Reinforcement Learning Counter (IRLC), which treats counting as a sequential decision process. We", "treat learning to count as learning to enumerate the relevant objects in the scene. As", "a result, IRLC not only returns a count but also the objects supporting its answer. This", "output is produced through an iterative method. Each", "step of this sequence has two stages: First, an object is selected to be added to the count. Second", ", the model adjusts the priority given to unselected objects based on their configuration with the selected objects (Fig. 1) . We supervise", "only the final count and train the decision process using reinforcement learning (RL).Additional experiments", "highlight the importance of the iterative approach when using this manner of weak supervision. Furthermore, we train", "the current state of the art model for VQA on HowMany-QA and find that IRLC achieves a higher accuracy and lower count error. Lastly, we compare the", "Figure 1: IRLC takes as input a counting question and image. 
Detected objects are added", "to the returned count through a sequential decision process. The above example illustrates", "actual model behavior after training.grounded counts of our model to the attentional focus of the state of the art baseline to demonstrate the interpretability gained through our approach.", "We present an interpretable approach to counting in visual question answering, based on learning to enumerate objects in a scene.", "By using RL, we are able to train our model to make binary decisions about whether a detected object contributes to the final count.", "We experiment with two additional baselines and control for variations due to visual representations and for the mechanism of visuallinguistic comparison.", "Our approach achieves state of the art for each of the evaluation metrics.", "In addition, our model identifies the objects that contribute to each count.", "These groundings provide traction for identifying the aspects of the task that the model has failed to learn and thereby improve not only performance but also interpretability.A EXAMPLES Figure 8 : Example outputs produced by each model.", "For SoftCount, objects are shaded according to the fractional count of each (0=transparent; 1=opaque).", "For UpDown, we similarly shade the objects but use the attention focus to determine opacity.", "For IRLC, we plot only the boxes from objects that were selected as part of the count.", "At each timestep, we illustrate the unchosen boxes in pink, and shade each box according to κ t (corresponding to the probability that the box would be selected at that time step; see main text).", "We also show the already-selected boxes in blue.", "For each of the questions, the counting sequence terminates at t = 3, meaning that the returned count C is 3.", "For each of these questions, that is the correct answer.", "The example on the far right is a 'correct failure,' a case where the correct answer is returned but the counted objects are not related to the question.", "These kinds of subtle failures are revealed with the grounded counts." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23529411852359772, 0.08695651590824127, 0.15789473056793213, 0.24242423474788666, 0.10810810327529907, 0.1764705777168274, 0.12121211737394333, 0, 0.14999999105930328, 0.06557376682758331, 0.052631575614213943, 0.0937499925494194, 0.23255813121795654, 0, 0.10256409645080566, 0.12121211737394333, 0.06896550953388214, 0.0624999962747097, 0, 0, 0.1666666567325592, 0, 0, 0.09756097197532654, 0.1875, 0, 0.10526315122842789, 0.34285715222358704, 0.1538461446762085, 0.1666666567325592, 0.0714285671710968, 0.20689654350280762, 0.1538461446762085, 0.06451612710952759, 0.06451612710952759, 0.12121211737394333, 0, 0.07999999821186066, 0.0555555522441864, 0, 0.09999999403953552, 0 ]
S1J2ZyZ0Z
true
[ "We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects." ]
[ "Graphs are fundamental data structures required to model many important real-world data, from knowledge graphs, physical and social interactions to molecules and proteins.", "In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest.", "After learning, these models can be used to generate samples with similar properties as the ones in the dataset. ", "Such models can be useful in a lot of applications, e.g. drug discovery and knowledge graph construction.", "The task of learning generative models of graphs, however, has its unique challenges.", "In particular, how to handle symmetries in graphs and ordering of its elements during the generation process are important issues.", "We propose a generic graph neural net based model that is capable of generating any arbitrary graph. ", "We study its performance on a few graph generation tasks compared to baselines that exploit domain knowledge. ", "We discuss potential issues and open problems for such generative models going forward.", "Graphs are natural representations of information in many problem domains.", "For example, relations between entities in knowledge graphs and social networks are well captured by graphs, and they are also good for modeling the physical world, e.g. molecular structure and the interactions between objects in physical systems.", "Thus, the ability to capture the distribution of a particular family of graphs has many applications.", "For instance, sampling from the graph model can lead to the discovery of new configurations that share same global properties as is, for example, required in drug discovery BID10 .", "Obtaining graph-structured semantic representations for natural language sentences BID15 requires the ability to model (conditional) distributions on graphs.", "Distributions on graphs can also provide priors for Bayesian structure learning of graphical models BID23 .Probabilistic", "models of graphs have been studied for a long time, from at least two perspectives. On one hand,", "there are random graph models that robustly assign probabilities to large classes of graphs BID8 BID1 . These make strong", "independence assumptions and are designed to capture only certain graph properties, like degree distribution and diameter. While these are effective", "models of the distributions of graphs found in some domains, such as social networks, they are poor models of more richly structured graphs where small structural differences can be functionally significant, such as those encountered in chemistry or when representing the meaning of natural language sentences. As an alternative, a more", "expressive class of models makes use of graph grammars, which generalize devices from formal language theory so as to produce non-sequential structures BID27 . Graph grammars are systems", "of rewrite rules that incrementally derive an output graph via a sequence of transformations of intermediate graphs.While symbolic graph grammars can be made stochastic or otherwise weighted using standard techniques BID5 , from a learnability standpoint, two problems remain. First, inducing grammars from", "a set of unannotated graphs is nontrivial since formalism-appropriate derivation steps must be inferred and transformed into rules BID17 Aguiñaga et al., 2016, for example) . 
Second, as with linear output", "grammars, graph grammars make a hard distinction between what is in the language and what is excluded, making such models problematic for applications where it is inappropriate to assign 0 probability to certain graphs.In this work we develop an expressive model which makes no assumptions on the graphs and can therefore assign probabilities to any arbitrary graph.1 Our model generates graphs in", "a manner similar to graph grammars, where during the course of a derivation new structure (specifically, a new node or a new edge) is added to the existing graph, and where the probability of that addition event depends on the history of the graph derivation. To represent the graph during", "each step of the derivation, we use a representation based on graph-structured neural networks (graph nets). Recently there has been a surge", "of interest in graph nets for learning graph representations and solving graph prediction problems BID11 BID6 BID2 BID14 BID9 . These models are structured according", "to the graph being utilized, and are parameterized independent of graph sizes therefore invariant to isomorphism, providing a good match for our purposes. We evaluate our model by fitting graphs", "in three problem domains: (1) generating random graphs with certain common topological properties (e.g., cyclicity); (2) generating molecule graphs; and (3) conditional generation of parse trees. Our proposed model performs better than", "random graph models and LSTM baselines on (1) and FORMULA0 and is close to a LSTM sequence to sequence with attention model on (3). We also analyze the challenges our model", "is facing, e.g. the difficulty of learning and optimization, and discuss possible ways to make it better.", "The graph model in the proposed form is a powerful model capable of generating arbitrary graphs.", "However, as we have seen in the experiments and the analysis, there are still a number of challenges facing these models.", "Here we discuss a few of these challenges and possible solutions going forward.Ordering Ordering of nodes and edges is critical for both learning and evaluation.", "In the experiments we always used predefined distribution over orderings.", "However, it may be possible to learn an ordering of nodes and edges by treating the ordering π as a latent variable, this is an interesting direction to explore in the future.Long Sequences The generation process used by the graph model is typically a long sequence of decisions.", "If other forms of sequentializing the graph is available, e.g. 
SMILES strings or flattened parse trees, then such sequences are typically 2-3x shorter.", "This is a significant disadvantage for the graph model, it not only makes it harder to get the likelihood right, but also makes training more difficult.", "To alleviate this problem we can tweak the graph model to be more tied to the problem domain, and reduce multiple decision steps and loops to single steps. Scalability: Scalability is a challenge to the graph generative model we proposed in this paper.", "Large graphs typically lead to very long graph generating sequences.", "On the other side, the graph nets use a fixed T propagation steps to propagate information on the graph.", "However, large graphs require large T s to have sufficient information flow, this would also limit the scalability of these models.", "To solve this problem, we may use models that sequentially sweep over edges, like BID25 , or come up with ways to do coarse-to-fine generation.", "In this paper, we proposed a powerful deep generative model capable of generating arbitrary graphs through a sequential process.", "We studied its properties on a few graph generation problems.", "This model has shown great promise and has unique advantages over standard LSTM models.", "We hope that our results can spur further research in this direction to obtain better generative models of graphs.", "(Algorithm excerpt) Incorporate node v_t; compute the probability of adding an edge to v_t; sample whether to add an edge to v_t." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.4117647111415863, 0.05405404791235924, 0.2222222238779068, 0.13333332538604736, 0.2631579041481018, 0.514285683631897, 0.277777761220932, 0.19354838132858276, 0.1428571343421936, 0.12244897335767746, 0.25, 0.17777776718139648, 0.1666666567325592, 0.11764705181121826, 0.1666666567325592, 0.1621621549129486, 0.1111111044883728, 0.1355932205915451, 0.09090908616781235, 0.1428571343421936, 0.16326530277729034, 0.2028985470533371, 0.19999998807907104, 0.1538461446762085, 0.14999999105930328, 0.35555556416511536, 0.2745097875595093, 0.2857142686843872, 0.17142856121063232, 0.6060606241226196, 0.21052631735801697, 0.14999999105930328, 0.0714285671710968, 0.24137930572032928, 0.1428571343421936, 0.1463414579629898, 0.2916666567325592, 0.2142857164144516, 0.1764705777168274, 0.15789473056793213, 0.04651162400841713, 0.555555522441864, 0.2857142686843872, 0.12903225421905518, 0.21621620655059814, 0.05405404791235924 ]
Hy1d-ebAb
true
[ "We study the graph generation problem and propose a powerful deep generative model capable of generating arbitrary graphs." ]
[ "We introduce a neural network that represents sentences by composing their words according to induced binary parse trees.", "We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser.", "Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM.", "It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees.", "As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation.", "We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.", "Finally, we show how performance can be improved with an attention mechanism which fully exploits the parse chart, by attending over all possible subspans of the sentence.", "Recurrent neural networks, in particular the Long Short-Term Memory (LSTM) architecture BID10 and some of its variants BID8 BID1 , have been widely applied to problems in natural language processing.", "Examples include language modelling BID35 BID13 , textual entailment BID2 BID30 , and machine translation BID1 BID36 amongst others.The topology of an LSTM network is linear: words are read sequentially, normally in left-to-right order.", "However, language is known to have an underlying hierarchical, tree-like structure BID4 .", "How to capture this structure in a neural network, and whether doing so leads to improved performance on common linguistic tasks, is an open question.", "The Tree-LSTM network BID37 BID41 provides a possible answer, by generalising the LSTM to tree-structured topologies.", "It was shown to be more effective than a standard LSTM in semantic relatedness and sentiment analysis tasks.Despite their superior performance on these tasks, Tree-LSTM networks have the drawback of requiring an extra labelling of the input sentences in the form of parse trees.", "These can be either provided by an automatic parser BID37 , or taken from a gold-standard resource such as the Penn Treebank BID18 .", "BID39 proposed to remove this requirement by including a shift-reduce parser in the model, to be optimised alongside the composition function based on a downstream task.", "This makes the full model non-differentiable so it needs to be trained with reinforcement learning, which can be slow due to high variance.Our proposed approach is to include a fully differentiable chart parser in the model, inspired by the CYK constituency parser BID5 BID40 BID15 .", "Due to the parser being differentiable, the entire model can be trained end-to-end for a downstream task by using stochastic gradient descent.", "Our model is also unsupervised with respect to the parse trees, similar to BID39 .", "We show that the proposed method outperforms baseline Tree-LSTM architectures based on fully left-branching, right-branching, and supervised parse trees on a textual entailment task and a reverse dictionary task.", "We also introduce an attention mechanism in the spirit of BID1 for our model, which attends over all possible subspans of the source sentence via the parse chart.", "The results in TAB1 show a strong performance of the Unsupervised Tree-LSTM against our tested baselines, as well as other similar methods in the literature with a comparable number of parameters.For the textual 
entailment task, our model outperforms all baselines including the supervised Tree-LSTM, as well as some of the other sentence embedding models in the literature with a higher number of parameters.", "The use of attention, extended for the Unsupervised Tree-LSTM to be over all possible subspans, further improves performance.", "In the reverse dictionary task, the poor performance of the supervised Tree-LSTM can be explained by the unusual tokenisation used in the dataset of BID9 : punctuation is simply stripped, turning e.g. \"(archaic) a section of a poem\" into \"archaic a section of a poem\", or stripping away the semicolons in long lists of synonyms.", "On the one hand, this might seem unfair on the supervised Tree-LSTM, which received suboptimal trees as input.", "On the other hand, it demonstrates the robustness of our method to noisy data.", "Our model also performed well in comparison to the LSTM and the other Tree-LSTM baselines.", "Despite the slower training time due to the additional complexity, FIG2 shows how our model needed fewer training examples to reach convergence in this task.Following BID39 , we also manually inspect the learned trees to see how closely they match conventional syntax trees, as would typically be assigned by trained linguists.", "We analyse the same four sentences they chose.", "The trees produced by our model are shown in Figure 3 .", "One notable feature is the fact that verbs are joined with their subject noun phrases first, which differs from the standard verb phrase structure.", "However, formalisms such as combinatory categorial grammar BID34 , through type-raising and composition operators, do allow such constituents.", "The spans of prepositional phrases in", "(b),", "(c) and", "(d) are correctly identified at the highest level; but only in", "(d) does the structure of the subtree match convention.", "As could be expected, other features such as the attachment of the full stops or of some determiners do not appear to match human intuition.", "We presented a fully differentiable model to jointly learn sentence embeddings and syntax, based on the Tree-LSTM composition function.", "We demonstrated its benefits over standard Tree-LSTM on a textual entailment task and a reverse dictionary task.", "Introducing an attention mechanism over the parse chart was shown to further improve performance for the textual entailment task.", "The model is conceptually simple, and easy to train via backpropagation and stochastic gradient descent with popular deep learning toolkits based on dynamic computation graphs such as DyNet BID26 and PyTorch.", "The unsupervised Tree-LSTM we presented is relatively simple, but could be plausibly improved by combining it with aspects of other models.", "It should be noted in particular that (4), the function assigning an energy to alternative ways of forming constituents, is extremely basic and does not rely on any global information on the sentence.", "Using a more complex function, perhaps relying on a mechanism such as the tracking LSTM in BID3 , might lead to improvements in performance.", "Techniques such as batch normalization BID11 or layer normalization BID0 might also lead to further improvements.In future work, it may be possible to obtain trees closer to human intuition by training models to perform well on multiple tasks instead of a single one, an important feature for intelligent agents to demonstrate BID21 .", "Elastic weight consolidation BID19 has been shown to help with multitask learning, and could be readily 
applied to our model." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5161290168762207, 0.060606054961681366, 0.11428570747375488, 0.25806450843811035, 0.06666666269302368, 0.05882352590560913, 0.1538461446762085, 0.0476190447807312, 0, 0.07999999821186066, 0.05405404791235924, 0.13793103396892548, 0.15094339847564697, 0.0555555522441864, 0.1111111044883728, 0.11320754140615463, 0.11764705181121826, 0.23076923191547394, 0.10526315122842789, 0.052631575614213943, 0.0363636314868927, 0.06451612710952759, 0.0363636314868927, 0.06666666269302368, 0.07692307233810425, 0.07407406717538834, 0.10169491171836853, 0.0952380895614624, 0.1666666567325592, 0.0555555522441864, 0, 0, 0, 0, 0.0555555522441864, 0.0624999962747097, 0, 0.12903225421905518, 0.0952380895614624, 0.11764705181121826, 0.045454543083906174, 0.05714285373687744, 0.09836065024137497, 0.1249999925494194 ]
BJMuY-gRW
true
[ "Represent sentences by composing them with Tree-LSTMs according to automatically induced parse trees." ]
[ "Pruning neural networks for wiring length efficiency is considered.", "Three techniques are proposed and experimentally tested: distance-based regularization, nested-rank pruning, and layer-by-layer bipartite matching.", "The first two algorithms are used in the training and pruning phases, respectively, and the third is used in the arranging neurons phase.", "Experiments show that distance-based regularization with weight based pruning tends to perform the best, with or without layer-by-layer bipartite matching.", "These results suggest that these techniques may be useful in creating neural networks for implementation in widely deployed specialized circuits.", "In this paper we consider the novel problem of learning accurate neural networks that have low total wiring length because this corresponds to energy consumption in the fundamental limit.", "We introduce weight-distance regularization, nested rank pruning, and layer-by-layer bipartite matching and show through ablation studies that all of these algorithms are effective, and can even reach state-of-the-art compression ratios.", "The results suggests that these techniques may be worth the computational effort if the neural network is to be widely deployed, if significantly lower energy is worth the slight decrease in accuracy, or if the application is to be deployed on either a specialized circuit or general purpose processor.", "Table 2 : Average and standard deviation over four trials for Street View House Numbers task on both the wiring length metric (energy) and remaining edges metric (edges).", "We note that with the appropriate hyperparameter setting our algorithm outperforms the baseline weight based techniques (p=0) often on both the energy and number of remaining edges metric.", "Table 3 : Results of applying the bipartite matching algorithm on the best performing weight-based pruning network and best performing distance-based regularization method before and after applying layer-by-layer bipartite matching.", "Average and standard deviation over 4 trials presented." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20689654350280762, 0.05882352590560913, 0.052631575614213943, 0.10256409645080566, 0.10256409645080566, 0.1702127605676651, 0.1666666567325592, 0.10526315122842789, 0.1304347813129425, 0.17391303181648254, 0.09302324801683426, 0 ]
rygpbR2Pi7
true
[ "Three new algorithms with ablation studies to prune neural network to optimize for wiring length, as opposed to number of remaining weights." ]
[ "Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. ", "Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. ", "Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations.", "To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations.", "The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG).", "Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing.", "Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics.", "Video understanding has been a mainstay of artificial intelligence research.", "Recent work has sought to better reason about videos by learning more effective spatio-temporal representations (Tran et al., 2015; Carreira & Zisserman, 2017) .", "The video moment retrieval task, also known as text-to-clip retrieval, combines language and video understanding to find activities described by a natural language sentence.", "The main objective of the task is to identify the video segment within a longer video that is most relevant to a sentence.", "This requires a model to learn the mapping of correspondences (alignment) between the visual and natural language modalities.", "In the strongly-supervised setting, existing methods (Hendricks et al., 2017; Ghosh et al., 2019) generally learn joint visual-semantic representations by projecting video and language representations into a common embedding space and leverage provided temporal annotations to learn regressive functions (Gao et al., 2017) for localization.", "However, such temporal annotations are often ambiguous and expensive to collect.", "Mithun et al. (2019) seeks to circumvent these problems by proposing to address this task in the weakly-supervised setting where only full video-sentence pairs are provided as weak labels.", "However, the lack of temporal annotations renders the aforementioned approaches infeasible.", "In their approach (Figure 1a) , Mithun et al. (2019) proposes a Text-Guided Attention (TGA) mechanism to attend on segment-level features w.r.t. 
the sentence-level representations.", "However, such an approach treats the segment-level visual representations as independent inputs and ignores the contextual information derived from other segments in the video.", "More importantly, it does not exploit the fine-grained semantics of each word in the sentence.", "Consequently, existing methods are not able to reason about the latent alignment between the visual and language representations comprehensively.", "Figure 1: Given a video and a sentence, our aim is to retrieve the most relevant segment (the red bounding box in this example).", "Existing methods consider video frames as independent inputs and ignore the contextual information derived from other frames in the video.", "They compute a similarity score between the segment and the entire sentence to determine their relevance to each other.", "In contrast, our proposed approach aggregates contextual information from all the frames using graph propagation and leverages fine-grained frame-by-word interactions for more accurate retrieval.", "(Only some interactions are shown to prevent overcrowding the figure.)", "In this paper, we take another step towards addressing the limitations of current weakly-supervised video moment retrieval methods by exploiting the fine-grained temporal and visual relevance of each video frame to each word ( Figure 1b) .", "Our approach is built on two core insights:", "1) The temporal occurrence of frames or segments in a video provides vital visual information required to reason about the presence of an event;", "2) The semantics of the query are integral to reasoning about the relationships between entities in the video.", "With this in mind, we propose our Weakly-Supervised Moment Alignment Network (wMAN).", "An illustrative overview of our model is shown in Figure 2 .", "The key component of wMAN is a multi-level co-attention mechanism that is encapsulated by a Frame-by-Word (FBW) interaction module as well as a Word-Conditioned Visual Graph (WCVG).", "To begin, we exploit the similarity scores of all possible pairs of visual frame and word features to create frame-specific sentence representations and word-specific video representations.", "The intuition is that frames relevant to a word should have a higher measure of similarity as compared to the rest.", "The word representations are updated by their word-specific video representations to create visual-semantic representations.", "Then a graph (WCVG) is built upon the frame and visualsemantic representations as nodes and introduces another level of attention between them.", "During the message-passing process, the frame nodes are iteratively updated with relational information from the visual-semantic nodes to create the final temporally-aware multimodal representations.", "The contribution of each visual-semantic node to a frame node is dynamically weighted based on their similarity.", "To learn such representations, wMAN also incorporates positional encodings (Vaswani et al., 2017) into the visual representations to integrate contextual information about their relative positions.", "Such contextual information encourages the learning of temporally-aware multimodal representations.", "To learn these representations, we use a Multiple Instance Learning (MIL) framework that is similar in nature to the Stacked Cross Attention Network (SCAN) model .", "The SCAN model leverages image region-by-word interactions to learn better representations for image-text matching.", "In addition, the WCVG module 
draws inspiration from the Language-Conditioned Graph Network (LCGN) by Hu et al. (2019) which seeks to create context-aware object features in an image.", "However, the LCGN model works with sentence-level representations, which does not account for the semantics of each word to each visual node comprehensively.", "wMAN also distinguishes itself from the above-mentioned models by extracting temporally-aware multimodal representations from videos and their corresponding descriptions, whereas SCAN and LCGN only work on images.", "The contributions of our paper are summarized below:", "• We propose a simple yet intuitive MIL approach for weakly-supervised video moment retrieval from language queries by exploiting fine-grained frame-by-word alignment.", "• Our novel Word-Conditioned Visual Graph learns richer visual-semantic context through a multi-level co-attention mechanism.", "• We introduce a novel application of positional embeddings in video representations to learn temporally-aware multimodal representations.", "To demonstrate the effectiveness of our learned temporally-aware multimodal representations, we perform extensive experiments over two datasets, Didemo (Hendricks et al., 2017) and Charades-STA (Gao et al., 2017) , where we outperform the state-of-the-art weakly supervised model by a significant margin and strongly-supervised state-of-the-art models on some metrics.", "In this work, we propose our weakly-supervised Moment Alignment Network with WordConditioned Visual Graph which exploits a multi-level co-attention mechanism to infer the latent alignment between visual and language representations at fine-grained word and frame level.", "Learning context-aware visual-semantic representations helps our model to reason about the temporal occurrence of an event as well as the relationships of entities described in the natural language query.", "(b) Figure 3 : Visualization of the final relevance weights of each word in the query with respect to each frame.", "Here, we display the top three weights assigned to the frames for each phrase.", "The colors of the three numbers (1,2,3) indicate the correspondence to the words in the query sentence.", "We also show the ground truth (GT) temporal annotation as well as our predicted weakly localized temporal segments in seconds.", "The highly correlated frames to each query word generally fall into the GT temporal segment in both examples.", "In Table 5 , we show the comparisons of the different methods with different number of model parameters.", "While wMAN has 18M parameters as compared to 3M parameters in TGA, the performance gains are not simply attributed to the number of model parameters.", "We increase the dimensions of visual and semantic representations as well as corresponding fully-connected layers in the TGA model which leads to a total of 19M parameters.", "Despite having more parameters than wMAN, it still does significantly worse on all metrics.", "We also provide results obtained by a direct adaptation of the Language-Conditioned Graph Network (LCGN), which is designed to work on the image level for VQA as well.", "While LCGN leverages attention over the words in the natural language query, the computed attention is only conditioned on the entire sentence without contextual information derived from the objects' visual representations.", "In contrast, the co-attention mechanism in our combined wMAN model is conditioned on both semantic and contextual visual information derived from words and video frames respectively.", 
"LCGN is also a lot more complicated and requires significantly more computing resources than wMAN.", "Despite possessing much more parameters than wMAN, it is still not able to achieve comparable results to ours.", "In this section, we include ablation results on the number of message-passing rounds required to learn effective visual-semantic representations.", "In our experiments, we have found that three rounds work best on both Charades-Sta and DiDeMo." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0.0714285671710968, 0, 0, 0, 0.13333332538604736, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.23529411852359772, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04999999701976776, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJx4rerFwB
true
[ "Weakly-Supervised Text-Based Video Moment Retrieval" ]
[ "In machine learning tasks, overtting frequently crops up when the number of samples of target domain is insufficient, for the generalization ability of the classifier is poor in this circumstance.", "To solve this problem, transfer learning utilizes the knowledge of similar domains to improve the robustness of the learner.", "The main idea of existing transfer learning algorithms is to reduce the dierence between domains by sample selection or domain adaptation.", "However, no matter what transfer learning algorithm we use, the difference always exists and the hybrid training of source and target data leads to reducing fitting capability of the learner on target domain.", "Moreover, when the relatedness between domains is too low, negative transfer is more likely to occur.", "To tackle the problem, we proposed a two-phase transfer learning architecture based on ensemble learning, which uses the existing transfer learning algorithms to train the weak learners in the first stage, and uses the predictions of target data to train the final learner in the second stage.", "Under this architecture, the fitting capability and generalization capability can be guaranteed at the same time.", "We evaluated the proposed method on public datasets, which demonstrates the effectiveness and robustness of our proposed method.", "Transfer learning has attracted more and more attention since it was first proposed in 1995 BID11 and is becoming an important field of machine learning.", "The main purpose of transfer learning is to solve the problem that the same distributed data is hard to get in practical applications by using different distributed data of similar domains.", "Several different kinds of transfer stratagies are proposed in recent years, transfer learning can be devided into 4 categories BID17 , including instance-based transfer learning, feature-based transfer learning, parameter-based transfer learning and relation-based transfer learning.", "In this paper, we focus on how to enhance the performance of instance-based transfer learning and feature-based transfer learning when limited labeled data from target domain can be obtained.", "In transfer learning tasks, when diff-distribution data is obtained to improve the generalization ability of learners, the fitting ability on target data set will be affected more or less, especially when the domains are not relative enough, negative transfer might occur BID11 , it's hard to trade off between generalization and fitting.", "Most of the existing methods to prevent negative transfer learning are based on similarity measure(e.g., maximum mean distance(MMD), KL divergence), which is used for choosing useful knowledge on source domains.", "However, similarity and transferability are not equivalent concepts.", "To solve those problems, we proposed a novel transfer learning architecture to improve the fitting capability of final learner on target domain and the generalization capability is provided by weak learners.", "As shown in FIG0 , to decrease the learning error on target training set when limited labeled data on target domain can be obtained, ensemble learning is introduced and the performances of transfer learning algorithms are significantly improved as a result.In the first stage, traditional transfer learning algorithms are applied to diversify training data(e.g., Adaptive weight adjustment of boosting-based transfer learning or different parameter settings of domain adaptation).", "Then diversified training data is fed to several weak 
classifiers to improve the generalization ability on target data.", "To guarantee the fitting capability on target data, the predictions of target data is vectorized to be fed to the final estimator.", "This architecture brings the following advantages:• When the similarity between domains is low, the final estimator can still achieve good performance on target training set.", "Firstly, source data and target data are hybrid together to train the weak learners, then super learner is used to fit the predictions of target data.•", "Parameter setting is simplified and performance is better than individual estimators under normal conditions.To test the effectiveness of the method, we respectively modified TrAdaboost BID1 and BDA BID16 as the base algorithms for data diversification and desired result is achieved.1.1 RELATED WORK", "In this paper, we proposed a 2-phase transfer learning architecture, which uses the traditional transfer learning algorithm to achieve data diversification in the first stage and the target data is fitted in the second stage by stacking method, so the generalization ability and fitting ability on target data could be satisfied at the same time.", "The experiment of instance-based transfer learning and feature-based transfer learning on 11 domains proves the validity of our method.", "In summary, this framework has the following advantages:• No matter if source domain and target domain are similar, the training error on target data set can be minimized theoretically.•", "We reduce the risk of negative transfer in a simple and effective way without a similarity measure.•", "Introduction of ensemble learning gives a better performance than any single learner.•", "Most existing transfer learning algorithm can be integrated into this framework.Moreover, there're still some problems require our further study, some other data diversification method for transfer learning might be useful in our model, such as changing the parameter µ in BDA, integrating multiple kinds of transfer learning algorithms, or even applying this framework for multi-source transfer learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.34285715222358704, 0.4000000059604645, 0.25531914830207825, 0.29411762952804565, 0.29629629850387573, 0.12121211737394333, 0.11764705181121826, 0.1463414579629898, 0.3181818127632141, 0.1304347813129425, 0.43478259444236755, 0.32258063554763794, 0.2800000011920929, 0, 0.3333333432674408, 0.3055555522441864, 0.34285715222358704, 0.2702702581882477, 0.1428571343421936, 0.24390242993831635, 0.21052631735801697, 0.23333333432674408, 0.22857142984867096, 0.08695651590824127, 0.1666666567325592, 0.1875, 0.1875 ]
ryxOIsA5FQ
true
[ "How to use stacked generalization to improve the performance of existing transfer learning algorithms when limited labeled data is available." ]
[ "Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data.", "For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems.", "In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system.\n", "Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples.", "As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed.", "DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility.\n", "The resulting DeLaN network performs very well at robot tracking control.", "The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.19999998807907104, 0.2448979616165161, 0.05128204822540283, 0.1621621549129486, 0.14999999105930328, 0.2142857164144516, 0.04444443807005882 ]
BklHpjCqKm
false
[ "This paper introduces a physics prior for Deep Learning and applies the resulting network topology for model-based control." ]
[ "Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs.", "However, available training data may not be sufficient for a generative model to learn all possible complex transformations.", "By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models.", "Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood.", "In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch.", "Our method is applicable in the supervised as well as semi-supervised settings.", "We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis.", "In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain.", "Deep architectures are becoming increasingly adept at generating complex objects such as images, text, molecules, or programs.", "Many useful generation problems can be seen as translation tasks, where the goal is to take a source (precursor) object such as a molecule and turn it into a target satisfying given design characteristics.", "Indeed, molecular optimization of this kind is a key step in drug development, though the adoption of automated tools remains limited due to accuracy concerns.", "We propose here a simple, broadly applicable meta-algorithm to improve translation quality.", "Translation is a challenging task for many reasons.", "Objects are complex and the available training data pairs do not fully exemplify the intricate ways in which valid targets can be created from the precursors.", "Moreover, precursors provided at test time may differ substantially from those available during training -a scenario common in drug development.", "While data augmentation and semisupervised methods have been used to address some of these challenges, the focus has been on either simple prediction tasks (e.g., classification) or augmenting data primarily on the source side.", "We show, in contrast, that iteratively augmenting translation targets significantly improves performance on complex generation tasks in which each precursor corresponds to multiple possible outputs.", "Our iterative target augmentation approach builds on the idea that it is easier to evaluate candidate objects than to generate them.", "Thus a learned predictor of target object quality (a filter) can be used to effectively guide the generation process.", "To this end, we construct an external filter and apply it to the complex generative model's sampled translations of training set precursors.", "Candidate translations that pass the filter criteria become part of the training data for the next training epoch.", "The translation model is therefore iteratively guided to generate candidates that pass the filter.", "The generative model can be viewed as an adaptively tuned prior distribution over complex objects, with the filter as the likelihood.", "For this reason, it is helpful to apply the filter at test time as well, or to use the approach transductively 1 to adapt the generation process to novel test cases.", "The approach is reminiscent of self-training or reranking approaches employed with some success for parsing (McClosky 
et al., 2006; Charniak et al., 2016) .", "However, in our case, it is the candidate generator that is complex while the filter is relatively simple and remains fixed during the iterative process.", "We demonstrate that our meta-algorithm is quite effective and consistent in its ability to improve translation quality in the supervised setting.", "On a program synthesis task (Bunel et al., 2018) , under the same neural architecture, our augmented model outperforms their MLE baseline by 8% and their RL model by 3% in top-1 generalization accuracy (in absolute measure).", "On molecular optimization (Jin et al., 2019a) , their sequence to sequence translation baseline, when combined with our target data augmentation, achieves a new state-of-the-art result and outperforms their graph based approach by over 10% in success rate.", "Their graph based methods are also improved by iterative target augmentation with more than 10% absolute gain.", "The results reflect the difficulty of generation in comparison to evaluation; indeed, the gains persist even if the filter quality is reduced somewhat.", "Source side augmentation with unlabeled precursors (the semi-supervised setting) can further improve results, but only when combined with the filter in the target data augmentation framework.", "We provide ablation experiments to empirically highlight the effect of our method and also offer some theoretical insights for why it is effective.", "In this work, we have presented an iterative target augmentation framework for generation tasks with multiple possible outputs.", "Our approach is theoretically motivated, and we demonstrate strong empirical results on both the molecular optimization and program synthesis tasks, significantly outperforming baseline models on each task.", "Moreover, we find that iterative target augmentation is complementary to architectural improvements, and that its effect can be quite robust to the quality of the external filter.", "Finally, in principle our approach is applicable to other domains as well." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.22857142984867096, 0.19999998807907104, 0.1818181723356247, 0.22857142984867096, 0.0714285671710968, 0.11428570747375488, 0.11764705181121826, 0, 0.0833333283662796, 0.09756097197532654, 0.27586206793785095, 0.07999999821186066, 0.19512194395065308, 0.10810810327529907, 0.08163265138864517, 0.1463414579629898, 0.10810810327529907, 0.1111111044883728, 0.20512819290161133, 0.25, 0.12903225421905518, 0.1111111044883728, 0.0476190410554409, 0, 0.10526315122842789, 0.2702702581882477, 0.11764705181121826, 0.15094339847564697, 0.05882352590560913, 0.052631575614213943, 0.14999999105930328, 0.09999999403953552, 0.05714285373687744, 0.0952380895614624, 0.09756097197532654, 0 ]
rylztAEYvr
true
[ "We improve generative models by proposing a meta-algorithm that filters new training data from the model's outputs." ]
[ "The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time.", "This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles.", "In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data.", "We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information.", "We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures.", "Many natural systems, such as cells in a tissue or atoms in a protein, organize into complex structures from simple underlying interactions.", "Explaining and predicting how macroscopic structures such as these arise from simple interactions is a major goal of science and, increasingly, machine learning. The Boltzmann distribution is a foundational model for relating local interactions to system behavior, but can be difficult to fit to data.", "Given an energy function U_θ[x], the probability of a system configuration x scales exponentially with energy as p_θ(x) = exp(-U_θ[x]) / Z, where the (typically intractable) constant Z normalizes the distribution.", "Importantly, simple energy functions U_θ[x] consisting of weak, local interactions can collectively encode complex system behaviors, such as the structures of materials and molecules or, when endowed with latent variables, the statistics of images, sound, and text BID0 BID17 .", "Unfortunately, learning model parameters θ and generating samples x ∼ p_θ(x) of the Boltzmann distribution is difficult in practice, as these procedures depend on expensive Monte Carlo simulations that may struggle to mix effectively.", "These difficulties have driven a shift towards generative models that are easier to learn and sample from, such as directed latent variable models and autoregressive models (Goodfellow et al., 2016) . The", "protein folding problem provides a prime example of both the power of energy-based models at describing complex relationships in data as well as the challenge of generating samples from them. Decades", "of research in biochemistry and biophysics support an energy landscape theory of [Figure caption: An unrolled simulator as a model for protein structure. NEMO combines", "a neural energy function for coarse protein structure, a stochastic simulator based on Langevin dynamics with learned (amortized) initialization, and an atomic imputation network to build atomic coordinate output from sequence information. It is trained", "end-to-end by backpropagating through the unrolled folding simulation.] protein folding (Dill et al., 2017) , in which the folds that natural protein sequences adopt are those that minimize free energy.
Without the availability", "of external information such as coevolutionary information (Marks et al., 2012) or homologous structures (Martí-Renom et al., 2000) to constrain the energy function, however, contemporary simulations are challenged to generate globally favorable low-energy structures in available time. How can we get the representational benefits of energy-based models with the sampling efficiency of directed models? Here we explore a potential", "solution of directly training an unrolled simulator of an energy function as a model for data. By directly training the sampling", "process, we eschew the question 'when has the simulator converged' and instead demand that it produce a useful answer in a fixed amount of time. Leveraging this idea, we construct", "an end-to-end differentiable model of protein structure that is trained by backpropagation through folding ( FIG0 ). NEMO (Neural energy modeling and optimization", ") can learn at scale to generate 3D protein structures consisting of hundreds of points directly from sequence information. Our main contributions are: • Neural energy simulator", "model for protein structure that composes a deep energy function, unrolled Langevin dynamics, and an atomic imputation network for an end-to-end differentiable model of protein structure given sequence information • Efficient sampling algorithm that is based on a transform integrator for efficient sampling in transformed coordinate systems • Stabilization techniques for long roll-outs of simulators that can exhibit chaotic dynamics and, in turn, exploding gradients during backpropagation • Systematic analysis of combinatorial generalization with a new dataset of protein sequence and structure", "We described a model for protein structure given sequence information that combines a coarse-grained neural energy function and an unrolled simulation into an end-to-end differentiable model.", "To realize this idea at the scale of real proteins, we introduced an efficient simulator for Langevin dynamics in transformed coordinate systems and stabilization techniques for backpropagating through long simulator roll-outs.", "We find that the model is able to predict the structures of protein molecules with hundreds of atoms while capturing structural uncertainty, and that the model can structurally generalize to distant fold classifications more effectively than a strong baseline.", "(MPNN, bottom left) , and outputs energy function weights l as well as simulator hyperparameters (top center).", "Second, the simulator iteratively modifies the structure via Langevin dynamics based on the gradient of the energy landscape (Forces, bottom center).", "Third, the imputation network constructs predicted atomic coordinates X from the final simulator time step x^(T).", "During training, the true atomic coordinates X^(Data), predicted atomic coordinates X, simulator trajectory x^(1), ..., x^(T), and secondary structure predictions SS^(Model) feed into a composite loss function (Loss, bottom right), which is then optimized via backpropagation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17543859779834747, 0.16393442451953888, 0.2545454502105713, 0.3461538553237915, 0.24137930572032928, 0.04651162400841713, 0.1904761791229248, 0.11999999731779099, 0.13333332538604736, 0.17241378128528595, 0.11538460850715637, 0.11999999731779099, 0.4000000059604645, 0.178571417927742, 0.11764705181121826, 0.10958904027938843, 0.29999998211860657, 0.1599999964237213, 0.3478260934352875, 0.2083333283662796, 0.24390242993831635, 0.3913043439388275, 0.1538461446762085, 0.3214285671710968, 0.1538461446762085, 0.1463414579629898, 0.04999999329447746, 0.09677419066429138 ]
Byg3y3C9Km
true
[ "We use an unrolled simulator as an end-to-end differentiable model of protein structure and show it can (sometimes) hierarchically generalize to unseen fold topologies." ]
[ "Progress in understanding how individual animals learn requires high-throughput standardized methods for behavioral training and ways of adapting training.", "During the course of training with hundreds or thousands of trials, an animal may change its underlying strategy abruptly, and capturing these changes requires real-time inference of the animal’s latent decision-making strategy.", "To address this challenge, we have developed an integrated platform for automated animal training, and an iterative decision-inference model that is able to infer the momentary decision-making policy, and predict the animal’s choice on each trial with an accuracy of ~80\\%, even when the animal is performing poorly.", "We also combined decision predictions at single-trial resolution with automated pose estimation to assess movement trajectories.", "Analysis of these features revealed categories of movement trajectories that associate with decision confidence." ]
[ 0, 1, 0, 0, 0 ]
[ 0.12903225421905518, 0.24390242993831635, 0.1111111044883728, 0.06896550953388214, 0.07692307233810425 ]
Hylu4mYIIS
false
[ "Automated mice training for neuroscience with online iterative latent strategy inference for behavior prediction" ]
[ "Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data.", "Despite their widespread usage, understanding how RNNs solve complex problems remains elusive.", "Here, we characterize how popular RNN architectures perform document-level sentiment classification.", "Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations.", "We identify a simple mechanism, integration along an approximate line attractor, and find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs).", "Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.", "Recurrent neural networks (RNNs) are a popular tool for sequence modelling tasks.", "These architectures are thought to learn complex relationships in input sequences, and exploit this structure in a nonlinear fashion.", "RNNs are typically viewed as black boxes, despite considerable interest in better understanding how they function. Here, we focus on studying how recurrent networks solve document-level sentiment analysis, a simple but longstanding benchmark task for language modeling BID6 BID13 .", "We demonstrate that popular RNN architectures, despite having the capacity to implement high-dimensional and nonlinear computations, in practice converge to low-dimensional representations when trained against this task.", "Moreover, using analysis techniques from dynamical systems theory, we show that locally linear approximations to the nonlinear RNN dynamics are highly interpretable.", "In particular, they all involve approximate low-dimensional line attractor dynamics, a useful dynamical feature that can be implemented by linear dynamics and used to store an analog value BID10 .", "Furthermore, we show that this mechanism is surprisingly consistent across a range of RNN architectures.", "In this work we applied dynamical systems analysis to understand how RNNs solve sentiment analysis.", "We found a simple mechanism, integration along a line attractor, present in multiple architectures trained to solve the task.", "Overall, this work provides preliminary, but optimistic, evidence that different, highly intricate network models can converge to similar solutions that may be reduced and understood by human practitioners." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0, 0.060606054961681366, 0.1904761791229248, 0.260869562625885, 0.19512194395065308, 0.05882352590560913, 0.09999999403953552, 0.20338982343673706, 0.2916666567325592, 0.09090908616781235, 0.3199999928474426, 0.10810810327529907, 0.1111111044883728, 0.21052631735801697, 0.12244897335767746 ]
SJlKDVS2hV
true
[ "We analyze recurrent networks trained on sentiment classification, and find that they all exhibit approximate line attractor dynamics when solving this task." ]
[ "Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems.", "However, this interaction between fields is less developed in the study of motor control.", "In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control.", "We then use this platform to study motor activity across contexts by training a model to solve four complex tasks.", "Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals.", "We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics.", "These representations are reflected in the sequential activity and population dynamics of neural subpopulations.", "Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.", "Animals have nervous systems that allow them to coordinate their movement and perform a diverse set of complex behaviors.", "Mammals, in particular, are generalists in that they use the same general neural network to solve a wide variety of tasks.", "This flexibility in adapting behaviors towards many different goals far surpasses that of robots or artificial motor control systems.", "Hence, studies of the neural underpinnings of flexible behavior in mammals could yield important insights into the classes of algorithms capable of complex control across contexts and inspire algorithms for flexible control in artificial systems (Merel et al., 2019b) .", "Recent efforts at the interface of neuroscience and machine learning have sparked renewed interest in constructive approaches in which artificial models that solve tasks similar to those solved by animals serve as normative models of biological intelligence.", "Researchers have attempted to leverage these models to gain insights into the functional transformations implemented by neurobiological circuits, prominently in vision (Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014; Kar et al., 2019) , but also increasingly in other areas, including audition (Kell et al., 2018) and navigation (Banino et al., 2018; Cueva & Wei, 2018) .", "Efforts to construct models of biological locomotion systems have informed our understanding of the mechanisms and evolutionary history of bodies and behavior (Grillner et al., 2007; Ijspeert et al., 2007; Ramdya et al., 2017; Nyakatura et al., 2019) .", "Neural control approaches have also been applied to the study of reaching movements, though often in constrained behavioral paradigms (Lillicrap & Scott, 2013) , where supervised training is possible (Sussillo et al., 2015; Michaels et al., 2019) .", "While these approaches model parts of the interactions between animals and their environments (Chiel & Beer, 1997) , none attempt to capture the full complexity of embodied control, involving how an animal uses its senses, body and behaviors to solve challenges in a physical environment.", "Equal contribution.", "The development of models of embodied control is valuable to the field of motor neuroscience, which typically focuses on restricted behaviors in controlled experimental settings.", "It is also 
valuable for AI research, where flexible models of embodied control could be applicable to robotics.", "Here, we introduce a virtual model of a rodent to facilitate grounded investigation of embodied motor systems.", "The virtual rodent affords a new opportunity to directly compare principles of artificial control to biological data from real-world rodents, which are more experimentally accessible than humans.", "We draw inspiration from emerging deep reinforcement learning algorithms which now allow artificial agents to perform complex and adaptive movement in physical environments with sensory information that is increasingly similar to that available to animals (Peng et al., 2016; Heess et al., 2017; Merel et al., 2019a; .", "Similarly, our virtual rodent exists in a physical world, equipped with a set of actuators that must be coordinated for it to behave effectively.", "It also possesses a sensory system that allows it to use visual input from an egocentric camera located on its head and proprioceptive input to sense the configuration of its body in space.", "There are several questions one could answer using the virtual rodent platform.", "Here we focus on the problem of embodied control across multiple tasks.", "While some efforts have been made to analyze neural activity in reduced systems trained to solve multiple tasks (Song et al., 2017; Yang et al., 2019) , those studies lacked the important element of motor control in a physical environment.", "Our rodent platform presents the opportunity to study how representations of movements as well as sequences of movements change as a function of goals and task contexts.", "To address these questions, we trained our virtual rodent to solve four complex tasks within a physical environment, all requiring the coordinated control of its body.", "We then ask \"Can a neuroscientist understand a virtual rodent?\" -a more grounded take on the originally satirical \"Can a biologist fix a radio?\" (Lazebnik, 2002) or the more recent \"Could a neuroscientist understand a microprocessor?\" (Jonas & Kording, 2017) .", "We take a more sanguine view of the tremendous advances that have been made in computational neuroscience in the past decade, and posit that the supposed 'failure' of these approaches in synthetic systems is partly a misdirection.", "Analysis approaches in neuroscience were developed with the explicit purpose of understanding sensation and action in real brains, and often implicitly rooted in the types of architectures and processing that are thought relevant in biological control systems.", "With this philosophy, we use analysis approaches common in neuroscience to explore the types of representations and dynamics that the virtual rodent's neural network employs to coordinate multiple complex movements in the service of solving motor and cognitive tasks.", "We implemented a virtual rodent body (Figure 1 ) in MuJoCo (Todorov et al., 2012) , based on measurements of laboratory rats (see Appendix A.1).", "The rodent body has 38 controllable degrees of freedom.", "The tail, spine, and neck consist of multiple segments with joints, but are controlled by tendons that co-activate multiple joints (spatial tendons in MuJoCo).", "The rodent will be released as part of dm control/locomotion.", "The virtual rodent has access to proprioceptive information as well as \"raw\" egocentric RGB-camera (64×64 pixels) input from a head-mounted camera.", "The proprioceptive inputs include internal joint angles and angular velocities, the positions and 
velocities of the tendons that provide actuation, egocentric vectors from the root (pelvis) of the body to the positions of the head and paws, a vestibular-like upright orientation vector, touch or contact sensors in the paws, as well as egocentric acceleration, velocity, and 3D angular velocity of the root.", "For many computational neuroscientists and artificial intelligence researchers, an aim is to reverseengineer the nervous system at an appropriate level of abstraction.", "In the motor system, such an effort requires that we build embodied models of animals equipped with artificial nervous systems capable of controlling their synthetic bodies across a range of behavior.", "Here we introduced a virtual rodent capable of performing a variety of complex locomotor behaviors to solve multiple tasks using a single policy.", "We then used this virtual nervous system to study principles of the neural control of movement across contexts and described several commonalities between the neural activity of artificial control and previous descriptions of biological control.", "A key advantage of this approach relative to experimental approaches in neuroscience is that we can fully observe sensory inputs, neural activity, and behavior, facilitating more comprehensive testing of theories related to how behavior can be generated.", "Furthermore, we have complete knowledge of the connectivity, sources of variance, and training objectives of each component of the model, providing a rare ground truth to test the validity of our neural analyses.", "With these advantages in mind, we evaluated our analyses based on their capacity to both describe the algorithms and representations employed by the virtual rodent and recapitulate the known functional objectives underlying its creation without prior knowledge.", "To this end, our description of core and policy as respectively representing value and motor production is consistent with the model's actor-critic training objectives.", "But beyond validation, our analyses provide several insights into how these objectives are reached.", "RSA revealed that the cell activity of core and policy layers had greater similarity with behavioral and postural features than with short-timescale actuators.", "This suggests that the representation of behavior is useful in the moment-to-moment production of motor actions in artificial control, a model that has been previously proposed in biological action selection and motor control (Mink, 1996; Graziano, 2006) .", "These behavioral representations were more consistent across tasks in the policy than in the core, suggesting that task context and value activity in the core engaged task-specific behavioral strategies through the reuse of shared motor activity in the policy.", "Our analysis of neural dynamics suggests that reused motor activity patterns are often organized as sequences.", "Specifically, the activity of policy units uniformly tiles time in the production of several stereotyped behaviors like running, jumping, spinning, and the two-tap sequence.", "This finding is consistent with reports linking sequential neural activity to the production of stereotyped motor and task-oriented behavior in rodents (Berke et al., 2009; Rueda-Orozco & Robbe, 2015; Dhawale et al., 2019) , including during task delay periods (Akhlaghpour et al., 2016) , as well as in singing birds (Albert & Margoliash, 1996; Hahnloser et al., 2002) .", "Similarly, by relating rotational dynamics to the virtual rodent's behavior, we found that different 
behaviors were seemingly associated with distinct rotations in neural activity space that evolved at different timescales.", "These findings are consistent with a hierarchical control scheme in which policy layer dynamics that generate reused behaviors are activated and modulated by sensorimotor signals from the core.", "This work represents an early step toward the constructive modeling of embodied control for the purpose of understanding the neural mechanisms behind the generation of behavior.", "Incrementally and judiciously increasing the realism of the model's embodiment, behavioral repertoire, and neural architecture is a natural path for future research.", "Our virtual rodent possesses far fewer actuators and touch sensors than a real rodent, uses a vastly different sense of vision, and lacks integration with olfactory, auditory, and whisker-based sensation (see Zhuang et al., 2017) .", "While the virtual rodent is capable of locomotor behaviors, an increased diversity of tasks involving decision making, memory-based navigation, and working memory could give insight into \"cognitive\" behaviors of which rodents are capable.", "Furthermore, biologically-inspired design of neural architectures and training procedures should facilitate comparisons to real neural recordings and manipulations.", "We expect that this comparison will help isolate residual elements of animal behavior generation that are poorly captured by current models of motor control, and encourage the development of artificial neural architectures that can produce increasingly realistic behavior." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.1249999925494194, 0.14999999105930328, 0.21621620655059814, 0.21276594698429108, 0.21621620655059814, 0.1875, 0.1249999925494194, 0.2702702581882477, 0.2631579041481018, 0.05405404791235924, 0.11999999731779099, 0.19230768084526062, 0.0952380895614624, 0.1702127605676651, 0.11320754140615463, 0.23728813230991364, 0.1463414579629898, 0.1111111044883728, 0.1818181723356247, 0.13636362552642822, 0.1355932205915451, 0.2926829159259796, 0.25, 0.06666666269302368, 0.13333332538604736, 0.25925925374031067, 0.25, 0.3181818127632141, 0.1249999925494194, 0.2083333283662796, 0.1249999925494194, 0.15686273574829102, 0.13636362552642822, 0.07407406717538834, 0.09999999403953552, 0.0714285671710968, 0.10526315122842789, 0.16393442451953888, 0.20512819290161133, 0.12765957415103912, 0.21052631735801697, 0.2222222238779068, 0.11538460850715637, 0.2222222238779068, 0.11538460850715637, 0.1463414579629898, 0, 0.1538461446762085, 0.16326530277729034, 0.12765957415103912, 0.05882352590560913, 0.1538461446762085, 0.12121211737394333, 0.08695651590824127, 0.13333332538604736, 0.10256409645080566, 0.21052631735801697, 0.15686273574829102, 0.1249999925494194, 0.1764705777168274, 0.15686273574829102 ]
SyxrxR4KPS
true
[ "We built a physical simulation of a rodent, trained it to solve a set of tasks, and analyzed the resulting networks." ]
[ "We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution.", "This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients.", "Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.", "Generative modeling is a major sub-field of Machine Learning that studies the problem of how to learn models that generate images, audio, video, text or other data.", "Applications of generative models include image compression, generating speech from text, planning in reinforcement learning, semi-supervised and unsupervised representation learning, and many others.", "Since generative models can be trained on unlabeled data, which is almost endlessly available, they have enormous potential in the development of artificial intelligence.The central problem in generative modeling is how to train a generative model such that the distribution of its generated data will match the distribution of the training data.", "Generative adversarial nets (GANs) represent an advance in solving this problem, using a neural network discriminator or critic to distinguish between generated data and training data.", "The critic defines a distance between the model distribution and the data distribution which the generative model can optimize to produce data that more closely resembles the training data.A closely related approach to measuring the distance between the distributions of generated data and training data is provided by optimal transport theory.", "By framing the problem as optimally transporting one set of data points to another, it represents an alternative method of specifying a metric over probability distributions and provides another objective for training generative models.", "The dual problem of optimal transport is closely related to GANs, as discussed in the next section.", "However, the primal formulation of optimal transport has the advantage that it allows for closed form solutions and can thus more easily be used to define tractable training objectives that can be evaluated in practice without making approximations.", "A complication in using primal form optimal transport is that it may give biased gradients when used with mini-batches (see BID1 and may therefore be inconsistent as a technique for statistical estimation.In this paper we present OT-GAN, a variant of generative adversarial nets incorporating primal form optimal transport into its critic.", "We derive and justify our model by defining a new metric over probability distributions, which we call Mini-batch Energy Distance, combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space.", "This combination results in a highly discriminative metric with unbiased mini-batch gradients.In Section 2 we provide the preliminaries required to understand our work, and we put our contribution into context by discussing the relevant literature.", "Section 3 presents our main theoretical contribution: Minibatch energy distance.", "We apply this new distance metric to the problem of learning generative models in Section 4, and show 
state-of-the-art results in Section 5.", "Finally, Section 6 concludes by discussing the strengths and weaknesses of the proposed method, as well as directions for future work.", "We have presented OT-GAN, a new variant of GANs where the generator is trained to minimize a novel distance metric over probability distributions.", "This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients.", "OT-GAN was shown to be uniquely stable when trained with large mini-batches and to achieve state-of-the-art results on several common benchmarks.One downside of OT-GAN, as currently proposed, is that it requires large amounts of computation and memory.", "We achieve the best results when using very large mini-batches, which increases the time required for each update of the parameters.", "All experiments in this paper, except for the mixture of Gaussians toy example, were performed using 8 GPUs and trained for several days.", "In future work we hope to make the method more computationally efficient, as well as to scale up our approach to multi-machine training to enable generation of even more challenging and high resolution image data sets.A unique property of OT-GAN is that the mini-batch energy distance remains a valid training objective even when we stop training the critic.", "Our implementation of OT-GAN updates the generative model more often than the critic, where GANs typically do this the other way around (see e.g. BID11 .", "As a result we learn a relatively stable transport cost function c(x,", "y), describing how (dis)similar two images are, as well as an image embedding function v η", "(x) capturing the geometry of the training data.", "Preliminary experiments suggest these learned functions can be used successfully for unsupervised learning and other applications, which we plan to investigate further in future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0.5416666865348816, 0.045454539358615875, 0.045454539358615875, 0.09999999403953552, 0.06557376682758331, 0.09090908616781235, 0.14814814925193787, 0.07692307233810425, 0.2222222238779068, 0.22641508281230927, 0.2153846174478531, 0.555555522441864, 0.07692307233810425, 0.13793103396892548, 0.14999999105930328, 0.052631575614213943, 0.1463414579629898, 0.5416666865348816, 0.07547169178724289, 0.052631575614213943, 0.09756097197532654, 0.09090908616781235, 0.09302324801683426, 0.06666666269302368, 0.05882352590560913, 0.07692307233810425, 0.09090908616781235 ]
rkQkBnJAb
true
[ "An extension of GANs combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space." ]
[ "We build a virtual agent for learning language in a 2D maze-like world.", "The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards.", "It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering.", "It learns simultaneously the visual representations of the world, the language, and the action control.", "By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences.", "The new words are transferred from the answers of language prediction.", "Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words.", "The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences.", "In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix.", "Some empiricists argue that language may be learned based on its usage (Tomasello, 2003) .", "Skinner (1957) suggests that the successful use of a word reinforces the understanding of its meaning as well as the probability of it being used again in the future.", "BID3 emphasizes the role of social interaction in helping a child develop the language, and posits the importance of the feedback and reinforcement from the parents during the learning process.", "This paper takes a positive view of the above behaviorism and tries to explore some of the ideas by instantiating them in a 2D virtual world where interactive language acquisition happens.", "This interactive setting contrasts with a common learning setting in that language is learned from dynamic interactions with environments instead of from static labeled data.Language acquisition can go beyond mapping language as input patterns to output labels for merely obtaining high rewards or accomplishing tasks.", "We take a step further to require the language to be grounded BID13 .", "Specifically, we consult the paradigm of procedural semantics BID24 which posits that words, as abstract procedures, should be able to pick out referents.", "We will attempt to explicitly link words to environment concepts instead of treating the whole model as a black box.", "Such a capability also implies that, depending on the interactions with the world, words would have particular meanings in a particular context and some content words in the usual sense might not even have meanings in our case.", "As a result, the goal of this paper is to acquire \"in-context\" word meanings regardless of their suitability in all scenarios.On the other hand, it has been argued that a child's exposure to adult language provides inadequate evidence for language learning BID7 , but some induction mechanism should exist to bridge this gap (Landauer & Dumais, 1997) .", "This property is critical for any AI system to learn an infinite number of sentences from a finite amount of training data.", "This type of generalization problem is specially addressed in our problem setting.", "After training, we want the agent to generalize to interpret zero-shot sentences of two types: Testing ZS2 sentences contain a new word (\"watermelon\") that never 
appears in any training sentence but is learned from a training answer.", "This figure is only a conceptual illustration of language generalization; in practice it might take many training sessions before the agent can generalize.", "(Due to space limitations, the maps are only partially shown.)", "1) interpolation, new combinations of previously seen words for the same use case, or", "2) extrapolation, new words transferred from other use cases and models.In the following, we will call the first type ZS1 sentences and the second type ZS2 sentences.", "Note that so far the zero-shot problems, addressed by most recent work BID14 BID4 of interactive language learning, belong to the category of ZS1.", "In contrast, a reliable interpretation of ZS2 sentences, which is essentially a transfer learning (Pan & Yang, 2010) problem, will be a major contribution of this work.We created a 2D maze-like world called XWORLD FIG0 ), as a testbed for interactive grounded language acquisition and generalization.", "1 In this world, a virtual agent has two language use cases: navigation (NAV) and question answering (QA).", "For NAV, the agent needs to navigate to correct places indicated by language commands from a virtual teacher.", "For QA, the agent must correctly generate single-word answers to the teacher's questions.", "NAV tests language comprehension while QA additionally tests language prediction.", "They happen simultaneously: When the agent is navigating, the teacher might ask questions regarding its current interaction with the environment.", "Once the agent reaches the target or the time is up, the current session ends and a new one is randomly generated according to our configuration (Appendix B).", "The ZS2 sentences defined in our setting require word meanings to be transferred from single-word answers to sentences, or more precisely, from language prediction to grounding.", "This is achieved by establishing an explicit link between grounding and prediction via a common concept detection function, which constitutes the major novelty of our model.", "With this transferring ability, the agent is able to comprehend a question containing a new object learned from an answer, without retraining the QA pipeline.", "It is also able to navigate to a freshly taught object without retraining the NAV pipeline.It is worthwhile emphasizing that this seemingly \"simple\" world in fact poses great challenges for language acquisition and generalization, because:The state space is huge.", "Even for a 7ˆ7 map with 15 wall blocks and 5 objects selected from 119 distinct classes, there are already octillions (10 27 ) of possible different configurations, not to mention the intra-class variance of object instances (see FIG0 in the appendix).", "For two configurations that only differ in one block, their successful navigation paths could be completely different.", "This requires an accurate perception of the environment.", "Moreover, the configuration constantly changes from session to session, and from training to testing.", "In particular, the target changes across sessions in both location and appearance.The goal space implied by the language for navigation is huge.", "For a vocabulary containing only 185 words, the total number of distinct commands that can be said by the teacher conforming to our defined grammar is already over half a million.", "Two commands that differ by only one word could imply completely different goals.", "This requires an accurate grounding of language.", "The environment demands a 
strong language generalization ability from the agent.", "The agent has to learn to interpret zero-shot sentences that might be as long as 13 words.", "It has to \"plug\" the meaning of a new word or word combination into a familiar sentential context while trying to still make sense of the unfamiliar whole.", "The recent work BID14 BID4 addresses ZS1 (for short sentences with several words) but not ZS2 sentences, which is a key difference between our learning problem and theirs. We describe an end-to-end model for the agent to interactively acquire language from scratch and generalize to unfamiliar sentences.", "Here \"scratch\" means that the model does not hold any assumption of the language semantics or syntax.", "Each sentence is simply a sequence of tokens with each token being equally meaningless in the beginning of learning.", "This is unlike some early pioneering systems (e.g., SHRDLU BID23 and ABIGAIL (Siskind, 1994) ) that hard-coded the syntax or semantics to link language to a simulated world, an approach that presents scalability issues.", "There are two aspects of the interaction: one is with the teacher (i.e., language and rewards) and the other is with the environment (e.g., stepping on objects or hitting walls).", "The model takes as input RGB images, sentences, and rewards.", "It learns simultaneously the visual representations of the world, the language, and the action control.", "We evaluate our model on randomly generated XWORLD maps with random agent positions, on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words.", "Detailed analysis (Appendix A) of the trained model shows that the language is grounded in such a way that the words are capable of picking out referents in the environment.", "We specifically test the generalization ability of the agent for handling zero-shot sentences.", "The average NAV success rates are 84.3% for ZS1 and 85.2% for ZS2 when the zero-shot portion is half, comparable to the rate of 90.5% in a normal language setting.", "The average QA accuracies are 97.8% for ZS1 and 97.7% for ZS2 when the zero-shot portion is half, almost as good as the accuracy of 99.7% in a normal language setting.", "We have presented an end-to-end model of a virtual agent for acquiring language from a 2D world in an interactive manner, through the visual and linguistic perception channels.", "After learning, the agent is able to both interpolate and extrapolate to interpret zero-shot sentences that contain new word combinations or even new words.", "This generalization ability is supported by an explicit grounding strategy that disentangles the language grounding from the subsequent language-independent computations.", "It also depends on sharing a detection function between the language grounding and prediction as the core computation.", "This function enables the word meanings to transfer from the prediction to the grounding during the test time.", "Promising language acquisition and generalization results have been obtained in the 2D XWORLD.", "We hope that this work can shed some light on acquiring and generalizing language in a similar way in a 3D world. Thomas Landauer and Susan Dumais.", "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge.", "Psychological Review, 104, 1997 .", "[Figure legend: the lists of color, object, and spatial-direction words in the XWORLD vocabulary are omitted here.] Channel mask x_feat.", "We inspect the channel mask x_feat which allows the model to select certain feature maps from a feature cube h and predict an answer to the question s.", "We randomly sample 10k QA questions and compute x_feat for each of them using the grounding module L. We divide the 10k questions into 134 groups, where each group corresponds to a different answer.", "Then we compute a Euclidean distance matrix D where entry D[i, j] is the average distance between the x_feat of a question from the ith group and that from the jth group FIG6 ." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6153846383094788, 0.25, 0.1249999925494194, 0.07692307233810425, 0.1702127605676651, 0.07999999821186066, 0.1428571343421936, 0.07692307233810425, 0.07692307233810425, 0.0714285671710968, 0.10810810327529907, 0.1621621549129486, 0.380952388048172, 0.178571417927742, 0.23076923191547394, 0, 0.060606054961681366, 0.1395348757505417, 0.1230769231915474, 0.17142856121063232, 0.1599999964237213, 0.12765957415103912, 0.21621620655059814, 0, 0.0714285671710968, 0.05405404791235924, 0.0555555522441864, 0.3214285671710968, 0.3125, 0.25806450843811035, 0.07692307233810425, 0.09090908616781235, 0.0624999962747097, 0.15789473056793213, 0.10810810327529907, 0.14999999105930328, 0.1621621549129486, 0.2800000011920929, 0.14814814925193787, 0.06451612710952759, 0.09090908616781235, 0.07692307233810425, 0.2222222238779068, 0.04651162400841713, 0, 0.1904761791229248, 0.3199999928474426, 0.06896550953388214, 0.05405404791235924, 0.20689654350280762, 0.06666666269302368, 0.1249999925494194, 0.12765957415103912, 0.09756097197532654, 0.0833333283662796, 0.07692307233810425, 0.1249999925494194, 0.20512820780277252, 0.23076923191547394, 0.2222222238779068, 0.23255813121795654, 0.5, 0.1111111044883728, 0.1875, 0.19354838132858276, 0, 0.4444444477558136, 0.2631579041481018, 0.06666666269302368, 0, 0, 0.1538461446762085, 0.13636362552642822, 0.1395348757505417 ]
H1UOm4gA-
true
[ "Training an agent in a 2D virtual world for grounded language acquisition and generalization." ]
[ "Reinforcement learning algorithms, though successful, tend to over-fit to training environments, thereby hampering their application to the real-world.", "This paper proposes $\\text{W}\\text{R}^{2}\\text{L}$ -- a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks.", "Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver.", "Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. \n", "We empirically demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments", "Reinforcement learning (RL) has become a standard tool for solving decision-making problems with feedback, and though significant progress has been made, algorithms often over-fit to training environments and fail to generalise across even slight variations of transition dynamics (Packer et al., 2018; Zhao et al., 2019) .", "Robustness to changes in transition dynamics is a crucial component for adaptive and safe RL in real-world environments.", "Motivated by real-world applications, recent literature has focused on the above problems, proposing a plethora of algorithms for robust decisionmaking (Morimoto & Doya, 2005; Pinto et al., 2017; Tessler et al., 2019) .", "Most of these techniques borrow from game theory to analyse, typically in a discrete state and actions spaces, worst-case deviations of agents' policies and/or environments, see Sargent & Hansen (2001) ; Nilim & El Ghaoui (2005) ; Iyengar (2005); Namkoong & Duchi (2016) and references therein.", "These methods have also been extended to linear function approximators (Chow et al., 2015) , and deep neural networks (Peng et al., 2017) showing (modest) improvements in performance gain across a variety of disturbances, e.g., action uncertainties, or dynamical model variations.", "In this paper, we propose a generic framework for robust reinforcement learning that can cope with both discrete and continuous state and actions spaces.", "Our algorithm, termed Wasserstein Robust Reinforcement Learning (WR 2 L), aims to find the best policy, where any given policy is judged by the worst-case dynamics amongst all candidate dynamics in a certain set.", "This set is essentially the average Wasserstein ball around a reference dynamics P 0 .", "The constraints makes the problem well-defined, as searching over arbitrary dynamics can only result in non-performing system.", "The measure of performance is the standard RL objective, the expected return.", "Both the policy and the dynamics are parameterised; the policy parameters θ k may be the weights of a deep neural network, and the dynamics parameters φ j the parameters of a simulator or differential equation solver.", "The algorithm performs estimated descent steps in φ space and -after (almost) convergence -performs an update of policy parameters, i.e., in θ space.", "Since φ j may be high-dimensional, we adapt a zero'th order sampling method based extending Salimans et al. 
(2017) to make estimations of gradients, and in order to define the constraint set which φ j is bounded by, we generalise the technique to estimate Hessians (Proposition 2).", "We emphasise that although access to a simulator with parameterisable dynamics is required, the actual reference dynamics P 0 need not be known explicitly nor learnt by our algorithm.", "Put another way, we are in the \"RL setting\", not the \"MDP setting\" where the transition probability matrix is known a priori.", "The difference is made obvious, for example, in the fact that we cannot perform dynamic programming, and the determination of a particular probability transition can only be estimated from sampling, not retrieved explicitly.", "Hence, our algorithm is not model-based in the traditional sense of learning a model to perform planning.", "We believe our contribution is useful and novel for two main reasons.", "Firstly, our framing of the robust learning problem is in terms of dynamics uncertainty sets defined by Wasserstein distance.", "Whilst we are not the first to introduce the Wasserstein distance into the context of MDPs (see, e.g., Yang (2017) or Lecarpentier & Rachelson (2019) ), we believe our formulation is amongst the first suitable for application to the demanding application-space we desire, that being, high-dimensional, continuous state and action spaces.", "Secondly, we believe our solution approach is both novel and effective (as evidenced by experiments below, see Section 5), and does not place a great demand on model or domain knowledge, merely access to a simulator or differentiable equation solver that allows for the parameterisation of dynamics.", "Furthermore, it is not computationally demanding, in particular, because it does not attempt to build a model of the dynamics, and operations involving matrices are efficiently executable using the Jacobian-vector product facility of automatic differentiation engines.", "In this paper, we proposed a robust reinforcement learning algorithm capable of outperforming others in terms of test returns on unseen dynamics.", "The algorithm makes use of Wasserstein constraints for policies generalising across varying domains, and considers a zero-order method for scalable solutions.", "Empirically, we demonstrated superior performance against state-of-the-art from both standard and robust reinforcement learning on low and high-dimensional MuJuCo environments.", "In future work, we aim to consider robustness in terms of other components of MDPs, e.g., state representations, reward functions, and others.", "Furthermore, we will implement WR 2 L on real hardware, considering sim-to-real experiments.", "-Sub-Case III when indices are all distinct: We have", "Diagonal Elements Conclusion: Using the above results we conclude that", "• Off-Diagonal Elements (i.e., when i = j): The above analysis is now repeated for computing the expectation of the off-diagonal elements of matrix B. Similarly, this can also be split into three sub-cases depending on indices:", "-Sub-Case III when indices are all distinct: We have", "Off-Diagonal Elements Conclusion: Using the above results and due to the symmetric properties of H, we conclude that", "Finally, analysing c, one can realise that", "Substituting the above conclusions back in the original approximation in Equation 11 , and using the linearity of the expectation we can easily achieve the statement of the proposition." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07407406717538834, 0.13333332538604736, 0.06666666269302368, 0.20512820780277252, 0.14814814925193787, 0.07547169178724289, 0.3571428656578064, 0.0476190447807312, 0.07692307233810425, 0.07843136787414551, 0.11764705181121826, 0.1395348757505417, 0.07999999821186066, 0.1428571343421936, 0.09090908616781235, 0.1111111044883728, 0.11764705181121826, 0.11764705181121826, 0.25641024112701416, 0.06451612710952759, 0.1395348757505417, 0.2142857164144516, 0, 0.20689654350280762, 0.0714285671710968, 0.1090909093618393, 0.09302325546741486, 0.25, 0.06451612710952759, 0.06666666269302368, 0.11764705181121826, 0, 0, 0.0952380895614624, 0.0416666641831398, 0, 0.1428571343421936, 0.1111111044883728, 0.060606054961681366 ]
HyxwZRNtDr
true
[ "An RL algorithm that learns to be robust to changes in dynamics" ]
[ "Partially observable Markov decision processes (POMDPs) are a natural model for scenarios where one has to deal with incomplete knowledge and random events.\n", "Applications include, but are not limited to, robotics and motion planning.\n", "However, many relevant properties of POMDPs are either undecidable or very expensive to compute in terms of both runtime and memory consumption.\n", "In our work, we develop a game-based abstraction method that is able to deliver safe bounds and tight\n approximations for important sub-classes of such properties.\n", "We discuss the theoretical implications and showcase the applicability of our results on a broad spectrum of benchmarks.\n", "We developed a game-based abstraction technique to synthesize strategies for a class of POMDPs.", "This class encompasses typical grid-based motion planning problems under restricted observability of the environment.", "For these scenarios, we efficiently compute strategies that allow the agent to maneuver the grid in order to reach a given goal state while at the same time avoiding collisions with faster moving obstacles.", "Experiments show that our approach can handle state spaces up to three orders of magnitude larger than general-purpose state-of-the-art POMDP solvers in less time, while at the same time using fewer states to represent the same grid sizes." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.15789473056793213, 0, 0.1666666567325592, 0.25, 0.06451612710952759, 0.4444444477558136, 0.0714285671710968, 0.13333332538604736, 0.04081632196903229 ]
rJeKVt-iaV
true
[ "This paper provides a game-based abstraction scheme to compute provably sound policies for POMDPs." ]
[ "In this paper we approach two relevant deep learning topics:", "i) tackling of graph structured input data and", "ii) a better understanding and analysis of deep networks and related learning algorithms.", "With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes).", "Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures.", "We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task.", "The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit.", "Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum.", "We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs.", "We further support our claims with training experiments and numerical analysis of the cost function on networks with up to $128$ layers.", "Deep convolutional networks have achieved great success in the last years by presenting human and super-human performance on many machine learning problems such as image classification, speech recognition and natural language processing ).", "Importantly, the data in these common tasks presents particular statistical properties and it normally rests on regular lattices (e.g. images) in Euclidean space BID3 ).", "Recently, more attention has been given to other highly relevant problems in which the input data belongs to non-Euclidean spaces.", "Such kind of data may present a graph structure when it represents, for instance, social networks, knowledge bases, brain activity, protein-interaction, 3D shapes and human body poses.", "Although some works found in the literature propose methods and network architectures specifically tailored to tackle graph-like input data BID3 ; BID4 ; BID15 ; BID22 ; BID23 b) ), in comparison with other topics in the field this one is still not vastly investigated.Another recent focus of interest of the machine learning community is in the detailed analysis of the functioning of deep networks and related algorithms BID8 ; BID12 ).", "The minimization of high dimensional non-convex loss function by means of stochastic gradient descent techniques is theoretically unlikely, however the successful practical achievements suggest the contrary.", "The hypothesis that very deep neural nets do not suffer from local minima BID9 ) is not completely proven BID36 ).", "The already classical adversarial examples BID27 ), as well as new doubts about supposedly well understood questions, such as generalization BID43 ), bring even more relevance to a better understanding of the methods.In the present work we aim to advance simultaneously in the two directions described above.", "To accomplish this goal we focus on the topological classification of graphs BID29 ; BID30 ).", "However, we restrict our attention to a particular subset of planar graphs constrained by a regular lattice.", "The reason for that is threefold:", "i) doing so we still touch upon the issue of real world graph structured data, such as the 2D pose of a human body BID1 ; BID16 ) or road networks BID25 ; BID39 );", "ii) we maintain the data in Euclidean space, allowing its processing with standard CNN architectures;", "iii) 
this particular class of graphs has various non-trivial statistical properties derived from percolation theory and conformal field theories BID5 ; BID20 ; BID34 ), allowing us to analytically compute various properties of a deep CNN proposed by the authors to tackle the problem.Specifically, we introduce Maze-testing, a specialized version of the reachability problem in graphs BID42 ).", "In Maze-testing, random mazes, defined as L by L binary images, are classified as solvable or unsolvable according to the existence of a path between given starting and ending points in the maze (vertices in the planar graph).", "Other recent works approach maze problems without framing them as graphs BID37 ; BID28 ; BID33 ).", "However, to do so with mazes (and maps) is a common practice in graph theory BID2 ; BID32 ) and in applied areas, such as robotics BID11 ; BID7 ).", "Our Mazetesting problem enjoys a high degree of analytical tractability, thereby allowing us to gain important theoretical insights regarding the learning process.", "We propose a deep network to tackle the problem that consists of O(L 2 ) layers, alternating convolutional, sigmoid, and skip operations, followed at the end by a logistic regression function.", "We prove that such a network can express an exact solution to this problem which we call the optimal-BFS (breadth-first search) minimum.", "We derive the shape of the cost function around this minimum.", "Quite surprisingly, we find that gradients around the minimum do not scale with L. This peculiar effect is attributed to rare events in the data.In addition, we shed light on a type of sub-optimal local minima in the cost function which we dub \"neglect minima\".", "Such minima occur when the network discards some important features of the data samples, and instead develops a sub-optimal strategy based on the remaining features.", "Minima similar in nature to the above optimal-BFS and neglect minima are shown to occur in numerical training and dominate the training dynamics.", "Despite the fact the Maze-testing is a toy problem, we believe that its fundamental properties can be observed in real problems, as is frequently the case in natural phenomena BID31 ), making the presented analytical analysis of broader relevance.Additionally important, our framework also relates to neural network architectures with augmented memory, such as Neural Turing Machines BID13 ) and memory networks BID40 ; BID35 ).", "The hot-spot images FIG9 , used to track the state of our graph search algorithm, may be seen as an external memory.", "Therefore, to observe how activations spread from the starting to the ending point in the hot-spot images, and to analyze errors and the landscape of the cost function (Sec. 5) , is analogous to analyze how errors occur in the memory of the aforementioned architectures.", "This connection gets even stronger when such memory architectures are employed over graph structured data, to perform task such as natural language reasoning and graph search ; BID17 ; BID14 ).", "In these cases, it can be considered that their memories in fact encode graphs, as it happens in our framework.", "Thus, the present analysis may eventually help towards a better understanding of the cost functions of memory architectures, potentially leading to improvements of their weight initialization and optimization algorithms thereby facilitating training BID26 ).The", "paper is organized as follows: Sec. 2 describes in detail the Maze-testing problem. In", "Sec. 
3 we suggest an appropriate architecture for the problem. In", "Sec. 4 we describe an optimal set of weights for the proposed architecture and prove that it solves the problem exactly. In", "Sec. 5 we report on training experiments and describe the observed training phenomena. In", "Sec. 6 we provide an analytical understanding of the observed training phenomena. Finally", ", we conclude with a discussion and an outlook.", "Despite their black-box reputation, in this work we were able to shed some light on how a particular deep CNN architecture learns to classify topological properties of graph structured data.", "Instead of focusing our attention on general graphs, which would correspond to data in non-Euclidean spaces, we restricted ourselves to planar graphs over regular lattices, which are still capable of modelling real world problems while being suitable to CNN architectures.We described a toy problem of this type (Maze-testing) and showed that a simple CNN architecture can express an exact solution to this problem.", "Our main contribution was an asymptotic analysis of the cost function landscape near two types of minima which the network typically settles into: BFS type minima which effectively executes a breadth-first search algorithm and poorly performing minima in which important features of the input are neglected.Quite surprisingly, we found that near the BFS type minima gradients do not scale with L, the maze size.", "This implies that global optimization approaches can find such minima in an average time that does not increase with L. Such very moderate gradients are the result of an essential singularity in the cost function around the exact solution.", "This singularity in turn arises from rare statistical events in the data which act as early precursors to failure of the neural network thereby preventing a sharp and abrupt increase in the cost function.In addition we identified an obstacle to learning whose severity scales with L which we called neglect minima.", "These are poorly performing minima in which the network neglects some important features relevant for predicting the label.", "We conjectured that these occur since the gradual incorporation of these important features in the prediction requires some period in the training process in which predictions become more noisy.", "A \"wall of noise\" then keeps the network in a poorly performing state.It would be interesting to study how well the results and lessons learned here generalize to other tasks which require very deep architectures.", "These include the importance of rare-events, the essential singularities in the cost function, the localized nature of malfunctions (bugs), and neglect minima stabilized by walls of noise.These conjectures potentially could be tested analytically, using other toy models as well as on real world problems, such as basic graph algorithms (e.g. shortest-path) BID14 ); textual reasoning on the bAbI dataset ), which can be modelled as a graph; and primitive operations in \"memory\" architectures (e.g. 
copy and sorting) BID13 ).", "More specifically the importance of rare-events can be analyzed by studying the statistics of errors on the dataset as it is perturbed away from a numerically obtained minimum.", "Technically one should test whether the perturbation induces an typical small deviation of the prediction on most samples in the dataset or rather a strong deviation on just a few samples.", "Bugs can be similarly identified by comparing the activations of the network on the numerically obtained minimum and on some small perturbation to that minimum while again looking at typical versus extreme deviations.", "Such an analysis can potentially lead to safer and more robust designs were the network fails typically and mildly rather than rarely and strongly.Turning to partial neglect minima these can be identified provided one has some prior knowledge on the relevant features in the dataset.", "The correlations or mutual information between these features and the activations at the final layer can then be studied to detect any sign of neglect.", "If problems involving neglect are discovered it may be beneficial to add extra terms to the cost function which encourage more mutual information between these neglected features and the labels thereby overcoming the noise barrier and pushing the training dynamics away from such neglect minimum." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.060606054961681366, 0.12903225421905518, 0.22857142984867096, 0.2926829159259796, 0.1702127605676651, 0.1395348757505417, 0.17391303181648254, 0.1860465109348297, 0.22727271914482117, 0.27272728085517883, 0.145454540848732, 0.1249999925494194, 0.1428571343421936, 0.11999999731779099, 0.14999999105930328, 0.08510638028383255, 0.0952380895614624, 0.15625, 0.1538461446762085, 0.20512819290161133, 0, 0.1818181723356247, 0.10526315122842789, 0.19718308746814728, 0.2142857164144516, 0, 0.1599999964237213, 0.2222222238779068, 0.19230768084526062, 0.17777776718139648, 0.12121211737394333, 0.1875, 0.2222222238779068, 0.24390242993831635, 0.2195121943950653, 0.2222222238779068, 0.15094339847564697, 0.07843136787414551, 0.04878048226237297, 0.1818181723356247, 0.10810810327529907, 0.11764705181121826, 0.13636362552642822, 0.1666666567325592, 0.277777761220932, 0.1249999925494194, 0.26923075318336487, 0.20779220759868622, 0.13513512909412384, 0.14035087823867798, 0.20588235557079315, 0.09999999403953552, 0.1702127605676651, 0.24561403691768646, 0.1818181723356247, 0.2083333283662796, 0.2916666567325592, 0.1538461446762085, 0.1904761791229248, 0.12765957415103912, 0.12903225421905518 ]
BJGWO9k0Z
true
[ "A toy dataset based on critical percolation in a planar graph provides an analytical window to the training dynamics of deep neural networks " ]
[ "While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training.", "For instance, a generative adversarial network (GAN) exclusively trained to transform images of cars from light to dark might not have the same effect on images of horses.", "This is because neural networks are good at generation within the manifold of the data that they are trained on.", "However, generating new samples outside of the manifold or extrapolating \"out-of-sample\" is a much harder problem that has been less well studied.", "To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space.", "We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons.", "By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations.", "We showcase our technique on image domain/style transfer and two biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.", "Many experiments in biology are conducted to study the effect of a treatment or a condition on a set of samples.", "For example, the samples can be groups of cells and the treatment can be the administration of a drug.", "However, experiments and clinical trials are often performed on only a small subset of samples from the entire population.", "Usually, it is assumed that the effects generalize in a context-independent manner without mathematically attempting to model the effect and potential interactions with the context.", "However, mathematically modeling the effect and potential interactions with background information would give us a powerful tool that would allow us to assess how the treatment would generalize beyond the samples measured.We propose a neural network-based method for learning a general edit function corresponding to treatment in the biological setting.", "While neural networks offer the power and flexibility to learn complicated ways of transforming data from one distribution to another, they are often overfit to the training dataset in the sense that they only learn how to map one specific data manifold to another, and not a general edit function.", "Indeed, popular neural network architectures like GANs pose the problem as one of learning to generate the post-treatment data distributions from pre-treatment data distributions.", "Instead, we reframe the problem as that of learning an edit function between the preand post-treatment versions of the data, that could be applied to other datasets.We propose to learn such an edit, which we term neuron editing, in the latent space of an autoencoder neural network with non-linear activations.", "First we train an autoencoder on the entire population of data which we are interested in transforming.", "This includes all of the pre-treatment samples and the post-treatment samples from the subset of the data on which we have post-treatment measurements.The internal layers of this autoencoder represent the data with all existing variation decomposed into abstract features (neurons) that allow the network to reconstruct the data 
accurately BID28 BID4 BID17 BID24 .", "Neuron editing involves extracting differences between the observed pre-and post-treatment activation distributions for neurons in this layer and then applying them to pre-treatment data from the rest of the population to synthetically generate post-treatment data.", "Thus performing the edit node-by-node in this space actually encodes complex multivariate edits in the ambient space, performed on denoised and meaningful features, owing to the fact that these features themselves are complex non-linear combinations of the input features.While neuron editing is a general technique that could be applied to the latent space of any neural network, even GANs themselves, we instead focus exclusively on the autoencoder in this work to leverage three of its key advantages.", "First, we seek to model complex distribution-to-distribution transformations between large samples in high-dimensional space.", "While this can be generally intractable due to difficulty in estimating joint probability distributions, research has provided evidence that working in a lower-dimensional manifold facilitates learning transformations that would otherwise be infeasible in the original ambient space BID32 BID21 BID29 .", "The non-linear dimensionality reduction performed by autoencoders finds intrinsic data dimensions that esentially straighten the curvature of data in the ambient space.", "Thus complex effects can become simpler shifts in distribution that can be computationally efficient to apply.Second, by performing the edit to the neural network internal layer, we allow for the modeling of some context dependence.", "Some neurons of the internal layer have a drastic change between preand post-treatment versions of the experimental subpopulation, while other neurons such as those that encode background context information not directly associated with treatment have less change in the embedding layer.", "The latter neurons are less heavily edited but still influence the output jointly with edited neurons due to their integration in the decoding layers.", "These edited neurons interact with the data-context-encoding neurons in complex ways that may be more predictive of treatment than the experimental norm of simply assuming widespread generalization of results context-free.Third, editing in a low-dimensional internal layer allows us to edit on a denoised version of the data.", "Because of the reconstruction penalty, more significant dimensions are retained through the bottleneck dimensions of an autoencoder while noise dimensions are discarded.", "Thus, by editing in the hidden layer, we avoid editing noise and instead edit significant dimensions of the data.We note that neuron editing makes the assumption that the internal neurons have semantic consistency across the data, i.e., the same neurons encode the same types of features for every data manifold.", "We demonstrate that this holds in our setting because the autoencoder learns a joint manifold of all of the given data including pre-and post-treatment samples of the experimental subpopulation and pre-treatment samples from the rest of the population.", "Recent results show that neural networks prefer to learn patterns over memorizing inputs even when they have the capacity to do so BID31 .We", "demonstrate that neuron editing extrapolates better than generative models on two important criteria. 
First", ", as to the original goal, the predicted change on extrapolated data more closely resembles the predicted change on interpolated data. Second", ", the editing process produces more complex variation, since it simply preserves the existing variation in the data rather than needing a generator to learn to create it. We compare", "the predictions from neuron editing to those of several generationbased approaches: a traditional GAN, a GAN implemented with residual blocks (ResnetGAN) to show generating residuals is not the same as editing BID26 , and a CycleGAN BID33 . While in other", "applications, like natural images, GANs have shown an impressive ability to generate plausible individual points, we illustrate that they struggle with these two criteria. We also motivate", "why neuron editing is performed on inference by comparing against a regularized autoencoder that performs the internal layer transformations during training, but the decoder learns to undo the transformation and reconstruct the input unchanged BID0 .In the following", "section, we detail the neuron editing method. Then, we motivate", "the extrapolation problem by trying to perform natural image domain transfer on the canonical CelebA dataset . We then move to two", "biological applications where extrapolation is essential: correcting the artificial variability introduced by measuring instruments (batch effects), and predicting the combined effects of multiple drug treatments (combinatorial drug effects) BID1 .", "In this paper, we tackled a data-transformation problem inspired by biological experimental settings: that of generating transformed versions of data based on observed pre-and post-transformation versions of a small subset of the available data.", "This problem arises during clinical trials or in settings where effects of drug treatment (or other experimental conditions) are only measured in a subset of the population, but expected to generalize beyond that subset.", "Here we introduce a novel approach that we call neuron editing, for applying the treatment effect to the remainder of the dataset.", "Neuron editing makes use of the encoding learned by the latent layers of an autoencoder and extracts the changes in activation distribution between the observed pre-and post treatment measurements.", "Then, it applies these same edits to the internal layer encodings of other data to mimic the transformation.", "We show that performing the edit on neurons of an internal layer results in more realistic transformations of image data, and successfully predicts synergistic effects of drug treatments in biological data.", "Moreover, we note that it is feasible to learn complex data transformations in the non-linear dimensionality reduced space of a hidden layer rather than in ambient space where joint probability distributions are difficult to extract.", "Finally, learning edits in a hidden layer allows for interactions between the edit and other context information from the dataset during decoding.", "Future work along these lines could include training parallel encoders with the same decoder, or training to generate conditionally." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2083333283662796, 0.13636362552642822, 0.1621621549129486, 0.19512194395065308, 0.0952380895614624, 0.22727271914482117, 0.13636362552642822, 0.1666666567325592, 0.1621621549129486, 0.24242423474788666, 0.21052631735801697, 0.1428571343421936, 0.13333332538604736, 0.17543859779834747, 0.29999998211860657, 0.20000000298023224, 0.11428570747375488, 0.13333332538604736, 0.16326530277729034, 0.14999999105930328, 0, 0.072727270424366, 0.10256409645080566, 0.07843136787414551, 0.15094339847564697, 0.04999999329447746, 0.16949151456356049, 0.1111111044883728, 0.16949151456356049, 0.20408162474632263, 0.0952380895614624, 0.1818181723356247, 0.11428570747375488, 0.2666666507720947, 0.25925925374031067, 0.1304347813129425, 0.15094339847564697, 0.1428571343421936, 0.15789473056793213, 0.1304347813129425, 0.1702127605676651, 0.1599999964237213, 0.15789473056793213, 0.1818181723356247, 0.11428570747375488, 0.1702127605676651, 0.15686273574829102, 0.14999999105930328, 0.05405404791235924 ]
rygZJ2RcF7
true
[ "We reframe the generation problem as one of editing existing points, and as a result extrapolate better than traditional GANs." ]
[ "We present a representation for describing transition models in complex uncertain domains using relational rules. ", "For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. ", "An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. ", "Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. ", "This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table.", "Many complex domains are appropriately described in terms of sets of objects, properties of those objects, and relations among them.", "We are interested in the problem of taking actions to change the state of such complex systems, in order to achieve some objective.", "To do this, we require a transition model, which describes the system state that results from taking a particular action, given the previous system state.", "In many important domains, ranging from interacting with physical objects to managing the operations of an airline, actions have localized effects: they may change the state of the object(s) being directly operated on, as well as some objects that are related to those objects in important ways, but will generally not affect the vast majority of other objects.In this paper, we present a strategy for learning state-transition models that embodies these assumptions.", "We structure our model in terms of rules, each of which only depends on and affects the properties and relations among a small number of objects in the domain, and only very few of which may apply for characterizing the effects of any given action.", "Our primary focus is on learning the kernel of a rule: that is, the set of objects that it depends on and affects.", "At a moderate level of abstraction, most actions taken by an intentional system are inherently directly parametrized by at least one object that is being operated on: a robot pushes a block, an airport management system reschedules a flight, an automated assistant commits to a venue for a meeting.", "It is clear that properties of these \"direct\" objects are likely to be relevant to predicting the action's effects and that some properties of these objects will be changed.", "But how can we characterize which other objects, out of all the objects in a household or airline network, are relevant for prediction or likely to be affected?To", "do so, we make use of the notion of a deictic reference. In", "linguistics, a deictic (literally meaning \"pointing\") reference, is a way of naming an object in terms of its relationship to the current situation rather than in global terms. So", ", \"the object I am pushing,\" \"all the objects on the table nearest me,\" and \"the object on top of the object I am pushing\" are all deictic references. This", "style of reference was introduced as a representation strategy for AI systems by BID0 , under the name indexical-functional representations, for the purpose of compactly describing policies for a video-game agent, and has been in occasional use since then.We will learn a set of deictic references, for each rule, that characterize, relative to the object(s) being operated on, which other objects are relevant. 
Given", "this set of relevant objects, the problem of describing the transition model on a large, variable-size domain, reduces to describing a transition model on fixed-length vectors characterizing the relevant objects and their properties and relations, which we represent and learn using standard feed-forward neural networks.Next, we briefly survey related work, describe the problem more formally, and then provide an algorithm for learning both the structure, in terms of deictic references, and parameters, in terms of neural networks, of a sparse relational transition model. We go", "on to demonstrate this algorithm in a simulated robot-manipulation domain in which the robot pushes objects on a cluttered table." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7058823704719543, 0.1395348757505417, 0.1904761791229248, 0.12121211737394333, 0.1249999925494194, 0.11428570747375488, 0.10810810327529907, 0.1538461446762085, 0.12820512056350708, 0.11538460850715637, 0.10810810327529907, 0.10344827175140381, 0.04999999329447746, 0.13333332538604736, 0.06666666269302368, 0.09302324801683426, 0, 0.1621621549129486, 0.1794871687889099, 0.11428570747375488 ]
SJxsV2R5FQ
true
[ "A new approach that learns a representation for describing transition models in complex uncertaindomains using relational rules. " ]
[ "Differentiable planning network architecture has shown to be powerful in solving transfer planning tasks while possesses a simple end-to-end training feature.", "Many great planning architectures that have been proposed later in literature are inspired by this design principle in which a recursive network architecture is applied to emulate backup operations of a value iteration algorithm.", "However existing frame-works can only learn and plan effectively on domains with a lattice structure, i.e. regular graphs embedded in a certain Euclidean space.", "In this paper, we propose a general planning network, called Graph-based Motion Planning Networks (GrMPN), that will be able to", "i) learn and plan on general irregular graphs, hence", "ii) render existing planning network architectures special cases.", "The proposed GrMPN framework is invariant to task graph permutation, i.e. graph isormophism.", "As a result, GrMPN possesses the generalization strength and data-efficiency ability.", "We demonstrate the performance of the proposed GrMPN method against other baselines on three domains ranging from 2D mazes (regular graph), path planning on irregular graphs, and motion planning (an irregular graph of robot configurations).", "Reinforcement learning (RL) is a sub-field of machine learning that studies about how an agent makes sequential decision making (Sutton et al., 1998) to interact with an environment.", "These problems can in principle be formulated as Markov decision process (MDP).", "(Approximate)", "Dynamic programming methods such as value iteration or policy iterations are often used for policy optimization.", "These dynamic programming approaches can also be leveraged to handle learning, hence referred as model-based RL (Kober et al., 2013) .", "Model-based RL requires an estimation of the environment model hence is computationally expensive, but it is shown to be very data-efficient.", "The second common RL paradigm is model-free which does not require a model estimation hence has a lower computation cost but less data-efficiency (Kober et al., 2013) .", "With a recent marriage with deep learning, deep reinforcement learning (DRL) has achieved many remarkable successes on a wide variety of applications such as game (Mnih et al., 2015; Silver et al., 2016) , robotics , chemical synthesis (Segler et al., 2017) , news recommendation (Zhang et al., 2019) etc.", "DRL methods also range from model-based (Kurutach et al., 2018; Lee et al., 2018a) to model-free (Mnih et al., 2015; Heess et al., 2015) approaches.", "On the other hand, transfer learning across tasks has long been desired because it is much more challenging in comparison to single-task learning.", "Recent work (Tamar et al., 2016) has proposed a very elegant idea that suggests to encode a differentiable planning module in a policy network architecture.", "This planning module can emulate the recursive operation of value iterations, called Value Iteration Networks (VIN) .", "Using this network, the agent is able to evaluate multiple future planning steps for a given policy.", "The planning module is designed to base on a recursive application of convolutional neural networks (CNN) and max-pooling for value function updates.", "VIN not only allows policy optimization with more data-efficiency, but also enables transfer learning across problems with shared transition and reward structures.", "VIN has laid foundation for many later differentiable planning network architectures such as QMDP-Net (Karkus et al., 2017) , planning 
under uncertainty (Gupta et al., 2017) , Memory Augmented Control Network (MACN) (Khan et al., 2018) , Predictron (Silver et al., 2017) , planning networks (Srinivas et al., 2018) etc.", "However, these approaches including VIN is limited to learning with regular environment structures, i.e. the transition function forms an underlying 2D lattice structure.", "Recent works have tried to mitigate this issue by resorting to graph neural networks.", "These work exploit geometric intuition in environments which have irregular structures such as generalized VIN (Niu et al., 2018) , planning on relational domains (Toyer et al., 2018; Bajpai et al., 2018) , (Ma et al., 2018) , automated planning for scheduling (Ma et al., 2018) , etc.", "The common between these approaches are in the use of graph neural networks to process irregular data structures like graphs.", "Among these frameworks, only GVIN is able to emulate the value iteration algorithm on irregular graphs of arbitrary sizes, e.g. generalization to arbitrary graphs.", "GVIN has a differentiable policy network architecture which is very similar to VIN.", "GVIN can also have a zero-shot planning ability on unseen graphs.", "However, GVIN requires domain knowledge to design a graph convolution which might limit it to become a universal graph-based path planning framework.", "In this paper, we aim to demonstrate different formulations for value iteration networks on irregular graphs.", "These proposed formulations are based on different graph neural network models.", "These models are capable of learning optimal policies on general graphs where their transition and reward functions are not provided a priori and yet to be estimated.", "These models are known to be invariant to graph isomorphism, therefore they are able to have a generalization ability to graphs of different sizes and structures.", "As a result, they enjoy the ability of zero-shot learning to plan.", "Specifically, it is known that Bellman equations are written as the form of message passing, therefore we propose using message passing neural networks (MPNN) to emulate the value iteration algorithm on graphs.", "We will show two most general formulations of graph-based value iteration network that are based on two general-purpose approaches in the MPNN family: Graph Networks (GN) and Graph Attention Networks (GAT) (Velickovic et al., 2018) .", "In particular, our contributions are three-fold:", "• We develop a MPNN based path planning network (GrMPN) which can learn to plan on general graphs, e.g. regular and irregular graphs.", "GrMPN is an differentiable end-to-end planning network architecture trained via imitation learning.", "We implement GrMPN via two formulations that are based on GN and GAT.", "• GrMPN is a general graph-based value iteration network that will render existing graphbased planning algorithms special cases.", "GrMPN is invariant to graph isomorphism which enables transfer planning on graphs of different structure and size.", "• We will demonstrate the efficacy of GrMPN which achieves state of the art results on various domains including 2D maze with regular graph structures, irregular graphs, and motion planning problems.", "We show that GrMPN outperforms existing approaches in terms of data-efficiency, performance and scalability." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.2702702581882477, 0.12244897335767746, 0.09756097197532654, 0.21621620655059814, 0, 0.1599999964237213, 0.06666666269302368, 0, 0.12765957415103912, 0.09090908616781235, 0.20689654350280762, 0.0624999962747097, 0.15789473056793213, 0.1621621549129486, 0, 0.035087715834379196, 0.05405404791235924, 0.05128204822540283, 0.19512194395065308, 0.1818181723356247, 0.1764705777168274, 0.1538461446762085, 0.052631575614213943, 0.19607841968536377, 0.09756097197532654, 0.06666666269302368, 0.08163265138864517, 0.10810810327529907, 0.10256409645080566, 0.19999998807907104, 0.2142857164144516, 0.10810810327529907, 0.1818181723356247, 0.0714285671710968, 0.1428571343421936, 0.1538461446762085, 0.06896550953388214, 0.12765957415103912, 0.07999999821186066, 0, 0.2926829159259796, 0.3448275923728943, 0.06666666269302368, 0.11428570747375488, 0.1764705777168274, 0.17391303181648254, 0.06451612710952759 ]
HkxLiJSKwB
true
[ "We propose an end-to-end differentiable planning network for graphs. This can be applicable to many motion planning problems" ]
[ "We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data.", "Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a \"blind spot\" in the receptive field, we address two of its shortcomings: inefficient training and poor final denoising performance.", "This is achieved through a novel blind-spot convolutional network architecture that allows efficient self-supervised training, as well as application of Bayesian distribution prediction on output colors.", "Together, they bring the self-supervised model on par with fully supervised deep learning techniques in terms of both quality and training speed in the case of i.i.d. Gaussian noise.", "Denoising, the removal of noise from images, is a major application of deep learning.", "Several architectures have been proposed for general-purpose image restoration tasks, e.g., U-Nets BID13 , hierarchical residual networks BID11 , and residual dense networks BID17 .", "Traditionally, the models are trained in a supervised fashion with corrupted images as inputs and clean images as targets, so that the network learns to remove the corruption.", "BID9 introduced NOISE2NOISE training, where pairs of corrupted images are used as training data.", "They observe that when certain statistical conditions are met, a network faced with the impossible task of mapping corrupted images to corrupted images learns, loosely speaking, to output the \"average\" image.", "For a large class of image corruptions, the clean image is a simple per-pixel statistic -such as mean, median, or mode -over the stochastic corruption process, and hence the restoration model can be supervised using corrupted data by choosing the appropriate loss function to recover the statistic of interest.While removing the need for clean training images, NOISE2NOISE training still requires at least two independent realizations of the corruption for each training image.", "While this eases data collection significantly compared to noisy-clean pairs, large collections of (single) poor images are still much more widespread.", "This motivates investigation of self-supervised training: how much can we learn from just looking at bad data?", "While foregoing supervision would lead to the expectation of some regression in performance, can we make up for it by making stronger assumptions about the corruption process?", "In this paper, we show that under the assumption of additive Gaussian noise that is i.i.d. 
between pixels, no concessions in denoising performance are necessary.We draw inspiration from the recent NOISE2VOID (N2V) training technique of BID7 .", "The algorithm needs no image pairs, and uses just individual noisy images as training data, assuming that the corruption is zero-mean and independent between pixels.", "The method is based on blind-spot networks where the receptive field of the network does not include the center pixel.", "This allows using the same noisy image as both training input and training target -because the network cannot see the correct answer, using the same image as target is equivalent to using a different noisy realization.", "This approach is self-supervised in the sense that the surrounding context is used to predict the value of the output pixel without a separate reference image BID3 .The", "networks used by BID7 do not have a blind spot by design, but are trained to ignore the center pixel using a masking scheme where only a few output pixels can contribute to the loss function, reducing training efficiency considerably. We", "remedy this with a novel architecture that allows efficient training without masking. Furthermore", ", the existence of the blind spot leads to poor denoising quality. We derive", "a scheme for combining the network output with data in the blind spot, bringing the denoising quality on par with conventionally trained networks. In our blind-spot", "network architecture, we effectively construct four denoiser network branches, each having its receptive field restricted to a different direction. A single-pixel offset", "at the end of each branch separates the receptive field from the center pixel. The results are then", "combined by 1×1 convolutions. In practice, we run", "four rotated versions of each input image through a single receptive field -restricted branch, yielding a simpler architecture that performs the same function. 
This also implicitly", "shares the convolution kernels between the branches and thus avoids the four-fold increase in the number of trainable weights.", "For the baseline experiments, as well as for the backbone of our blind-spot networks, we use the same U-Net BID13 architecture as BID9 , see their appendix for details.", "The only differences are that we have layers DEC CONV1A and DEC CONV1B output 96 feature maps like the other convolution layers at the decoder stage, and layer DEC CONV1C is removed.", "After combining the four receptive field restricted branches, we thus have 384 feature maps.", "These are fed into three successive 1×1 convolutions with 384, 96, and n output channels, respectively, where n is the number of output components for the network.", "All convolution layers except the last 1×1 convolution use leaky ReLU with α = 0.1 (Maas et al., 2013).", "All networks were trained using Adam with default parameters BID6 , learning rate λ = 0.0003, and minibatch size of 4.", "As training data, we used random 256×256 crops from the 50K images in the ILSVRC2012 (Imagenet) validation set.", "The training continued until 1.2M images were shown to the network.", "All training and test images were corrupted with Gaussian noise, σ = 25.", "Table 1 shows the denoising quality in dB for the four test datasets used.", "From the BSD300 dataset we use the 100 validation images only.", "Similar to BID7 , we use the grayscale version of the BSD68 dataset -for this case we train a single-channel (c = 1) denoiser using only the luminance channel of the training images.", "All our blind-spot noise-to-noise networks use the convolutional architecture (Section", "2) and are trained without masking.", "In BSD68 our simplified L2 variant closely matches the original NOISE2VOID training, suggesting that our network with an architecturally enforced blind spot is approximately as capable as the masking-based network trained by BID7 .", "We see that the denoising quality of our Full setup (Section", "3) is on par with baseline results of N2N and N2C, and clearly surpasses standard blind-spot denoising (L2) that does not exploit the information in the blind spot.", "Doing the estimation separately for each color BID9 and BID7 .", "Full is our blind-spot training and denoising method as described in Section 3.", "Per-comp.", "is an ablated setup where each color component is treated as an independent univariate Gaussian, highlighting the importance of expressing color outputs as multivariate distributions.", "L2 refers to training using the standard L2 loss function and ignoring the center pixel when denoising.", "Columns N2N and N2C refer to NOISE2NOISE training of BID9 and traditional supervised training with clean targets (i.e., noise-to-clean), respectively.", "Results within 0.05 dB of the best result for each dataset are shown in boldface.", "channel (Per-comp.) 
performs significantly worse, except in the grayscale BSD68 dataset where it is equivalent to the Full method.", "FIG1 shows example denoising results.", "Our Full setup produces images that are virtually identical to the N2N baseline both visually and in terms of PSNR.", "The ablated Per-comp.", "setup tends to produce color artifacts, demonstrating the shortcomings of the simpler per-component univariate model.", "Finally, the L2 variant that ignores the center pixel during denoising produces visible checkerboard patterns, some of which can also be seen in the result images of BID7 .", "We have shown that self-supervised training -looking at noisy images only, without the benefit of seeing the same image under different noise realizations -is sufficient for learning deep denoising models on par with those that make use of another realization as a training target, be it clean or corrupted.", "Currently this comes at the cost of assuming pixel-wise independent noise with a known analytic likelihood model." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7272727489471436, 0.12244897335767746, 0.10256409645080566, 0.09756097197532654, 0.07407406717538834, 0, 0.15789473056793213, 0.4285714328289032, 0.1463414579629898, 0.17142856121063232, 0.17142856121063232, 0.12903225421905518, 0.04999999701976776, 0.16326530277729034, 0.15789473056793213, 0.0624999962747097, 0.1538461446762085, 0.052631575614213943, 0.1599999964237213, 0.07407406717538834, 0.2222222238779068, 0.10810810327529907, 0, 0.06666666269302368, 0, 0.10256409645080566, 0.06666666269302368, 0.10526315122842789, 0.04878048226237297, 0, 0.052631575614213943, 0, 0.1111111044883728, 0.12903225421905518, 0.1538461446762085, 0.2222222238779068, 0.07407406717538834, 0.1666666567325592, 0.2380952388048172, 0, 0, 0.04651162400841713, 0.23999999463558197, 0.09999999403953552, 0, 0.2222222238779068, 0.11428570747375488, 0.20689654350280762, 0.11764705181121826, 0.06666666269302368, 0, 0.10526315122842789, 0.11764705181121826, 0, 0.0714285671710968, 0.1538461446762085, 0.23728813230991364, 0.06451612710952759 ]
H1e7g4edw4
true
[ "We learn high-quality denoising using only single instances of corrupted images as training data." ]
[ "Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.", "This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a single mistake can ruin the entire sequence of actions.", "A common remedy is to \"warm-start\" the agent by pre-training it to mimic expert demonstrations, but this is prone to overfitting.", "Instead, we propose to constrain exploration using demonstrations.", "From each demonstration, we induce high-level \"workflows\" which constrain the allowable actions at each time step to be similar to those in the demonstration (e.g., \"Step 1: click on a textbox; Step 2: enter some text\").", "Our exploration policy then learns to identify successful workflows and samples actions that satisfy these workflows.", "Workflows prune out bad exploration directions and accelerate the agent’s ability to discover rewards.", "We use our approach to train a novel neural policy designed to handle the semi-structured nature of websites, and evaluate on a suite of web tasks, including the recent World of Bits benchmark.", "We achieve new state-of-the-art results, and show that workflow-guided exploration improves sample efficiency over behavioral cloning by more than 100x.", "We are interested in training reinforcement learning (RL) agents to use the Internet (e.g., to book flights or reply to emails) by directly controlling a web browser.", "Such systems could expand the capabilities of AI personal assistants BID42 , which are currently limited to interacting with machine-readable APIs, rather than the much larger world of human-readable web interfaces.Reinforcement learning agents could learn to accomplish tasks using these human-readable web interfaces through trial-and-error BID44 .", "But this learning process can be very slow in tasks with sparse reward, where the vast majority of naive action sequences lead to no reward signal BID46 BID30 .", "This is the case for many web tasks, which involve a large action space (the agent can type or click anything) and require a well-coordinated sequence of actions to succeed.A common countermeasure in RL is to pre-train the agent to mimic expert demonstrations via behavioral cloning BID37 BID23 , encouraging it to take similar actions in similar states.", "But in environments with diverse and complex states such as websites, demonstrations may cover only a small slice of the state space, and it is difficult to generalize beyond these states (overfitting).", "Indeed, previous work has found that warm-starting with behavioral cloning often fails to improve over pure RL BID41 .", "At the same time, simple strategies to combat overfitting (e.g. 
using fewer parameters or regularization) cripple the policy's flexibility BID9 , which is required for complex spatial and structural reasoning in user interfaces.In this work, we propose a different method for leveraging demonstrations.", "Rather than training an agent to directly mimic them, we use demonstrations to constrain exploration.", "By pruning away bad exploration directions, we can accelerate the agent's ability to discover sparse rewards.", "Furthermore, for all demonstrations d do Induce workflow lattice from d", "Learning agents for the web.", "Previous work on learning agents for web interactions falls into two main categories.", "First, simple programs may be specified by the user BID50 or may be inferred from demonstrations BID1 .", "Second, soft policies may be learned from scratch or \"warm-started\" from demonstrations BID41 .", "Notably, sparse rewards prevented BID41 from successfully learning, even when using a moderate number of demonstrations.", "While policies have proven to be more difficult to learn, they have the potential to be expressive and flexible.", "Our work takes a step in this direction.Sparse rewards without prior knowledge.", "Numerous works attempt to address sparse rewards without incorporating any additional prior knowledge.", "Exploration methods BID32 BID11 BID48 help the agent better explore the state space to encounter more reward; shaping rewards BID31 directly modify the reward function to encourage certain behaviors; and other works BID22 augment the reward signal with additional unsupervised reward.", "However, without prior knowledge, helping the agent receive additional reward is difficult in general.Imitation learning.", "Various methods have been proposed to leverage additional signals from experts.", "For instance, when an expert policy is available, methods such as DAGGER BID40 and AGGREVATE BID39 BID43 can query the expert policy to augment the dataset for training the agent.", "When only expert demonstrations are available, inverse reinforcement learning methods BID0 Ziebart et al., 2008; BID15 BID19 BID7 infer a reward function from the demonstrations without using reinforcement signals from the environment.The usual method for incorporating both demonstrations and reinforcement signals is to pre-train the agent with demonstrations before applying RL.", "Recent work extends this technique by (1) introducing different objective functions and regularization during pre-training, and (2) mixing demonstrations and rolled-out episodes during RL updates BID20 BID18 BID46 BID30 .Instead", "of training the agent on demonstrations directly, our work uses demonstrations to guide exploration. The core", "idea is to explore trajectories that lie in a \"neighborhood\" surrounding an expert demonstration. In our case", ", the neighborhood is defined by a workflow, which only permits action sequences analogous to the demonstrated actions. Several previous", "works also explore neighborhoods of demonstrations via reward shaping BID10 BID21 or off-policy sampling BID26 . One key distinction", "of our work is that we define neighborhoods in terms of action similarity rather than state similarity. 
This distinction is", "particularly important for the web tasks: we can easily and intuitively describe how two actions are analogous (e.g., \"they both type a username into a textbox\"), while it is harder to decide if two web page states are analogous (e.g., the email inboxes of two different users will have completely different emails, but they could still be analogous, depending on the task.)Hierarchical reinforcement learning. Hierarchical reinforcement", "learning (HRL) methods decompose complex tasks into simpler subtasks that are easier to learn. Main HRL frameworks include", "abstract actions BID45 BID25 BID17 , abstract partial policies BID33 , and abstract states BID38 BID14 BID27 . These frameworks require varying", "amounts of prior knowledge. The original formulations required", "programmers to manually specify the decomposition of the complex task, while BID3 only requires supervision to identify subtasks, and BID6 ; BID12 learn the decomposition fully automatically, at the cost of performance.Within the HRL methods, our work is closest to BID33 and the line of work on constraints in robotics BID36 BID34 . The work in BID33 specifies partial", "policies, which constrain the set of possible actions at each state, similar to our workflow items. In contrast to previous instantiations", "of the HAM framework BID2 BID28 , which require programmers to specify these constraints manually, our work automatically induces constraints from user demonstrations, which do not require special skills to provide. BID36 ; Perez-D'Arpino & Shah (2017) also", "resemble our work, in learning constraints from demonstrations, but differ in the way they use the demonstrations. Whereas our work uses the learned constraints", "for exploration, BID36 only uses the constraints for planning and Perez-D'Arpino & Shah (2017) build a knowledge base of constraints to use at test time.Summary. Our workflow-guided framework represents a judicious", "combination of demonstrations, abstractions, and expressive neural policies. We leverage the targeted information of demonstrations", "and the inductive bias of workflows. But this is only used for exploration, protecting the", "expressive neural policy from overfitting. As a result, we are able to learn rather complex policies", "from a very sparse reward signal and very few demonstrations.Acknowledgments. This work was supported by NSF CAREER Award IIS-1552635." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.08510638028383255, 0.12121211737394333, 0.260869562625885, 0.07999999821186066, 0.06666666269302368, 0.20689654350280762, 0.1860465109348297, 0.17142856121063232, 0.1904761791229248, 0.145454540848732, 0.1395348757505417, 0.09375, 0.08888888359069824, 0, 0.10344827175140381, 0.13793103396892548, 0.25806450843811035, 0.07999999821186066, 0.20000000298023224, 0.1428571343421936, 0.19999998807907104, 0.07407406717538834, 0.25806450843811035, 0.06666666269302368, 0.0714285671710968, 0.1428571343421936, 0.07999999821186066, 0.06451612710952759, 0, 0.04878048226237297, 0.10169491171836853, 0.0952380895614624, 0.2666666507720947, 0, 0.11764705181121826, 0.060606054961681366, 0, 0.08219178020954132, 0.060606054961681366, 0, 0, 0.06779660284519196, 0.05714285373687744, 0.04081632196903229, 0.11764705181121826, 0.045454539358615875, 0.20689654350280762, 0.06896550953388214, 0, 0.1764705777168274 ]
ryTp3f-0-
true
[ "We solve the sparse rewards problem on web UI tasks using exploration guided by demonstrations" ]
[ "Nowadays deep learning is one of the main topics in almost every field.", "It helped to get amazing results in a great number of tasks.", "The main problem is that this kind of learning and consequently neural networks, that can be defined deep, are resource intensive.", "They need specialized hardware to perform a computation in a reasonable time.", "Unfortunately, it is not sufficient to make deep learning \"usable\" in real life.", "Many tasks are mandatory to be as much as possible real-time.", "So it is needed to optimize many components such as code, algorithms, numeric accuracy and hardware, to make them \"efficient and usable\".", "All these optimizations can help us to produce incredibly accurate and fast learning models." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0, 0.12121211737394333, 0, 0.1538461446762085, 0, 0.060606054961681366, 0.14814814925193787 ]
SyxkWkdPoX
false
[ "Embedded architecture for deep learning on optimized devices for face detection and emotion recognition " ]
[ "Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text.", "In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g. the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates.", "In this paper, we propose a new tensor decomposition model for word embeddings with covariates.", "Our model jointly learns a \\emph{base} embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding.", "To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that time or venue.", "The main advantages of our approach is data efficiency and interpretability of the covariate transformation matrix.", "Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data.", "Furthermore, our model encourages the embeddings to be ``topic-aligned'' in the sense that the dimensions have specific independent meanings.", "This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis.", "We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates.", "The use of factorizations of co-occurrence statistics in learning low-dimensional representations of words is an area that has received a large amount of attention in recent years, perhaps best represented by how widespread algorithms such as GloVe BID10 and Word2Vec BID8 are in downstream applications.", "In particular, suppose we have a set of words i ∈ [n], where n is the size of the vocabulary.", "The aim is to, for a fixed dimensionality d, assign a vector v i ∈ R d to each word in the vocabulary in a way that preserves semantic structure.In many settings, we have a corpus with additional covariates on individual documents.", "For example, we might have news articles from both conservative and liberal-leaning publications, and using the same word embedding for all the text can lose interesting information.", "Furthermore, we suggest that there are meaningful semantic relationships that can be captured by exploiting the differences in these conditional statistics.", "To this end, we propose the following two key questions that capture the problems that our work addresses, and for each, we give a concrete motivating example of a problem in the semantic inference literature that it encompasses.Question 1: How can we leverage conditional co-occurrence statistics to capture the effect of a covariate on word usage?For", "example, did William Shakespeare truly write all the works credited to him, or have there been other \"ghostwriters\" who have contributed to the Shakespeare canon? This", "is the famous Shakespeare authorship question, for which historians have proposed various candidates as the true authors of particular plays or poems BID5 . 
If the", "latter scenario is the case, what in particular distinguishes the writing style of one candidate from another, and how can we infer who the most likely author of a work is from a set of candidates?Question", "2: Traditional factorization-based embedding methods are rotationally invariant, so that individual dimensions do not have semantic meaning. How can", "we break this invariance to yield a model which aligns topics with interpretable dimensions?There has", "been much interest in the differences in language and rhetoric that appeal to different demographics. For example", ", studies have been done regarding \"ideological signatures\" specific to voters by partisan alignment (Robinson et al.) in which linguistic differences were proposed along focal axes, such as the \"mind versus the body\" in texts with more liberal or conservative ideologies. How can we", "systematically infer topical differences such as these between different communities?Questions such", "as these, or more broadly covariate-specific trends in word usage, motivated this study. Concretely, our", "goal is to provide a general framework through which embeddings of sets of objects with co-occurrence structure, as well as the effects of conditioning on particular covariates, can be learned jointly. As a byproduct,", "our model also gives natural meaning to the different dimensions of the embeddings, by breaking the rotational symmetry of previous embedding-learning algorithms, such that the resulting vector representations of words and covariates are \"topic-aligned\".Previous Work Typically", ", algorithms for learning embeddings rely on the intuition that some function of the co-occurrence statistics is low rank. Studies such as GloVe", "and Word2Vec proposed based on minimizing low-rank approximation-error of nonlinear transforms of the co-occurrence statistics. let A be the n × n matrix", "with A ij the co-occurrence between words i and j, where co-occurrence is defined as the (possibly weighted) number of times the words occur together in a window of fixed length. For example, GloVe aimed", "to find vectors v i ∈ R d and biases b i ∈ R such that the loss DISPLAYFORM0 was minimized, where f was some fixed increasing weight function. Word2Vec aimed to learn", "vector representations via minimizing a neural-network based loss function.A related embedding approach is to directly perform principal component analysis on the PMI (pointwise mutual information) matrix of the words (Bullinaria & Levy) . PMI-factorization based", "methods aim to find vectors {v i } such that DISPLAYFORM1 where the probabilities are taken over the co-occurrence matrix. This is essentially the", "same as finding a low-rank matrix V such that V T V ≈ P M I, and empirical results show that the resulting embedding captures useful semantic structure.The ideas of several previous studies on the geometry of word embeddings was helpful in formulating our model. A random-walk based mathematical", "framework for understanding these different successful learning algorithms was proposed BID1 , in which the corpus generation process is a random process driven by the random walk of a discrete-time discourse vector c t ∈ R d . In this framework, our work can", "be thought of as analyzing the effects of covariates on the random walk transition kernel and the stationary distribution. 
Additionally, there have been previous", "studies of \"multi-sense\" word embeddings BID11 BID9 , which is similar to our idea that the same word can have different meanings in different contexts. However, in the multi-sense setting, the", "idea is that the word intrinsically has different meanings (for example, \"crane\" can be an action, a bird, or a vehicle), whereas in ours, the different meanings are imposed by conditioning on a covariate. Finally, tensor methods have been used in", "other settings recently, such as collaborative filtering BID13 and (Li & Farias) , to learn the effects of conditioning on some summary statistics.Our Contributions There are several reasons why a joint learning model based on tensor factorization is more desirable than performing GloVe m times, where m is the number of covariates, so that each covariate-specific corpus has its own embedding. Our main contributions are a decomposition", "algorithm that addresses these issues, and the methods for systematic analysis we propose.The first issue that arises is sample complexity. In particular, because for the most part words", "are used in roughly similar ways across different contexts, the resulting embeddings should not be too different, except perhaps along specific dimensions. Thus, it is better to jointly train an embedding", "model along the covariates to aggregate the co-occurrence structure, especially in cases where the entire corpus is large, but many conditional corpuses (conditioned on a covariate) are small. Secondly, simply training a different embedding", "for each corpus makes it difficult to compare the embeddings across the covariate dimension. Because of issues such as rotation invariance of", "GloVelike models, specific dimensions mean different things across different runs (and initializations) of these algorithms. The model we propose has the additional property", "that it induces a natural basis to view the embeddings in, one which is \"topic-aligned\" in the sense that it is not rotation-invariant and thus implies independent topic meanings given to different dimensions.Paper Organization In section 2, we provide our embedding algorithm, as well as mathematical justification for its design. In section 3, we detail our dataset. In section", "4, we validate our algorithm with respect", "to intrinsic properties and standard metrics. In section 5, we propose several experiments for systematic", "downstream analysis." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09756097197532654, 0.19999998807907104, 0.2926829159259796, 0.2083333283662796, 0.19230768084526062, 0.09756097197532654, 0.21052631735801697, 0.1395348757505417, 0.09999999403953552, 0.1538461446762085, 0.09090908616781235, 0.13636362552642822, 0.1538461446762085, 0.19607841968536377, 0.1304347813129425, 0.138888880610466, 0.0416666604578495, 0.04081632196903229, 0.145454540848732, 0.08888888359069824, 0.1463414579629898, 0.1428571343421936, 0.057971008121967316, 0, 0.04878048226237297, 0.1428571343421936, 0.13793103396892548, 0.1249999925494194, 0.08695651590824127, 0.1071428507566452, 0.1111111044883728, 0.10169491171836853, 0.0833333283662796, 0.22535210847854614, 0.0615384578704834, 0.12765957415103912, 0.1538461446762085, 0.13114753365516663, 0.1860465109348297, 0.19230768084526062, 0.17543859779834747, 0.14035087823867798, 0.1304347813129425, 0.1249999925494194, 0.18918918073177338, 0.12121211737394333, 0.09756097197532654 ]
B1suU-bAW
true
[ "Using the same embedding across covariates doesn't make sense, we show that a tensor decomposition algorithm learns sparse covariate-specific embeddings and naturally separable topics jointly and data-efficiently." ]
[ "Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks.", "Unfortunately, while very powerful, Deep Learning is not well understood theoretically and in particular only recently results for the complexity of training deep neural networks have been obtained.", "In this work we show that large classes of deep neural networks with various architectures (e.g., DNNs, CNNs, Binary Neural Networks, and ResNets), activation functions (e.g., ReLUs and leaky ReLUs), and loss functions (e.g., Hinge loss, Euclidean loss, etc) can be trained to near optimality with desired target accuracy using linear programming in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence in the input dimension are known to be unlikely assuming $P\\neq NP$, and improving the dependence on the parameter space dimension remains open.", "In particular, we obtain polynomial time algorithms for training for a given fixed network architecture.", "Our work applies more broadly to empirical risk minimization problems which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting." ]
[ 0, 0, 1, 0, 0 ]
[ 0.05128204822540283, 0.23529411852359772, 0.2526315748691559, 0.1621621549129486, 0.15094339847564697 ]
HkMwHsCctm
false
[ "Using linear programming we show that the computational complexity of approximate Deep Neural Network training depends polynomially on the data size for several architectures" ]
[ "The extended Kalman filter (EKF) is a classical signal processing algorithm which performs efficient approximate Bayesian inference in non-conjugate models by linearising the local measurement function, avoiding the need to compute intractable integrals when calculating the posterior.", "In some cases the EKF outperforms methods which rely on cubature to solve such integrals, especially in time-critical real-world problems.", "The drawback of the EKF is its local nature, whereas state-of-the-art methods such as variational inference or expectation propagation (EP) are considered global approximations.", "We formulate power EP as a nonlinear Kalman filter, before showing that linearisation results in a globally iterated algorithm that exactly matches the EKF on the first pass through the data, and iteratively improves the linearisation on subsequent passes.", "An additional benefit is the ability to calculate the limit as the EP power tends to zero, which removes the instability of the EP-like algorithm.", "The resulting inference scheme solves non-conjugate temporal Gaussian process models in linear time, $\\mathcal{O}(n)$, and in closed form.", "Temporal Gaussian process (GP, Rasmussen and Williams, 2006 ) models can be solved in linear computational scaling, O(n), in the number of data n (Hartikainen and Särkkä, 2010) .", "However, non-conjugate (i.e., non-Gaussian likelihood) GP models introduce a computational problem in that they generally involve approximating intractable integrals in order to update the posterior distribution when data is observed.", "The most common numerical method used in such scenarios is sigma-point integration (Kokkala et al., 2016) , with Gauss-Hermite cubature being a popular way to choose the sigma-point locations and weights.", "A drawback of this method is that the number of cubature points scales exponentially with the dimensionality d.", "Lower-order sigma-point methods allow accuracy to be traded off for scalability, for example the unscented transform (which forms the basis for the unscented Kalman filter, see Särkkä, 2013) requires only 2d + 1 cubature points.", "One significant alternative to cubature methods is linearisation.", "Although such an approach has gone out of fashion lately, García-Fernández et al. (2015) showed that a globally iterated version of the statistically linearised filter (SLF, Särkkä, 2013) , which performs linearisation w.r.t. 
the posterior rather than the prior, performs in line with expectation propagation (EP, Minka, 2001 ) in many modelling scenarios, whilst also providing local convergence guarantees (Appendix D explains the connection to our proposed method).", "Crucially, linearisation guarantees that the integrals required to calculate the posterior have a closed form solution, which results in significant computational savings if d is large.", "Motivated by these observations, and with the aim of illustrating the connections between classical filtering methods and EP, we formulate power EP (PEP, Minka, 2004) as a Gaussian filter parametrised by a set of local likelihood approximations.", "The linearisations used to calculate these approximations are then refined during multiple passes through the data.", "We show that a single iteration of our approach is identical to the extended Kalman filter (EKF, Jazwinski, 1970) , and furthermore that we are able to calculate exactly the limit as the EP power tends to zero, since there are no longer any intractable integrals that depend on the power.", "The result is a global approximate inference algorithm for temporal GPs that is efficient and stable, easy to implement, scales to problems with large data and high-dimensional latent states, and consistently outperforms the EKF.", "In Fig. 2 , we compare our approach (EKF-PEP, α = 1) to EP and the EKF on two nonconjugate GP tasks (see Appendix E for the full formulations).", "Whilst our method is suited to large datasets, we focus here on small time series for ease of comparison.", "In the left-hand plot, a log-Gaussian Cox process (approximated with a Poisson model for 200 equal time interval bins) is used to model the intensity of coal mining accidents.", "EKF-PEP and the EKF match the EP posterior well, with EKF-PEP obtaining an even tighter match to both the mean and marginal variances.", "The right-hand plot shows a similar comparison for 133 accelerometer readings in a simulated motorcycle crash, using a heteroscedastic noise model.", "Linearisation in this model is a crude approximation to the true likelihood, but we observe that iteratively refining the linearisation vastly improves the posterior is some regions.", "This new perspective on linearisation in approximate inference unifies the PEP and EKF paradigms for temporal data, and provides an improvement to the EKF that requires no additional implementation effort.", "Key areas for further exploration are the effect of adjusting α (i.e., changing the cavity and the linearisation point), and the use of statistical linearisation as an alternative method for obtaining the local approximations.", "Appendix A. The proposed globally iterated EKF-PEP algorithm Algorithm 1 Globally iterated extended Kalman filter with power EP-style updates", "and discretised state space model h, H, J x , J r , α measurement model, Jacobian and EP power m 0 ← 0, P 0 ← P ∞ , e 1:n = 0 initial state while not converged do iterated EP-style loop for k = 1 to n do forward pass (FILTERING)", "evaluate Jacobian" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.31884056329727173, 0.14814814925193787, 0.17241378128528595, 0.3333333134651184, 0.14814814925193787, 0.07843136787414551, 0.13333332538604736, 0.1846153736114502, 0.15625, 0.07999999821186066, 0.0937499925494194, 0.0952380895614624, 0.2448979616165161, 0.20338982343673706, 0.21212120354175568, 0.07999999821186066, 0.34210526943206787, 0.15625, 0.16129031777381897, 0.07547169178724289, 0.13333332538604736, 0.1538461446762085, 0.07547169178724289, 0.17241378128528595, 0.26229506731033325, 0.12903225421905518, 0.23076923191547394, 0.15789473056793213 ]
HkxNKk2VKS
true
[ "We unify the extended Kalman filter (EKF) and the state space approach to power expectation propagation (PEP) by solving the intractable moment matching integrals in PEP via linearisation. This leads to a globally iterated extension of the EKF." ]
[ "This paper explores the simplicity of learned neural networks under various settings: learned on real vs random data, varying size/architecture and using large minibatch size vs small minibatch size.", "The notion of simplicity used here is that of learnability i.e., how accurately can the prediction function of a neural network be learned from labeled samples from it.", "While learnability is different from (in fact often higher than) test accuracy, the results herein suggest that there is a strong correlation between small generalization errors and high learnability.\n", "This work also shows that there exist significant qualitative differences in shallow networks as compared to popular deep networks.", "More broadly, this paper extends in a new direction, previous work on understanding the properties of learned neural networks.", "Our hope is that such an empirical study of understanding learned neural networks might shed light on the right assumptions that can be made for a theoretical study of deep learning.", "Over the last few years neural networks have significantly advanced state of the art on several tasks such as image classification BID23 ), machine translation BID32 ), structured prediction BID2 ) and so on, and have transformed the areas of computer vision and natural language processing.", "Despite the success of neural networks in making these advances, the reasons for their success are not well understood.", "Understanding the performance of neural networks and reasons for their success are major open problems at the moment.", "Questions about the performance of neural networks can be broadly classified into two groups:", "i) optimization i.e., how are we able to train large neural networks well even though it is NP-hard to do so in the worst case, and", "ii) generalization i.e., how is it that the training error and test error are close to each other for large neural networks where the number of parameters in the network is much larger than the number of training examples (highly overparametrized).", "This paper explores three aspects of generalization in neural networks.The first aspect is the performance of neural networks on random training labels.", "While neural networks generalize well (i.e., training and test error are close to each other) on real datasets even in highly overparametrized settings, BID33 shows that neural networks are nevertheless capable of achieving zero training error on random training labels.", "Since any given network will have large error on random test labels, BID33 concludes that neural networks are indeed capable of poor generalization.", "However since the labels of the test set are random and completely independent of the training data, this leaves open the question of whether neural networks learn simple patterns even on random training data.", "Indeed the results of BID22 establish that even in the presence of massive label noise in the training data, neural networks obtain good test accuracy on real data.", "This suggests that neural networks might learn some simple patterns even with random training labels.", "The first question this paper asks is (Q1): Do neural networks learn simple patterns on random training data?A", "second, very curious, aspect about the generalization of neural networks is the observation that increasing the size of a neural network helps in achieving better test error even if a training error of zero has already been achieved (see, e.g., BID21 ) i.e., larger neural networks have better generalization 
error. This", "is contrary to traditional wisdom in statistical learning theory which holds that larger models give better training error but at the cost of higher generalization error. A recent", "line of work proposes that the reason for better generalization of larger neural networks is implicit regularization, or in other words larger learned models are simpler than smaller learned models. See Neyshabur", "(2017) for references. The second question", "this paper asks is (Q2): Do larger neural networks learn simpler patterns compared to smaller neural networks when trained on real data?The third aspect about", "generalization that this paper considers is the widely observed phenomenon that using large minibatches for stochastic gradient descent (SGD) leads to poor generalization LeCun et al..(Q3): Are neural networks", "learned with small minibatch sizes simpler compared to those learned with large minibatch sizes?All the above questions have", "been looked at from the point of view of flat/sharp minimizers BID11 . Here flat/sharp corresponds", "to the curvature of the loss function around the learned neural network. BID18 for true vs random data", ", BID24 for large vs small neural networks and BID16 for small vs large minibatch training, all look at the sharpness of minimizers in various settings and connect it to the generalization performance of neural networks. While there certainly seems", "to be a connection between the sharpness of the learned neural network, there is as yet no unambiguous notion of this sharpness to quantify it. See BID4 for more details.", "This paper takes a complementary", "approach: it looks at the above questions through the lens of learnability. Let us say we are considering a", "multi-class classification problem with c classes and let D denote a distribution over the inputs x ∼ R d . Given a neural network N , draw", "n independent samples x tr 1 , · · · , x tr n from D and train a neural network N on training data DISPLAYFORM0 The learnability of a neural network N is defined to be DISPLAYFORM1 Note that L(N ) implicitly depends on D, the architecture and learning algorithm used to learn N as well as n. This dependence is suppressed in", "the notation but will be clear from context. Intuitively, larger the L(N ), easier", "it is to learn N from data. This notion of learnability is not new", "and is very closely related to probably approximately correct (PAC) learnability Valiant (1984); BID15 . In the context of neural networks, learnability", "has been well studied from a theoretical point as we discuss briefly in Sec.2. There we also discuss some related empirical results", "but to the best of our knowledge there has been no work investigating the learnability of neural networks that are encountered in practice.This paper empirically investigates the learnability of neural networks of varying sizes/architectures and minibatch sizes, learned on real/random data in order to answer (Q1) and (Q2) and (Q3). The main contributions of this paper are as follows:", "DISPLAYFORM2 The results in this paper suggest that there is a strong correlation between generalizability and learnability of neural networks i.e., neural networks that generalize well are more learnable compared to those that do not generalize well. 
Our experiments suggest that• Neural networks do not", "learn simple patterns on random data.• Learned neural networks of large size/architectures", "that achieve higher accuracies are more learnable.• Neural networks learned using small minibatch sizes", "are more learnable compared to those learned using large minibatch sizes.Experiments also suggest that there are qualitative differences between learned shallow networks and deep networks and further investigation is warranted.Paper organization: The paper is organized as follows. Section 2 gives an overview of related work. Section", "3 presents the experimental setup and results", ". Section 5 concludes the paper with some discussion of", "results and future directions.", "This paper explores the learnability of learned neural networks under various scenarios.", "The results herein suggest that while learnability is often higher than test accuracy, there is a strong correlation between low generalization error and high learnability of the learned neural networks.", "This paper also shows that there are some qualitative differences between shallow and popular deep neural networks.", "Some questions that this paper raises are the effect of optimization algorithms, hyperparameter selection and initialization schemes on learnability.", "On the theoretical front, it would be interesting to characterize neural networks that can be learned efficiently via backprop.", "Given the strong correlation between learnability and generalization, driving the network to converge to learnable networks might help achieve better generalization." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.125, 0.11764705926179886, 0.05714285373687744, 0, 0.1538461446762085, 0.11428570747375488, 0.08695651590824127, 0.1666666567325592, 0.1666666567325592, 0.1904761791229248, 0.05882352590560913, 0.0952380895614624, 0.14814814925193787, 0.0476190447807312, 0.06666666269302368, 0.11764705926179886, 0.12903225421905518, 0, 0, 0.0833333283662796, 0.11764705926179886, 0.11428570747375488, 0, 0, 0.05714285373687744, 0.09090908616781235, 0.19999998807907104, 0.17391303181648254, 0.10526315122842789, 0.12121211737394333, 0, 0.1599999964237213, 0.0624999962747097, 0.07407407462596893, 0.09999999403953552, 0.09999999403953552, 0.14814814925193787, 0, 0.07692307233810425, 0.09090908616781235, 0.19999998807907104, 0.09090908616781235, 0.040816325694322586, 0.1428571343421936, 0.25, 0, 0.21052631735801697, 0.11428570747375488, 0, 0.1538461446762085, 0.07999999821186066, 0.07692307233810425 ]
rJ1RPJWAW
true
[ "Exploring the Learnability of Learned Neural Networks" ]
[ " With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits.", "Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 do not tell us \\emph{why} or \\emph{how} a particular method is better and how dataset biases influence the choices of model design.\n ", "In this paper, we present a general methodology for {\\emph{interpretable}} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text.", "The proposed evaluation method enables us to interpret the \\textit{model biases}, \\textit{dataset biases}, and how the \\emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches.", "By making our {analysis} tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area." ]
[ 0, 0, 0, 1, 0 ]
[ 0.1621621549129486, 0.16326530277729034, 0.16326530277729034, 0.19999998807907104, 0.10526315122842789 ]
HJxTgeBtDr
false
[ "We propose a generalized evaluation methodology to interpret model biases, dataset biases, and their correlation." ]
[ "The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers.", "We posit that requiring agents to adhere to rules of human language while also maximizing information exchange is an ill-posed problem, and observe that humans do not stray from a common language, because they are social creatures and have to communicate with many people everyday, and it is far easier to stick to a common language even at the cost of some efficiency loss.", "Using this as inspiration, we propose and evaluate a multi-agent dialog framework where each agent interacts with, and learns from, multiple agents, and show that this results in more relevant and coherent dialog (as judged by human evaluators) without sacrificing task performance (as judged by quantitative metrics).", "Intelligent assistants like Siri and Alexa are increasingly becoming an important part of our daily lives, be it in the household, the workplace or in public places.", "As these systems become more advanced, we will have them interacting with each other to achieve a particular goal BID9 .", "We want these conversations to be interpretable to humans for the sake of transparency and ease of debugging.", "Having the agents communicate in natural language is one of the most universal ways of ensuring interpretability.", "This motivates our work on goal-driven agents which interact in coherent language understandable to humans.To that end, this paper builds on work by BID2 on goal-driven visual dialog agents.", "The task is formulated as a conversation between two agents, a Question (Q-) and an Answer (A-) bot.", "The A-Bot is given an image, while the QBot is given only a caption to the image.", "Both agents share a common objective, which is for the Q-Bot to form an accurate mental representation of the unseen image using which it can retrieve, rank or generate that image.", "This is facilitated by the exchange of 10 pairs of questions and answers between the two agents, using a shared vocabulary.", "BID2 trained the agents first in isolation via supervision from the VisDial dataset BID1 , followed by making them interact and adapt to each other via reinforcement learning by optimizing for better task performance.", "While trying to maximize performance, the agents learn to communicate in non-grammatical and semantically meaningless sentences in order to maximize the exchange of information.", "This reduces transparency of the AI system to human observers and is undesirable.", "We address this problem by proposing a multi-agent dialog framework where each agent interacts with multiple agents.", "This is motivated by our observation that humans adhere to syntactically and semantically coherent language, which we hypothesize is because they have to interact with an entire community, and having a private language for each person would be extremely inefficient.", "We show that our multi-agent (with multiple Q-Bots and multiple A-Bots) dialog system results in more coherent and human-interpretable dialog between agents, without compromising on task performance, which also validates our hypothesis.", "This makes them seem more helpful, transparent and trustworthy.", "We will make our code available as open-source.", "1", "In this paper we propose a novel Multi-Agent Dialog Framework (MADF), inspired from human communities, to improve the dialog quality of AI agents.", "We show that training 2 agents with 
supervised learning can lead to uninformative and repetitive dialog.", "Furthermore, we observe that the task performance (measured by the image retrieval percentile scores) for the system trained via supervision only deteriorates as dialog round number increases.", "We hypothesize that this is because the agents were trained in isolation and never allowed to interact during supervised learning, which leads to failure during testing when they encounter out of distribution samples (generated by the other agent, instead of ground truth) for the first time.", "We show how allowing a single pair of agents to interact and learn from each other via reinforcement learning dramatically improve their percentile scores, which additionally does not deteriorate over multiple rounds of dialog, since the agents have interacted with one another and been exposed to the other's generated questions or answers.", "However, the agents, in an attempt to improve task performance end up developing their own private language which does not adhere to the rules and conventions of human languages, and generates nongrammatical and non-sensical statements.", "As a result, the dialog system loses interpretability and sociability.", "Figure 4: Two randomly selected images from the VisDial dataset followed by the ground truth (human) and generated dialog about that image for each of our 4 systems (SL, RL-1Q,1A, RL-1Q,3A, RL-3Q,1A).", "These images were also used in the human evaluation results shown in Table 2 .multi-agent", "dialog framework based on self-play reinforcement learning, where a single A-Bot is allowed to interact with multiple Q-Bots and vice versa. Through a human", "evaluation study, we show that this leads to significant improvements in dialog quality measured by relevance, grammar and coherence. This is because", "interacting with multiple agents prevents any particular pair from maximizing performance by developing a private language, since it would harm performance with all the other agents." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10526315122842789, 0.1230769231915474, 0.11538460850715637, 0.052631575614213943, 0.3030303120613098, 0.06896550953388214, 0.2857142686843872, 0.21052631735801697, 0.06666666269302368, 0.14814814925193787, 0.1463414579629898, 0.0624999962747097, 0.22727271914482117, 0.25, 0.07692307233810425, 0.19999998807907104, 0.1599999964237213, 0.04878048226237297, 0, 0, 0.1666666567325592, 0.13793103396892548, 0, 0.14814814925193787, 0.20000000298023224, 0.13636362552642822, 0.08695651590824127, 0.045454543083906174, 0.07407406717538834, 0.1111111044883728, 0.11428570747375488, 0.1666666567325592 ]
BJxX9mx8y7
true
[ "Social agents learn to talk to each other in natural language towards a goal" ]
[ "Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables.", "This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA).", "We identify how local maxima can emerge from the marginal log-likelihood of pPCA, which yields similar local maxima for the evidence lower bound (ELBO).", "We show that training a linear VAE with variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions.", "We provide empirical evidence that the presence of local maxima causes posterior collapse in deep non-linear VAEs.", "Our findings help to explain a wide range of heuristic approaches in the literature that attempt to diminish the effect of the KL term in the ELBO to reduce posterior collapse.", "The generative process of a deep latent variable model entails drawing a number of latent factors from an uninformative prior and using a neural network to convert such factors to real data points.", "Maximum likelihood estimation of the parameters requires marginalizing out the latent factors, which is intractable for deep latent variable models.", "The influential work of BID21 and BID28 on Variational Autoencoders (VAEs) enables optimization of a tractable lower bound on the likelihood via a reparameterization of the Evidence Lower Bound (ELBO) BID18 BID4 .", "This has created a surge of recent interest in automatic discovery of the latent factors of variation for a data distribution based on VAEs and principled probabilistic modeling BID15 BID5 BID8 BID13 .Unfortunately", ", the quality and the number of the latent factors learned is directly controlled by the extent of a phenomenon known as posterior collapse, where the generative model learns to ignore a subset of the latent variables. Most existing", "work suggests that posterior collapse is caused by the KL-divergence term in the ELBO objective, which directly encourages the variational distribution to match the prior. Thus, a wide", "range of heuristic approaches in the literature have attempted to diminish the effect of the KL term in the ELBO to alleviate posterior collapse. By contrast,", "we hypothesize that posterior collapse arises due to spurious local maxima in the training objective. Surprisingly", ", we show that these local maxima may arise even when training with exact marginal log-likelihood.While linear autoencoders BID30 have been studied extensively BID2 BID23 , little attention has been given to their variational counterpart. A well-known", "relationship exists between linear autoencoders and PCAthe optimal solution to the linear autoencoder problem has decoder weight columns which span the subspace defined by the principal components. The Probabilistic", "PCA (pPCA) model BID32 recovers the principal component subspace as the maximum likelihood solution of a Gaussian latent variable model. In this work, we", "show that pPCA is recovered exactly using linear variational autoencoders. Moreover, by specifying", "a diagonal covariance structure on the variational distribution we recover an identifiable model which at the global maximum has the principal components as the columns of the decoder.The study of linear VAEs gives us new insights into the cause of posterior collapse. 
Following the analysis", "of BID32 , we characterize the stationary points of pPCA and show that the variance of the observation model directly impacts the stability of local stationary points -if the variance is too large then the pPCA objective has spurious local maxima, which correspond to a collapsed posterior. Our contributions include:•", "We prove that linear VAEs can recover the true posterior of pPCA and using ELBO to train linear VAEs does not add any additional spurious local maxima. Further, we prove that at its", "global optimum, the linear VAE recovers the principal components.• We shows that posterior collapse", "may occur in optimization of marginal log-likelihood, without powerful decoders. Our experiments verify the analysis", "of the linear setting and show that these insights extend even to high-capacity, deep, non-linear VAEs.• By learning the observation noise", "carefully, we are able to reduce posterior collapse.We present evidence that the success of existing approaches in alleviating posterior collapse depends on their ability to reduce the stability of spurious local maxima.", "By analyzing the correspondence between linear VAEs and pPCA we have made significant progress towards understanding the causes of posterior collapse.", "We have shown that for simple linear VAEs posterior collapse is caused by spurious local maxima in the marginal log-likelihood and we demonstrated empirically that the same local maxima seem to play a role when optimizing deep non-linear VAEs.", "In future work, we hope to extend this analysis to other observation models and provide theoretical support for the non-linear case." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.1304347813129425, 0.20408162474632263, 0.17391303181648254, 0.21739129722118378, 0.3333333432674408, 0.20408162474632263, 0.07692307233810425, 0.09302324801683426, 0.07692307233810425, 0.145454540848732, 0.1818181723356247, 0.3199999928474426, 0.13333332538604736, 0.19512194395065308, 0.16393442451953888, 0.07843136787414551, 0.04255318641662598, 0.2631579041481018, 0.2222222238779068, 0.16129031777381897, 0.2222222238779068, 0.25641024112701416, 0.09999999403953552, 0.17391303181648254, 0.23076923191547394, 0.17777776718139648, 0.508474588394165, 0 ]
r1xaVLUYuE
true
[ "We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not ELBO). Experiments on deep VAEs suggest a similar phenomenon is at play." ]
[ "Transformers have achieved state-of-the-art results on a variety of natural language processing tasks. \n", "Despite good performance, Transformers are still weak in long sentence modeling where the global attention map is too dispersed to capture valuable information.\n", "In such case, the local/token features that are also significant to sequence modeling are omitted to some extent.\n", "To address this problem, we propose a Multi-scale attention model (MUSE) by concatenating attention networks with convolutional networks and position-wise feed-forward networks to explicitly capture local and token features.", "Considering the parameter size and computation efficiency, we re-use the feed-forward layer in the original Transformer and adopt a lightweight dynamic convolution as implementation. \n", "Experimental results show that the proposed model achieves substantial performance improvements over Transformer, especially on long sentences, and pushes the state-of-the-art from 35.6 to 36.2 on IWSLT 2014 German to English translation task, from 30.6 to 31.3 on IWSLT 2015 English to Vietnamese translation task.", "We also reach the state-of-art performance on WMT 2014 English to French translation dataset, with a BLEU score of 43.2.", "In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation (Vaswani et al., 2017; Dehghani et al., 2018 ), text classification (Devlin et al., 2018; , language modeling (Sukhbaatar et al., 2019b; , etc.", "It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely.", "The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.", "However, recent research (Tang et al., 2018) has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention.", "As shown in Figure 1", "(a), in the task of machine translation, the performance of Transformer drops with the increase of the source sentence length, especially for long sequences.", "The reason is that the attention can be over-concentrated and disperse, as shown in Figure 1", "(b), and only a small number of tokens are represented by attention.", "It may work fine for shorter sequences, but for longer sequences, it causes insufficient representation of information and brings difficulty for the model to comprehend the source information intactly.", "In recent work, local attention that constrains the attention to focus on only part of the sequences (Child et al., 2019; Sukhbaatar et al., 2019a ) is used to address this problem.", "However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.", "To build a module with both inductive bias of local and global context modelling in sequence to sequence learning, we hybrid self-attention with convolution and present Parallel multi-scale attention called MUSE.", "It encodes inputs into hidden representations and then applies self-attention and depth-separable convolution transformations in parallel.", "The convolution compensates for the in- The left figure shows that the performance drops largely with the increase of sentence length on the De-En dataset.", "The right figure shows the attention map from the 3-th encoder layer.", "As we can see, the attention map is too dispersed to capture 
sufficient information.", "For example, \"[EOS]\", contributing little to word alignment, is surprisingly over attended.", "sufficient use of local information while the self-attention focuses on capturing the dependencies.", "Moreover, this parallel structure is highly extensible, i.e., new transformations can be easily introduced as new parallel branches, and is also favourable to parallel computation.", "The main contributions are summarized as follows:", "• We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning.", "The proposed method tries to address this problem and achieves much better performance on generating long sequence.", "• We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.", "• MUSE outperforms all previous models with same training data and the comparable model size, with state-of-the-art BLEU scores on three main machine translation tasks.", "• The proposed method enables parallel representation learning.", "Experiments show that the inference speed can be increased by 31% on GPUs.", "Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights especially for long sequences, resulting from the insufficient local information.", "To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple.", "MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning.", "And MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local and token level sequence representations.", "Especially, we find from empirical results that the shared projection plays important part in its success, and is essential for our multiscale learning.", "Beyond the inspiring new state-of-the-art results on three machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.", "In future work, we would like to explore the detailed effects of shared projection on contextual representation learning.", "We are exited about future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks including image and speech." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.06896550953388214, 0.10256409645080566, 0.1249999925494194, 0.19999998807907104, 0.05405404791235924, 0.07547169178724289, 0.1111111044883728, 0.08510638028383255, 0, 0.060606054961681366, 0.04999999701976776, 0, 0.05882352590560913, 0, 0.07407406717538834, 0.20512820780277252, 0.04651162400841713, 0.17142856121063232, 0.1395348757505417, 0, 0.0555555522441864, 0, 0.13793103396892548, 0.07407406717538834, 0.07407406717538834, 0.10526315122842789, 0, 0.1666666567325592, 0.1249999925494194, 0.15789473056793213, 0.05128204822540283, 0.08695651590824127, 0, 0.1463414579629898, 0, 0.2222222238779068, 0.12121211737394333, 0.10526315122842789, 0.1111111044883728, 0.12121211737394333, 0.052631575614213943 ]
SJe-3REFwr
true
[ "This paper propose a new model which combines multi scale information for sequence to sequence learning." ]
[ "Training neural networks with verifiable robustness guarantees is challenging.", "Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures.", "Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training.", "In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.", "CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks.", "We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in L_inf robustness.\n", "Notably, we achieve 7.02% verified test error on MNIST at epsilon=0.3, and 66.94% on CIFAR-10 with epsilon=8/255.", "The success of deep neural networks (DNNs) has motivated their deployment in some safety-critical environments, such as autonomous driving and facial recognition systems.", "Applications in these areas make understanding the robustness and security of deep neural networks urgently needed, especially their resilience under malicious, finely crafted inputs.", "Unfortunately, the performance of DNNs are often so brittle that even imperceptibly modified inputs, also known as adversarial examples, are able to completely break the model (Goodfellow et al., 2015; Szegedy et al., 2013) .", "The robustness of DNNs under adversarial examples is well-studied from both attack (crafting powerful adversarial examples) and defence (making the model more robust) perspectives (Athalye et al., 2018; Carlini & Wagner, 2017a; b; Goodfellow et al., 2015; Madry et al., 2018; Papernot et al., 2016; Xiao et al., 2019b; 2018b; c; Eykholt et al., 2018; Chen et al., 2018; Xu et al., 2018; Zhang et al., 2019b) .", "Recently, it has been shown that defending against adversarial examples is a very difficult task, especially under strong and adaptive attacks.", "Early defenses such as distillation (Papernot et al., 2016) have been broken by stronger attacks like C&W (Carlini & Wagner, 2017b) .", "Many defense methods have been proposed recently (Guo et al., 2018; Song et al., 2017; Buckman et al., 2018; Ma et al., 2018; Samangouei et al., 2018; Xiao et al., 2018a; 2019a) , but their robustness improvement cannot be certified -no provable guarantees can be given to verify their robustness.", "In fact, most of these uncertified defenses become vulnerable under stronger attacks (Athalye et al., 2018; He et al., 2017) .", "Several recent works in the literature seeking to give provable guarantees on the robustness performance, such as linear relaxations (Wong & Kolter, 2018; Mirman et al., 2018; Wang et al., 2018a; Dvijotham et al., 2018b; Weng et al., 2018; Zhang et al., 2018) , interval bound propagation (Mirman et al., 2018; Gowal et al., 2018) , ReLU stability regularization (Xiao et al., 2019c) , and distributionally robust optimization (Sinha et al., 2018) and semidefinite relaxations (Raghunathan et al., 2018a; Dvijotham et al.) 
.", "Linear relaxations of neural networks, first proposed by Wong & Kolter (2018) , is one of the most popular categories among these certified defences.", "They use the dual of linear programming or several similar approaches to provide a linear relaxation of the network (referred to as a \"convex adversarial polytope\") and the resulting bounds are tractable for robust optimization.", "However, these methods are both computationally and memory intensive, and can increase model training time by a factor of hundreds.", "On the other hand, interval bound propagation (IBP) is a simple and efficient method for training verifiable neural networks (Gowal et al., 2018) , which achieved state-of-the-art verified error on many datasets.", "However, since the IBP bounds are very loose during the initial phase of training, the training procedure can be unstable and sensitive to hyperparameters.", "In this paper, we first discuss the strengths and weakness of existing linear relaxation based and interval bound propagation based certified robust training methods.", "Then we propose a new certified robust training method, CROWN-IBP, which marries the efficiency of IBP and the tightness of a linear relaxation based verification bound, CROWN (Zhang et al., 2018) .", "CROWN-IBP bound propagation involves a IBP based fast forward bounding pass, and a tight convex relaxation based backward bounding pass (CROWN) which scales linearly with the size of neural network output and is very efficient for problems with low output dimensions.", "Additional, CROWN-IBP provides flexibility for exploiting the strengths of both IBP and convex relaxation based verifiable training methods.", "The efficiency, tightness and flexibility of CROWN-IBP allow it to outperform state-of-the-art methods for training verifiable neural networks with ∞ robustness under all settings on MNIST and CIFAR-10 datasets.", "In our experiment, on MNIST dataset we reach 7.02% and 12.06% IBP verified error under ∞ distortions = 0.3 and = 0.4, respectively, outperforming the state-of-the-art baseline results by IBP (8.55% and 15.01%).", "On CIFAR-10, at = 2 255 , CROWN-IBP decreases the verified error from 55.88% (IBP) to 46.03% and matches convex relaxation based methods; at a larger , CROWN-IBP outperforms all other methods with a noticeable margin.", "We propose a new certified defense method, CROWN-IBP, by combining the fast interval bound propagation (IBP) bound and a tight linear relaxation based bound, CROWN.", "Our method enjoys high computational efficiency provided by IBP while facilitating the tight CROWN bound to stabilize training under the robust optimization framework, and provides the flexibility to trade-off between the two.", "Our experiments show that CROWN-IBP consistently outperforms other IBP baselines in both standard errors and verified errors and achieves state-of-the-art verified test errors for ∞ robustness.", "Given a fixed neural network (NN) f (x), IBP gives a very loose estimation of the output range of f (x).", "However, during training, since the weights of this NN can be updated, we can equivalently view IBP as an augmented neural network, which we denote as an IBP-NN ( Figure A) .", "Unlike a usual network which takes an input x k with label y k , IBP-NN takes two points x L = x k − and x U = x k + as inputs (where x L ≤ x ≤ x U , element-wisely).", "The bound propagation process can be equivalently seen as forward propagation in a specially structured neural network, as shown in Figure A .", "After the last 
specification layer C (typically merged into W (L) ), we can obtain m(x k , ).", "Then, −m(x k , ) is sent to softmax layer for prediction.", "Importantly, since [m(x k , )] y k = 0 (as the y k -th row in C is always 0), the top-1 prediction of the augmented IBP network is y k if and only if all other elements of m(x k , ) are positive, i.e., the original network will predict correctly for all x L ≤ x ≤ x U .", "When we train the augmented IBP network with ordinary cross-entropy loss and desire it to predict correctly on an input x k , we are implicitly doing robust optimization (Eq.", "(2)).", "The simplicity of IBP-NN may help a gradient based optimizer to find better solutions.", "On the other hand, while the computation of convex relaxation based bounds can also be cast as an equivalent network (e.g., the \"dual network\" in Wong & Kolter (2018)), its construction is significantly more complex, and sometimes requires non-differentiable indicator functions (the sets I + , I − and I in Wong & Kolter (2018)).", "As a consequence, it can be challenging for the optimizer to find a good solution, and the optimizer tends to making the bounds tighter naively by reducing the norm of weight matrices and over-regularizing the network, as demonstrated in Figure 1 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07692307233810425, 0.08695651590824127, 0.038461532443761826, 0.3404255211353302, 0.0624999962747097, 0.19512194395065308, 0, 0, 0.04878048226237297, 0.0833333283662796, 0.0624999962747097, 0.15789473056793213, 0, 0.07843136787414551, 0, 0.029411761090159416, 0.04999999329447746, 0.1304347813129425, 0.1111111044883728, 0.16326530277729034, 0.05128204822540283, 0.10256409645080566, 0.30434781312942505, 0.07692307233810425, 0.11428570747375488, 0.17777776718139648, 0.039215680211782455, 0.039215680211782455, 0.3499999940395355, 0.04444443807005882, 0.25641024112701416, 0.05714285373687744, 0, 0.04444443807005882, 0.0555555522441864, 0, 0.06896550953388214, 0.032258059829473495, 0, 0.06451612710952759, 0, 0.11999999731779099 ]
Skxuk1rFwB
true
[ "We propose a new certified adversarial training method, CROWN-IBP, that achieves state-of-the-art robustness for L_inf norm adversarial perturbations." ]
[ "Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings.", "A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning.", "During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy.", "In this work, we make several surprising observations which contradict common beliefs.", "For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights.", "For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch.", "Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned ``important'' weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited ``important'' weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.", "Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. ", "We also compare with the \"Lottery Ticket Hypothesis\" (Frankle & Carbin 2019), and find that with optimal learning rate, the \"winning ticket\" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.", "Over-parameterization is a widely-recognized property of deep neural networks (Denton et al., 2014; Ba & Caruana, 2014) , which leads to high computational cost and high memory footprint for inference.", "As a remedy, network pruning BID12 Hassibi & Stork, 1993; Han et al., 2015; Molchanov et al., 2016; BID14 has been identified as an effective technique to improve the efficiency of deep networks for applications with limited computational budget.", "A typical procedure of network pruning consists of three stages:", "1) train a large, over-parameterized model (sometimes there are pretrained models available),", "2) prune the trained large model according to a certain criterion, and", "3) fine-tune the pruned model to regain the lost performance.", "2015).", "Thus most existing pruning techniques choose to fine-tune a pruned model instead of training it from scratch.", "The preserved weights after pruning are usually considered to be critical, as how to accurately select the set of important weights is a very active research topic in the literature (Molchanov et al., 2016; BID14 Luo et al., 2017; He et al., 2017b; BID4 Suau et al., 2018) .In", "this work, we show that both of the beliefs mentioned above are not necessarily true for structured pruning methods, which prune at the levels of convolution channels or larger. Based", "on an extensive empirical evaluation of state-of-the-art pruning algorithms on multiple datasets with multiple network architectures, we make two surprising observations. 
First", ", for structured pruning methods with predefined target network architectures (Figure 2) ,", "directly training the small target model from random initialization can achieve the same, if not better, performance, as the model obtained from the three-stage pipeline. In this", "case, starting with a large model is not necessary and one could instead directly train the target model from scratch. Second,", "for structured pruning methods with autodiscovered target networks, training the pruned model from scratch can also achieve comparable or even better performance than fine-tuning. This observation", "shows that for these pruning methods, what matters more may be the obtained architecture, instead of the preserved weights, despite training the large model is needed to find that target architecture. Interestingly, for", "a unstructured pruning method (Han et al., 2015) that prunes individual parameters, we found that training from scratch can mostly achieve comparable accuracy with pruning and fine-tuning on smaller-scale datasets, but fails to do so on the large-scale ImageNet benchmark. Note that in some", "cases, if a pretrained large model is already available, pruning and fine-tuning from it can save the training time required to obtain the efficient model. The contradiction", "between some of our results and those reported in the literature might be explained by less carefully chosen hyper-parameters, data augmentation schemes and unfair computation budget for evaluating baseline approaches.Predefined: prune x% channels in each layer Automatic: prune a%, b%, c%, d% channels in each layer A 4-layer model Figure 2 : Difference between predefined and automatically discovered target architectures, in channel pruning as an example. The pruning ratio", "x is userspecified, while a, b, c, d are determined by the pruning algorithm. Unstructured sparse", "pruning can also be viewed as automatic.Our results advocate a rethinking of existing structured network pruning algorithms. It seems that the over-parameterization", "during the first-stage training is not as beneficial as previously thought. Also, inheriting weights from a large model", "is not necessarily optimal, and might trap the pruned model into a bad local minimum, even if the weights are considered \"important\" by the pruning criterion. Instead, our results suggest that the value", "of automatic structured pruning algorithms sometimes lie in identifying efficient structures and performing implicit architecture search, rather than selecting \"important\" weights. For most structured pruning methods which prune", "channels/filters, this corresponds to searching the number of channels in each layer. 
In section 5, we discuss this viewpoint through", "carefully designed experiments, and show the patterns in the pruned model could provide design guidelines for efficient architectures.The rest of the paper is organized as follows: in Section 2, we introduce background and some related works on network pruning; in Section 3, we describe our methodology for training the pruned model from scratch; in Section 4 we experiment on various pruning methods and show our main results for both pruning methods with predefined or automatically discovered target architectures; in Section 5, we discuss the value of automatic pruning methods in searching efficient network architectures; in Section 6 we discuss some implications and conclude the paper.", "Our results encourage more careful and fair baseline evaluations of structured pruning methods.", "In addition to high accuracy, training predefined target models from scratch has the following benefits over conventional network pruning procedures:", "a) since the model is smaller, we can train the model using less GPU memory and possibly faster than training the original large model;", "b) there is no need to implement the pruning criterion and procedure, which sometimes requires fine-tuning layer by layer (Luo et al., 2017) and/or needs to be customized for different network architectures BID14 BID4 ;", "c) we avoid tuning additional hyper-parameters involved in the pruning procedure.Our results do support the viewpoint that automatic structured pruning finds efficient architectures in some cases.", "However, if the accuracy of pruning and fine-tuning is achievable by training the pruned model from scratch, it is also important to evaluate the pruned architectures against uniformly pruned baselines (both training from scratch), to demonstrate the method's value in identifying efficient architectures.", "If the uniformly pruned models are not worse, one could also skip the pipeline and train them from scratch.Even if pruning and fine-tuning fails to outperform the mentioned baselines in terms of accuracy, there are still some cases where using this conventional wisdom can be much faster than training from scratch:", "a) when a pre-trained large model is already given and little or no training budget is available; we also note that pre-trained models can only be used when the method does not require modifications to the large model training process;", "b) there is a need to obtain multiple models of different sizes, or one does not know what the desirable size is, in which situations one can train a large model and then prune it by different ratios." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1764705777168274, 0.17142856121063232, 0.06896550953388214, 0.5238094925880432, 0.19999998807907104, 0.125, 0.05714285373687744, 0.04081632196903229, 0.043478257954120636, 0.1090909019112587, 0.07692307233810425, 0.13793103396892548, 0.13793103396892548, 0.23076923191547394, 0.4117647111415863, 0.06896550953388214, 0.04444443807005882, 0.10810810327529907, 0.20689654350280762, 0.20512819290161133, 0.2702702581882477, 0.4651162624359131, 0.08695651590824127, 0.24137930572032928, 0.2857142686843872, 0.026315785944461823, 0, 0.1538461446762085, 0.23529411852359772, 0.12765957415103912, 0.04651162400841713, 0.0555555522441864, 0.14457830786705017, 0.06666666269302368, 0.2702702581882477, 0.10526315122842789, 0.07843136787414551, 0.04878048226237297, 0.23999999463558197, 0.15625, 0.1599999964237213, 0.11538460850715637 ]
rJlnB3C5Ym
true
[ "In structured network pruning, fine-tuning a pruned model only gives comparable performance with training it from scratch." ]
[ "Brushing techniques have a long history with the first interactive selection tools appearing in the 1990's.", "Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues.", "Selection is especially difficult in large datasets where many visual items tangle and create overlapping.", "This paper investigates a novel brushing technique which not only relies on the actual brushing location but also on the shape of the brushed area.", "Firstly, the user brushes the region where trajectories of interest are visible.", "Secondly, the shape of the brushed area is used to select similar items.", "Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories.", "This technique encompasses two types of comparison metrics, the piece-wise Pearson correlation and the similarity measurement based on information geometry.", "We apply it to concrete scenarios with datasets from air traffic control, eye-tracking data and GPS trajectories.\n", "Aircraft trajectories can be visually represented as connected line segments that form a path on a map.", "Given the flight level (altitude) of the aircraft, the trajectories can be presented in 3D and visualized by varying their appearances [7] or changing their representation to basic geometry types [11] .", "Since the visualization considers a large number of trajectories that compete for the visual space, these visualizations often present occlusion and visual clutter issues, rendering exploration difficult.", "Edge bundling techniques [34] have been used to reduce clutter and occlusion but they come at the cost of distorting the trajectory shapes which might not always be desirable.", "Analysts need to explore this kind of datasets in order to perform diverse tasks.", "Some of these tasks compare expected aircraft trajectories with the actual trajectories.", "Other tasks detect unexpected patterns and perform out traffic analysis in complex areas with dense traffic [7, 30] .", "To this end, various trajectory properties such as aircraft direction, flight level and shape are examined.", "However, most systems only support selection techniques that rely on starting and end points, or predefined regions.", "We argue that the interactive shape brush technique would be helpful for these kinds of tasks, as they require the visual inspection of the data, the detection of the specific patterns and then their selection for further examination.", "As these specific patterns might differ from the rest of the data precisely because of their shape, a technique that enables their selection through this characteristic will make their manipulation easier, as detailed in the example scenario.", "We consider a dataset that includes 4320 aircraft trajectories of variable lengths from one day of flight traffic over the French airspace.", "The proposed brushing technique leverages existing methods with the novel usage of the shape of the brush as an additional filtering parameter.", "The interaction pipeline shows different data processing steps where the comparison algorithm between the brushed items and the shape of the brush plays a central role.", "While the presented pipeline contains two specific and complementary comparison metric computations, another one can be used as long as it fulfills the continuity and metric se- Figure 10 .", "Three different trajectories containing three different event sequences from [60] .", "mantic requirements (DR2).", "There are indeed many 
standard approaches (ED, DTW, Discrete Fréchet distance) that are largely used by the community and could be used to extend our technique when faced with different datasets.", "Furthermore, the contribution of this paper is a novel shape-based brushing technique and not simply a shape similarity measure.", "In our work, we found two reasonable similarity measures that fulfill our shape-based brushing method: The FPCA distance comparison provides an accurate curve similarity measurement while the Pearson metric provides a complementary criteria with the direction of the trajectory.", "In terms of visualization, the binning process provides a valuable overview of the order of the trajectory shapes.", "This important step eases the filtering and adjustment of the selected items.", "It is important to mention that this filtering operates in a continuous manner as such trajectories are added or removed one by one when adjusting this filtering parameter.", "This practice helps to fine tune the selected items with accurate filtering parameters.", "The presented scenario shows how small multiple interaction can provide flexibility.", "This is especially the case when the user brushes specific trajectories to be then removed when setting the compatibility metrics to uncorrelated.", "This operation performs a brush removal.", "The proposed filtering method can also consider other types of binning and allows different possible representations (i.e. various visual mapping solutions).", "This paper illustrates the shape based brushing technique with three application domains (air traffic, eye tracking, gps data), but it can be extended to any moving object dataset.", "However, our evaluation is limited by the number of studied application domains.", "Furthermore, even if various users and practitioners participated in the design of the technique, and assessed the simplicity and intuitiveness of the method, we did not conduct a more formal evaluation.", "The shape based brush is aimed at complementing the traditional brush, and in no way do we argue that it is more efficient or effective than the original technique for all cases.", "The scenarios are examples of how this technique enables the selection of trails that would be otherwise difficult to manipulate, and how the usage of the brush area and its shape to perform comparison opens novel brushing perspectives.", "We believe they provide strong evidence of the potential of such a technique.", "The technique also presents limitations in its selection flexibility, as it is not yet possible to combine selections.", "Many extensions can be applied to the last step of the pipeline to support this.", "This step mainly addresses the DR4 where the selection can be refined thanks to user inputs.", "As such, multiple selections can be envisaged and finally be composed.", "Boolean operations can be considered with the standard And, Or, Not.", "While this composition is easy to model, it remains difficult for an end user to master the operations when there are more than 2 subset operations [57] [31] .", "As a solution, Hurter et al.", "proposed an implicit item composition with a simple drag and drop technique [31] .", "The pipeline can be extended with the same paradigm where a place holder can store filtered items and then be composed to produce the final result.", "The user can then refine the selection by adding, removing or merging multiple selections.", "In this paper, a novel sketch-based brushing technique for trail selection was proposed and 
investigated.", "This approach facilitates user selection in occluded and cluttered data visualization where the selection is performed on a standard brush basis while taking into account the shape of the brush area as a filtering tool.", "This brushing tool works as follows.", "Firstly, the user brushes the trajectory of interest trying to follow its shape as closely as possible.", "Then the system pre-selects every trajectory which touches the brush area.", "Next, the algorithm computes a distance between every brushed shape and the shape of the brushed area.", "Comparison scores are then sorted and the system displays visual bins presenting trajectories from the lowest scores (unrelated -or dissimilar trajectories) to the highest values/scores (highly correlated or similar trajectories).", "The user can then adjust a filtering parameter to refine the actual selected trajectories that touch the brushed area and which have a suitable correlation with the shape of the brushed area.", "The cornerstone of this shape-based technique relies on the shape comparison method.", "Therefore, we choose two algorithms which provide enough flexibility to adjust the set of selected trajectories.", "One algorithm relies on functional decomposition analysis which insures a shape curvature comparison, while the other method insures an accurate geometric based comparison (Pearson algorithm).", "To validate the efficiency of this method, we show three examples of usage with various types of trail datasets.", "This work can be extended in many directions.", "We can first extend it with additional application domains and other types of dataset such as car or animal movements or any type of time varying data.", "We can also consider other types of input to extend the mouse pointer usage.", "Virtual Reality data exploration with the so called immersive analytic domain gives a relevant work extension which will be investigated in the near future.", "Finally, we can also consider adding machine learning to help users brush relevant trajectories.", "For instance, in a very dense area, where the relevant trajectories or even a part of the trajectories are not visible due to the occlusion, additional visual processing may be useful to guide the user during the brushing process." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.060606054961681366, 0.1249999925494194, 0.2631579041481018, 0.1428571343421936, 0.27586206793785095, 0.19999998807907104, 0.1666666567325592, 0.11428570747375488, 0, 0.2222222238779068, 0.0952380895614624, 0.17777776718139648, 0.2666666507720947, 0.1428571343421936, 0.11764705181121826, 0.12121211737394333, 0, 0.2083333283662796, 0.16326530277729034, 0.10526315122842789, 0.3333333134651184, 0.19999998807907104, 0.0476190410554409, 0, 0, 0.21739129722118378, 0.2857142686843872, 0.15686273574829102, 0.19354838132858276, 0.1428571343421936, 0.1428571343421936, 0.13333332538604736, 0, 0.11428570747375488, 0.08695651590824127, 0.05128204822540283, 0.2222222238779068, 0.20689654350280762, 0.1428571343421936, 0.21276594698429108, 0.2916666567325592, 0.20689654350280762, 0.17142856121063232, 0.19999998807907104, 0.1249999925494194, 0, 0.0714285671710968, 0.09302324801683426, 0, 0.06666666269302368, 0.09999999403953552, 0.12903225421905518, 0.1249999925494194, 0.3404255211353302, 0.08695651590824127, 0.3125, 0.2222222238779068, 0.19999998807907104, 0.09302324801683426, 0.1860465109348297, 0.27586206793785095, 0.1818181723356247, 0.09756097197532654, 0.1764705777168274, 0.07999999821186066, 0.0476190410554409, 0.19354838132858276, 0.09999999403953552, 0.12903225421905518, 0.2448979616165161 ]
yaeJLwvTr
true
[ "Interactive technique to improve brushing in dense trajectory datasets by taking into account the shape of the brush." ]
[ "Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but are generally structured to sample from a single latent source ignoring the explicit spatial interaction between multiple entities that could be present in a scene.", "Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, or viewpoint transformation is a challenging problem.", "In this work, we propose to model object composition in a GAN framework as a self-consistent composition-decomposition network.", "Our model is conditioned on the object images from their marginal distributions and can generate a realistic image from their joint distribution.", "We evaluate our model through qualitative experiments and user evaluations in scenarios when either paired or unpaired examples for the individual object images and the joint scenes are given during training.", "Our results reveal that the learned model captures potential interactions between the two object domains given as input to output new instances of composed scene at test time in a reasonable fashion.", "Generative Adversarial Networks (GANs) have emerged as a powerful method for generating images conditioned on a given input.", "The input cue could be in the form of an image BID1 BID20 , a text phrase BID32 BID23 a; BID10 or a class label layout BID18 BID19 BID0 .", "The goal in most of these GAN instantiations is to learn a mapping that translates a given sample from source distribution to generate a sample from the output distribution.", "This primarily involves transforming either a single object of interest (apples to oranges, horses to zebras, label to image etc.), or changing the style and texture of the input image (day to night etc.).", "However, these direct input-centric transformations do not directly capture the fact that a natural image is a 2D projection of a composition of multiple objects interacting in a 3D visual world.", "In this work, we explore the role of compositionality in learning a function that maps images of different objects sampled from their marginal distributions (e.g., chair and table) into a combined sample (table-chair) that captures their joint distribution.Modeling compositionality in natural images is a challenging problem due to the complex interaction possible among different objects with respect to relative scaling, spatial layout, occlusion or viewpoint transformation.", "Recent work using spatial transformer networks BID9 within a GAN framework BID14 decomposes this problem by operating in a geometric warp parameter space to find a geometric modification for a foreground object.", "However, this approach is only limited to a fixed background and does not consider more complex interactions in the real world.", "Another recent work on scene generation conditioned on text and a scene graph and explicitly provides reasoning about objects and their relations BID10 .We", "develop a novel approach to model object compositionality in images. We", "consider the task of composing two input object images into a joint image that captures their joint interaction in natural images. For", "instance, given an image of a chair and a table, our formulation should be able to generate an image containing the same chair-table pair interacting naturally. 
For", "a model to be able to capture the composition correctly, it needs to have the knowledge of occlusion ordering, i.e., a table comes in front of chair, and spatial layout, i.e., a chair slides inside table. To", "the best of our knowledge, we are among the first to solve this problem in the image conditional space without any prior explicit information about the objects' layout.Our key insight is to reformulate the problem of composition of two objects into first composing the given object images to generate the joint combined image which models the object interaction, and then decomposing the joint image back to obtain individual ones. This", "reformulation enforces a selfconsistency constraint ) through a composition-decomposition network. However", ", in some scenarios, one does not have access to the paired examples of same object instances with their combined compositional image, for instance, to generate the joint image from the image of a given table and a chair, we might not have any example of that particular chair besides that particular table while we might have images of other chairs and other tables together. We add", "an inpainting network to our composition-decomposition layers to handle the unpaired case as well.Through qualitative and quantitative experiments, we evaluate our proposed Compositional-GAN approach in two training scenarios: (a) paired", ": when we have access to paired examples of individual object images with their corresponding composed image, (b) unpaired", ": when we have a dataset from the joint distribution without being paired with any of the images from the marginal distributions.", "In this paper, we proposed a novel Compositional GAN model addressing the problem of object composition in conditional image generation.", "Our model captures the relative linear and viewpoint transformations needed to be applied on each input object (in addition to their spatial layout and occlusions) to generate a realistic joint image.", "To the best of our knowledge, we are among the first to solve the compositionality problem without having any explicit prior information about object's layout.", "We evaluated our compositional GAN through multiple qualitative experiments and user evaluations for two cases of paired versus unpaired training data.", "In the future, we plan to extend this work toward generating images composed of multiple (more than two) and/or non-rigid objects." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1599999964237213, 0.10810810327529907, 0.46666666865348816, 0.24242423474788666, 0.2380952388048172, 0.22727271914482117, 0.13333332538604736, 0.09756097197532654, 0.2222222238779068, 0.1428571343421936, 0.09999999403953552, 0.1428571343421936, 0.2926829159259796, 0.23529411852359772, 0.12121211737394333, 0.9166666865348816, 0.24242423474788666, 0.10810810327529907, 0.1818181723356247, 0.1230769231915474, 0.08695651590824127, 0.19354838132858276, 0.1428571343421936, 0.1875, 0.1249999925494194, 0.3636363446712494, 0.19512194395065308, 0.1111111044883728, 0.11764705181121826, 0.11764705181121826 ]
rygPUoR9YQ
true
[ "We develop a novel approach to model object compositionality in images in a GAN framework." ]
[ "Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications.", "Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potentials towards mitigating adversarial inputs.", "In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminate power against adversarial examples.", "Tested on the automatic speech recognition (ASR) tasks and three recent audio adversarial attacks, we find that", "(i) input transformation developed from image adversarial defense provides limited robustness improvement and is subtle to advanced attacks;", "(ii) temporal dependency can be exploited to gain discriminative power against audio adversarial examples and is resistant to adaptive attacks considered in our experiments.", "Our results not only show promising means of improving the robustness of ASR systems, but also offer novel insights in exploiting domain-specific data properties to mitigate negative effects of adversarial examples.", "Deep Neural Networks (DNNs) have been widely adopted in a variety of machine learning applications BID18 BID20 .", "However, recent work has demonstrated that DNNs are vulnerable to adversarial perturbations BID32 BID10 .", "An adversary can add negligible perturbations to inputs and generate adversarial examples to mislead DNNs, first found in image-based machine learning tasks BID10 BID2 BID21 BID7 a; BID30 .Beyond", "images, given the wide application of DNN-based audio recognition systems, such as Google Home and Amazon Alexa, audio adversarial examples have also been studied recently BID0 BID8 BID17 . Comparing", "between image and audio learning tasks, although their state-of-the-art DNN architectures are quite different (i.e., convolutional v.s. recurrent neural networks), the attacking methodology towards generating adversarial examples is fundamentally unanimous -finding adversarial perturbations through the lens of maximizing the training loss or optimizing some designed attack objectives. For example", ", the same attack loss function proposed in BID8 ) is used to generate adversarial examples in both visual and speech recognition models. Nonetheless", ", different types of data usually possess unique or domain-specific properties that can potentially be used to gain discriminative power against adversarial inputs. In particular", ", the temporal dependency in audio data is an innate characteristic that has already been widely adopted in the machine learning models. However, in", "addition to improving learning performance on natural audio examples, it is still an open question on whether or not the temporal dependency can be exploited to help mitigate negative effects of adversarial examples.The focus of this paper has two folds. First, we investigate", "the robustness of automatic speech recognition (ASR) models under input transformation, a commonly used technique in the image domain to mitigate adversarial inputs. Our experimental results", "show that four implemented transformation techniques on audio inputs, including waveform quantization, temporal smoothing, down-sampling and autoencoder reformation, provide limited robustness improvement against the recent attack method proposed in BID1 , which aims to circumvent the gradient obfuscation issue incurred by input transformations. 
Second, we demonstrate that", "temporal dependency can be used to gain discriminative power against adversarial examples in ASR. We perform the proposed temporal", "dependency method on both the LIBRIS BID11 and Mozilla Common Voice datasets against three state-of-the-art attack methods BID0 BID36 considered in our experiments and show that such an approach achieves promising identification of non-adaptive and adaptive attacks. Moreover, we also verify that the", "proposed method can resist strong proposed adaptive attacks in which the defense implementations are known to an attacker. Finally, we note that although this", "paper focuses on the case of audio adversarial examples, the methodology of leveraging unique data properties to improve model robustness could be naturally extended to different domains. The promising results also shed new", "lights in designing adversarial defenses against attacks on various types of data.Related work An adversarial example for a neural network is an input x adv that is similar to a natural input x but will yield different output after passing through the neural network. Currently, there are two different", "types of attacks for generating audio adversarial examples: the Speech-toLabel attack and the Speech-to-Text attack. The Speech-to-Label attack aims to", "find an adversarial example x adv close to the original audio x but yields a different (wrong) label. To do so, Alzantot et al. proposed", "a genetic algorithm BID0 , and Cisse et al. proposed a probabilistic loss function BID8 . The Speech-to-Text attack requires", "the transcribed output of the adversarial audio to be the same as the desired output, which has been made possible by BID16 . Yuan et al. demonstrated the practical", "\"wav-to-API\" audio adversarial attacks BID36 . Another line of research focuses on adversarial", "training or data augmentation to improve model robustness BID28 BID26 BID29 BID31 , which is beyond our scope. Our proposed approach focuses on gaining the discriminative", "power against adversarial examples through embedded temporal dependency, which is compatible with any ASR model and does not require adversarial training or data augmentation. TO AUDIO DOMAIN?", "This paper proposes to exploit the temporal dependency property in audio data to characterize audio adversarial examples.", "Our experimental results show that while four primitive input transformations on audio fail to withstand adaptive adversarial attacks, temporal dependency is shown to be resistant to these attacks.", "We also demonstrate the power of temporal dependency for characterizing adversarial examples generated by three state-of-the-art audio adversarial attacks.", "The proposed method is easy to operate and does not require model retraining.", "We believe our results shed new lights in exploiting unique data properties toward adversarial robustness." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.29629629850387573, 0.08695651590824127, 0, 0.20689654350280762, 0, 0, 0, 0, 0.05882352590560913, 0.036363635212183, 0, 0, 0.2222222238779068, 0.12765957415103912, 0, 0.07843136787414551, 0.1666666567325592, 0.04444444179534912, 0, 0.0555555522441864, 0, 0.08695651590824127, 0.06666666269302368, 0, 0.06666666269302368, 0.11764705181121826, 0, 0.0624999962747097, 0.2857142686843872, 0.1875, 0.25, 0, 0 ]
r1g4E3C9t7
true
[ "Adversarial audio discrimination using temporal dependency" ]
[ "In order to alleviate the notorious mode collapse phenomenon in generative adversarial networks (GANs), we propose a novel training method of GANs in which certain fake samples can be reconsidered as real ones during the training process.", "This strategy can reduce the gradient value that generator receives in the region where gradient exploding happens.", "We show that the theoretical equilibrium between the generators and discriminations actually can be seldom realized in practice.", "And this results in an unbalanced generated distribution that deviates from the target one, when fake datepoints overfit to real ones, which explains the non-stability of GANs.", "We also prove that, by penalizing the difference between discriminator outputs and considering certain fake datapoints as real for adjacent real and fake sample pairs, gradient exploding can be alleviated.", "Accordingly, a modified GAN training method is proposed with a more stable training process and a better generalization.", "Experiments on different datasets verify our theoretical analysis.", "In the past few years, Generative Adversarial Networks (GANs) Goodfellow et al. (2014) have been one of the most popular topics in generative models and achieved great success in generating diverse and high-quality images recently (Brock et al. (2019) ; Karras et al. (2019) ; ).", "GANs are powerful tools for learning generative models, which can be expressed as a zero-sum game between two neural networks.", "The generator network produces samples from the arbitrary given distribution, while the adversarial discriminator tries to distinguish between real data and generated data.", "Meanwhile, the generator network tries to fool the discriminator network by producing plausible samples which are close to real samples.", "When a final theoretical equilibrium is achieved, discriminator can never distinguish between real and fake data.", "However, we show that a theoretical equilibrium often can not be achieved with discrete finite samples in datasets during the training process in practice.", "Although GANs have achieved remarkable progress, numerous researchers have tried to improve the performance of GANs from various aspects ; Nowozin et al. (2016) ; Gulrajani et al. (2017) ; Miyato et al. (2018) ) because of the inherent problem in GAN training, such as unstability and mode collapse.", "Arora et al. (2017) showed that a theoretical generalization guarantee does not be provided with the original GAN objective and analyzed the generalization capacity of neural network distance.", "The author argued that for a low capacity discriminator, it can not provide generator enough information to fit the target distribution owing to lack of ability to detect mode collapse.", "Thanh-Tung et al. (2019) argued that poor generation capacity in GANs comes from discriminators trained on discrete finite datasets resulting in overfitting to real data samples and gradient exploding when generated datapoints approach real ones.", "As a result, Thanh-Tung et al. (2019) proposed a zero-centered gradient penalty on linear interpolations between real and fake samples (GAN-0GP-interpolation) to improve generalization capability and prevent mode collapse resulted from gradient exploding.", "Recent work Wu et al. (2019) further studied generalization from a new perspective of privacy protection.", "In this paper, we focus on mode collapse resulted from gradient exploding studied in Thanh-Tung et al. 
(2019) and achieve a better generalization with a much more stable training process.", "Our contributions are as follows: discriminator with sigmoid function in the last layer removed D r = {x 1 , · · · , x n } the set of n real samples D g = {y 1 , · · · , y m } the set of m generated samples D f = {f 1 , · · · , f m } the candidate set of M 1 generated samples to be selected as real D F AR ⊂ {f 1 , · · · , f m } the set of M 0 generated samples considered as real", "1. We show that a theoretical equilibrium, when optimal discriminator outputs a constant for both real and generated data, is unachievable for an empirical discriminator during the training process.", "Due to this fact, it is possible that gradient exploding happens when fake datapoints approach real ones, resulting in an unbalanced generated distribution that deviates from the target one.", "2. We show that when generated datapoints are very close to real ones in distance, penalizing the difference between discriminator outputs and considering fake as real can alleviate gradient exploding to prevent overfitting to certain real datapoints.", "3. We show that when more fake datapoints are moved towards a single real datapoint, gradients of the generator on fake datapoints very close to the real one can not be reduced, which partly explains the reason of a more serious overfitting phenomenon and an increasingly unbalanced generated distribution.", "4. Based on the zero-centered gradient penalty on data samples (GAN-0GP-sample) proposed in Mescheder et al. (2018) , we propose a novel GAN training method by considering some fake samples as real ones according to the discriminator outputs in a training batch to effectively prevent mode collapse.", "Experiments on synthetic and real world datasets verify that our method can stabilize the training process and achieve a more faithful generated distribution.", "In the sequel, we use the terminologies of generated samples (datapoints) and fake samples (datapoints) indiscriminately.", "Tab.", "1 lists some key notations used in the rest of the paper.", "In this paper, we explain the reason that an unbalanced distribution is often generated in GANs training.", "We show that a theoretical equilibrium for empirical discriminator is unachievable during the training process.", "We analyze the affection on the gradient that generator receives from discriminator with respect to restriction on difference between discriminator outputs on close real and fake pairs and trick of considering fake as real.", "Based on the theoretical analysis, we propose a novel GAN training method by considering some fake samples as real ones according to the discriminator outputs in a training batch.", "Experiments on diverse datasets verify that our method can stabilize the training process and improve the performance by a large margin.", "For empirical discriminator, it maximizes the following objective:", "When p g is a discrete uniform distribution on D r , and generated samples in D g are the same with real samples in D r .", "It is obvious that the discriminator outputs 1 2 to achieve the optimal value when it cannot distinguish fake samples from real ones.", "For continues distribution p g , Thanh-Tung et al. (2019) has proved that an -optimal discriminator can be constructed as a one hidden layer MLP with O(d x (m + n)) parameters, namely D(x", ") ≥ 1 2 + 2 , ∀x ∈ D r and D(y", ") ≤ 1 2 − 2 , ∀y ∈ D g , where D r and D g are disjoint with probability 1. 
In", "this case, discriminator objective has a larger value than the theoretical optimal version:", "So the optimal discriminator output on D r and D g is not a constant 1 2 in this case.", "Even discriminator has much less parameters than O(d x (m + n)), there exists a real datapoint x 0 and a generated datapoint y 0 satisfying D(x 0 ) ≥ 1 2 + 2 and D(y 0 ) ≤ 1 2 − 2 .", "Whether p g is a discrete distribution only cover part samples in D r or a continues distribution, there exists a generated datapoint y 0 satisfying y 0 ∈ D r .", "Assume that samples are normalized:", "Let W 1 ∈ R 2×dx , W 2 ∈ R 2×2 and W 3 ∈ R 2 be the weight matrices, b ∈ R 2 offset vector and k 1 ,k 2 a constant, We can construct needed discriminator as a MLP with two hidden layer containing O(2d x ) parameters.", "We set weight matrices", "For any input v ∈ D r ∪ D g , the discriminator output is computed as:", "where σ(x) = 1 1+e −x is the sigmoid function.", "Let α = W 1 v − b, we have", "where l < 1.", "Let β = σ(k 1 α), we have", "as k 2 → ∞.", "Hence, for any input v ∈ D r ∪ D g , discriminator outputs", "In this case, discriminator objective also has a more optimal value than the theoretical optimal version:", "So the optimal discriminator output on D r and D g is also not a constant 1 2 in this case." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5454545617103577, 0, 0.10526315122842789, 0.12765957415103912, 0.3333333432674408, 0.3333333432674408, 0, 0.03448275476694107, 0.09756097197532654, 0.1904761791229248, 0.21621620655059814, 0.21621620655059814, 0.1818181723356247, 0.19999998807907104, 0.12765957415103912, 0.16326530277729034, 0.14814814925193787, 0.31372547149658203, 0.05405404791235924, 0.23999999463558197, 0.12121211737394333, 0.25531914830207825, 0.12244897335767746, 0.3396226465702057, 0.19354838132858276, 0.49180328845977783, 0.3255814015865326, 0.1764705777168274, 0, 0.052631575614213943, 0.2222222238779068, 0.2916666567325592, 0.5531914830207825, 0.3414634168148041, 0, 0.1904761791229248, 0.1860465109348297, 0.072727270424366, 0.060606054961681366, 0.051282044500112534, 0.05882352590560913, 0.09999999403953552, 0.11764705181121826, 0.08695651590824127, 0.07692307233810425, 0.1355932205915451, 0.07999999821186066, 0, 0, 0, 0, 0, 0.07692307233810425, 0, 0.0555555522441864, 0.09756097197532654 ]
HyxsR24tvS
true
[ " We propose a novel GAN training method by considering certain fake samples as real to alleviate mode collapse and stabilize training process." ]
[ "We present a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection. ", "Evaluating generative models of discrete sequences from a continuous latent space is a challenging problem, since their optimization involves multiple competing objective terms. ", "We introduce a model-selection pipeline to compare and filter models throughout consecutive stages of more complex and expensive metrics.", "We present the pipeline in an interactive visual tool to enable the exploration of the metrics, analysis of the learned latent space, and selection of the best model for a given task. ", "We focus specifically on the variational auto-encoder family in a case study of modeling peptide sequences, which are short sequences of amino acids.", "This task is especially interesting due to the presence of multiple attributes we want to model.", "We demonstrate how an interactive visual comparison can assist in evaluating how well an unsupervised auto-encoder meaningfully captures the attributes of interest in its latent space.", "Unsupervised representation learning and generation of text from a continuous space is an important topic in natural language processing.", "This problem has been successfully addressed by variational auto-encoders (VAE) BID16 and variations, which we will introduce in Section 2. The same methods are also relevant to areas like drug discovery, as the therapeutic small molecules and macromolecules (nucleic acids, peptides, proteins) can be represented as discrete linear sequences, analogous to text strings.", "Our case study of interest is modeling peptide sequences.In the VAE formulation, we define the sequence representation as a latent variable modeling problem of inputs x and latent variables z, where the joint distribution p(x, z) is factored as p(z)p θ (x|z) and the inference of the hidden variable z for a given input x is approximated through an inference network q φ (z|x).", "The auto-encoder training typically aims to minimize two competing objectives:", "(a) reconstruction of the input and", "(b) regularization in the latent space.", "Term", "(b) acts as a proxy to two real desiderata:", "(i) \"meaningful\" representations in latent space, and", "(ii) the ability to sample new datapoints from p(x) through p(z)p θ (x|z).", "These competing goals and objectives form a fundamental trade-off, and as a consequence, there is no easy way to measure the success of an auto-encoder model.", "Instead, measuring success requires careful consideration of multiple different metrics.", "The discussion of the metrics is in Section 2.2, and they will be incorporated in the IVELS tool (Section 5.1 and 5.2).For", "generating discrete sequences while controlling user-specific attributes, for example peptide sequences with specific functionality, it is crucial to consider conditional generation. The", "most Figure 1 : Overview of the IVELS tool. In", "every stage, we can filter the models to select the ones with satisfactory performance. In", "the first stage, models can be compared using the static metrics that are typically computed during training (left). In", "the second stage, we investigate the activity vs noise of the learned latent space (top right) and evaluate whether we can linearly separate attributes (not shown). During", "the third stage, the tool enables interactive exploration of the attributes in a 2D projection of the latent space (bottom right). 
straightforward", "approach would be limiting the training set to those sequences with the desired attributes. However, this would", "require large quantities of data labeled with exactly those attributes, which is often not available. Moreover, the usage", "of those models that are trained on a specific set of labeled data will likely be restricted to that domain. In contrast, unlabeled", "sequence data is often freely available. Therefore, a reasonable", "approach for model training is to train a VAE on a large corpus without requiring attribute labels, then leveraging the structure in the latent space for conditional generation based on attributes which are introduced post-hoc. As a prerequisite for this", "goal, we focus on how q φ (z|x) encodes the data with specific attributes. We introduce the encoding", "of the data subset corresponding to a specific attribute, i.e. the subset marginal posterior, in Section 3. This will be important in the IVELS tool (Section 5.3 and 5.4). Now that we introduced our", "models (VAE family), the importance of conditioning on attributes, and our case study of interest (peptide generation), we turn to the focus of our paper. To assist in the model selection", "process, we present a visual tool for interactive exploration and selection of auto-encoder models. Instead of selecting models by one", "single unified metric, the tool enables a machine learning practitioner to interactively compare different models, visualize several metrics of interest, and explore the latent space of the encoder. This exploration is building around", "distributions in the latent space of data subsets, where the subsets are defined by the attributes of interest. We will quantify whether a linear classifier", "can discriminate attributes in the latent space, and enable visual exploration of the attributes with 2D projections. The setup allows the definition of new ad-hoc", "attributes and sets to assist users in understanding the learned latent space. The tool is described in Section 5.In Section", "6, we discuss some observations we made using IVELS as it relates to (1) our specific domain of peptide modeling and (2) different variations of VAE models.", "We presented a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection focused on auto-encoder models for peptide sequences.", "Even though we present the tool with this use case, the principle is generally useful for models which do not have a single metric to compare and evaluate.", "With some adaptation to the model and metrics, this tool could be extended to evaluate other latent variable models, either for sequences or images, speech synthesis models, etc.", "In all those scenarios, having a usable, visual and interactive tool for model architects and model trainers will enable efficient exploration and selection of different model variations.", "The results from this evaluation can further guide the generation of samples with the desired attribute(s)." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.34285715222358704, 0.2790697515010834, 0.2631579041481018, 0.5106382966041565, 0.3333333134651184, 0.22857142984867096, 0.41860464215278625, 0.25641024112701416, 0.08571428060531616, 0.260869562625885, 0.13333332538604736, 0.23076923191547394, 0.23076923191547394, 0.13793103396892548, 0.14814814925193787, 0.12121211737394333, 0.3181818127632141, 0.06666666269302368, 0.1904761791229248, 0.19512194395065308, 0.20000000298023224, 0.11764705181121826, 0.052631575614213943, 0.27272728085517883, 0.3684210479259491, 0.22857142984867096, 0.10526315122842789, 0.1463414579629898, 0.06896550953388214, 0.25925925374031067, 0.1621621549129486, 0.23529411852359772, 0.17391303181648254, 0.42105263471603394, 0.4000000059604645, 0.3333333134651184, 0.2926829159259796, 0.3589743673801422, 0.17777776718139648, 0.4000000059604645, 0.2978723347187042, 0.30434781312942505, 0.2790697515010834, 0.11428570747375488 ]
Hkl8EILFdN
true
[ "We present a visual tool to interactively explore the latent space of an auto-encoder for peptide sequences and their attributes." ]
[ "Neural networks trained through stochastic gradient descent (SGD) have been around for more than 30 years, but they still escape our understanding.", "This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons.", "While being the core building block of deep neural networks, the way they encode information about the inputs and how such encodings emerge is still unknown.", "We report experiments providing strong evidence that hidden neurons behave like binary classifiers during training and testing.", "During training, analysis of the gradients reveals that a neuron separates two categories of inputs, which are impressively constant across training.", "During testing, we show that the fuzzy, binary partition described above embeds the core information used by the network for its prediction.", "These observations bring to light some of the core internal mechanics of deep neural networks, and have the potential to guide the next theoretical and practical developments.", "Deep neural networks are methods full of good surprises.", "Today, to perform image classification, one can train a 100M parameters convolutional neural network (CNN) with 1M training examples.", "Beyond raising questions about generalization (Zhang et al., 2017) , it appears that the classification models derived from those CNNs offer object detectors for free, simply by thresholding activation maps BID22 Zhou et al., 2015; BID5 .", "The learned representations also appear to be universal enough to be re-used on new tasks even in an entirely different domain (e.g. from natural to medical images in BID10 ).", "If memory or computation are bottlenecks, no problem, networks with binary weights and binary activations work just as well BID23 .", "What characteristics of SGD trained neural networks allow these intriguing behaviour to emerge?Deep", "neural networks also have their limitations. They", "currently pose lots of difficulties with respect to continuous learning BID15 , robustness BID27 BID22 , or unsupervised learning BID6 . Are", "there other good surprises to expect in those fields, or do those difficulties correspond to fundamental limitations of SGD trained deep neural networks?In order", "to answer both questions, a better understanding of deep neural networks is definitely needed. Since the", "intricate nature of the network hinders theoretical developments, we believe experiments offer a valuable alternative path to offer an insight into the key mechanisms supporting the success of neural networks, thereby paving the way both for future theoretical and practical developments. In other", "words: analysing how something works helps understanding why it works, and gives ideas to make it work better.In particular, the workings of hidden neurons, while being the core building block of deep neural networks, are still a mystery. It is tempting", "to associate hidden neurons to the detection of semantically relevant concepts. Accordingly, many", "works studying neurons have focused on their interpretability. A common and generally", "admitted conception consists in considering that they represent concepts with a level of abstraction that grows with the layer depth BID19 . This conception has been", "supported by several works showing that intermediate feature maps in convolutional neural networks can be used to detect higher level objects through simple thresholding BID22 Zhou et al., 2015; BID5 . 
However, it is not clear", "if these observations reflect the entire relevant information captured by that feature map, or, on the contrary, if this interpretation is ignoring important aspects of it. In other words, the complete", "characterization of the way a neuron encodes information about the input remains unknown. Moreover, the dynamics of training", "that lead to the encoding of information used by a neuron is -to our knowledge-unexplored. This paper uses an experimental approach", "that advances the understanding of both these aspects of neurons. The main finding of our paper is the following", ": the encodings and dynamics of a neuron can approximately be characterized by the behaviour of a binary classifier. More precisely:1. During training, we observe", "that the sign of", "the partial derivative of the loss with respect to the activation of a sample in a given neuron is impressively constant (except when the neuron is too far from the output layer). We observe experimentally that this leads a", "neuron to push activation of samples either up, or down, partitioning the inputs in two categories of nearly equal size.2. During testing, quantization and binarization", "experiments show that the fuzzy, binary partition observed in point 1. embeds the core information used by the network", "for its predictions.This surprisingly simple behaviour has been observed across different layers, different networks and at different problem scales (MNIST, CIFAR-10 and ImageNet). It seems like hidden neurons have a clearly defined", "behaviour that naturally emerges in neural networks trained with stochastic gradient descent. This behaviour has -to our knowledge-remained undiscovered", "until now, and raises intriguing questions to address in future investigations.", "In this paper, we try to validate an ambitious hypothesis describing the behaviour of a neuron in a neural network during training and testing.", "Our hypothesis is surprisingly simple: a neuron behaves like a binary classifier, separating two categories of inputs.", "The categories, of nearly equal size, are provided by the backpropagated gradients and are impressively consistent during training for layers close enough to the output.", "While stronger validation is needed, our current experiments, ran on networks of different depths and widths, all validate this behaviour.Our results have direct implications on the interpretability of neurons.", "Studies analysing interpretability focused on the highest activations, e.g. above the 99.5 percentile in BID5 .", "While these activations are the ones who are the most clearly discriminated by the neuron, we show that they do not reflect the complete behaviour of the neuron at all.", "Our experiments reveal that neurons tend to consistently learn concepts that distinguish half of the observed samples, which is fundamentally different.We expect that our observations stimulate further investigations in a number of intriguing research directions disclosed by our analysis.Firstly, since our analysis observes (in FIG2 but does not explain the binary behaviour of neurons in the first layers of a very deep network, it would be interesting to investigate further the regularity of gradients (cfr. 
Section 4.1), in layers far from the output.", "This could potentially unveil simple training dynamics which are currently hidden by noise or, on the contrary, reveal that the unstable nature of the backpropagated gradients is a fundamental ingredient supporting the convergence of first layer neurons.", "Ultimately, these results would provide the missing link for a complete characterization of training dynamics in deep networks.", "Secondly, our work offers a new perspective on the role of activation functions.", "Their current motivation is that adding non-linearities increases the expressivity of the network.", "This, however, does not explain why one particular non-linearity is better than another.", "Our lack of understanding of the role of activation functions heavily limits our ability to design them.", "Our results suggest a local and precise role for activation functions: promoting and facilitating the emergence of a binary encoding in neurons.", "This could be translated in activation functions with a forward pass consisting of well-positioned binarization thresholds, and a backward pass that takes into account how well a sample is partitioned locally, at the neuron level.Finally, we believe that our work provides a new angle of attack for the puzzle of the generalization gap observed in Zhang et al. (2017) .", "Indeed, combining our observations with the works on neuron interpretability tells us that a neuron, while not able to finish its partitioning before convergence, seems to prioritize samples with common patterns (cfr. Figure 2) .", "This prioritization effect during training has already been observed indirectly in BID3 , and we are now Figure 4: Sliding window binarization experiment: pre-activations inside a window with a width of percentile rank 10 are mapped to 1, pre-activations outside of it to 0.", "Information that remains in the signal is only the fact that the pre-activation was inside or outside the window.", "Observing if a new network can use this information for classification reveals structure about the encoding: which window positions provide the most important information for a classifier?", "The results show a clear pattern across all layers and networks that confirms an encoding based on a fuzzy, binary partition of the inputs in two categories of nearly equal size.", "As detailed in Section 3, the layers from the first two rows are part of a network trained on MNIST (with ReLU and sigmoid activation functions respectively), the third and fourth row on CIFAR-10 (with ReLU and no activation function respectively) and the fifth row on ImageNet (with ReLU activation).able", "to localize and study it in depth. The", "dynamics behind this prioritization between samples of a same category should provide insights about the generalization puzzle. 
While", "most previous works have focused on the width of local minima BID16 , the regularity of the gradients and the prioritization effect suggest that the slope leading to it also matters: local minima with good generalization abilities are stronger attractors and are reached more rapidly.", "Two main lessons emerge from our original experimental investigation.The first one arises from the observation that the sign of the loss function partial derivative with respect to the activation of a specific sample is constant along training for the neurons that are sufficiently close to the output, and states that those neurons simply aim at partitioning samples with positive/negative partial derivative sign.The second one builds on two experiments that challenge the partitioning behaviour of neurons in all network layers, and concludes that, as long as it separates large and small pre-activations, a binarization of the neuron's pre-activations in an arbitrary layer preserves most of the information embedded in this layer about the network task.As a main outcome, rather than supporting definitive conclusions, the unique observations made in our paper raise a number of intriguing and potentially very important questions about network learning capabilities.", "Those include questions related to the convergence of first layer neurons in presence of noisy/unstable partial derivatives, the design of activation functions, and the generalization puzzle." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.05405404791235924, 0.04878048226237297, 0.7647058963775635, 0.21621620655059814, 0.10810810327529907, 0.05128204822540283, 0, 0.1111111044883728, 0.039215680211782455, 0, 0.1111111044883728, 0, 0, 0, 0, 0.060606054961681366, 0.1111111044883728, 0.0714285671710968, 0, 0.06896550953388214, 0.10256409645080566, 0.037735845893621445, 0.04444443807005882, 0.1875, 0.15789473056793213, 0.060606054961681366, 0.25641024112701416, 0.0952380895614624, 0.1702127605676651, 0.0952380895614624, 0.1764705777168274, 0.12765957415103912, 0.05714285373687744, 0.0714285671710968, 0.29999998211860657, 0.3030303120613098, 0.14999999105930328, 0.04444443807005882, 0, 0.0952380895614624, 0.12048192322254181, 0.11999999731779099, 0.11428570747375488, 0.06666666269302368, 0.06896550953388214, 0, 0, 0.1621621549129486, 0.11940298229455948, 0.12244897335767746, 0.145454540848732, 0.0624999962747097, 0.04999999329447746, 0.17391303181648254, 0.07407406717538834, 0.07999999821186066, 0.05714285373687744, 0.07547169178724289, 0.08474575728178024, 0.05128204822540283 ]
H1srNebAZ
true
[ "We report experiments providing strong evidence that a neuron behaves like a binary classifier during training and testing" ]
[ "Automatic classification of objects is one of the most important tasks in engineering\n", "and data mining applications.", "Although using more complex and advanced\n", "classifiers can help to improve the accuracy of classification systems, it can be\n", "done by analyzing data sets and their features for a particular problem.", "Feature\n", "combination is the one which can improve the quality of the features.", "In this paper,\n", "a structure similar to Feed-Forward Neural Network (FFNN) is used to generate an\n", "optimized linear or non-linear combination of features for classification.", "Genetic\n", "Algorithm (GA) is applied to update weights and biases.", "Since nature of data sets\n", "and their features impact on the effectiveness of combination and classification\n", "system, linear and non-linear activation functions (or transfer function) are used\n", "to achieve more reliable system.", "Experiments of several UCI data sets and using\n", "minimum distance classifier as a simple classifier indicate that proposed linear and\n", "non-linear intelligent FFNN-based feature combination can present more reliable\n", "and promising results.", "By using such a feature combination method, there is no\n", "need to use more powerful and complex classifier anymore.", "A quick review of engineering problems reveals importance of classification and its application in medicine, mechanical and electrical engineering, computer science, power systems and so on.", "Some of its important applications include disease diagnosis using classification methods to diagnosis Thyroid (Temurtas (2009)), Parkinson BID4 ) and Alzheimers disease BID7 ); or fault detection in power systems such as BID6 ) which uses classification methods to detect winding fault in windmill generators; BID12 ) using neuro-fuzzy based classification method to detect faults in AC motor; and also fault detection in batch processes in chemical engineering BID22 ).", "In all classification problems extracting useful knowledge and features from data such as image, signal, waveform and etcetera can lead to design efficient classification systems.", "As extracted data and their features are not usually suitable for classification purpose, two major approaches can be substituted.", "First approach considers all the classifiers and tries to select effective ones, even if their complexity and computational cost are increased.", "Second approach focusing on the features, enhances the severability of data, and then uses improved features and data for classification.Feature combination is one of the common actions used to enhance features.", "In classic combination methods, deferent features vectors are lumped into a single long composite vector BID19 ).", "In some modern techniques, in addition to combination of feature vectors, dimension of feature space is reduced.", "Reduction process can be done by feature selection, transmission, and projection or mapping techniques, such as Linear Discriminate Analysis (LDA), Principle Component Analysis (PCA), Independent Component Analysis (ICA) and boosting BID19 ).", "In more applications, feature combination is fulfilled to improve the efficiency of classification system such as BID3 ), that PCA and Modular PCA (MPCA) along Quad-Tree based hierarchically derived Longest Run (QTLR) features are used to recognize handwritten numerals as a statistical-topological features combination.", "The other application of feature combination is used for English character recognition, here structure and 
statistical features are combined and then a BP network is used as a classifier ).", "Feature combination has many applications; however, before using it, some questions should be answered: which kind of combination method is useful for the studied application and the available data set.", "Is a reduction of the feature space dimension always useful?", "Is a linear feature combination method better than a non-linear one? In", "this paper, using the structure of a Feed-Forward Neural Network (FFNN) along with a Genetic Algorithm (GA) as a powerful optimization algorithm, Linear Intelligent Feature Combination (LIFC) and Non-Linear Intelligent Feature Combination (NLIFC) systems are introduced to provide combination systems that adapt to the nature of data sets and their features. In", "the proposed method, the original features are fed into a semi-FFNN structure to map the features into a new feature space, and then the outputs of this intelligent mapping structure are classified by a minimum distance classifier via a cross-validation technique. In", "each generation, the weights and biases of the semi-FFNN structure are updated by the GA and the correct recognition rate (or error rate) is evaluated. In the rest of this paper, overviews of the minimum distance classifier, the Feed-Forward Neural Network structure and the Genetic Algorithm are given in Sections 2, 3 and 4, respectively. In", "Section 5, the proposed method and its mathematical considerations are presented. Experimental", "results, and a comparison between the proposed method and other feature combinations and classifiers using the same database, are discussed in Section 6. Finally,", "the conclusion is presented in Section 7." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08695651590824127, 0.13333332538604736, 0.11764705181121826, 0.3478260934352875, 0.260869562625885, 0.1904761791229248, 0, 0.08695651590824127, 0.29999998211860657, 0.19999998807907104, 0, 0.2857142686843872, 0.09090908616781235, 0.1249999925494194, 0.10526315122842789, 0.09090908616781235, 0, 0.1428571343421936, 0, 0.19999998807907104, 0.1764705777168274, 0.13114753365516663, 0.23529411852359772, 0.2666666507720947, 0.12903225421905518, 0.2631579041481018, 0.0714285671710968, 0.07692307233810425, 0.05128204822540283, 0.20000000298023224, 0.1666666567325592, 0.10810810327529907, 0, 0.09999999403953552, 0.11538460850715637, 0.1463414579629898, 0.039215683937072754, 0.1818181723356247, 0.12903225421905518, 0 ]
HJqUtdOaZ
true
[ "A method for enriching and combining features to improve classification accuracy" ]
[ "Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions.", "Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions.", "However, fine-grained control of the image layout, i.e. where in the image specific objects should be located, is still difficult to achieve.", "This is especially true for images that should contain multiple distinct objects at different spatial locations.", "We introduce a new approach which allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator.", "Our approach does not need a detailed semantic layout but only bounding boxes and the respective labels of the desired objects are needed.", "The object pathway focuses solely on the individual objects and is iteratively applied at the locations specified by the bounding boxes.", "The global pathway focuses on the image background and the general image layout.", "We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data set.", "Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations.", "We further show that the object pathway focuses on the individual objects and learns features relevant for these, while the global pathway focuses on global image characteristics and the image background.", "Understanding how to learn powerful representations from complex distributions is the intriguing goal behind adversarial training on image data.", "While recent advances have enabled us to generate high-resolution images with Generative Adversarial Networks (GANs), currently most GAN models still focus on modeling images that either contain only one centralized object (e.g. faces (CelebA), objects (ImageNet), birds (CUB-200), flowers (Oxford-102), etc.", ") or on images from one specific domain (e.g. LSUN bedrooms, LSUN churches, etc.) .", "This means that, overall, the variance between images used for training GANs tends to be low BID14 .", "However, many real-life images contain multiple distinct objects at different locations within the image and with different relations to each other.", "This is for example visible in the MS-COCO data set BID11 , which consists of images of different objects at different locations within one image.", "In order to model images with these complex relationships, we need models that can model images containing multiple objects at distinct locations.", "To achieve this, we need control over what kind of objects are generated (e.g. 
persons, animals, objects, etc.), the location, and the size of these objects.", "This is a much more challenging task than generating a single object in the center of an image. Current work (BID10 BID9 BID6 Wang et al., 2018) often approaches this challenge by using a semantic layout as additional conditional input.", "While this can be successful in controlling the image layout and object placement, it also places a high burden on the generating process since a complete scene layout must be obtained first.", "We propose a model that does not require a full semantic layout, but instead only requires the desired object locations and identities (see Figure 1 ).", "One part of our model, called the global pathway, is responsible for generating the general layout of the complete image, while a second path, the object pathway, is used to explicitly generate the features of different objects based on the relevant object label and location.", "Our experiments indicate that we do indeed get additional control over the image generation process through the introduction of object pathways in GANs.", "This enables us to control the identity and location of multiple objects within a given image based on bounding boxes and thereby facilitates the generation of more complex scenes.", "We further find that the division of work between a global and an object pathway seems to improve the image quality both subjectively and based on quantitative metrics such as the Inception Score and the Fréchet Inception Distance. The results further indicate that the focus on global image statistics by the global pathway and the more fine-grained attention to detail of specific objects by the object pathway work well.", "This is visualized for example in rows C and D of Figure 5 .", "The global pathway (row C) generates features for the general image layout and background but does not provide sufficient details for individual objects.", "The object pathway (row D), on the other hand, focuses entirely on the individual objects and generates features specifically for a given object at a given location.", "While this is the desired behavior of our model, it can also lead to sub-optimal images if there are no bounding boxes for objects that should be present within the image.", "This can often be the case if the foreground object is too small (in our case less than 2% of the total image) and is therefore not specifically labeled.", "In this case, the objects are sometimes not modeled in the image at all, despite being prominent in the respective image caption, since the object pathway does not generate any features.", "We can observe this, for example, in images described as \"many sheep are standing on the grass\", where the individual sheep are too small to warrant a bounding box.", "In this case, our model will often only generate an image depicting grass and other background details, while not containing any sheep at all. Another weakness is that bounding boxes that overlap too much (empirically an overlap of more than roughly 30%) also often lead to sub-optimal objects at that location.", "Especially in the overlapping section of bounding boxes we often observe local inconsistencies or failures.", "This might be the result of our merging of the different features within the object pathway since they are simply added to each other at overlapping areas.", "A more sophisticated merging procedure could potentially alleviate this problem. Another approach would be to additionally enhance the 
bounding box layout by predicting the specific object shape within each bounding box, as done for example by BID6 . Finally", ", our model currently does not generate the bounding boxes and labels automatically. Instead", ", they have to be provided at test time, which somewhat limits the usability for unsupervised image generation. However", ", even when using ground truth bounding boxes, our models still outperform other current approaches that are tested with ground truth bounding boxes (e.g. BID6 ) based on the IS and FID. This is", "even without the additional need to learn to specify the shape within each bounding box, as done by BID6 . In the", "future, this limitation can be avoided by extracting the relevant bounding boxes and labels directly from the image caption, as is done for example by BID6 .", "With the goal of understanding how to gain more control over the image generation process in GANs, we introduced the concept of an additional object pathway.", "Such a mechanism for differentiating between a scene representation and object representations allows us to control the identity, location, and size of arbitrarily many objects within an image, as long as the objects do not overlap too strongly.", "In parallel, a global pathway, similar to a standard GAN, focuses on the general scene layout and generates holistic image features.", "The object pathway, on the other hand, gets as input an object label and uses this to generate features specifically for this object, which are then placed at the location given by a bounding box. The object pathway is applied iteratively for each object at each given location and as such, we obtain a representation of individual objects at individual locations and of the general image layout (background, etc.) as a whole.", "The features generated by the object and global pathway are then concatenated and are used to generate the final image output.", "Our tests on synthetic and real-world data sets suggest that the object pathway is an extension that can be added to common GAN architectures without much change to the original architecture and can, along with more fine-grained control over the image layout, also lead to better image quality." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0.11764705181121826, 0.21621620655059814, 0.25, 0.2790697515010834, 0.15789473056793213, 0.17142856121063232, 0.07407406717538834, 0.06896550953388214, 0.39024388790130615, 0.10526315122842789, 0.05714285373687744, 0.14035087823867798, 0.06666666269302368, 0.12121211737394333, 0.3888888955116272, 0.25641024112701416, 0.277777761220932, 0.2926829159259796, 0.0363636314868927, 0.045454539358615875, 0.1463414579629898, 0.15686273574829102, 0.15789473056793213, 0.3333333432674408, 0.13333332538604736, 0.13793103396892548, 0.10526315122842789, 0.10526315122842789, 0.18518517911434174, 0.09756097197532654, 0.04878048226237297, 0.0952380895614624, 0.13114753365516663, 0.06451612710952759, 0.14999999105930328, 0.07843136787414551, 0.06666666269302368, 0.05714285373687744, 0.0416666604578495, 0.17142856121063232, 0.04878048226237297, 0.20512819290161133, 0.2448979616165161, 0.1111111044883728, 0.17910447716712952, 0.1764705777168274, 0.21052631735801697 ]
H1edIiA9KQ
true
[ "Extend GAN architecture to obtain control over locations and identities of multiple objects within generated images." ]
[ "The demand for abstractive dialog summary is growing in real-world applications.", "For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction.", "However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datasets.", "We propose an abstractive dialog summarization dataset based on MultiWOZ.", "If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched.", "To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain.", "SPNet incorporates these semantic scaffolds for dialog summarization.", "Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text.", "On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.", "Summarization aims to condense a piece of text to a shorter version, retaining the critical information.", "On dialogs, summarization has various promising applications in the real world.", "For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records.", "There is also a general demand for summarizing meetings in order to track project progress in the industry.", "Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents.", "Hence, dialog summarization will be a potential field in summarization track.", "There are two types of summarization: extractive and abstractive.", "Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information.", "Previous dialog summarization research mostly study extractive summarization (Murray et al., 2005; Maskey & Hirschberg, 2005) .", "Extractive methods merge selected important utterances from a dialog to form summary.", "Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns.", "Therefore, extractive summarization is not the best approach to summarize dialogs.", "However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora.", "Popular abstractive summarization dataset like CNN/Daily Mail (Hermann et al., 2015) is on news documents.", "AMI meeting corpus (McCowan et al., 2005) is the common benchmark, but it only has extractive summary.", "In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ (Budzianowski et al., 2018) .", "Seq2Seq models such as Pointer-Generator (See et al., 2017) have achieved high-quality summaries of news document.", "However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally.", "To address these problems, we propose Scaffold Pointer Network (SPNet).", "SPNet 
incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain.", "Firstly, SPNet adds separate encoders to the attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles.", "Then, our method inputs delexicalized utterances to produce a delexicalized summary, and fills in slot values to generate the complete summary.", "Finally, we incorporate the dialog domain scaffold by jointly optimizing a dialog domain classification task along with the summarization task.", "We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ.", "SPNet outperforms Pointer-Generator (See et al., 2017) and Transformer (Vaswani et al., 2017) on all the metrics.", "2 RELATED WORK: Rush et al. (2015) first applied modern neural models to abstractive summarization.", "Their approach is based on the Seq2Seq framework (Sutskever et al., 2014) and the attention mechanism (Bahdanau et al., 2015) , achieving state-of-the-art results on the Gigaword and DUC-2004 datasets.", "Gu et al. (2016) proposed a copy mechanism for summarization, demonstrating its effectiveness by combining the advantages of the extractive and abstractive approaches.", "See et al. (2017) applied pointing (Vinyals et al., 2015) as a copy mechanism and used a coverage mechanism (Tu et al., 2016) to discourage repetition.", "Most recently, reinforcement learning (RL) has been employed in abstractive summarization.", "RL-based approaches directly optimize the objectives of summarization (Ranzato et al., 2016; Celikyilmaz et al., 2018) .", "However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias (Bahdanau et al., 2017) .", "Recently, pre-training methods have become popular in NLP applications.", "BERT (Devlin et al., 2018) and GPT (Radford et al., 2018) have achieved state-of-the-art performance in many tasks, including summarization.", "For instance, proposed a method to pre-train a hierarchical document encoder for extractive summarization.", "Hoang et al. (2019) proposed two strategies to incorporate a pre-trained model (GPT) into the abstractive summarizer and achieved better performance.", "However, there has not been much research on adapting pre-trained models to dialog summarization.", "Dialog summarization, specifically meeting summarization, has been studied extensively.", "Previous work generally focused on statistical machine learning methods in extractive dialog summarization: Galley (2006) used skip-chain conditional random fields (CRFs) (Lafferty et al., 2001 ) as a ranking method in extractive meeting summarization.", "Wang & Cardie (2013) compared support vector machines (SVMs) (Cortes & Vapnik, 1995) with LDA-based topic models (Blei et al., 2003) for producing decision summaries.", "However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark.", "Recent work (Wang & Cardie, 2016; Goo & Chen, 2018; Pan et al., 2018) created abstractive dialog summary benchmarks from existing dialog corpora.", "Goo & Chen (2018) annotated topic descriptions in the AMI meeting corpus as summaries.", "However, the topics they defined are coarse, such as \"industrial designer presentation\".", "They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization.", "Moreover, Li et al. 
(2019) first built a model to summarize audio-visual meeting data with an abstractive method.", "However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in depth in our work.", "We adapt a dialog generation dataset, MultiWOZ, to an abstractive dialog summarization dataset.", "We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as the semantic scaffolds to improve abstractive summary quality.", "We also propose an automatic evaluation metric CIC that considers semantic slot relevance to serve as a complementary metric to ROUGE.", "SPNet outperforms baseline methods in both automatic and human evaluation metrics.", "This suggests that incorporating semantic scaffolds effectively improves abstractive summarization quality in the dialog setting.", "Moreover, we can easily extend SPNet to other summarization tasks.", "We plan to apply the semantic slot scaffold to news summarization.", "Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary.", "We also plan to collect a human-human dialog dataset with more diverse human-written summaries.", "A SUPPLEMENT TO CASE STUDY. Supplement summary (Transformer): You are planning your trip in Cambridge.", "You are looking for a place to stay.", "The hotel doesn't need to include internet and should include free parking.", "The hotel should be in the type of guesthouse.", "If there is no such hotel, how about one that is in the moderate price range?", "Once you find the hotel, you want to book it for 6 people and 4 nights starting from Sunday.", "Make sure you get the reference number.", "You are also looking forward to dine.", "The restaurant should be in the centre.", "Make sure you get the reference number." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.06666666269302368, 0.19354838132858276, 0.38461539149284363, 0.11999999731779099, 0.25, 0.4166666567325592, 0.10526315122842789, 0.11764705181121826, 0.13333332538604736, 0.07407406717538834, 0.05714285373687744, 0.1818181723356247, 0.06666666269302368, 0.23076923191547394, 0.07999999821186066, 0.24390242993831635, 0.1249999925494194, 0.2142857164144516, 0.10526315122842789, 0.14814814925193787, 0.2222222238779068, 0.1249999925494194, 0, 0.2857142686843872, 0, 0.13636362552642822, 0.1538461446762085, 0.19354838132858276, 0.1818181723356247, 0.11764705181121826, 0.19354838132858276, 0.0714285671710968, 0, 0.19354838132858276, 0, 0.05405404791235924, 0.0555555522441864, 0.14814814925193787, 0.06451612710952759, 0.05714285373687744, 0, 0.05882352590560913, 0.27586206793785095, 0.2702702581882477, 0.19999998807907104, 0, 0.12244897335767746, 0.04878048226237297, 0.32258063554763794, 0.10526315122842789, 0, 0, 0.3870967626571655, 0.23529411852359772, 0.05714285373687744, 0.4285714328289032, 0.4615384638309479, 0.2857142686843872, 0, 0.32258063554763794, 0.1538461446762085, 0.3199999928474426, 0.05128204822540283, 0.2666666507720947, 0, 0.25, 0.07407406717538834, 0, 0, 0.11764705181121826, 0, 0.08695651590824127, 0, 0 ]
B1eibJrtwr
true
[ "We propose a novel end-to-end model (SPNet) to incorporate semantic scaffolds for improving abstractive dialog summarization." ]
[ "Knowledge bases (KB), both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information.", "A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities.", "Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple.", "Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them.", "We propose a new algorithm, MINERVA, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity.", "Since random walks are impractical in a setting with unknown destination and combinatorially many paths from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths.", "On a comprehensive evaluation on seven knowledge base datasets, we found MINERVA to be competitive with many current state-of-the-art methods.", "Automated reasoning, the ability of computing systems to make new inferences from observed evidence, has been a long-standing goal of artificial intelligence.", "We are interested in automated reasoning on large knowledge bases (KB) with rich and diverse semantics BID44 BID1 BID5 .", "KBs are highly incomplete BID26 , and facts not directly stored in a KB can often be inferred from those that are, creating exciting opportunities and challenges for automated reasoning.", "For example, consider the small knowledge graph in Figure 1 .", "We can answer the question \"Who did Malala Yousafzai share her Nobel Peace prize with?\" from the following reasoning path: Malala Yousafzai → WonAward → Nobel Peace Prize 2014 → AwardedTo → Kailash Satyarthi.", "Our goal is to automatically learn such reasoning paths in KBs.", "We frame the learning problem as one of query answering, that is to say, answering questions of the form (Malala Yousafzai, SharesNobelPrizeWith, ?).From", "its early days, the focus of automated reasoning approaches has been to build systems that can learn crisp symbolic logical rules BID24 BID34 . Symbolic", "representations have also been integrated with machine learning especially in statistical relational learning BID29 BID15 BID21 BID22 , but due to poor generalization performance, these approaches have largely been superceded by distributed vector representations. Learning", "embedding of entities and relations using tensor factorization or neural methods has been a popular approach BID31 BID2 Socher et al., 2013, inter alia) , but these methods cannot capture chains of reasoning expressed by KB paths. Neural multi-hop", "models BID30 BID17 BID47 address the aforementioned problems to some extent by operating on KB paths embedded in vector space. However, these models", "take as input a set of paths which are gathered by performing random walks Figure 1: A small fragment of a knowledge base represented as a knowledge graph. Solid edges are observed", "and dashed edges are part of queries. Note how each query relation", "(e.g. SharesNobelPrizeWith, Nationality, etc.) 
can be answered by traversing the graph via \"logical\" paths between entity 'Malala Yousafzai' and the corresponding answer. Additionally, models such as", "those developed in BID30 ; BID9 use the same set of initially collected paths to answer a diverse set of query types (e.g. MarriedTo, Nationality, WorksIn etc.). This paper presents a method", "for efficiently searching the graph for answer-providing paths using reinforcement learning (RL) conditioned on the input question, eliminating any need for precomputed paths. Given a massive knowledge graph", ", we learn a policy, which, given the query (entity_1, relation, ?), starts from entity_1 and learns to walk to the answer node by choosing to take a labeled relation edge at each step, conditioning on the query relation and the entire path history. This formulates the query-answering", "task as a reinforcement learning (RL) problem where the goal is to take an optimal sequence of decisions (choices of relation edges) to maximize the expected reward (reaching the correct answer node). We call the RL agent MINERVA for \"Meandering", "In Networks of Entities to Reach Verisimilar Answers.\" Our RL-based formulation has many desirable properties. First, MINERVA has the built-in flexibility", "to take paths of variable length, which is important for answering harder questions that require complex chains of reasoning BID42 . Secondly, MINERVA needs no pretraining and", "trains on the knowledge graph from scratch with reinforcement learning; no other supervision or fine-tuning is required, representing a significant advance over prior applications of RL in NLP. Third, our path-based approach is computationally", "efficient, since by searching in a small neighborhood around the query entity it avoids ranking all entities in the KB as in prior work. Finally, the reasoning paths found by our agent automatically", "form an interpretable provenance for its predictions. The main contributions of the paper are: (a) We present agent MINERVA, which learns to do query answering", "by walking on a knowledge graph conditioned on an input query, stopping when it reaches the answer node. The agent is trained using reinforcement learning, specifically", "policy gradients ( § 2). (b) We evaluate MINERVA on several benchmark datasets and compare", "favorably to Neural Theorem Provers (NTP) BID39 and Neural LP , which do logical rule learning in KBs, and also to state-of-the-art embedding-based methods such as DistMult BID54 , ComplEx BID48 and ConvE BID12 . (c) We also extend MINERVA to handle partially structured natural", "language queries and test it on the WikiMovies dataset ( § 3.3) BID25 . We also compare to DeepPath BID53 , which uses reinforcement learning", "to pick paths between entity pairs. The main difference is that the state of their RL agent includes the", "answer entity since it is designed for the simpler task of predicting if a fact is true or not. As such, their method cannot be applied directly to our more challenging", "query answering task where the second entity is unknown and must be inferred. 
Nevertheless, MINERVA outperforms DeepPath on their benchmark NELL-995", "dataset when compared in their experimental setting ( § 3.2.2).", "We explored a new way of automated reasoning on large knowledge bases in which we use the knowledge graph representation of the knowledge base and train an agent to walk to the answer node conditioned on the input query.", "We achieve state-of-the-art results on multiple benchmark knowledge base completion tasks and we also show that our model is robust and can learn long chains of reasoning.", "Moreover, it needs no pretraining or initial supervision.", "Future research directions include applying more sophisticated RL techniques and working directly on textual queries and documents.", "Table 10: A few example 1-to-M relations from FB15K-237 with a high cardinality ratio of tail to head." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.04878048226237297, 0.09756097197532654, 0.08888888359069824, 0.05714285373687744, 0.1904761791229248, 0.30188679695129395, 0.277777761220932, 0.10810810327529907, 0.22857142984867096, 0.08888888359069824, 0.1538461446762085, 0.0952380895614624, 0.07407406717538834, 0.10526315122842789, 0.04999999701976776, 0.04255318641662598, 0.07547169178724289, 0.10526315122842789, 0.1904761791229248, 0.1428571343421936, 0.12765957415103912, 0.12765957415103912, 0.20512819290161133, 0.2545454502105713, 0.2745097875595093, 0.1111111044883728, 0.19512194395065308, 0.20408162474632263, 0.09090908616781235, 0.2926829159259796, 0.2926829159259796, 0.25806450843811035, 0.18518517911434174, 0.29999998211860657, 0.1764705777168274, 0.1304347813129425, 0.1621621549129486, 0, 0.42553192377090454, 0.19999998807907104, 0, 0.25, 0.060606054961681366 ]
Syg-YfWCW
true
[ "We present a RL agent MINERVA which learns to walk on a knowledge graph and answer queries" ]