Columns:
  source (sequence)
  source_labels (sequence)
  rouge_scores (sequence)
  paper_id (string, lengths 9-11)
  ic (unknown)
  target (sequence)
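The listing above is the column schema for the example records that follow. As a minimal, hedged sketch (the data file name is a placeholder and the field semantics are inferred from the rows below, not stated by the source), records with this schema could be loaded and inspected like this:

```python
# Sketch: load and inspect records with the schema above using the
# Hugging Face `datasets` library. "records.jsonl" is a hypothetical
# local file; substitute the actual dataset repository or data files.
from datasets import load_dataset

ds = load_dataset("json", data_files="records.jsonl", split="train")

for example in ds.select(range(2)):
    sentences = example["source"]      # list of source sentences from the paper
    labels = example["source_labels"]  # 0/1 flags; 1 appears to mark the sentence best matching the target
    scores = example["rouge_scores"]   # ROUGE overlap of source sentences with the target (lengths may differ)
    target = example["target"]         # target summary (TL;DR-style sentence)
    picked = [s for s, flag in zip(sentences, labels) if flag == 1]
    print(example["paper_id"], len(sentences), "sentences; selected:", picked[:1])
```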
[ "In this paper, we introduce a novel method to interpret recurrent neural networks (RNNs), particularly long short-term memory networks (LSTMs) at the cellular level.", "We propose a systematic pipeline for interpreting individual hidden state dynamics within the network using response characterization methods.", "The ranked contribution of individual cells to the network's output is computed by analyzing a set of interpretable metrics of their decoupled step and sinusoidal responses.", "As a result, our method is able to uniquely identify neurons with insightful dynamics, quantify relationships between dynamical properties and test accuracy through ablation analysis, and interpret the impact of network capacity on a network's dynamical distribution.", "Finally, we demonstrate generalizability and scalability of our method by evaluating a series of different benchmark sequential datasets.", "In this paper, we proposed a method for response characterization for LSTM networks to predict cell-contributions to the overall decision of a learned network on both the cell and network-level resolution.", "We further verified and validated our predictions by performing an ablation analysis to identify cell's which contribution heavily to the network's output decision with our simple response characterization method.", "The resulting method establishes a novel building block for interpreting LSTM networks.", "The LSTM network's dynamic-space is broad and cannot be fully captured by fundamental input sequences.", "However, our methodology demonstrates that practical sub-regions of dynamics are reachable by response metrics which we use to build a systematic testbench for LSTM interpretability.", "We have open-sourced our algorithm to encourage other researchers to further explore dynamics of LSTM cells and interpret the kinetics of their sequential models.In the future, we aim to extend our approach to even more data modalities and analyze the training phase of LSTMs to interpret the learning of the converged dynamics presented in this work.7", "Acknowledgment" ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.29999998211860657, 0.2857142686843872, 0.04878048226237297, 0.07843136787414551, 0.05882352590560913, 0.3181818127632141, 0.13636362552642822, 0.27586206793785095, 0, 0.1428571343421936, 0.09999999403953552 ]
HygkbYBw3X
true
[ "Introducing the response charactrization method for interpreting cell dynamics in learned long short-term memory (LSTM) networks. " ]
[ "Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties.", "The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns.", "However, the mechanisms and functional significance of these spatial representations remain largely mysterious.", "As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs.", "Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells.", "All these different functional types of neurons have been observed experimentally.", "The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies.", "Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.\n", "Understanding the neural code in the brain has long been driven by studying feed-forward architectures, starting from Hubel and Wiesel's famous proposal on the origin of orientation selectivity in primary visual cortex BID19 .", "Inspired by the recent development in deep learning BID25 BID30 BID18 BID39 , there has been a burst of interest in applying deep feedforward models, in particular convolutional neural networks (CNN) BID29 , to study the sensory systems, which hierarchically extract useful features from sensory inputs (see e.g., BID61 ; BID24 ; BID22 ; BID60 ).For", "more cognitive tasks, neural systems often need to maintain certain internal representations of relevant variables in the absence of external stimuli-a process that requires more than feature extraction. We", "will focus on spatial navigation, which typically requires the brain to maintain a representation of self-location and update it according to the animal's movements and landmarks of the environment. Physiological", "studies done in rodents and other mammals (including humans, non-human primates and bats) have revealed a variety of neural correlates of space in Hippocampus and Entorhinal Cortex (EC), including place cells BID41 , grid cells BID10 BID15 BID11 BID62 BID23 BID20 , along with border cells BID49 , band-like cells BID27 and others (see FIG0 ). In particular", ", each grid cell only fires when the animal occupies a distinct set of physical locations, and strikingly these locations lie on a lattice. The study of", "the neural underpinning of spatial cognition has provided an important window into how high-level cognitive functions are supported in the brain BID0 .How might the", "spatial navigation task be solved using a network of neurons? Recurrent neural", "networks (RNNs) BID18 BID12 BID43 BID54 BID13 BID53 seem particularly useful for these tasks. Indeed, recurrent-based", "continuous attractor networks have been one popular type of models proposed for the formation of grid cells BID4 BID5 and place cells BID45 . Such models have provided", "valuable insights into one set of possible mechanisms that could support the formation of the grids. 
However, these models typically", "rely on fine-tuned connectivity patterns, in particular the models need a subtle yet systematic asymmetry in the connectivity pattern to move the attractor state according to the animal's own movement. The existence of such a specific", "2D connectivity in rodent EC remains unclear. Additionally, previous models have", "mainly focused on grid cells, while other types of responses that co-exist in the Entorhinal Cortex have been largely ignored. It would be useful to have a unified", "model that can simultaneously explain different types of neural responses in EC.Motivated by these considerations, here we present an alternative modeling approach for understanding the representation of space in the neural system. Specifically, we trained a RNN to perform", "some spatial navigation tasks. By leveraging the recent development in RNN", "training and knowledge of the navigation system in the brain, we show that training a RNN with biologically relevant constraints naturally gives rise to a variety of spatial response profiles as observed in EC, including grid-like responses. To our knowledge, this is the first study to", "show that grid-like responses could emerge from training a RNN to perform navigation. Our result implies that the neural representation", "in EC may be seen as a natural way for the brain to solve the navigation task efficiently BID55 . More generally, it suggests that RNNs can be a powerful", "tool for understanding the neural mechanisms of certain high-level cognitive functions. recorded when an animal navigates in a square environment", ", replotted from BID27 , with the heat map representing the firing rate of this neuron as a function of the animal's location (red corresponds to high firing rate); a \"band-like\" cell from BID27 ; a border cell from BID49 ; an irregular spatially tuned cell from BID7 ; a \"speed cell\" from BID26 , which exhibits roughly linear dependence on the rodent's running speed; a \"heading direction cell\" from BID46 , which shows systematic change of firing rate depending on animal's heading direction. b) The network consists of N = 100 recurrently connected", "units (or neurons) which receive two external inputs, representing the animal's speed and heading direction. The two outputs linearly weight the neurons in the RNN.", "The goal of training is to make the responses of the two", "output neurons accurately represent the animal's physical location. c) Typical trajectory after training. 
As shown, the output", "of the RNN can accurately, though not", "perfectly, track the animal's location during navigation.", "In this paper, we trained RNNs to perform path integration (dead-reckoning) in 2D arenas.", "We found that after training RNNs with appropriate regularization, the model neurons exhibit a variety of spatial and velocity tuning profiles that match neurophysiology in EC.", "What's more, there is also similarity in terms of when these distinct neuron types emerge during training/development.", "The EC has long been thought to be involved in path integration and localization of the animal's location .", "The general agreement between the different response properties in our model and the neurophysiology provide strong evidence supporting the hypothesis that the neural population in EC may provide an efficient code for representation self-locations based on the velocity input.Recently, there has been increased interest in using complex neural network models to understand the neural code.", "But the focus has been on using feedforward architectures, in particular CNNs BID29 .", "Given the abundant recurrent connections in the brain, it seems a particularly fruitful avenue to take advantage of the recent development in RNNs to help with neuroscience questions BID34 BID50 BID37 BID53 .", "Here, we only show one instance following this approach.", "However, the insight from this work could be general, and potentially useful for other cognitive functions as well.The finding that metabolic constraints lead to the emergence of grid-like responses may be seen as conceptually related to the efficient coding hypothesis in visual processing BID1 , in particular the seminal work on the emergence of the V1-like Gabor filters in a sparse coding model by BID42 .", "Indeed, our work is partly inspired by these results.", "While there are conceptual similarities, however, we should also note there are differences between the sparse coding work and ours.", "First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior while in the context of the recurrent network, it is difficult to interpret that way.", "Second, the grid-like responses are not the most sparse solution one could imagine.", "In fact, they are still quite dense compared to a more spatially localized representation.", "Third, the grid-like patterns that emerged in our network are not filters based on the raw input, rather the velocity inputs need to be integrated first in order to encode spatial locations.", "Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of the grid cells BID55 .", "It has been shown that efficient coding considerations could explain the particular set of grid scales observed in rodents BID52 .", "However, in that work, the firing patterns of the neurons are assumed to have a lattice structure to start with.", "Furthermore, our work is related to the study by Sussillo and others BID53 , in which they show that regularization of RNN models are important for generating solutions that are similar to the neural activity observed in motor cortex.", "In Sussillo et al., a smoothness constraint together with others lead to simple oscillatory neural dynamics that well matches the neural data.", "We have not incorporated a smoothness constraint into our network.Additionally, we note that there are a few recent studies which use place cells as the input to generate grid cells BID8 BID51 , which are fundamentally different 
from our work.", "In these feedforward network models, the grid cells essentially perform dimensionality reduction based on the spatial input from place cells.", "However, the main issue with these models is that, it is unclear how place cells acquire spatial tuning in the first place.", "To the contrary, our model takes the animal's velocity as the input, and addresses the question of how the spatial tuning can be generated from such input, which are known to exist in EC BID46 BID26 .", "In another related study BID21 , the authors train a RNN with LSTM units BID18 to perform different navigation tasks.", "However, no grid-like spatial firing patterns are reported.Although our model shows a qualitative match to the neural responses observed in the EC, nonetheless it has several major limitations, with each offering interesting future research directions.", "First, the learning rule we use seems to be biologically implausible.", "We are interested in exploring how a more biologically plausible learning rule could give rise to similar results BID32 BID37 BID14 .", "Second, the simulation results do not show a variety of spatial scales in grid-like cells.", "Experimentally, it is known that grid cells have multiple spatial scales, that scale geometrically with a ratio 1.4 BID52 , and this particular scale ratio is predicted by efficient coding of space BID55 .", "We are investigating how to modify the model to get a hierarchy of spatial scales, perhaps by incorporating more neurons or modifying the regularization.", "Last but not least, we have focused on the representation produced by the trained RNN.", "An equally important set of questions concern how the networks actually support the generation of such a representation.", "As a preliminary effort, we have examined the connectivity patterns of the trained network, and they do not seem to resemble the connectivity patterns required by standard attractor network models.", "Maybe this should not be seen as too surprising.", "After all, the trained networks can produce a diverse set of neural responses, while the previous models only led to grid responses.", "It would be interesting for future work to systematically examine the questions related to the underlying mechanisms.", "To quantify the speed selectivity of each unit we first fit a line to the tuning curve of unit activity as a function of speed.", "The speed selectivity is the absolute value of the slope.", "If the unit activity is not modulated by speed then the speed selectivity is 0.", "To quantify the direction selectivity of each unit we calculated the average unit activity as a function of direction input and then took the maximum minus minimum of this tuning curve.", "If the unit activity is not modulated by direction then the direction selectivity is 0.", "To quantify the spatial selectivity we used lifetime sparseness BID56 .", "If the unit activity is not modulated by spatial location then the spatial selectivity is 0.", "Each dot in the figures below show the selectivity for a single unit." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19607841968536377, 0.17543859779834747, 0.1702127605676651, 0.27586206793785095, 0.24561403691768646, 0.08888888359069824, 0.31372547149658203, 0.3333333134651184, 0.1875, 0.1904761791229248, 0.19672130048274994, 0.16949151456356049, 0.19512194395065308, 0.16949151456356049, 0.17543859779834747, 0.21739129722118378, 0.03999999538064003, 0.14035087823867798, 0.11538460850715637, 0.158730149269104, 0.04444444179534912, 0.16393442451953888, 0.2028985470533371, 0.17777776718139648, 0.5753424763679504, 0.4528301954269409, 0.19999998807907104, 0.18518517911434174, 0.1855670064687729, 0.1071428507566452, 0.22727271914482117, 0.08163265138864517, 0.09756097197532654, 0.09756097197532654, 0.1666666567325592, 0.20338982343673706, 0.15686273574829102, 0.19230768084526062, 0.17721518874168396, 0.08510638028383255, 0.22580644488334656, 0.09302325546741486, 0.25581395626068115, 0.09302325546741486, 0.07692307233810425, 0.26229506731033325, 0.1304347813129425, 0.0833333283662796, 0.22580644488334656, 0.18518517911434174, 0.18518517911434174, 0.19230768084526062, 0.3235293924808502, 0.1428571343421936, 0.22857142984867096, 0.19230768084526062, 0.22641508281230927, 0.307692289352417, 0.25925925374031067, 0.23188404738903046, 0.08888888359069824, 0.1818181723356247, 0.2857142686843872, 0.1875, 0.178571417927742, 0.0416666641831398, 0.1599999964237213, 0.19999998807907104, 0.09302325546741486, 0.1818181723356247, 0.08163265138864517, 0.2641509473323822, 0.1395348757505417, 0.08695651590824127, 0.23728813230991364, 0.08695651590824127, 0.09090908616781235, 0.08510638028383255, 0.17391304671764374 ]
B17JTOe0-
true
[ "To our knowledge, this is the first study to show how neural representations of space, including grid-like cells and border cells as observed in the brain, could emerge from training a recurrent neural network to perform navigation tasks." ]
[ "Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song.", "Such components include voice, bass, drums and any other accompaniments.", "While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performances far above current end-to-end, waveform-to-waveform models.", "We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a (transposed) convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip-connections as in U-Networks (Ronneberger et al., 2015).", "Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net (Stoller et al., 2018), the main features of our approach in addition of the bi-LSTM are the use of trans-posed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth and a new careful initialization of the weights. ", "Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio (SDR) nearly 2.2 points higher than the best waveform-to-waveform competitor (from 3.2 to 5.4 SDR).", "This makes our model match the state-of-the-art performances on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches.", "Cherry first noticed the \"cocktail party effect\" (Cherry, 1953) : how the human brain is able to separate a single conversation out of a surrounding noise from a room full of people chatting.", "Bregman later tried to understand how the brain was able to analyse a complex auditory signal and segment it into higher level streams.", "His framework for auditory scene analysis (Bregman, 1990 ) spawned its computational counterpart, trying to reproduce or model accomplishments of the brains with algorithmic means (Wang & Brown, 2006) , in particular regarding source separation capabilities.", "When producing music, recordings of individual instruments called stems are arranged together and mastered into the final song.", "The goal of source separation is to recover those individual stems from the mixed signal.", "Unlike the cocktail party problem, there is not a single source of interest to differentiate from an unrelated background noise, but instead a wide variety of tones and timbres playing in a coordinated way.", "In the SiSec Mus evaluation campaign for music separation (Stöter et al., 2018) , those individual stems were grouped into 4 broad categories: (1) drums, (2) bass, (3) other, (4) vocals.", "Given a music track which is a mixture of these four sources, also called the mix, the goal is to generate four waveforms that correspond to each of the original sources.", "We consider here the case of supervised source separation, where the training data contain music tracks (i.e., mixtures), together with the ground truth waveform for each of the sources.", "State-of-the-art approaches in music source separation still operate on the spectrograms generated by the short-time Fourier transform (STFT).", "They produce a mask on the magnitude spectrums for each frame and each source, and the output audio is generated by running an inverse STFT on the masked spectrograms reusing the input mixture phase Takahashi et al., 2018) .", "Several architectures 
trained end-to-end to directly synthesize the waveforms have been proposed (Lluís et al., 2018; Jansson et al., 2017) , but their performances are far below the state-of-the-art: in Figure 1 : Mel-spectrogram for a 0.8 seconds extract of the bass source from the track \"Stich Up\" of the MusDB test.", "From left to right: ground truth, Conv-Tasnet estimate and Demucs estimate.", "We observe that Conv-Tasnet missed one note entirely.", "the last SiSec Mus evaluation campaign (Stöter et al., 2018) , the best model that directly predicts waveforms achieves an average signal-to-noise ratio (SDR) over all four sources of 3.2, against 5.3 for the best approach that predicts spectrograms masks (also see Table 1 in Section 6).", "An upper bound on the performance of all methods relying on masking spectrograms is given by the SDR obtained when using a mask computed using the ground truth sources spectrograms, for instance the Ideal Ratio Mask (IRM) or the Ideal Binary Mask (IBM) oracles.", "For speech source separation, Luo & Mesgarani (2019) proposed Conv-Tasnet, a model that reuses the masking approach of spectrogram methods but learns the masks jointly with a convolutional front-end, operating directly in the waveform domain for both the inputs and outputs.", "Conv-Tasnet surpasses both the IRM and IBM oracles.", "Our first contribution is to adapt the Conv-Tasnet architecture, originally designed for monophonic speech separation and audio sampled at 8 kHz, to the task of sterephonic music source separation for audio sampled at 44.1 kHz.", "Our experiments show that Conv-Tasnet outperforms all previous methods by a large margin, with an SDR of 5.7, but still under the SDR of the IRM oracle at 8.2 (Stöter et al., 2018) .", "However, while Conv-Tasnet separates with a high accuracy the different sources, we observed artifacts when listening to the generated audio: a constant broadband noise, hollow instruments attacks or even missing parts.", "They are especially noticeable on the drums and bass sources and we give one such example on Figure 1 .", "Conv-Tasnet uses an over-complete linear representation on which it applies a mask obtained from a deep convolutional network.", "Because both the encoder and decoder are linear, the masking operation cannot synthesize new sounds.", "We conjecture that the overlap of multiples instruments sometimes lead to a loss of information that is not reversible by a masking operation.", "To overcome the limitations of Conv-Tasnet, our second contribution is to propose Demucs, a new architecture for music source separation.", "Similarly to Conv-Tasnet, Demucs is a deep learning model that directly operates on the raw input waveform and generates a waveform for each source.", "Demucs is inspired by models for music synthesis rather than masking approaches.", "It is a U-net architecture with a convolutional encoder and a decoder based on wide transposed convolutions with large strides inspired by recent work on music synthesis (Défossez et al., 2018) .", "The other critical features of the approach are a bidirectional LSTM between the encoder and the decoder, increasing the number of channels exponentially with depth, gated linear units as activation function (Dauphin et al., 2017) which also allow for masking, and a new initialization scheme.", "We present experiments on the MusDB benchmark, which first show that both Conv-Tasnet and Demucs achieve performances significantly better than the best methods that operate on the spectrogram, with Conv-Tasnet being better 
than Demucs in terms of SDR.", "We also perform human evaluations that compare Conv-Tasnet and our Demucs, which show that Demucs has significantly better perceived quality.", "The smaller SDR of Demucs is explained by more contamination from other sources.", "We also conduct an in-depth ablation study of the Demucs architecture to demonstrate the impact of the various design decisions.", "Finally, we carry out additional experiments by adding 150 songs to the training set.", "In this experiment, Demucs and TasNet both achieve an SDR of 6.3, suggesting that the gap in terms of SDR between the two models diminishes with more data, making the Demucs approach promising.", "The 6.3 points of SDR also set a new state-of-the-art, since it improves on the best previous result of 6.0 on the MusDB test set obtained by training with 800 additional songs.", "We discuss in more detail the related work in the next Section.", "We then describe the original ConvTasnet model of Luo & Mesgarani (2018) and its adaptation to music source separation.", "Our Demucs architecture is detailed in Section 4.", "We present the experimental protocol in Section 5, and the experimental results compared to the state-of-the-art in Section 6.", "Finally, we describe the results of the human evaluation and the ablation study.", "We showed that Conv-Tasnet, a state-of-the-art architecture for speech source separation that predicts masks on a learnt front-end over the waveform domain, achieves state-of-the-art performance for music source separation, improving over all previous spectrogram or waveform domain methods by 0.4 SDR.", "While Conv-Tasnet has excellent performance to separate sources, it suffers from noticeable artifacts as confirmed by human evaluations.", "We developed an alternative approach, Demucs, that combines the ability to mask over a learnt representation with stronger decoder capacity that allows for audio synthesis.", "We conjecture that this can be useful when information is lost in the mix of instruments and cannot simply be recovered by masking.", "We show that our approach produces audio of significantly higher quality as measures by mean opinion scores and matches the SDR of Conv-Tasnet when trained with 150 extra tracks.", "We believe those results make it a promising alternative to methods based on masking only." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0, 0.17391303181648254, 0.26923075318336487, 0.16949151456356049, 0.09302324801683426, 0.3243243098258972, 0.13636362552642822, 0.10810810327529907, 0.19607843458652496, 0.12121211737394333, 0.13333332538604736, 0.17391303181648254, 0.043478257954120636, 0.1538461446762085, 0.2380952388048172, 0.1249999925494194, 0.0833333283662796, 0.19672130048274994, 0, 0.08695651590824127, 0.13793103396892548, 0.15686273574829102, 0.3461538553237915, 0.08695651590824127, 0.09090908616781235, 0.1666666567325592, 0.13636362552642822, 0.0624999962747097, 0.0624999962747097, 0.06896550953388214, 0.22857142984867096, 0.17142856121063232, 0.21621620655059814, 0, 0.1395348757505417, 0.145454540848732, 0.2222222238779068, 0.05882352590560913, 0.0714285671710968, 0.1875, 0.06896550953388214, 0.1818181723356247, 0.1818181723356247, 0.23999999463558197, 0.23529411852359772, 0.08695651590824127, 0.20689654350280762, 0.1538461446762085, 0.2800000011920929, 0.060606054961681366, 0.20512820780277252, 0.21621620655059814, 0.23255813121795654, 0.19999998807907104 ]
HJx7uJStPH
true
[ "We match the performance of spectrogram based model with a model trained end-to-end in the waveform domain" ]
[ "Although challenging, strategy profile evaluation in large connected learner networks is crucial for enabling the next wave of machine learning applications.", "Recently, $\\alpha$-Rank, an evolutionary algorithm, has been proposed as a solution for ranking joint policy profiles in multi-agent systems.", "$\\alpha$-Rank claimed scalability through a polynomial time implementation with respect to the total number of pure strategy profiles.", "In this paper, we formally prove that such a claim is not grounded.", "In fact, we show that $\\alpha$-Rank exhibits an exponential complexity in number of agents, hindering its application beyond a small finite number of joint profiles.", "Realizing such a limitation, we contribute by proposing a scalable evaluation protocol that we title $\\alpha^{\\alpha}$-Rank.", "Our method combines evolutionary dynamics with stochastic optimization and double oracles for \\emph{truly} scalable ranking with linear (in number of agents) time and memory complexities.", "Our contributions allow us, for the first time, to conduct large-scale evaluation experiments of multi-agent systems, where we show successful results on large joint strategy profiles with sizes in the order of $\\mathcal{O}(2^{25})$ (i.e., $\\approx \\text{$33$ million strategies}$) -- a setting not evaluable using current techniques.", "Scalable policy evaluation and learning have been long-standing challenges in multi-agent reinforcement learning (MARL) with two difficulties obstructing progress.", "First, joint-strategy spaces exponentially explode when a large number of strategic decision-makers is considered, and second, the underlying game dynamics may exhibit cyclic behavior (e.g. the game of Rock-Paper-Scissor) rendering an appropriate evaluation criteria non-trivial.", "Focusing on the second challenge, much work in multi-agent systems followed a game-theoretic treatment proposing fixed-points, e.g., Nash (Nash et al., 1950) equilibrium, as potentially valid evaluation metrics.", "Though appealing, such measures are normative only when prescribing behaviors of perfectly rational agents -an assumption rarely met in reality Grau-Moya et al. (2018) ; Wen et al. (2019) .", "In fact, many game dynamics have been proven not converge to any fixed-point equilibria (Hart & Mas-Colell, 2003; Viossat, 2007) , but rather to limit cycles (Palaiopanos et al., 2017; Bowling & Veloso, 2001) .", "Apart from these aforementioned inconsistencies, solving for a Nash equilibrium even for \"simple\" settings, e.g. 
two-player games is known to be PPAD-complete (Chen & Deng, 2005 ) -a demanding complexity class when it comes to computational requirements.", "To address some of the above limitations, recently proposed α-Rank as a graph-based game-theoretic solution to multi-agent evaluation.", "α-Rank adopts Markov Conley Chains to highlight the presence of cycles in game dynamics, and attempts to compute stationary distributions as a mean for strategy profile ranking.", "Though successful in small-scale applications, α-Rank severely suffers in scalability contrary to polynomial time claims made in .", "In fact, we show that α-Rank exhibits exponential time and memory complexities shedding light on the small-scale empirical study conducted in , whereby the largest reported game included only four agents with four available strategies each.", "In this work, we put forward α α -Rank as a scalable alternative for multi-agent evaluation with linear time and memory demands.", "Our method combines numerical optimization with evolutionary game theory for a scalable solver capable of handling large joint spaces with millions of strategy profiles.", "To handle even larger profiles, e.g., tens to hundreds of millions, we further introduce an oracle Figure 1: Example of population based evaluation on N = 3 learners each with 3 strategies and 5 copies.", "a) Each population obtains a fitness value P i depending on the strategies chosen,", "b) mutation strategy (red star), and", "c) population either selecting original strategy, or adopting the novel strategy.", "( McMahan et al., 2003) mechanism transforming joint evaluation into a sequence of incremental sub-games with varying sizes.", "Given our algorithmic advancements, we justify our claims in a largescale empirical study involving systems with O(2 25 ) possible strategy profiles.", "We first demonstrate the computation advantages of α α -Rank on varying size stochastic matrices against other implementations in Numpy, PyTorch, and OpenSpiel .", "With these successes, we then consider experiments unsolvable by current techniques.", "Precisely, we evaluate multi-agent systems in self-driving and Ising model scenarios each exhibiting a prohibitively-large strategy space (i.e., order of thousands for the former, and tens of millions for the latter).", "Here, we again show that α α -Rank is capable of recovering correct strategy ranking in such complex domains.", "So far, we have presented scalable multi-agent evaluations through stochastic optimization.", "We can further boost scalability (to tens of millions of joint profiles) of our method by introducing an oracle mechanism.", "The heuristic of oracles was first introduced in solving large-scale zero-sum matrix games (McMahan et al., 2003) .", "The idea is to first create a restricted sub-game in which all players are only allowed to play a restricted number of strategies, which are then expanded by adding incorporating each of the players' best-responses to opponents; the sub-game will be replayed with agents' augmented strategy pools before a new round of best responses is found.", "The worse-case scenario of introducing oracles would be to solve the original evaluation problem in full size.", "The best response is assumed to be given by an oracle that can be simply implemented by a grid search.", "Precisely, given the top-rank profile π", "at iteration k, the goal for agent i is to select 4 the optimal π * i from the pre-defined strategy pool S i to maximize the reward", "with x [k]", "h denoting the state, 
u", "−i,h ) denoting the actions from agent i and the opponents, respectively.", "The heuristic of solving the full game from restricted sub-games is crucial especially when it is prohibitively expensive to list all joint-strategy profiles, e.g., in scenarios involving tens-of-millions of joint profiles.", "For a complete exposition, we summarize the pseudo-code in Algorithm 1.", "In the first phase, vanilla α α -Rank is executed (lines 4-9), while in the second (lines 11 -13), α α -Rank with Oracle (if turned on) is computed.", "To avoid any confusion, we refer to the latter as α α -Oracle.", "Note that even though in the two-player zero-sum games, the oracle algorithm (McMahan et al., 2003) is guaranteed to converge to the minimax equilibrium.", "Providing valid convergence guarantees for α α -Oracle is an interesting direction for future work.", "In this paper, we rather demonstrate the effectiveness of such an approach in a large-scale empirical study as shown in Section 4.", "In this paper, we demonstrated that the approach in exhibits exponential time and memory complexities.", "We then proposed α α -Rank as a scalable solution for multi-agent evaluation with linear time and memory demands.", "In a set of experiments, we demonstrated that our method is truly scalable capable of handling large strategy spaces.", "There are a lot of interesting avenues for future research.", "First, we plan to theoretically analyze convergence properties of the resulting oracle algorithm, and further introduce policy learning through oracles.", "Second, we plan take our method to the real-world by conducting multi-robot experiments.", "joint and transition probability matrix T [k] .", "The second-smallest eigenvalue of the normalized Laplacian of the graph associated with the Markov chain is given by:", ", with s i denoting the number of strategies of agent i.", "Proof : For simplicity we drop round index k in the below derivation.", "Notice, the underlying graph for the constructed Markov Chain can be represented as a Cartesian product of N complete graphs", "Indeed, two vertices π [k] ,π [k] ∈ G are connected by the edge if and if only these joint strategy profiles differ in at most one individual strategy, i.e ∃!i", "∈ {1, . . . , N } :", "−i }.Hence", ", the spectral properties of G can be described in terms of spectral properties of K si as follows (Barik et al., 2015) :", ") is the i th eigenvalue of the unnormalized Laplacian of the complete graph K sj and ϑ i,j is the corresponding eigenvector 7 .", "The spectrum of unnormalized Laplacian of the complete graph K si is given by Spectr(K si ) = {0, s i − 1} and the only eigenvector corresponding to zero eigenvalue is 1 ∈ R si .", "Therefore, the minimum non-zero eigenvalue of unnormalized Laplacian of G is given by min i s i − 1.", "Finally, due to the fact that G is a regular graph (with degree of each node is equal to N i=1 s i − N + 1), the smallest non-zero eigenvalue of the normalized Laplacian of G is given by", "Giving this result, the overall time complexity of Power Method is bounded by O n × log", "= O (log n).", "As for the memory complexity, Power Method requires has the same requirements as PageRank algorithm.", "8 These results imply that Power Method scales exponentially with number of agents N , and therefore, inapplicable when N is large." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.19999998807907104, 0.307692289352417, 0.05882352590560913, 0.22727271914482117, 0.17142856121063232, 0.3636363446712494, 0.20895521342754364, 0.25641024112701416, 0.18518517911434174, 0.15686273574829102, 0.1249999925494194, 0.037735845893621445, 0.10526315122842789, 0.307692289352417, 0.21276594698429108, 0.1666666567325592, 0.2181818187236786, 0.4285714328289032, 0.1860465109348297, 0.178571417927742, 0.05714285373687744, 0.07407406717538834, 0, 0.19999998807907104, 0.1428571343421936, 0.1860465109348297, 0, 0.19999998807907104, 0.10256409645080566, 0.1249999925494194, 0.10256409645080566, 0.10256409645080566, 0.1846153736114502, 0.21052631735801697, 0.10256409645080566, 0, 0.04651162400841713, 0.0833333283662796, 0, 0.0624999962747097, 0.11538460850715637, 0.1249999925494194, 0.09302324801683426, 0.060606054961681366, 0.09302324801683426, 0, 0.1428571343421936, 0.2222222238779068, 0.5641025304794312, 0.1538461446762085, 0.12903225421905518, 0.1463414579629898, 0.05882352590560913, 0.0714285671710968, 0.1111111044883728, 0.19354838132858276, 0.05882352590560913, 0.09999999403953552, 0.07843136787414551, 0, 0.1463414579629898, 0.09999999403953552, 0.11538460850715637, 0.052631575614213943, 0.11538460850715637, 0.15789473056793213, 0, 0.05714285373687744, 0.2380952388048172 ]
Hkg_8xBYDS
true
[ "We provide a scalable solution to multi-agent evaluation with linear rate complexity in both time and memory in terms of number of agents" ]
[ "Deep neural networks are known to be annotation-hungry.", "Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks.", "Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data.", "In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques.", "In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner.", "To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network.", "During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively.", "Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods.", "Code is available at https://github.com/LiJunnan1992/DivideMix .", "The remarkable success in training deep neural networks (DNNs) is largely attributed to the collection of large datasets with human annotated labels.", "However, it is extremely expensive and time-consuming to label extensive data with high-quality annotations.", "On the other hand, there exist alternative and inexpensive methods for mining large-scale data with labels, such as querying commercial search engines (Li et al., 2017a) , downloading social media images with tags (Mahajan et al., 2018) , leveraging machine-generated labels (Kuznetsova et al., 2018) , or using a single annotator to label each sample (Tanno et al., 2019) .", "These alternative methods inevitably yield samples with noisy labels.", "A recent study (Zhang et al., 2017) shows that DNNs can easily overfit to noisy labels and results in poor generalization performance.", "Existing methods on learning with noisy labels (LNL) primarily take a loss correction approach.", "Some methods estimate the noise transition matrix and use it to correct the loss function (Patrini et al., 2017; Goldberger & Ben-Reuven, 2017) .", "However, correctly estimating the noise transition matrix is challenging.", "Some methods leverage the predictions from DNNs to correct labels and modify the loss accordingly (Reed et al., 2015; Tanaka et al., 2018) .", "These methods do not perform well under high noise ratio as the predictions from DNNs would dominate training and cause overfitting.", "To overcome this, Arazo et al. 
(2019) adopt MixUp augmentation.", "Another approach selects or reweights samples so that noisy samples contribute less to the loss (Jiang et al., 2018; Ren et al., 2018) .", "A challenging issue is to design a reliable criteria to select clean samples.", "It has been shown that DNNs tend to learn simple patterns first before fitting label noise (Arpit et al., 2017) .", "Therefore, many methods treat samples with small loss as clean ones (Jiang et al., 2018; Arazo et al., 2019) .", "Among those methods, Co-teaching (Han et al., 2018) and Co-teaching+ train two networks where each network selects small-loss samples in a mini-batch to train the other.", "Another active area of research that also aims to reduce annotation cost is semi-supervised learning (SSL).", "In SSL, the training data consists of unlabeled samples in addition to the labeled samples.", "Significant progress has been made in leveraging unlabeled samples by enforcing the model to produce low entropy predictions on unlabeled data (Grandvalet & Bengio, 2004) or consistent predictions on perturbed input (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019) .", "Recently, Berthelot et al. (2019) propose MixMatch, which unifies several dominant SSL approaches in one framework and achieves state-of-the-art performance.", "Despite the individual advances in LNL and SSL, their connection has been underexplored.", "In this work, we propose DivideMix, which addresses learning with label noise in a semi-supervised manner.", "Different from most existing LNL approaches, DivideMix discards the sample labels that are highly likely to be noisy, and leverages the noisy samples as unlabeled data to regularize the model from overfitting and improve generalization performance.", "The key contributions of this work are:", "• We propose co-divide, which trains two networks simultaneously.", "For each network, we dynamically fit a Gaussian Mixture Model (GMM) on its per-sample loss distribution to divide the training samples into a labeled set and an unlabeled set.", "The divided data is then used to train the other network.", "Co-divide keeps the two networks diverged, so that they can filter different types of error and avoid confirmation bias in self-training.", "• During SSL phase, we improve MixMatch with label co-refinement and co-guessing to account for label noise.", "For labeled samples, we refine their ground-truth labels using the network's predictions guided by the GMM for the other network.", "For unlabeled samples, we use the ensemble of both networks to make reliable guesses for their labels.", "• We experimentally show that DivideMix significantly advances state-of-the-art results on multiple benchmarks with different types and levels of label noise.", "We also provide extensive ablation study and qualitative results to examine the effect of different components.", "2 RELATED WORK", "In this paper, we propose DivideMix for learning with noisy labels by leveraging SSL.", "Our method trains two networks simultaneously and achieves robustness to noise through dataset co-divide, label co-refinement and co-guessing.", "Through extensive experiments across multiple datasets, we show that DivideMix consistently exhibits substantial performance improvements compared to state-of-the-art methods.", "For future work, we are interested in incorporating additional ideas from SSL to LNL, and vice versa.", "Furthermore, we are also interested in adapting DivideMix to other domains such as NLP." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.13793103396892548, 0.3571428656578064, 0.5, 0.21276594698429108, 0, 0.11764705181121826, 0.07999999821186066, 0, 0.1111111044883728, 0.0714285671710968, 0.0952380895614624, 0.260869562625885, 0.1621621549129486, 0.5, 0, 0, 0.05714285373687744, 0, 0, 0.11428570747375488, 0.07692307233810425, 0, 0.0624999962747097, 0.04999999701976776, 0.13333332538604736, 0, 0.038461536169052124, 0.11764705181121826, 0, 0.3333333134651184, 0.13333332538604736, 0, 0.17391303181648254, 0.09756097197532654, 0, 0, 0.06666666269302368, 0.0624999962747097, 0.06451612710952759, 0.17142856121063232, 0.06666666269302368, 0, 0.3571428656578064, 0, 0.060606054961681366, 0, 0 ]
HJgExaVtwr
true
[ "We propose a novel semi-supervised learning approach with SOTA performance on combating learning with noisy labels." ]
[ "We present a new algorithm to train a robust neural network against adversarial attacks. \n", "Our algorithm is motivated by the following two ideas.", "First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness. \n", "Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way.", "Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarial-trained Bayesian neural net.", "Experiment results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks.", "On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement compared with adversarial training (Madry 2017) and random self-ensemble (Liu, 2017) under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet.", "Deep neural networks have demonstrated state-of-the-art performances on many difficult machine learning tasks.", "Despite the fundamental breakthroughs in various tasks, deep neural networks have been shown to be utterly vulnerable to adversarial attacks BID32 BID11 .", "Carefully crafted perturbations can be added to the inputs of the targeted model to drive the performances of deep neural networks to chance-level.", "In the context of image classification, these perturbations are imperceptible to human eyes but can change the prediction of the classification model to the wrong class.", "Algorithms seek to find such perturbations are denoted as adversarial attacks BID5 BID4 BID28 , and some attacks are still effective in the physical world BID17 BID9 .", "The inherent weakness of lacking robustness to adversarial examples for deep neural networks brings out security concerns, especially for security-sensitive applications which require strong reliability.To defend from adversarial examples and improve the robustness of neural networks, many algorithms have been recently proposed BID27 BID37 BID17 BID12 .", "Among them, there are two lines of work showing effective results on medium-sized data (e.g., CIFAR-10).", "The first line of work uses adversarial training to improve robustness, and the recent algorithm proposed in BID25 has been recognized as one of the most successful defenses, as shown in .", "The second line of work adds stochastic components in the neural network to hide gradient information from attackers.", "In the black-box setting, stochastic outputs can significantly increase query counts for attacks using finite-difference techniques BID5 , and even in the white-box setting the recent Random Self-Ensemble (RSE) approach proposed by BID23 achieves similar performance to Madry's adversarial training algorithm.In this paper, we propose a new defense algorithm called Adv-BNN.", "The idea is to combine adversarial training and Bayesian network, although trying BNNs in adversarial attacks is not new (e.g. 
BID20 BID10 BID30 ), and very recently BID36 also tried to combine Bayesian learning with adversarial training, this is the first time we scale the problem to complex data and our approach achieves better robustness than previous defense methods.", "The contributions of this paper can be summarized below:• Instead of adding randomness to the input of each layer (as what has been done in RSE), we directly assume all the weights in the network are stochastic and conduct training with techniques commonly used in Bayesian Neural Network (BNN).•", "We propose a new mini-max formulation to combine adversarial training with BNN, and show the problem can be solved by alternating between projected gradient descent and SGD.•", "We test the proposed Adv-BNN approach on CIFAR10, STL10 and ImageNet143 datasets, and show significant improvement over previous approaches including RSE and adversarial training.Notations A neural network parameterized by weights w ∈ R d is denoted by f (x; w), where x ∈ R p is an input example and y is the corresponding label, the training/testing dataset is D tr/te with size N tr/te respectively. When", "necessary, we abuse D tr/te to define the empirical distribu- DISPLAYFORM0 δ(x i )δ(y i ), where δ(·) is the Dirac delta function. x o", "represents the original input and x adv denotes the adversarial example. The", "loss function is represented as f (x i ; w), y i , where i is the index of the data point. Our", "approach works for any loss but we consider the cross-entropy loss in all the experiments. The", "adversarial perturbation is denoted as ξ ∈ R p , and adversarial example is generated by x adv = x o + ξ. In", "this paper, we focus on the attack under norm constraint BID25 , so that ξ ≤ γ. In", "order to align with the previous works, in the experiments we set the norm to · ∞ . The", "Hadamard product is denoted as .", "To conclude, we find that although the Bayesian neural network has no defense functionality, when combined with adversarial training, its robustness against adversarial attack increases significantly.", "So this method can be regarded as a non-trivial combination of BNN and the adversarial training: robust classification relies on the controlled local Lipschitz value, while adversarial training does not generalize this property well enough to the test set; if we train the BNN with adversarial examples, the robustness increases by a large margin.", "Admittedly, our method is still far from the ideal case, and it is still an open problem on what the optimal defense solution will be." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3870967626571655, 0, 0.08163265138864517, 0.1538461446762085, 0.25641024112701416, 0.06666666269302368, 0.14814814925193787, 0.06666666269302368, 0.21052631735801697, 0.11428570747375488, 0.052631575614213943, 0.1428571343421936, 0.13793103396892548, 0.05714285373687744, 0.13636362552642822, 0.11428570747375488, 0.2153846174478531, 0.1818181723356247, 0.09836065024137497, 0.22727271914482117, 0.13698630034923553, 0.04999999329447746, 0.0714285671710968, 0, 0, 0.05405404791235924, 0, 0.0624999962747097, 0, 0.1904761791229248, 0.16129031777381897, 0.1538461446762085 ]
rk4Qso0cKm
true
[ "We design an adversarial training method to Bayesian neural networks, showing a much stronger defense to white-box adversarial attacks" ]
[ "Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server.", "In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.", "We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates.", "To increase attack stealth, we propose an alternating minimization strategy, which alternately optimizes for the training loss and the adversarial objective.", "We follow up by using parameter estimation for the benign agents' updates to improve on attack success.", "Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models and show that the explanations are nearly visually indistinguishable.", "Our results indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, thus highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies.", "Federated learning introduced by BID11 has recently emerged as a popular implementation of distributed stochastic optimization for large-scale deep neural network training.", "It is formulated as a multi-round strategy in which the training of a neural network model is distributed between multiple agents.", "In each round, a random subset of agents, with local data and computational resources, is selected for training.", "The selected agents perform model training and share only the parameter updates with a centralized parameter server, that facilitates aggregation of the updates.", "Motivated by privacy concerns, the server is designed to have no visibility into an agents' local data and training process.", "The aggregation algorithm is agnostic to the data distribution at the agents.In this work, we exploit this lack of transparency in the agent updates, and explore the possibility of a single malicious agent performing a model poisoning attack.", "The malicious agent's objective is to cause the jointly trained global model to misclassify a set of chosen inputs with high confidence, i.e., it seeks to introduce a targeted backdoor in the global model.", "In each round, the malicious agent generates its update by optimizing for a malicious objective different than the training loss for federated learning.", "It aims to achieve this by generating its update by directly optimizing for the malicious objective.", "However, the presence of a multitude of other agents which are simultaneously providing updates makes this challenging.", "Further, the malicious agent must ensure that its update is undetectable as aberrant.Contributions: To this end, we propose a sequence of model poisoning attacks, with the aim of achieving the malicious objective while maintaining attack stealth.", "For each strategy, we consider both attack strength as well as stealth.", "We start with malicious update boosting, designed to negate the combined effect of the benign agents, which enables the adversary to achieve its malicious objective with 100% confidence.", "However, we show that boosted updates can be detected as 
aberrant using two measures of stealth, accuracy checking on the benign objective and parameter update statistics.", "Observing that the only parameter updates that need to be boosted are those that con-tribute to the malicious objective, we design an alternating minimization strategy that improves attack stealth.", "This strategy alternates between training loss minimization and the boosting of updates for the malicious objective and is able to achieve high success rate on both the benign and malicious objectives.", "In addition, we show that estimating the other agents' updates improves attack success rates.", "Finally, we use a suite of interpretability techniques to generate visual explanations of the decisions made by a global model with and without a targeted backdoor.", "Interestingly, we observe that the explanations are nearly visually indistinguishable.", "This establishes the attack stealth along yet another axis of measurement and indicates that backdoors can be inserted without drastic changes in model focus at the input.", "In this paper, we have started an exploration of the vulnerability of multi-party machine learning algorithms such as federated learning to model poisoning adversaries, who can take advantage of the very privacy these models are designed to provide.", "In future work, we plan to explore more sophisticated detection strategies at the server, which can provide guarantees against the type of attacker we have considered here.", "In particular, notions of distances between weight distributions are promising defensive tools.", "Our attacks in this paper demonstrate that federated learning in its basic form is very vulnerable to model poisoning adversaries, as are recently proposed Byzantine resilient aggregation mechanisms.", "While detection mechanisms can make these attacks more challenging, they can be overcome, demonstrating that multi-party machine learning algorithms robust to attackers of the type considered here must be developed." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12765957415103912, 0.4000000059604645, 0.09999999403953552, 0, 0.12121211737394333, 0.1395348757505417, 0.2800000011920929, 0.10526315122842789, 0.11428570747375488, 0.05882352590560913, 0.1111111044883728, 0.0555555522441864, 0.1666666567325592, 0.260869562625885, 0.1111111044883728, 0.06451612710952759, 0.0624999962747097, 0.12244897335767746, 0, 0.10256409645080566, 0.0952380895614624, 0.04999999701976776, 0.1904761791229248, 0, 0.20512819290161133, 0, 0.0952380895614624, 0.2448979616165161, 0.09756097197532654, 0.0714285671710968, 0.2790697515010834, 0.1818181723356247 ]
BkewX2C9tX
true
[ "Effective model poisoning attacks on federated learning able to cause high-confidence targeted misclassification of desired inputs" ]
[ "Despite rapid advances in speech recognition, current models remain brittle to superficial perturbations to their inputs.", "Small amounts of noise can destroy the performance of an otherwise state-of-the-art model.", "To harden models against background noise, practitioners often perform data augmentation, adding artificially-noised examples to the training set, carrying over the original label.", "In this paper, we hypothesize that a clean example and its superficially perturbed counterparts shouldn't merely map to the same class--- they should map to the same representation.", "We propose invariant-representation-learning (IRL): At each training iteration, for each training example, we sample a noisy counterpart.", "We then apply a penalty term to coerce matched representations at each layer (above some chosen layer).", "Our key results, demonstrated on the LibriSpeech dataset are the following:", "(i) IRL significantly reduces character error rates (CER)on both `clean' (3.3% vs 6.5%) and `other' (11.0% vs 18.1%) test sets;", "(ii) on several out-of-domain noise settings (different from those seen during training), IRL's benefits are even more pronounced.", "Careful ablations confirm that our results are not simply due to shrinking activations at the chosen layers." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.06451612710952759, 0.1463414579629898, 0.6976743936538696, 0.05882352590560913, 0.0555555522441864, 0.06896550953388214, 0, 0, 0.1666666567325592 ]
ryz4mqPx9m
false
[ " In this paper, we hypothesize that superficially perturbed data points shouldn’t merely map to the same class---they should map to the same representation." ]
[ "In this paper we design a harmonic acoustic model for pitch detection.", "This model arranges conventional convolution and sparse convolution in a way such that the global harmonic patterns captured by sparse convolution are composed of the enough number of local patterns captured by layers of conventional convolution.", "When trained on the MAPS dataset, the harmonic model outperforms all existing pitch detection systems trained on the same dataset.", "Most impressively, when trained on MAPS with simple data augmentation, the harmonic model with an LSTM layer on top surpasses an up-to-date, more complex pitch detection system trained on the MAESTRO dataset to which complicated data augmentation is applied and whose training split is an order-of-magnitude larger than the training split of MAPS.", "The harmonic model has demonstrated potential to be used for advanced automatic music transcription (AMT) systems.", "In this paper we designed a harmonic acoustic model for pitch detection.", "This model effectively captures the complex frequency interactions characterizing polyphonic pitched music through conventional convolution and sparse convolution inspired by the harmonic structure of pitched music.", "In its pure form without RNN and data augmentation, the harmonic model outperformed most of the existing pitch detection systems.", "Most noticeably, when trained on MAPS and data augmentation is done, the harmonic model with an LSTM layer on top outdid the complex system in Hawthorne et al. (2019) trained on MAESTRO whose training split 15 times as large as the training split of MAPS.", "Thus, the harmonic model has shown great potential to be used for building advanced AMT systems.", "A possible future direction is to make more potential of complex spectrograms, instead of using only amplitude spectrograms.", "A mixture of signal can be inseparable in the real number domain but could be separable in the complex number domain.", "Trabelsi et al. (2018) has done some preliminary study in this direction.", "However, our own study showed that the technique of deep complex network proposed in Trabelsi et al. (2018) did not yield a performance comparable with that of real networks.", "Therefore, definitely more can be done." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.1428571343421936, 0.21052631735801697, 0.09302325546741486, 0.21052631735801697, 0.4000000059604645, 0.1599999964237213, 0.1818181723356247, 0.10256410390138626, 0.21052631735801697, 0, 0, 0, 0, 0 ]
S1ln1TNKvH
true
[ "harmonic acoustic model" ]
[ "Learning domain-invariant representation is a dominant approach for domain generalization.", "However, previous methods based on domain invariance overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and the invariance.", "This study proposes a novel method {\\em adversarial feature learning under accuracy constraint (AFLAC)}, which maximizes domain invariance within a range that does not interfere with accuracy.", "Empirical validations show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering the dependency and the efficacy of the proposed method to overcome the problem.", "In supervised learning we typically assume that samples are obtained from the same distribution in training and testing; however, because this assumption does not hold in many practical situations it reduces the classification accuracy for the test data BID20 .", "One typical situation is domain generalization (DG) BID1 BID18 BID19 BID2 : we have labeled data from several source domains and collectively exploit them such that the trained system generalizes to other unseen, but somewhat similar, target domain(s).", "This paper considers DG under the situation where domain d and class y labels are statistically dependent owing to some common latent factor z FIG0 -(c)), which we referred to as domainclass dependency.", "For example, the WISDM Activity Prediction dataset (WISDM, BID10 ), where y and d correspond to activities and wearable device users, exhibits this dependency because (1) some activities (e.g., jogging) are strenuous to the extent that some unathletic subjects avoided them (data characteristics), or (2) other activities were added only after the study began and the initial subjects could not perform them (data-collection errors).", "The dependency is common in real-world datasets BID23 and a similar setting has been investigated in domain adaptation (DA) studies, but most prior DG studies overlooked the dependency.Most prior DG methods utilize invariant feature learning (IFL) (e.g., ).", "IFL attempts to learn feature representation h from input data x which is invariant to d.", "When source and target domains have some common structure (see, ), IFL prevents the classifier from overfitting to source domains FIG0 ).", "However, under the dependency, merely imposing the domain invariance can adversely affect the classification accuracy as pointed out by BID21 and illustrated in FIG0 .", "Although that trade-off occurs in source domains (because DG uses only source data during optimization), it can also negatively affect the classification performance for target domain(s).", "For example, if the target domain has characteristics similar (or same as an extreme case) to those of a certain source domain, giving priority to domain invariance obviously interferes with the DG performance ( FIG0 ).In", "this paper, considering that prioritizing domain invariance under the trade-off can negatively affect the DG performance, we propose a novel method adversarial feature learning under accuracy constraint (AFLAC), which maximizes domain invariance within a range that does not interfere with the classification accuracy FIG0 -(e)) on adversarial training. Specifically", ", AFLAC is intended to achieve accuracy-constrained domain invariance, which we define as the maximum H(d|h) (H denotes entropy) value under the condition H(y|x) = H(y|h) (h has as much y information as x). 
Empirical validations", "show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering domain-class dependency and the efficacy of the proposed approach for overcoming the issue.", "In this paper, we proposed a novel method AFLAC, which maximizes domain invariance within a range that does not interfere with classification accuracy on adversarial training.", "Empirical validations show the superior DG performance of AFLAC to the baseline methods, supporting the importance of the domain-class dependency in domain generalization tasks and the efficacy of the proposed method for overcoming the issue." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0833333283662796, 0.37837836146354675, 0.10256409645080566, 0.1621621549129486, 0.03999999538064003, 0.11538460850715637, 0.1304347813129425, 0.057971011847257614, 0.11764705181121826, 0, 0.11764705181121826, 0.1666666567325592, 0.1538461446762085, 0.1249999925494194, 0.18518517911434174, 0.08510638028383255, 0.1621621549129486, 0.1538461446762085, 0.19512194395065308 ]
Hkxj_LpvvV
true
[ "Address the trade-off caused by the dependency of classes on domains by improving domain adversarial nets" ]
[ "Recent advances in deep generative models have lead to remarkable progress in synthesizing high quality images.", "Following their successful application in image processing and representation learning, an important next step is to consider videos.", "Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects.", "While recent generative models of video have had some success, current progress is hampered by the lack of qualitative metrics that consider visual quality, temporal coherence, and diversity of samples.", "To this extent we propose Fréchet Video Distance (FVD), a new metric for generative models of video based on FID.", "We contribute a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos.", "Recent advances in deep generative models have lead to remarkable success in synthesizing highquality images (Karras et al., 2018; Brock et al., 2018) .", "A natural next challenge is to consider video generation.", "This is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects.", "Generative models of video will enable many applications, including missing-frame prediction (Jiang et al., 2018) , improved instance segmentation (Haller & Leordeanu, 2017) , or complex (relational) reasoning tasks by conducting inference (Lerer et al., 2016) .While", "great progress has been made in recent years, video generation models are still in their infancy, and generally unable to synthesize more than a few seconds of video (Babaeizadeh et al., 2017) . Learning", "a good dynamics model remains a major challenge in generating real world videos. However,", "in order to qualitatively measure progress in synthesizing videos, we require metrics that consider visual quality, temporal coherence, and diversity of generated samples.We contribute Fréchet Video Distance (FVD), a new metric for generative models of video. FVD builds", "on the principles underlying Fréchet Inception Distance (FID; Heusel et al. (2017) ), which has been successfully applied to images. We introduce", "a feature representation that captures the temporal coherence of the content of a video, in addition to the quality of each frame. Unlike popular", "* Both authors contributed equally to this work while interning at Google Brain.Figure 1: Generated videos by various models ranked according to FVD (lower is better). metrics such as", "Peak Signal to Noise Ratio (PSNR) or the Structural Similarity (SSIM; Wang et al. (2004) ) index, FVD considers a distribution over videos, thereby avoiding the drawbacks of framelevel metrics (Huynh-Thu & Ghanbari, 2012) . We contribute extensive", "experiments to evaluate FVD, including a large-scale human study which confirms that FVD coincides well with qualitative human judgment of generated videos.", "We introduced the Fréchet Video Distance (FVD), a new evaluation metric for generative models of video, and an important step towards better evaluation of models for video generation.", "Our experiments confirm that FVD is accurate in evaluating videos that were modified to include static noise, and temporal noise.", "More importantly, a large scale human study among generated videos from several recent generative models reveals that FVD consistently outperforms SSIM and PSNR in agreeing with human judgment." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.043478257954120636, 0.19607841968536377, 0.2142857164144516, 0.5, 0.6521739363670349, 0.08163265138864517, 0.10810810327529907, 0.0833333283662796, 0.09677419066429138, 0.13333332538604736, 0.09756097197532654, 0.3692307770252228, 0.07999999821186066, 0.12765957415103912, 0.10526315122842789, 0.1249999925494194, 0.5714285373687744, 0.3461538553237915, 0.12765957415103912, 0.4000000059604645 ]
rylgEULtdN
true
[ "We propose FVD: a new metric for generative models of video based on FID. A large-scale human study confirms that FVD correlates well with qualitative human judgment of generated videos." ]
[ "Despite advances in deep learning, artificial neural networks do not learn the same way as humans do.", "Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time -- this phenomenon called catastrophic forgetting is a fundamental challenge to overcome before neural networks can learn continually from incoming data.", "In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting.", "Specifically, our model consists of a dual memory architecture to emulate the complementary learning systems (hippocampus and the neocortex) in the human brain and maintains a consolidated long-term memory via generative replay of past experiences.", "We", "(i) substantiate our claim that replay should be generative,", "(ii) show the benefits of generative replay and dual memory via experiments, and", "(iii) demonstrate improved performance retention even for small models with low capacity.", "Our architecture displays many important characteristics of the human memory and provides insights on the connection between sleep and learning in humans.", "Many machine learning models, when trained sequentially on tasks, forget how to perform the previously learnt tasks.", "This phenomenon called catastrophic forgetting is prominent in neural networks BID23 .", "Without a way to avert catastrophic forgetting, a learning system needs to store all training data and relearn on it along with new incoming data, when retraining.", "Hence, it is an important challenge to overcome in order to enable systems to learn continuously.", "BID23 first suggested that the underlying cause of forgetting was the distributed shared representation of tasks via network weights.", "Subsequent works attempted to remedy the issue by reducing representational overlap between input representations via activation sharpening algorithms BID17 , orthogonal recoding of inputs BID19 or orthogonal activations at all hidden layers BID24 BID5 .", "More recent works have explored activations like dropout BID9 and local winner-takes-all BID36 to create sparse, less correlated feature representations.", "But such sparse encodings can be task specific at times and in general act as heuristics to mildly pacify the underlying problem.Further, natural cognitive systems are also connectionist in nature and yet they forget gradually but not 'catastrophically'.", "For instance, humans demonstrate gradual systematic forgetting.", "Frequently and recently encountered tasks tend to survive much longer in the human memory, while those rarely encountered are slowly forgotten.", "Some of the earlier tasks may be seen again, but it is not necessary for them to be retained in memory BID7 .", "Hence only sparsifying representations does not solve the problem.", "Instead, neuroscientific evidence suggests that humans have evolved mechanisms to separately learn new incoming tasks and consolidate the learning with previous knowledge to avert catastrophic forgetting BID22 BID29 BID7 .Complementary", "learning systems: BID22 suggested that this separation has been achieved in the human brain via evolution of two separate areas of the brain, the hippocampus and the neocortex. 
The neocortex", "is a long term memory which specializes in consolidating new information with previous knowledge and gradually learns the joint structure of all tasks and experiences; whereas the hippocampus acts as a temporary memory to rapidly learn new tasks and then slowly transfer the knowledge to neocortex after acquisition.Experience replay: Another factor deemed essential for sequential learning is experience replay. BID22 ; BID29", "have emphasized the importance of replayed data patterns in the human brain during sleep and waking rest. BID31 BID32 proposed", "several replay techniques (a.k.a. pseudopattern rehearsal) to achieve replay, but they involved generating replay data without storing input representations and our experiments show that they lack the accuracy required for consolidation.Weight consolidation or freezing: Recent evidence from neuroscience also suggests that mammalian brain protects knowledge in the neocortex via task-specific consolidation of neural synapses over long periods of time BID37 BID0 . Such techniques have", "recently been employed in progressive neural networks BID34 and Pathnets BID4 both of which freeze neural network weights after learning tasks. BID16 have used the", "fisher information matrix (FIM) to slow down learning on network weights which correlate with previously acquired knowledge.In this paper, we address the catastrophic forgetting problem by drawing inspiration from the above neuroscientific insights and present a method to overcome catastrophic forgetting. More specifically,", "we propose a dual-memory architecture for learning tasks sequentially while averting catastrophic forgetting. Our model comprises", "of two generative models: a short-term memory (STM) to emulate the human hippocampal system and a long term memory (LTM) to emulate the neocortical learning system. The STM learns new", "tasks without interfering with previously learnt tasks in the LTM. The LTM stores all", "previously learnt tasks and aids the STM in learning tasks similar to previous tasks. During sleep/down-time", ", the STM generates and transfers samples of learnt tasks to the LTM. These are gradually consolidated", "with the LTM's knowledge base of previous tasks via generative replay.Our approach is inspired from the strengths of deep generative models, experience replay and the complementary learning systems literature. We demonstrate our method's effectiveness", "in averting catastrophic forgetting by sequentially learning multiple tasks. Moreover, our experiments shed light on some", "characteristics of human memory as observed in the psychology and neuroscience literature.", "In this section we show that DGDMN shares some more remarkable characteristics with the human memory and present a discussion of some more related ideas.", "Due to space constraints, visualizations of the learnt latent structures when training jointly vs. sequentially have been deferred to appendix A. The hyperparameters of DGDMN (κ and n ST M ) have intuitive interpretations and we have provided simple heuristics to choose them without any complex searches (in appendix B).Resilience", "to noise and occlusion: We use a VAE to be able to reconstruct representations of samples. Reconstructed", "images are less noisy and can recover from partial occlusion, which gives our model human-like abilities to recognize objects in noisy, distorted or occluded images. 
We test our LTM", "model and a NN model by jointly training on uncorrupted Digits data and testing on noisy and occluded images. We see that the", "LTM is more robust to noisy and occluded images and exhibits smoother degradation in classification accuracy because of its denoising reconstructive properties (see FIG7 ). The choice of underlying", "generative model: Our consolidation ability and retention performance relies heavily on the generation and reconstruction ability of the underlying generative model. We chose a VAE for its reconstructive", "capabilities but our architecture is agnostic to the choice of the underlying generative model as long as the generator can generate reliable samples and reconstruct incoming samples accurately. Hence, variants of Generative Adversarial", "Networks (GAN) Goodfellow et al. (2014) like BiGANs BID2 , ALI (Dumoulin et al., 2017) and AVB BID25 can also be used for the generative model depending on the modeled domain.Why use short-term memory?: Our LTM always learns from STTMs and never", "from real data, and the STTMs' errors slowly propagate into the LTM and contribute to forgetting. An alternative could be to directly store data", "from new incoming tasks, consolidate it into the LTM after periodic intervals, and then discard the data. We show the accuracy curves on Digits dataset", "for this approach in FIG8 . This results in higher retention compared to", "DGDMN in FIG3 because LTM now learns from real data. However, this approach is not truly online since", "recently learnt tasks cannot be used immediately until after a sleep phase. Since the STM's error can be made smaller by using", "high capacity generators and classifiers, we suggest using a STM for true online continual learning.Connections to knowledge distillation: Previous works on (joint) multitask learning have also proposed approaches to learn individual tasks with small networks and then \"distilling\" them jointly into a larger neural network . Such distillation can sometimes improve performance", "on individual tasks if they share structure and at other times mitigate inter-task interference due to refinement of learnt functions while distilling BID30 . Though we do not use temperature-controlled soft-labels", "while consolidating tasks into the LTM (unlike distillation), we surmise that due to refinement and compression during consolidation phase, DGDMN is also able to learn joint task structure effectively while mitigating interference between tasks.Approaches based on synaptic consolidation: Though our architecture draws inspiration from complementary learning systems and experience replay in the human brain, there is also considerable neuroscientific evidence for synaptic consolidation in the human brain (like in EWC). It might be interesting to explore how synaptic consolidation", "can be incorporated in our dual memory architecture without causing stagnation and we leave this to future work. 
We also plan to extend our architecture to learning optimal policies", "over time via reinforcement learning without explicit replay memories.", "In this work, we have developed a model capable of learning continuously on sequentially incoming tasks, while averting catastrophic forgetting.", "Our model employs a dual memory architecture to emulate the complementary learning systems (hippocampus and the neocortex) in the human brain and maintains a consolidated long-term memory via generative replay of past experiences.", "We have shown that generative replay performs the best for long-term performance retention even for neural networks with small capacity, while demonstrating the benefits of using generative replay and a dual memory architecture via our experiments.", "Our model hyperparameters have simple interpretations and can be set without much tuning.", "Moreover, our architecture displays remarkable parallels with the human memory system and provides useful insights about the connection between sleep and learning in humans.", "Deep Generative Replay (algorithm 1), as described in section 3.1, consolidates new tasks for a DGM with previously learnt tasks.", "It first computes sampling fractions for new tasks (η tasks ) and previously learnt tasks (η gen ) and ensures a minimum fraction (κ) per new task (lines 3-6).", "Then it computes the number of samples to generate from previous tasks and whether to subsample the incoming task samples to satisfy the memory capacity N max (lines 7-12).", "Finally, it generates the required number of samples from previous tasks, reconstructs all data and trains the DGM on resulting data (lines 13-16).", "For a dictionary D, D is the total number of tasks in D counting repetitions, while |D| is the total number of tasks without repetitions.", "|X| is the number of samples in set X. BID35 have recently proposed a similar idea independently and BID27 have also employed a generative replay in two-layer restricted boltzmann machines, but they do not describe balancing new and generated samples and cannot recognize repeated tasks (section 4.2).", "Their generative replay without a dual memory architecture is costly to train (section 4.3) and a lack of reconstruction for new samples makes their representations less robust to noise and occlusions (section 5)." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.060606054961681366, 0.2142857164144516, 0.5853658318519592, 0.260869562625885, 0, 0.13793103396892548, 0, 0.1621621549129486, 0.1764705777168274, 0.1428571343421936, 0.1428571343421936, 0.12903225421905518, 0.05882352590560913, 0.03999999538064003, 0.05405404791235924, 0.03703703358769417, 0.0833333283662796, 0.1621621549129486, 0.10526315122842789, 0, 0.21739129722118378, 0.0952380895614624, 0.08695651590824127, 0.1111111044883728, 0.07894736528396606, 0, 0.14035087823867798, 0.3636363446712494, 0.1463414579629898, 0, 0.06451612710952759, 0.060606054961681366, 0.08510638028383255, 0.24242423474788666, 0.13793103396892548, 0.09999999403953552, 0.09836065024137497, 0.0624999962747097, 0.09090908616781235, 0, 0.045454539358615875, 0, 0.13333332538604736, 0.0357142798602581, 0.15789473056793213, 0.14999999105930328, 0.0714285671710968, 0.05714285373687744, 0, 0.0615384578704834, 0.08510638028383255, 0.17283950746059418, 0.1904761791229248, 0, 0.37837836146354675, 0.2666666507720947, 0.16326530277729034, 0, 0.1538461446762085, 0, 0, 0.19512194395065308, 0.10526315122842789, 0.05714285373687744, 0, 0.1702127605676651 ]
BkVsWbbAW
true
[ "A dual memory architecture inspired from human brain to learn sequentially incoming tasks, while averting catastrophic forgetting." ]
[ "Most research on lifelong learning applies to images or games, but not language.\n", "We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling.\n", "LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity.\n", "Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples.\n", "When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task.\n", "The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. \n", "Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound.\n", "The source code is available at https://github.com/jojotenya/LAMOL.", "The current dominant paradigm for machine learning is to run an algorithm on a given dataset to produce a trained model specifically for a particular purpose; this is isolated learning (Chen & Liu, 2016, p. 150) .", "In isolated learning, the model is unable to retain and accumulate the knowledge it has learned before.", "When a stream of tasks are joined to be trained sequentially, isolated learning faces catastrophic forgetting (McCloskey & Cohen, 1989) due to a non-stationary data distribution that biases the model (left figure of Figure 1 ).", "In contrast, lifelong learning is designed to address a stream of tasks by accumulating interconnected knowledge between learned tasks and retaining the performance of those tasks.", "A human easily achieves lifelong learning, but this is nontrivial for a machine; thus lifelong learning is a vital step toward artificial general intelligence.", "In this paper, we focus on lifelong language learning, where a machine achieves lifelong learning on a stream of natural language processing (NLP) tasks.", "To the best of our knowledge, lifelong language learning has been studied in only a few instances; for sentiment analysis (Chen et al., 2015b; Xia et al., 2017) , conversational agents (Lee, 2017) , word representation learning (Xu et al., 2018) , sentence representation learning (Liu et al., 2019 ), text classification, and question answering (d'Autume et al., 2019) .", "However, in all previous work, the tasks in the stream are essentially the same task but in different domains.", "To achieve lifelong language learning on fundamentally different tasks, we propose LAMOL -LAnguage MOdeling for Lifelong language learning.", "It has been shown that many NLP tasks can be considered question answering (QA) (Bryan McCann & Socher, 2018) .", "Therefore, we address multiple NLP tasks with a single model by training a language model (LM) that generates an answer based on the context and the question.", "Treating QA as language modeling is beneficial because the LM can be pre-trained on a large number of sentences without any labeling (Radford et al., 2019) ; however, this does not directly solve the problem of LLL.", "If we train an LM on a stream of tasks, catastrophic forgetting still occurs.", "However, as an LM is intrinsically a text generator, we can use it to answer questions while generating pseudo-samples of Figure 1 : Left: After learning Task 2, the learner has already forgetten how to solve Task 1.", "This is \"catastrophic forgetting\".", "Middle: The basic idea of the 
data-based LLL approach.", "A generator is learned to generate examples it has seen before.", "Using the generator, the learner also learns from examples from the previous task to prevent it from forgetting.", "Right: A language model that simultaneously takes on the roles of learner and generator.", "the previous task to be replayed later.", "LAMOL is inspired by the data-based approach for LLL in which a generator learns to generate samples in previous tasks (middle of Figure 1 ) (Hanul Shin & Kim, 2017; Kemker & Kanan, 2017) .", "In contrast to previous approaches, LAMOL needs no extra generator (right of Figure 1 ).", "LAMOL is also similar to multitask training, but the model itself generates data from previous tasks instead of using real data.", "Our main contributions in this paper are:", "• We present LAMOL, a simple yet effective method for LLL.", "Our method has the advantages of no requirements in terms of extra memory or model capacity.", "We also do not need to know how many tasks to train in advance and can always train on additional tasks when needed.", "• Experimental results show that our methods outperform baselines and other state-of-the-art methods by a considerable margin and approaches the multitasking upper bound within 2-3%.", "• Furthermore, we propose adding task-specific tokens during pseudo-sample generation to evenly split the generated samples among all previous tasks.", "This extension stabilizes LLL and is particularly useful when training on a large number of tasks.", "• We analyze how different amounts of pseudo-samples affect the final performance of LAMOL, considering results both with and without the task-specific tokens.", "• We open-source our code to facilitate further LLL research.", "We propose LAMOL, a simple yet effective method for LLL based on language modeling.", "A single LM achieves LLL without additional model components and without keeping old examples.", "Moreover, any pre-trained LM can be used to leverage a large amount of unlabeled text to improve LLL.", "Finally, more tasks can be added whenever needed." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.29999998211860657, 0.43478259444236755, 0, 0.0833333283662796, 0.07692307233810425, 0.060606058686971664, 0, 0, 0.1111111119389534, 0, 0.05128204822540283, 0.13793103396892548, 0.2222222238779068, 0.23076923191547394, 0.15686273574829102, 0, 0.3636363744735718, 0, 0.06666666269302368, 0.0952380895614624, 0, 0.04878048598766327, 0, 0, 0, 0, 0.09999999403953552, 0, 0.052631575614213943, 0, 0, 0, 0.11764705181121826, 0, 0, 0, 0, 0, 0, 0, 0.29999998211860657, 0, 0, 0 ]
Skgxcn4YDS
true
[ "Language modeling for lifelong language learning." ]
[ "Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words.", "A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors.", "Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors.", "However, storing and accessing embedding vectors for all words in a dictionary requires large amount of space, and may stain systems with limited GPU memory.", "Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing word embedding matrix during training and inference in a highly efficient way.", "Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.", "Modern deep learning approaches for natural language processing (NLP) often rely on vector representation of words to convert discrete space of human language into continuous space best suited for further processing through a neural network.", "For a language with vocabulary of size d, a simple way to achieve this mapping is to use one-hot representation -each word is mapped to its own row of a d × d identity matrix.", "There is no need to actually store the identity matrix in memory, it is trivial to reconstruct the row from the word identifier.", "Word embedding approaches such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) use instead vectors of dimensionality p much smaller than d to represent words, but the vectors are not necessarily extremely sparse nor mutually orthogonal.", "This has two benefits: the embeddings can be trained on large text corpora to capture the semantic relationship between words, and the downstream neural network layers only need to be of width proportional to p, not d, to accept a word or a sentence.", "We do, however, need to explicitly store the d × p embedding matrix in GPU memory for efficient access during training and inference.", "Vocabulary sizes can reach d = 10 5 or 10 6 (Pennington et al., 2014) , and dimensionality of the embeddings used in current systems ranges from p = 300 (Mikolov et al., 2013; Pennington et al., 2014) to p = 1024 (Devlin et al., 2018) .", "The d × p embedding matrix thus becomes a substantial, often dominating, part of the parameter space of a learning model.", "In classical computing, information is stored in bits -a single bit represents an element from the set B = {0, 1}, it can be in one of two possible states.", "A quantum equivalent of a bit, a qubit, is fully described by a single two-dimensional complex unit-norm vector, that is, an element from the set C 2 .", "A state of an n-qubit quantum register corresponds to a vector in C 2 n .", "To have exponential dimensionality of the state space, though, the qubits in the register have to be interconnected so that their states can become entangled; a set of all possible states of n completely separated, independent qubits can be fully represented by C 2n instead of C 2 n .", "Entanglement is a purely quantum phenomenon -we can make quantum bits interconnected, so that a state of a two-qubit system cannot be decomposed into states of individual qubits.", "We do not see entanglement in classical bits, which are always independent -we can describe a byte by separately listing 
the state of each of the eight bits.", "We can, however, approximate quantum register classically -store vectors of size m using O (log m) space, at the cost of losing the ability to express all possible m-dimensional vectors that an actual O (log m)-qubit quantum register would be able to represent.", "As we show in this paper, the loss of representation power does not have a significant impact on NLP machine learning algorithms that use the approximation approaches to store and manipulate the high-dimensional word embedding matrix." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.05128204822540283, 0.0555555522441864, 0, 0.17777776718139648, 0.0952380895614624, 0.043478257954120636, 0.13636362552642822, 0.17142856121063232, 0.1111111044883728, 0.11320754140615463, 0.10256409645080566, 0.11764705181121826, 0, 0.04444443807005882, 0.1463414579629898, 0.12903225421905518, 0.07547169178724289, 0.09999999403953552, 0.0476190410554409, 0.15686273574829102, 0.1599999964237213 ]
HkxARkrFwB
true
[ "We use ideas from quantum computing to proposed word embeddings that utilize much fewer trainable parameters." ]
[ "One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary.", "Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand- engineered features).", "Humans, however, do not learn to communicate based on well-summarized features.", "In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols.", "The agents play an image description game where the image contains factors such as colors and shapes.", "We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding.", "Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.", "One of the key requirements for artificial general intelligence (AGI) to thrive in the real world is its ability to communicate with humans in natural language.", "Natural language processing (NLP) has been an active field of research for a long time, and the introduction of deep learning BID18 enabled great progress in NLP tasks such as translation, image captioning, text generation and visual question answering Vinyals et al., 2015; BID13 BID10 Serban et al., 2016; BID19 BID0 .", "However, training machines in a supervised manner with a large dataset has its limits when it comes to communication.", "Supervised methods are effective for capturing statistical associations between discrete symbols (i.e. words, letters).", "The essence of communication is more than just predicting the most likely word to come next; it is a means to coordinate with others and potentially achieve a common goal BID1 BID7 Wittgenstein, 1953 ).An", "alternative path to teaching machines the art of communication is to give them a specific task and encourage them to learn how to communicate on their own. This", "approach will encourage the agents to use languages grounded to task-related entities as well as communicate with other agents, which is one of the ways humans learn to communicate BID5 . Recently", ", there have been several notable works that demonstrated the emergence of communication between neural network agents. Even though", "each work produced very interesting results of its own, in all cases, communication was either achieved with a single discrete symbol (as opposed to a sequence of discrete symbols) BID8 BID17 or via a continuous value (Sukhbaatar et al., 2016; BID12 . Not only is", "human communication un-differentiable, but also using a single discrete symbol is quite far from natural language communication. One of the", "key features of human language is its compositional nature; the meaning of a complex expression is determined by its structure and the meanings of its constituents BID9 . More recently", ", BID22 and BID16 trained the agents to communicate in grounded, compositional language. In both studies", ", however, inputs given to the agents were hand-engineered features (disentangled input) rather than raw perceptual signals that we receive as humans.In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. 
Unlike previous", "works, our setup poses greater challenges to the agents since visual understanding and discrete communication have to be induced from scratch in parallel. We place the agents", "in a two-person image description game, where images contain objects of various color and shape. Inspired by the pioneering", "work of BID3 , we employ a communication philosophy named obverter to train the agents. Having its root in the theory", "of mind (Premack & Woodruff, 1978) and human language development BID21 , the obverter technique motivates an agent to search over messages and generate the ones that maximize their own understanding. The contribution of our work", "can be summarized as follows:• We train artificial agents to learn to disentangle raw image pixels and communicate in compositional language at the same time.• We describe how the obverter", "technique, a differentiable learning algorithm for discrete communication, could be employed in a communication game with raw visual input.• We visualize how the agents are", "perceiving the images and show that they learn to disentangle color and shape without any explicit supervision other than the communication one.• Experiment results suggest that", "the agents could develop, out of raw image input, a language with compositional properties, given a proper pressure from the environment (i.e. the image description game).Finally, while our exposition follows", "a multi-agent perspective, it is also possible to interpret our results in the single-agent setting. We are effectively learning a neural", "network that is able to learn disentangled compositional representations of visual scenes, without any supervision. Subject to the constraints imposed by", "their environment, our agents learn disentangled concepts, and how to compose these to form new concepts. This is an important milestone in the", "path to AGI.", "In this work, we used the obverter technique to train neural network agents to communicate in a two-person image description game.", "Through qualitative analysis, visualization and the zero-shot test, we have shown that even though the agents receive raw perception in the form of image pixels, under the right environment pressures, the emerged language had properties consistent with the ones found in compositional languages.As an evaluation strategy, we followed previous works and focused on assessing the necessary conditions of compositional languages.", "However, the exact definition of compositional language is still somewhat debatable, and, to the best of our knowledge, there is no reliable way to mathematically quantify the degree of compositionality of an arbitrary language.", "Therefore, in order to encourage active research and discussion among researchers in this domain, we propose for future work, a quantitatively measurable definition of compositionality.", "We believe compositionality of a language is not binary (e.g. 
language A is compositional/not compositional), but a spectrum.", "For example, human language has some aspects that are compositional (e.g., syntactic constructions, most morphological combinations) and some that are not (e.g., irregular verb tenses in English, character-level word composition).", "It is also important to clearly define grounded language and compositional language.", "If one agent says abc (eat red apple) and another says cba (apple red eat), and they both understand each other, are they speaking compositional language?", "We believe such questions should be asked and addressed to shape the definition of compositionality.In addition to the definition/evaluation of compositional languages, there are numerous directions of future work.", "Observing the emergence of a compositional language among more than two agents is an apparent next step.", "Designing an environment to motivate the agents to disentangle more than two factors is also an interesting direction.", "Training agents to consider the context (i.e. pragmatics), such as giving each agent several images instead of one, is another exciting future work.", "A EMERGENCE OF GRAMMAR, BID3 In BID3 , the author successfully trained neural agents to develop a structured (i.e. grammatical) language using disentangled meaning vectors as the input.", "Using 10 subject vectors and 10 predicate vectors, all represented as explicit binary vectors, total 100 meaning vectors could be composed TAB7 ).", "Each digit in the subject vector 5a serves a clear role, respectively representing speaker(sp), hearer(hr), other(ot), and plural(pl).", "The predicate vector values, on the other hand, are randomly chosen so that each predicate vector will have three 1's and three 0's.", "The combination of ten subject vectors and ten predicate vectors allows 100 meaning vectors.The author used twenty neural agents for the experiment.", "Each agent was implemented with the vanilla recurrent neural networks (RNN), where the hidden vector h's size was 10, same as the size of the meaning vector m in order to treat h as the agent's understanding of m.", "In each training round a single learner (i.e. listener) and ten teachers (i.e. speaker) were randomly chosen.", "Each teacher, given all 100 m's in random order, generates a message s 5 for each m and sends it to the learner.", "The messages are generated using the obverter techinque, which is described in Algorithm 1.", "The learner is trained to minimize the mean squared error (MSE) between h (after consuming the s) and m.", "After the learner has learned from all ten teachers, the next round begins, repeating the process until the error goes below some threshold.Algorithm 1: Message generation process used in BID3 .", "DISPLAYFORM0 9 Append i to s; DISPLAYFORM1 Terminate;When the training was complete, the author was able to find strong patterns in the messages used by the agents ( Table 6 ).", "Note that the messages using predicates tired, scared, sick and happy especially follow a very clear pattern.", "Batali also conducted a zero-shot test where the agents were trained without the diagonal elements in Table 6 and tested with all 100 meaning vectors.", "The agents were able to successfully communicate even when held-out meaning vectors were used, but the Table 6 : (Top) Messages used by a majority of the population for each of the given meanings.(Bottom", ") A potential", "analysis of the system in terms of a root plus modifications. 
Italic symbols", "are used to specify predicates and roman symbols are used to specify subjects. Messages in parentheses", "cannot be made to fit into this analysis.messages used for the held-out meaning vectors did not show as strong compositional patterns as the non-zero-shot case." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.35555556416511536, 0.07407406717538834, 0.4390243887901306, 0.0624999962747097, 0.22857142984867096, 0.31111109256744385, 0.1538461446762085, 0.0624999962747097, 0.1764705777168274, 0, 0.12244897335767746, 0.09999999403953552, 0.1428571343421936, 0.17142856121063232, 0.10526315122842789, 0.17142856121063232, 0.1538461446762085, 0.25, 0.2950819730758667, 0.19999998807907104, 0.05714285373687744, 0.2222222238779068, 0.08163265138864517, 0.3181818127632141, 0.2926829159259796, 0.04999999701976776, 0.3181818127632141, 0.2222222238779068, 0.1666666567325592, 0.10526315122842789, 0.10526315867900848, 0.3333333432674408, 0.1846153736114502, 0.1428571343421936, 0.09999999403953552, 0.1875, 0.09090908616781235, 0.2222222238779068, 0.052631575614213943, 0.1428571343421936, 0.24242423474788666, 0.1249999925494194, 0.09999999403953552, 0.3255814015865326, 0, 0.05882352590560913, 0, 0.11428570747375488, 0.13333332538604736, 0.060606054961681366, 0.10256409645080566, 0, 0.05882352590560913, 0.04651162400841713, 0.0952380895614624, 0.060606054961681366, 0.14999999105930328, 0.12765957415103912, 0, 0.0714285671710968, 0.0714285671710968, 0.09999999403953552 ]
rknt2Be0-
true
[ "We train neural network agents to develop a language with compositional properties from raw pixel input." ]
[ "The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles).", "In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions.", "It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode).", "While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data.", "In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories.", "The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples.", "Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation.", "To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP).", "Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories.", "Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space.", "We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.", "Forecasting future trajectories of vehicles and human has many useful applications in autonomous driving, virtual reality and assistive living.", "What makes trajectory forecasting challenging is that the future is uncertain and multi-modal -vehicles can choose different routes and people can perform different future actions.", "In many safety-critical applications, it is important to consider a diverse set of possible future trajectories, even those that are less likely, so that necessary preemptive actions can be taken.", "For example, an autonomous vehicle should understand that a neighboring car can merge into its lane even though the car is most likely to keep driving straight.", "To address this requirement, we need to take a generative approach to trajectory forecasting that can fully characterize the multimodal distribution of future trajectories.", "To capture all modes of a data distribution, variational autoencoders (VAEs) are well-suited generative models.", "However, random samples from a learned VAE model with Gaussian latent codes are not guaranteed to be diverse for two reasons.", "First, the sampling procedure is stochastic and the VAE samples can fail to cover some minor modes even with a large number of samples.", "Second, since VAE sampling is based on the implicit likelihood function encoded in the training data, if most of the training data is centered around a specific mode while other modes have less data ( Fig. 1", "(a) ), the VAE samples will reflect this bias and concentrate around the major mode ( Fig. 
1", "(b) ).", "To tackle this problem, we propose to learn a diversity sampling function (DSF) that can reliably generate a diverse set of trajectory samples ( Fig. 1", "(c) ).", "The proposed DSF is a deterministic parameterized function that maps forecasting context features (e.g., past trajectories) to a set of latent codes.", "The latent codes are decoded by the VAE docoder into a set of future trajectory samples, denoted as the DSF samples.", "In order to optimize the DSF, we formulate a diversity loss based on a determinantal point process (DPP) (Macchi, 1975) to evaluate the diversity of the DSF samples.", "The DPP defines the probability of choosing a random subset from the set of trajectory samples.", "It models the negative correlations between samples: the inclusion of a sample reduces the probability of including a similar sample.", "This makes the DPP an ideal tool for modeling the diversity within a set.", "In particular, we use the expected cardinality of the DPP as the diversity measure, which is defined as the expected size of a random subset drawn from the set of trajectory samples according to the DPP.", "Intuitively, since the DPP inhibits selection of similar samples, if the set of trajectory samples is more diverse, the random subset is more likely to select more samples from the set.", "The expected cardinality of the DPP is easy to compute and differentiable, which allows us to use it as the objective to optimize the DSF to enable diverse trajectory sampling.", "Our contributions are as follows: (1) We propose a new forecasting approach that learns a diversity sampling function to produce a diverse set of future trajectories; (2) We propose a novel application of DPPs to optimize a set of items (trajectories) in continuous space with a DPP-based diversity measure; (3) Experiments on synthetic data and human motion validate that our method can reliably generate a more diverse set of future trajectories compared to state-of-the-art generative models.", "We proposed a novel forecasting approach using a DSF to optimize over the sample space of a generative model.", "Our method learns the DSF with a DPP-based diversity measure to generate a diverse set of trajectories.", "The diversity measure is a novel application of DPPs to optimize a set of items in continuous space.", "Experiments have shown that our approach can generate more diverse vehicle trajectories and human motions compared to state-of-the-art baseline forecasting approaches.", "2: Output: cVAE encoder network f φ (x, ψ) and decoder network g θ (z, ψ) 3: Initialize φ and θ randomly 4: while not converged do 5:", "Compute parameters (µ, σ) of the posterior distribution q φ (z|x, ψ) using f φ (x, ψ)", "Sample V Gaussian noises { 1 , . . . , V } from N (0, I)", "Transform noises to latent samples from q φ (z|x, ψ):", "Decode latent samples into reconstructed trajectories {x 1 , . . . ,x V } using g θ (z, ψ)", "Calculate the cVAE loss L cvae according to Eq. 6 11:", "Update φ and θ with ∇ φ L cvae and ∇ θ L cvae 12:", "end for 13: end while Figure 6 : Network architectures for synthetic data and human motion.", "Top: for synthetic data, we use a CNN to process the obstacle map f and directly flatten trajectories x and h into vectors.", "The reconstructed trajectoryx is decoded with an MLP.", "Bottom: for human motion, we use Bi-LSTMs to extract temporal features for x and h and decode the reconstructed trajectoryx with a forward LSTM." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.23255813121795654, 0.19512194395065308, 0.2539682388305664, 0.4736841917037964, 0.3636363446712494, 0.25806450843811035, 0.2702702581882477, 0.1904761791229248, 0.29411762952804565, 0.1621621549129486, 0.05714285373687744, 0, 0.21739129722118378, 0.09302324801683426, 0.19999998807907104, 0.1875, 0.3684210479259491, 0.307692289352417, 0.16326530277729034, 0.05882352590560913, 0.4761904776096344, 0.25, 0.21621620655059814, 0.25, 0.32258063554763794, 0.1249999925494194, 0.19999998807907104, 0.3255814015865326, 0.25641024112701416, 0.1904761791229248, 0.3243243098258972, 0.3529411852359772, 0.42424240708351135, 0.3636363446712494, 0.10526315122842789, 0, 0.0624999962747097, 0.06666666269302368, 0.2222222238779068, 0.05714285373687744, 0.0714285671710968, 0.07692307233810425, 0, 0.10256409645080566, 0.07999999821186066, 0.1538461446762085 ]
ryxnY3NYPS
true
[ "We learn a diversity sampling function with DPPs to obtain a diverse set of samples from a generative model." ]
[ "There is mounting evidence that pretraining can be valuable for neural network language understanding models, but we do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn.", "With this in mind, we compare four objectives---language modeling, translation, skip-thought, and autoencoding---on their ability to induce syntactic and part-of-speech information, holding constant the genre and quantity of training data.", "We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data, which suggests that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information.", "We also find that a randomly-initialized, frozen model can perform strikingly well on our auxiliary tasks, but that this effect disappears when the amount of training data for the auxiliary tasks is reduced.", "Representation learning with deep recurrent neural networks has revolutionized natural language processing and replaced many of the expert-designed, linguistic features previously used.", "Recently, researchers have begun to investigate the properties of representations learned by networks by training auxiliary classifiers that use the hidden states of frozen pretrained models to perform other tasks.", "These investigations have shown that when deep LSTM RNNs (Hochreiter and Schmidhuber, 1997) are trained on tasks like machine translation, they latently identify substantial syntactic and semantic information about their input sentences, including part-of-speech (Shi et al., 2016; Belinkov et al., 2017a,b; Blevins et al., 2018) .These", "intriguing findings lead us to ask the following questions:1. How does", "the training task affect how well models latently learn syntactic properties? Which tasks", "are better at inducing these properties?2. How does", "the", "amount of data the model is trained on affect these results? When does training", "on more data help?We investigate these", "questions by holding the data source and model architecture constant, while varying both the training task and the amount of training data. Specifically, we examine", "models trained on English-German (En-De) translation, language modeling, skip-thought (Kiros et al., 2015) , and autoencoding, in addition to an untrained baseline model. We control for the data", "domain by exclusively training on datasets from the 2016 Conference on Machine Translation (WMT; Bojar et al., 2016) . We train models on all", "tasks using the parallel En-De corpus and a small subset of that corpus, which allows us to make a fair comparison across all five models. Additionally, we augment", "the parallel dataset with a large monolingual corpus from WMT to examine how the performance of the unsupervised tasks (all but translation) scale with more data.Throughout our work, we focus on the syntactic evaluation tasks of part-of-speech (POS) tagging and Combinatorial Categorical Grammar (CCG) supertagging. Supertagging is a building", "block for parsing as these tags constrain the ways in which words can compose, largely determining the parse of the sentence. CCG supertagging thus allows", "us to measure the degree to which models learn syntactic structure above the word. 
We focus our analysis on representations", "learned by language models and by the encoders of sequence-to-sequence models, as translation encoders have been found to learn richer representations of POS and morphological information than translation decoders (Belinkov et al., 2017a) .We find that for POS and CCG tagging, bidirectional", "language models (BiLMs)-created by separately training forward and backward language models, and concatenating their hidden statesoutperform models trained on all other tasks. Even BiLMs trained on relatively small amounts of data", "(1 million sentences) outperform translation and skip-thought models trained on larger datasets (5 million and 63 million sentences respectively).Our inclusion of an untrained LSTM baseline allows us to", "study the effect of training on state representations. We find, surprisingly, that randomly initialized LSTMs underperform", "our best trained models by only a few percentage points when we use all of the available labeled data to train classifiers for our auxiliary tasks. When we reduce the amount of classifier training data though, the performance", "of the randomly initialized LSTM model drops far below those of trained models. We hypothesize that this occurs because training the classifiers on large amounts", "of auxiliary task data allows them to memorize configurations of words seen in the training set and their associated tags. We test this hypothesis by training classifiers to predict the identity of neighboring", "words from a given hidden state, and find that randomly initialized models outperform all trained models on this task. Our findings demonstrate that our best trained models do well on the tagging tasks because", "they are truly learning representations that conform to our notions of POS and CCG tagging, and not because the classifiers we train are able to recover neighboring word identity information well.", "By controlling for the genre and quantity of the training data, we make fair comparisons between several data-rich training tasks in their ability to induce syntactic information.", "We find that bidirectional language models (BiLMs) do better than translation and skip-thought encoders at extracting useful features for POS tagging and CCG supertagging.", "Moreover, this improvement holds even when the BiLMs are trained on substantially less data than competing models.", "Although, due to limited parallel data, we could not compare BiLMs and translation encoders on more than 5 million sentences, our results suggest that for syntactic information, there is no need to compare these two models trained on more data, as BiLMs consistently outperform translation encoders in all data regimes.We also find that randomly initialized encoders extract usable features for POS and CCG tagging, at least when the auxiliary POS and CCG classifiers are themselves trained on reasonably large amounts of data.", "However, the performance of untrained models drops sharply relative to trained ones when using smaller amounts of the classifier data.", "We investigate further and find that untrained models outperform trained ones on the task of neighboring word identity prediction, which confirms that trained encoders do not perform well on tagging tasks because the classifiers are simply memorizing word identity information.", "We also find that both trained and untrained LSTMs store more local neighboring word identity information in lower layers and more distant word identity information in upper layers, which suggests that 
depth in LSTMs allow them to capture larger context information.Our results suggest that for transfer learning, bidirectional language models like ELMo (Peters et al., 2018) capture more useful features than translation encoders-and that this holds even on genres or languages for which data is not abundant.", "However, the scope of our experiments is limited, and we still know little about the representations of models trained on other supervised tasks, or precisely how the choice of training task affects the type of syntactic information that is learned.", "Our work also highlights the interesting behavior of randomly initialized LSTMs, which show an ability to preserve the contents of their inputs significantly better than trained models." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08163265138864517, 0.04651162400841713, 0.3333333432674408, 0.17777776718139648, 0.05405404791235924, 0.19512194395065308, 0.10526315122842789, 0, 0.2142857164144516, 0.08695651590824127, 0.06896550953388214, 0.0952380895614624, 0, 0.1395348757505417, 0.1621621549129486, 0.0952380895614624, 0.13793103396892548, 0, 0.1818181723356247, 0.19607843458652496, 0.19512194395065308, 0.1463414579629898, 0.06451612710952759, 0.12765957415103912, 0.10526315122842789, 0.04651162400841713, 0.1818181723356247, 0, 0.09999999403953552, 0.31578946113586426, 0.1875, 0.20000000298023224, 0.060606054961681366, 0.20408162474632263, 0.12820512056350708, 0.1249999925494194, 0.14999999105930328 ]
BJeYYeaVJ7
true
[ "Representations from language models consistently perform better than translation encoders on syntactic auxiliary prediction tasks." ]
[ "We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied. ", "Consider, for example, the trade-off between thermal resistance, electrical conductivity, and mechanical stability needed to design a nano-porous template with optimal thermoelectric efficiency. ", "To that end, we leverage the posterior regularization framework andshow that this constraint satisfaction problem can be formulated as sampling froma Gibbs distribution. ", "The main challenges come from the black-box nature ofthose physical constraints, since they are obtained via solving highly non-linearPDEs.", "To overcome those difficulties, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling.", "We explore two surrogate approaches.", "The first approach exploits zero-order approximation of gradients in the Langevin Sampling and we refer to it as Zero-Order Langevin.", "In practice, this approach can be prohibitive since we still need to often query the expensive PDE solvers.", "The second approach approximates the gradients in the Langevin dynamics with deep neural networks, allowing us an efficient sampling strategy using the surrogate model.", "We prove the convergence of those two approaches when the target distribution is log-concave and smooth.", "We show the effectiveness of both approaches in designing optimal nano-porous material configurations, where the goal is to produce nano-pattern templates with low thermal conductivity and reasonable mechanical stability.", "In many real-world design problems, the optimal design needs to simultaneously satisfy multiple constraints, which can be expensive to estimate.", "For example, in computational material design, the goal is to come up with material configurations, or samples, satisfying a list of physical constraints that are given by black-box numerical Partial Differential Equations (PDE) solvers.", "Such solvers (for example, the Boltzmann Transport Equation solver) are often complex, expensive to evaluate, and offer no access to their inner variables or their gradients.", "We pose this design-under-constraints problem as sampling from a Gibbs distribution defined on some compact support.", "The problem of sampling from a distribution with unknown likelihood that can only be point-wise evaluated is called black-box sampling (Chen & Schmeiser, 1998; Neal, 2003) .", "We show in this paper that constrained black-box sampling can be cast as a constrained Langevin dynamics with gradient-free methods.", "Zero-order optimization via Gaussian smoothing was introduced in Nesterov & Spokoiny (2017) and extended to black-box sampling with Langevin dynamics in Shen et al. 
(2019) .", "We extend this approach to the constrained setting from a black-box density with compact support.", "However, one shortcoming of this approach is that it is computationally very expensive since it requires repeatedly querying PDE solvers in order to get an estimate of the gradient.", "To alleviate computational issues, we propose Surrogate Model Based Langevin dynamics, that consists of two steps:", "(i) Learning (using training data) an approximation of the gradient of the potential of the Gibbs distribution.", "We show that learning the gradient, rather than the potential itself, is important for the mixing of the Langevin dynamics towards the target Gibbs distribution.", "We devise several objective functions, as well as deep neural-network architectures for parameterizing the approximating function class, for learning the gradient of the potential function.", "(ii) We then use the surrogate gradient model in the constrained Langevin dynamics in lieu of the black-box potential.", "Using the surrogate enables more efficient sampling, since it avoids querying the expensive PDE solvers, and obtaining gradients is as efficient as evaluating the functions themselves using automatic differentiation frameworks such as PyTorch or TensorFlow.", "To summarize, our main contributions are as follows:", "1. We cast the problem of generating samples under constraints in the black-box setting as sampling from a Gibbs distribution.", "2. We introduce Constrained Zero-Order Langevin Monte Carlo, using projection or proximal methods, and provide the proof of its convergence to the target Gibbs distribution.", "3. We introduce Surrogate Model Based Projected Langevin Monte Carlo via learning the gradient of the potential of the Gibbs distribution using deep neural networks or reproducing kernel spaces, and prove its convergence to the target distribution when used in conjunction with projection or proximal based methods.", "We shed the light on the importance of the approximation of the gradient of the potential, and we show how to achieve this using Hermite and Taylor learning.", "4. 
We showcase the usability and effectiveness of the proposed methods for the design of nanoporous configurations with improved thermoelectric efficiency.", "The design consists of finding new configurations with optimized pore locations, such that the resulting configurations have favorable thermal conductivity (i.e., minimal κ) and desired mechanical stability (von Mises Stress σ ≤ τ , where τ is some preset threshold).", "In this paper we introduced Surrogate-Based Constrained Langevin Sampling for black-box sampling from a Gibbs distribution defined on a compact support.", "We studied two approaches for defining the surrogate: the first through zero-order methods and the second via learning gradient approximations using deep neural networks.", "We showed the proofs of convergence of the two approaches in the log-concave and smooth case.", "While zero-order Langevin had prohibitive computational cost, learned surrogate model Langevin enjoy a good tradeoff of lightweight computation and approximation power.", "We applied our black-box sampling scheme to the problem of nano-material configuration design, where the black box constraints are given by expensive PDE solvers, and showed the efficiency and the promise of our method in finding optimal configurations.", "Among different approaches for approximating the gradient, the zero-order ones (PLMC, ProxLMC) show overall superior performance, at a prohibitive computational cost.", "We established that the deep the surrogate (Taylor-1 ProxLMC) is a viable alternative to zero-order methods, achieving reasonable performance, and offering 15x speedup over zero-order methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0.15789473056793213, 0.05405404791235924, 0, 0.2222222238779068, 0.21052631735801697, 0.12121211737394333, 0, 0.277777761220932, 0.06896550953388214, 0.2380952388048172, 0.0624999962747097, 0.12765957415103912, 0, 0.13333332538604736, 0.10256409645080566, 0.3030303120613098, 0.21052631735801697, 0.13793103396892548, 0.04999999701976776, 0.13333332538604736, 0, 0.11428570747375488, 0.05882352590560913, 0.2666666507720947, 0.045454539358615875, 0, 0.1818181723356247, 0.15789473056793213, 0.1818181723356247, 0.05714285373687744, 0.1875, 0.07547169178724289, 0.1764705777168274, 0.0555555522441864, 0.14814814925193787, 0.11764705181121826, 0.17391303181648254, 0, 0.10526315122842789 ]
H1l_gA4KvH
true
[ "We propose surrogate based Constrained Langevin sampling with application in nano-porous material configuration design." ]
[ "There is growing interest in geometrically-inspired embeddings for learning hierarchies, partial orders, and lattice structures, with natural applications to transitive relational data such as entailment graphs.", "Recent work has extended these ideas beyond deterministic hierarchies to probabilistically calibrated models, which enable learning from uncertain supervision and inferring soft-inclusions among concepts, while maintaining the geometric inductive bias of hierarchical embedding models.", "We build on the Box Lattice model of Vilnis et al. (2018), which showed promising results in modeling soft-inclusions through an overlapping hierarchy of sets, parameterized as high-dimensional hyperrectangles (boxes).", "However, the hard edges of the boxes present difficulties for standard gradient based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile. ", "In this work, we present a novel hierarchical embedding model, inspired by a relaxation of box embeddings into parameterized density functions using Gaussian convolutions over the boxes.", "Our approach provides an alternative surrogate to the original lattice measure that improves the robustness of optimization in the disjoint case, while also preserving the desirable properties with respect to the original lattice.", "We demonstrate increased or matching performance on WordNet hypernymy prediction, Flickr caption entailment, and a MovieLens-based market basket dataset.", "We show especially marked improvements in the case of sparse data, where many conditional probabilities should be low, and thus boxes should be nearly disjoint.", "Embedding methods have long been a key technique in machine learning, providing a natural way to convert semantic problems into geometric problems.", "Early examples include the vector space BID17 and latent semantic indexing BID4 ) models for information retrieval.", "Embeddings experienced a renaissance after the publication of Word2Vec BID12 , a neural word embedding method BID2 BID13 ) that could run at massive scale.Recent years have seen an interest in structured or geometric representations.", "Instead of representing e.g. 
images, words, sentences, or knowledge base concepts with points, these methods instead associate them with more complex geometric structures.", "These objects can be density functions, as in Gaussian embeddings BID21 BID0 , convex cones, as in order embeddings BID20 BID9 , or axis-aligned hyperrectangles, as in box embeddings BID22 BID18 .", "These geometric objects more naturally express ideas of asymmetry, entailment, ordering, and transitive relations than simple points in a vector space, and provide a strong inductive bias for these tasks.In this work, we focus on the probabilistic Box Lattice model of BID22 , because of its strong empirical performance in modeling transitive relations, probabilistic interpretation (edges in a relational DAG are replaced with conditional probabilities), and ability to model complex joint probability distributions including negative correlations.", "Box embeddings (BE) are a generalization of order embeddings (OE) BID20 and probabilistic order embeddings (POE) BID9 that replace the vector lattice ordering (notions of overlapping and enclosing convex cones) in OE and POE with a more general notion of overlapping boxes (products of intervals).While", "intuitively appealing, the \"hard edges\" of boxes and their ability to become easily disjoint, present difficulties for gradient-based optimization: when two boxes are disjoint in the model, but have overlap in the ground truth, no gradient can flow to the model to correct the problem. This", "is of special concern for (pseudo-)sparse data, where many boxes should have nearly zero overlap, while others should have very high overlap. This", "is especially pronounced in the case of e.g. market basket models for recommendation, where most items should not be recommended, and entailment tasks, most of which are currently artificially resampled into a 1:1 ratio of positive to negative examples. To address", "the disjoint case, BID22 introduce an ad-hoc surrogate function. In contrast", ", we look at this problem as inspiration for a new model, based on the intuition of relaxing the hard edges of the boxes into smoothed density functions, using a Gaussian convolution with the original boxes.We demonstrate the superiority of our approach to modeling transitive relations on WordNet, Flickr caption entailment, and a MovieLens-based market basket dataset. We match or", "beat existing state of the art results, while showing substantial improvements in the pseudosparse regime.", "We presented an approach to smoothing the energy and optimization landscape of probabilistic box embeddings and provided a theoretical justification for the smoothing.", "Due to a decreased number of hyper-parameters this model is easier to train, and, furthermore, met or surpassed current state-ofthe-art results on several interesting datasets.", "We further demonstrated that this model is particularly effective in the case of sparse data and more robust to poor initialization.Tackling the learning problems presented by rich, geometrically-inspired embedding models is an open and challenging area of research, which this work is far from the last word on.", "This task will become even more pressing as the embedding structures become more complex, such as unions of boxes or other non-convex objects.", "To this end, we will continue to explore both function lattices, and constraint-based approaches to learning." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1463414579629898, 0, 0, 0.1818181723356247, 0, 0, 0, 0, 0.0833333283662796, 0.0476190447807312, 0, 0, 0, 0, 0, 0, 0.043478257954120636, 0, 0.035087715834379196, 0, 0.07407406717538834, 0, 0.08163265138864517, 0.07407406717538834, 0 ]
H1xSNiRcF7
true
[ "Improve hierarchical embedding models using kernel smoothing" ]
[ "We present a weakly-supervised data augmentation approach to improve Named Entity Recognition (NER) in a challenging domain: extracting biomedical entities (e.g., proteins) from the scientific literature.", "First, we train a neural NER (NNER) model over a small seed of fully-labeled examples.", "Second, we use a reference set of entity names (e.g., proteins in UniProt) to identify entity mentions with high precision, but low recall, on an unlabeled corpus.", "Third, we use the NNER model to assign weak labels to the corpus.", "Finally, we retrain our NNER model iteratively over the augmented training set, including the seed, the reference-set examples, and the weakly-labeled examples, which results in refined labels.", "We show empirically that this augmented bootstrapping process significantly improves NER performance, and discuss the factors impacting the efficacy of the approach.", "The increasing wealth of available data fuels numerous machine learning applications.", "Unfortunately, much of this data is unlabeled, unstructured and noisy.", "Supervised learning achieves the best task performance, but obtaining training labels is expensive.", "Crowd-sourcing could provide labels at scale, but may not be feasible for acquiring high-quality labels in technical domains, such as biomedicine that requires expert annotators.", "In this paper, we explore augmented bootstrapping methods that leverage automatically assigned noisy labels obtained from a large unlabeled corpus.", "The biomedical literature is a high-impact domain with scarce annotations.", "Unlocking the knowledge in this data requires machine reading systems that automatically extract important concepts in the text, such as entities and their relations.", "A critical component of such systems is reliable Named Entity Recognition (NER), which aims to identify parts of the text that refer to a named entity (e.g., a protein).", "In line with advancements in many domains, most state-of-the-art NER approaches use a deep neural network model that relies on a large labeled training set, which is not usually available in biomedical domains.", "To address label scarcity, we propose a framework to train any effective neural NER model by leveraging partially labeled data.", "We do this by creating an augmented training set using a small fully-labeled seed set, and an unlabeled corpus set, which we weakly and automatically label, and then refine its labels via an iterative process.", "Our main contributions include: (1) An augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve NER in a challenging domain (biomedicine) where labelling is expensive.", "(2) A detailed analysis in a controlled setting to study different aspects affecting performance.", "(3) An analysis of reference-based automated approaches to labeling data, showing that naive labeling decreases performance and how to overcome it.", "We proposed a method to improve NER with limited labeled data, which is often the case in technical domains, such as biomedicine.", "Our method combines bootstrapping and weakly-labeled data augmentation by using a small fully-labeled seed dataset and a large unlabeled corpus, automated labelling using a reference set, and an iterative label refinement process.", "Our experimental evaluation shows performance equivalent to systems trained with an order of magnitude more labeled data.", "In future work, we aim to explore additional augmentation methods over other 
challenging datasets.", "We plan to apply the findings of these controlled experiments to a much larger in-the-wild scenario where we use all the available labeled data as the seed and operate over a large corpus (e.g., all of PubMed, PubMed Central) to improve state-of-the-art NER performance." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.36734694242477417, 0.1111111044883728, 0.23999999463558197, 0.12121211737394333, 0.04444443807005882, 0.1428571343421936, 0.060606054961681366, 0.0624999962747097, 0.05714285373687744, 0.04347825422883034, 0.1904761791229248, 0.25, 0, 0.19999998807907104, 0.11320754140615463, 0.0952380895614624, 0.1538461446762085, 0.5925925970077515, 0.1111111044883728, 0.09756097197532654, 0.1818181723356247, 0.16326530277729034, 0.1538461446762085, 0.0555555522441864, 0.13114753365516663 ]
S1ghJiRVd4
true
[ "Augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve Name Entity Recognition from biomedical literature." ]
[ "Quantum machine learning methods have the potential to facilitate learning using extremely large datasets.", "While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors that to obtain the corresponding labels.", "One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors.", "Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines.", "The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.", "Data sets used for training machine learning models are becoming increasingly large, leading to continued interest in fast methods for solving large-scale classification problems.", "One of the approaches being explored is training the predictive model using a quantum algorithm that has access to the training set stored in quantum-accessible memory.", "In parallel to research on efficient architectures for quantum memory (Blencowe, 2010) , work on quantum machine learning algorithms and on quantum learning theory is under way (see for example Refs.", "(Biamonte et al., 2017; Dunjko & Briegel, 2018; Schuld & Petruccione, 2018) and (Arunachalam & de Wolf, 2017) for review).", "An early example of this approach is Quantum LS-SVM (Rebentrost et al., 2014a) , which achieves exponential speedup compared to classical LS-SVM algorithm.", "Quantum LS-SVM uses quadratic least-squares loss and squared-L 2 regularizer, and the optimization problem can be solved using the seminal HHL (Harrow et al., 2009 ) algorithm for solving quantum linear systems of equations.", "While progress has been made in quantum algorithms for supervised learning, it has been recently advocated that the focus should shift to unsupervised and semi-supervised setting (Perdomo-Ortiz et al., 2018) .", "In many domains, the most laborious part of assembling a training set is the collection of sample labels.", "Thus, in many scenarios, in addition to the labeled training set of size m we have access to many more feature vectors with missing labels.", "One way of utilizing these additional data points to improve the classification model is through semi-supervised learning.", "In semi-supervised learning, we are given m observations x 1 , ..., x m drawn from the marginal distribution p(x), where the l (l m) first data points come with labels y 1 , ..., y l drawn from conditional distribution p(y|x).", "Semi-supervised learning algorithms exploit the underlying distribution of the data to improve classification accuracy on unseen samples.", "In the approach considered here, the training samples are connected by a graph that captures their similarity.", "Here, we introduce a quantum algorithm for semi-supervised training of a kernel support vector machine classification model.", "We start with the existing Quantum LS-SVM (Rebentrost et al., 2014a) , and use techniques from sample-based Hamiltonian simulation (Kimmel et al., 2017) to add a semisupervised term based on Laplacian SVM (Melacci & Belkin, 2011) .", "As is standard in quantum machine learning (Li et al., 2019) , the algorithm accesses training points and the adjacency matrix of the graph connecting samples via a quantum oracle.", "We show that, with 
respect to the oracle, the proposed algorithm achieves the same quantum speedup as LS-SVM, that is, adding the semisupervised term does not lead to increased computational complexity." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1818181723356247, 0.17777776718139648, 0.1818181723356247, 0.05714285373687744, 0.2666666507720947, 0.09302324801683426, 0.23255813121795654, 0.08888888359069824, 0, 0.0952380895614624, 0.15094339847564697, 0.20408162474632263, 0.2222222238779068, 0.380952388048172, 0.21621620655059814, 0.1599999964237213, 0.1666666567325592, 0.0555555522441864, 0.1666666567325592, 0.14814814925193787, 0.1702127605676651, 0.21276594698429108 ]
ByeqyxBKvS
true
[ "We extend quantum SVMs to semi-supervised setting, to deal with the likely problem of many missing class labels in huge datasets." ]
[ "Deep neural networks have become the state-of-the-art models in numerous machine learning tasks.", "However, general guidance to network architecture design is still missing.", "In our work, we bridge deep neural network design with numerical differential equations.", "We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations.", "This finding brings us a brand new perspective on the design of effective deep architectures.", "We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks.", "As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations.", "The LM-architecture is an effective structure that can be used on any ResNet-like networks.", "In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters.", "In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress (>50%) the original networks while maintaining a similar performance.", "This can be explained mathematically using the concept of modified equation from numerical analysis.", "Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks.", "Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture.", "As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "Deep learning has achieved great success in may machine learning tasks.", "The end-to-end deep architectures have the ability to effectively extract features relevant to the given labels and achieve state-of-the-art accuracy in various applications BID3 ).", "Network design is one of the central task in deep learning.", "Its main objective is to grant the networks with strong generalization power using as few parameters as possible.", "The first ultra deep convolutional network is the ResNet BID16 which has skip connections to keep feature maps in different layers in the same scale and to avoid gradient vanishing.", "Structures other than the skip connections of the ResNet were also introduced to avoid gradient vanishing, such as the dense connections BID20 , fractal path BID27 and Dirac initialization BID50 .", "Furthermore, there has been a lot of attempts to improve the accuracy of image classifications by modifying the residual blocks of the ResNet.", "BID49 suggested that we need to double the number of layers of ResNet to achieve a fraction of a percent improvement of accuracy.", "They proposed a widened architecture that can efficiently improve the accuracy.", "BID51 pointed out that simply modifying depth or width of ResNet might not be the best way of architecture design.", "Exploring structural diversity, which is an alternative dimension in network design, may lead to more effective networks.", "In BID43 , BID51 , BID47 , and BID19 , the authors further improved the accuracy of the networks by carefully 
designing residual blocks via increasing the width of each block, changing the topology of the network and following certain empirical observations.", "In the literature, the network design is mainly empirical.It remains a mystery whether there is a general principle to guide the design of effective and compact deep networks.Observe that each residual block of ResNet can be written as u n+1 = u n + ∆tf (u n ) which is one step of forward Euler discretization (AppendixA.1) of the ordinary differential equation (ODE) u t = f (u) (E, 2017) .", "This suggests that there might be a connection between discrete dynamic systems and deep networks with skip connections.", "In this work, we will show that many state-of-the-art deep network architectures, such as PolyNet BID51 , FractalNet BID27 and RevNet BID12 , can be consider as different discretizations of ODEs.", "From the perspective of this work, the success of these networks is mainly due to their ability to efficiently approximate dynamic systems.", "On a side note, differential equations is one of the most powerful tools used in low-level computer vision such as image denoising, deblurring, registration and segmentation BID36 BID2 BID4 .", "This may also bring insights on the success of deep neural networks in low-level computer vision.", "Furthermore, the connection between architectures of deep neural networks and numerical approximations of ODEs enables us to design new and more effective deep architectures by selecting certain discrete approximations of ODEs.", "As an example, we design a new network structure called linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method in numerical ODEs BID1 .", "This architecture can be applied to any ResNet-like networks.", "In this paper, we apply the LM-architecture to ResNet and ResNeXt BID47 ) and achieve noticeable improvements on CIFAR and ImageNet with comparable numbers of trainable parameters.", "We also explain the performance gain using the concept of modified equations from numerical analysis.It is known in the literature that introducing randomness by injecting noise to the forward process can improve generalization of deep residual networks.", "This includes stochastic drop out of residual blocks BID21 and stochastic shakes of the outputs from different branches of each residual block BID11 .", "In this work we show that any ResNet-like network with noise injection can be interpreted as a discretization of a stochastic dynamic system.", "This gives a relatively unified explanation to the stochastic learning process using stochastic control.", "Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the proposed LM-architecture.", "As an example, we introduce stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.06666666269302368, 0.42424240708351135, 0.2380952388048172, 0.4000000059604645, 0.3333333134651184, 0.09999999403953552, 0.11764705181121826, 0.1090909019112587, 0.04999999329447746, 0.1764705777168274, 0.08695651590824127, 0.10810810327529907, 0, 0, 0.09302324801683426, 0.12903225421905518, 0.10810810327529907, 0.08510638028383255, 0.04255318641662598, 0.051282044500112534, 0.052631575614213943, 0, 0.051282044500112534, 0.21621620655059814, 0.11764705181121826, 0.15189872682094574, 0.21052631735801697, 0.12244897335767746, 0.1538461446762085, 0.12244897335767746, 0.277777761220932, 0.45454543828964233, 0.13636362552642822, 0.13793103396892548, 0.08888888359069824, 0.18518517911434174, 0.10256409645080566, 0.1428571343421936, 0.060606054961681366, 0.10526315122842789, 0 ]
ryZ283gAZ
true
[ "This paper bridges deep network architectures with numerical (stochastic) differential equations. This new perspective enables new designs of more effective deep neural networks." ]
[ "Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications.", "In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% of accuracy for three different platforms (i.e. iOS, Android and web-based technologies).", "The process of implementing client-side software based on a Graphical User Interface (GUI) mockup created by a designer is the responsibility of developers.", "Implementing GUI code is, however, time-consuming and prevent developers from dedicating the majority of their time implementing the actual functionality and logic of the software they are building.", "Moreover, the computer languages used to implement such GUIs are specific to each target runtime system; thus resulting in tedious and repetitive work when the software being built is expected to run on multiple platforms using native technologies.", "In this paper, we describe a model trained end-to-end with stochastic gradient descent to simultaneously learns to model sequences and spatio-temporal visual features to generate variable-length strings of tokens from a single GUI image as input.Our first contribution is pix2code, a novel application of Convolutional and Recurrent Neural Networks to generate computer tokens from a single GUI screenshot as input.", "That is, no engineered feature extraction pipeline nor expert heuristics was designed to process the input data; our model learns from the pixel values of the input image alone.", "Our experiments demonstrate the effectiveness of our method for generating computer code for various platforms (i.e. iOS and Android native mobile interfaces, and multi-platform web-based HTML/CSS interfaces) without the need for any change or specific tuning to the model.", "In fact, pix2code can be used as such to support different target languages simply by being trained on a different dataset.", "A video demonstrating our system is available online 1 .Our", "second contribution is the release of our synthesized datasets consisting of both GUI screenshots and associated source code for three different platforms. Our", "datasets and our pix2code implemention are publicly available 2 to foster future research." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.307692289352417, 0.1538461446762085, 0, 0.1111111044883728, 0.08510638028383255, 0.10526315122842789, 0.052631575614213943, 0.12765957415103912, 0.0624999962747097, 0, 0.11764705181121826, 0.1599999964237213 ]
H1zRea1Mf
true
[ "CNN and LSTM to generate markup-like code describing graphical user interface images." ]
[ "Computer vision tasks such as image classification, image retrieval and few-shot learning are currently dominated by Euclidean and spherical embeddings, so that the final decisions about class belongings or the degree of similarity are made using linear hyperplanes, Euclidean distances, or spherical geodesic distances (cosine similarity).", "In this work, we demonstrate that in many practical scenarios hyperbolic embeddings provide a better alternative.", "Figure 1: An example of two-dimensional Poincaré embeddings computed by a hyperbolic neural network trained on MNIST, and evaluated additionally on Omniglot.", "Ambiguous and unclear images from MNIST, as well as most of the images from Omniglot are embedded near the center, while samples with clear class labels (or characters from Omniglot similar to one of the digits) lie near the boundary.", "High-dimensional embeddings are ubiquitous in modern computer vision.", "Many, perhaps most, modern computer vision systems learn non-linear mappings (in the form of deep convolutional networks) from the space of images or image fragments into high-dimensional spaces.", "The operations at the end of deep networks imply a certain type of geometry of the embedding spaces.", "For example, image classification networks (Krizhevsky et al., 2012; LeCun et al., 1989) use linear operators (matrix multiplication) to map embeddings in the penultimate layer to class logits.", "The class boundaries in the embedding space are thus piecewise-linear, and pairs of classes are separated by Euclidean hyperplanes.", "The embeddings learned by the model in the penultimate layer, therefore, live in the Euclidean space.", "The same can be said about systems where Euclidean distances are used to perform image retrieval (Oh Song et al., 2016; Sohn, 2016; Wu et al., 2017) , face recognition (Parkhi et al., 2015; Wen et al., 2016) or one-shot learning (Snell et al., 2017) .", "Alternatively, some few-shot learning (Vinyals et al., 2016) , face recognition (Schroff et al., 2015) and person re-identification methods (Ustinova & Lempitsky, 2016; Yi et al., 2014) learn spherical embeddings, so that sphere projection operator is applied at the end of a network that computes the embeddings.", "Cosine similarity (closely associated with sphere geodesic distance) is then used by such architectures to match images.", "Euclidean spaces with their zero curvature and spherical spaces with their positive curvature have certain profound implications on the nature of embeddings that existing computer vision systems can learn.", "In this work, we argue that hyperbolic spaces with negative curvature might often be more appropriate for learning embedding of images.", "Towards this end, we add the recently-proposed hyperbolic network layers to the end of several computer vision networks, and present a number of experiments corresponding to image classification, one-shot, and few-shot learning and person re-identification.", "We show that in many cases, the use of hyperbolic geometry improves the performance over Euclidean or spherical embeddings.", "Motivation for hyperbolic image embeddings.", "The use of hyperbolic spaces in natural language processing (Nickel & Kiela, 2017; Tifrea et al., 2018; Dhingra et al., 2018 ) is motivated by their natural ability to embed hierarchies (e.g., tree graphs) with low distortion (Sarkar, 2011) .", "Hierarchies are ubiquitous in natural language processing.", "First, there are natural hierarchies corresponding to, e.g., biological taxonomies and 
linguistic ontologies.", "Likewise, a more generic short phrase can have many plausible continuations and is therefore semantically-related to a multitude of long phrases that are not necessarily closely related to each other (in the semantic sense).", "The innate suitability of hyperbolic spaces to embedding hierarchies (Sala et al., 2018a; Sarkar, 2011) explains the success of such spaces in natural language processing (Nickel & Kiela, 2017) .", "Here, we argue that similar hierarchical relations between images are common in computer vision tasks (Figure 2 ).", "One can observe the following example cases:", "• In image retrieval, an overview photograph is related to many images that correspond to the close-ups of different distinct details.", "Likewise, for classification tasks in-the-wild, an image containing the representatives of multiple classes is related to images that contain representatives of the classes in isolation.", "Embedding a dataset that contains composite images into continuous space is therefore similar to embedding a hierarchy.", "• In some tasks, more generic images may correspond to images that contain less information and are therefore more ambiguous.", "E.g., in face recognition, a blurry and/or low-resolution face image taken from afar can be related to many high-resolution images of faces that clearly belong to distinct people.", "Again natural embeddings for image datasets that have widely varying image quality/ambiguity calls for retaining such hierarchical structure.", "In order to build deep learning models which operate on the embeddings to hyperbolic spaces, we capitalize on recent developments , which construct the analogues of familiar layers (such as a feed-forward layer, or a multinomial regression layer) in hyperbolic spaces.", "We show that many standard architectures used for tasks of image classification, and in particular in the few-shot learning setting can be easily modified to operate on hyperbolic embeddings, which in many cases also leads to their improvement." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14814814925193787, 0.19354838132858276, 0.1111111044883728, 0.04444444179534912, 0.3478260934352875, 0.09756097197532654, 0, 0.09756097197532654, 0.060606054961681366, 0.0714285671710968, 0.03999999538064003, 0.1071428507566452, 0, 0.19999998807907104, 0.1666666567325592, 0.17777776718139648, 0.3030303120613098, 0.29999998211860657, 0.037735845893621445, 0.09090908616781235, 0.06666666269302368, 0.08510638028383255, 0.04651162400841713, 0.24242423474788666, 0, 0.05714285373687744, 0.1666666567325592, 0.06451612710952759, 0.1818181723356247, 0.04651162400841713, 0.19354838132858276, 0.07999999821186066, 0.2448979616165161 ]
SkgC6yHtvB
true
[ "We show that hyperbolic embeddings are useful for high-level computer vision tasks, especially for few-shot classification." ]
[ "High-dimensional time series are common in many domains.", "Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations.", "However, most representation learning algorithms for time series data are difficult to interpret.", "This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time.\n", "To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling.", "This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance.", "We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original.", "Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space.\n", "This model uncovers the temporal transition structure, improves clustering performance even further and provides additional explanatory insights as well as a natural representation of uncertainty.\n", "We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.", "Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real world data.", "Interpretable representation learning on time series is a seminal problem for uncovering the latent structure in complex systems, such as chaotic dynamical systems or medical time series.", "In areas where humans have to make decisions based on large amounts of data, interpretability is fundamental to ease the human task.", "Especially when decisions have to be made in a timely manner and rely on observing some chaotic external process over time, such as in finance or medicine, the need for intuitive interpretations is even stronger.", "However, many unsupervised methods, such as clustering, make misleading i.i.d. 
assumptions about the data, neglecting their rich temporal structure and smooth behaviour over time.", "This poses the need for a method of clustering, where the clusters assume a topological structure in a lower dimensional space, such that the representations of the time series retain their smoothness in that space.", "In this work, we present a method with these properties.We choose to employ deep neural networks, because they have a very successful tradition in representation learning BID5 .", "In recent years, they have increasingly been combined with generative modeling through the advent of generative adversarial networks (GANs) BID13 and variational autoencoders (VAEs) BID18 .", "However, the representations learned by these models are often considered cryptic and do not offer the necessary interpretability .", "A lot of work has been done to improve them in this regard, in GANs as well as VAEs BID16 BID9 .", "Alas, these works have focused entirely on continuous representations, while discrete ones are still underexplored.In order to define temporal smoothness in a discrete representation space, the space has to be equipped with a topological neighborhood relationship.", "One type of representation space with such a structure is induced by the self-organizing map (SOM) BID21 .", "The SOM allows to map states from an uninterpretable continuous space to a lower-dimensional space with a predefined topologically interpretable structure, such as an easily visualizable two-dimensional grid.", "However, while yielding promising results in visualizing static state spaces, such as static patient states BID27 , the classical SOM formulation does not offer a notion of time.", "The time component can be incorporated using a probabilistic transition model, e.g. a Markov model, such that the representations of a single time point are enriched with information from the adjacent time points in the series.", "It is therefore potentially fruitful to apply the approaches of probabilistic modeling alongside representation learning and discrete dimensionality reduction in an end-to-end model.In this work, we propose a novel deep architecture that learns topologically interpretable discrete representations in a probabilistic fashion.", "Moreover, we introduce a new method to overcome the non-differentiability in discrete representation learning architectures and develop a gradient-based version of the classical selforganizing map algorithm with improved performance.", "We present extensive empirical evidence for the model's performance on synthetic and real world time series from benchmark data sets, a synthetic dynamical system with chaotic behavior and real world medical data.", "A schematic overview of our proposed model is depicted in FIG0 .", "An input x ∈ R d is mapped to a latent encoding z e ∈ R m (usually m < d) by computing z e = f θ (x), where f θ (·) is parameterized by the encoder neural network.", "The encoding is then assigned to an embedding z q ∈ R m in the dictionary of embeddings E = {e 1 , . . . , e k | e i ∈ R m } by sampling z q ∼ p(z q |z e ).", "The form of this distribution is flexible and can be a design choice.", "In order for the model to behave similarly to the original SOM algorithm (see below), in our experiments we choose the distribution to be categorical with probability mass 1 on the closest embedding to z e , i.e. 
p(z q |z e ) = 1[z q = arg min e∈E z e − e 2 ], where 1[·] is the indicator function.", "A reconstructionx of the input can then be computed asx = g φ (z), where g φ (·) is parameterized by the decoder neural network.", "Since the encodings and embeddings live in the same space, one can compute two different reconstructions, namelyx e = g φ (z e ) andx q = g φ (z q ).To", "achieve a topologically interpretable neighborhood structure, the embeddings are connected to form a self-organizing map. A", "self-organizing map consists of k nodes V = {v 1 , . . . , v k }, where every node corresponds to an embedding in the data space e v ∈ R d and a representation in a lower-dimensional discrete space m v ∈ M , where usually M ⊂ N 2 . During", "training on a data set D = {x 1 , . . . , x n }, a winner nodẽ v is chosen for every point x i according toṽ = arg min v∈V e v − x i 2 . The embedding", "vector for every [red] . In order to achieve", "a discrete representation, every latent data point (z e ) is mapped to its closest node in the SOM (z q ). A Markov transition", "model [blue] is learned to predict the next discrete representation (z t+1 q ) given the current one (z t q ). The discrete representations", "can then be decoded by another neural network back into the original data space. node u ∈ V is then updated according", "to e u ← e u + N (m u , mṽ)η(x i − e u ), where η is the learning rate and N (m u , mṽ) is a neighborhood function between the nodes defined on the representation space M . There can be different design choices", "for N (m u , mṽ). A more thorough review of the self-organizing", "map algorithm is deferred to the appendix (Sec. A).We choose to use a two-dimensional SOM because", "it facilitates visualization similar to BID27 . Since we want the architecture to be trainable", "end-to-end, we cannot use the standard SOM training algorithm described above. Instead, we devise a loss function term whose", "gradient corresponds to a weighted version of the original SOM update rule (see below). We implement it in such a way that any time an", "embedding e i,j at position (i, j) in the map gets updated, it also updates all the embeddings in its immediate neighborhood N (e i,j ). The neighborhood is defined as N (e i,j ) = {e", "i−1,j , e i+1,j , e i,j−1 , e i,j+1 } for a two-dimensional map.The loss function for a single input x looks like DISPLAYFORM0 where x, z e , z q ,x e andx q are defined as above and α and β are weighting hyperparameters.Every term in this function is specifically designed to optimize a different model component. The first term is the reconstruction loss L reconstruction", "(x,x q ,x e ) = x−x q 2 + x−x e 2 . The first subterm of this is the discrete reconstruction loss", ", which encourages the assigned SOM node z q (x) to be an informative representation of the input. The second subterm encourages the encoding z e (x) to also be", "an informative representation. This ensures that all parts of the model have a fully differentiable", "credit assignment path to the loss function, which facilitates training. Note that the reconstruction loss corresponds to the evidence lower", "bound (ELBO) of the VAE part of our model BID18 . Since we assume a uniform prior over z q , the KL-term in the ELBO", "is constant w.r.t. the parameters and can be ignored during optimization.The term L commitment encourages the encodings and assigned SOM nodes to be close to each other and is defined as DISPLAYFORM1 2 . 
Closeness of encodings and embeddings should be expected to already", "follow from the L reconstruction term in a fully differentiable architecture. However, due to the nondifferentiability of the embedding assignment", "in our model, the L commitment term has to be explicitly added to the objective in order for the encoder to get gradient information about z q . DISPLAYFORM2 2 , where N (·) is the set of neighbors in the discrete", "space as defined above and sg [·] is the gradient stopping operator that does not change the outputs during the forward pass, but sets the gradients to 0 during the backward pass. It encourages the neighbors of the assigned SOM node z q to also be", "close to z e , thus enabling the embeddings to exhibit a self-organizing map property, while stopping the gradients on z e such that the encoding is not pulled in the direction of the neighbors. This term enforces a neighborhood relation between the discrete codes", "and encourages all SOM nodes to ultimately receive gradient information from the data. The gradient stopping in this term is motivated by the observation that", "the data points themselves do not get moved in the direction of their assigned SOM node's neighbors in the original SOM algorithm either (see above). We want to optimize the embeddings based on their neighbors, but not the", "respective encodings, since any single encoding should be as close as possible to its assigned embedding and not receive gradient information from any other embeddings that it is not assigned to. Note that the gradient update of a specific SOM node in this formulation", "depends on its distance to the encoding, while the step size in the original SOM algorithm is constant. It will be seen that this offers some benefits in terms of optimization", "and convergence (see Sec. 4.1).", "The SOM-VAE can recover topologically interpretable state representations on time series and static data.", "It provides an improvement to standard methods in terms of clustering performance and offers a way to learn discrete two-dimensional representations of the data manifold in concurrence with the reconstruction task.", "It introduces a new way of overcoming the non-differentiability of the discrete representation assignment and contains a gradient-based variant of the traditional self-organizing map that is more performant than the original one.", "On a challenging real world medical data set, our model learns more informative representations with respect to medically relevant prediction targets than competitor methods.", "The learned representations can be visualized in an interpretable way and could be helpful for clinicians to understand patients' health states and trajectories more intuitively.It will be interesting to see in future work whether the probabilistic component can be extended to not just improve the clustering and interpretability of the whole model, but also enable us to make predictions.", "Promising avenues in that direction could be to increase the complexity by applying a higher order Markov Model, a Hidden Markov Model or a Gaussian Process.", "Another fruitful avenue of research could be to find more theoretically principled ways to overcome the non-differentiability and compare them with the empirically motivated ones.", "Lastly, one could explore deviating from the original SOM idea of fixing a latent space structure, such as a 2D grid, and learn the neighborhood structure as a graph directly from data." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.19512194395065308, 0.1764705777168274, 0.19999998807907104, 0.27272728085517883, 0.2790697515010834, 0.2448979616165161, 0.14999999105930328, 0.08695651590824127, 0.19672130048274994, 0.15789473056793213, 0.17391303181648254, 0.0952380895614624, 0.145454540848732, 0.08695651590824127, 0.2083333283662796, 0.2083333283662796, 0.08888888359069824, 0.15789473056793213, 0.04999999329447746, 0.1090909019112587, 0.10526315122842789, 0.17777776718139648, 0.0833333283662796, 0.2745097875595093, 0.20338982343673706, 0.1666666567325592, 0.3333333432674408, 0, 0.07692307233810425, 0.0357142798602581, 0.11764705181121826, 0.05714285373687744, 0, 0.04347825422883034, 0.2222222238779068, 0.12903225421905518, 0.07407406717538834, 0.06896550953388214, 0.09090908616781235, 0.09756097197532654, 0, 0.14035087823867798, 0.05882352590560913, 0.1621621549129486, 0.05882352590560913, 0.051282044500112534, 0.17777776718139648, 0, 0.0833333283662796, 0, 0.045454539358615875, 0.0555555522441864, 0.05405404791235924, 0.04651162400841713, 0.07017543166875839, 0.1538461446762085, 0.03703703358769417, 0.06779660284519196, 0.14035087823867798, 0.13636362552642822, 0.11764705181121826, 0.13793103396892548, 0.08163265138864517, 0.07407406717538834, 0.34285715222358704, 0.2083333283662796, 0.12765957415103912, 0.13333332538604736, 0.14492753148078918, 0.09090908616781235, 0.09090908616781235, 0.1666666567325592 ]
rygjcsR9Y7
true
[ "We present a method to learn interpretable representations on time series using ideas from variational autoencoders, self-organizing maps and probabilistic models." ]
[ "We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. ", "The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. ", "It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network.", "The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset. ", "The pro-posed architecture achieves promising results as compared to convolutional and recurrent neural networks.", "The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible.", "Time series forecasting is focused on modeling the predictors of future values of time series given their past.", "As in many cases the relationship between past and future observations is not deterministic, this amounts to expressing the conditional probability distribution as a function of the past observations: p(X t+d |X t , X t−1 , . . .) = f (X t , X t−1 , . . .).This", "forecasting problem has been approached almost independently by econometrics and machine learning communities.In this paper we examine the capabilities of convolutional neural networks (CNNs), BID25 in modeling the conditional mean of the distribution of future observations; in other words, the problem of autoregression. We focus", "on time series with multivariate and noisy signal. In particular", ", we work with financial data which has received limited public attention from the deep learning community and for which nonparametric methods are not commonly applied. Financial time", "series are particularly challenging to predict due to their low signal-to-noise ratio (cf. applications of Random Matrix Theory in econophysics BID24 BID3 ) and heavy-tailed distributions BID8 . Moreover, the", "predictability of financial market returns remains an open problem and is discussed in many publications (cf. efficient market hypothesis of BID11 ).A common situation", "with financial data is that the same signal (e.g. value of an asset) is observed from different sources (e.g. financial news, analysts, portfolio managers in hedge funds, marketmakers in investment banks) in asynchronous moments of time. Each of these sources", "may have a different bias and noise with respect to the original signal that needs to be recovered (cf. time series in FIG0 ). Moreover, these sources", "are usually strongly correlated and lead-lag relationships are possible (e.g. a market-maker with more clients can update its view more frequently and precisely than one with fewer clients). Therefore, the significance", "of each of the available past observations might be dependent on some other factors that can change in time. Hence, the traditional econometric", "models such as AR, VAR, VARMA (Hamilton, 1994) might not be sufficient. Yet their relatively good performance", "motivates coupling such linear models with deep neural networks that are capable of learning highly nonlinear relationships. Quotes from four different market participants", "(sources) for the same CDS 1 throughout one day. 
Each trader displays from time to time the prices", "for which he offers to buy (bid) and sell (ask) the underlying CDS. The filled area marks the difference between the", "best sell and buy offers (spread) at each time.For these reasons, we propose SignificanceOffset Convolutional Neural Network, a Convolutional Network extension of standard autoregressive models BID34 BID35 equipped with a nonlinear weighting mechanism and provide empirical evidence on its competitiveness with standard multilayer CNN and recurrent Long-Short Term Memory network BID18 . The mechanism is inspired by the gating systems", "that proved successful in recurrent neural networks BID18 BID6 and highway networks BID37 .2 RELATED WORK 2.1 TIME SERIES FORECASTING Literature in time series forecasting is rich and has a long history in the field of econometrics which makes extensive use of linear stochastic models such as AR, ARIMA and GARCH processes to mention a few. Unlike in machine learning, research in econometrics", "is more focused on explaining variables rather than improving out-of-sample prediction power. In practice, one can notice that these models 'over-fit", "' on financial time series: their parameters are unstable and out-of-sample performance is poor.Reading through recent proceedings of the main machine learning venues (e.g. ICML, NIPS, AIS-TATS, UAI), one can notice that time series are often forecast using Gaussian processes BID31 BID38 BID19 , especially when time series are irregularly sampled BID9 BID26 . Though still largely independent, researchers have started", "to \"bring together the machine learning and econometrics communities\" by building on top of their respective fundamental models yielding to, for example, the Gaussian Copula Process Volatility model BID42 . Our paper is in line with this emerging trend by coupling", "AR models and neural networks.Over the past 5 years, deep neural networks have surpassed results from most of the existing literature in many fields BID33 : computer vision BID23 , audio signal processing and speech recognition BID32 , natural language processing (NLP) BID1 BID7 BID14 BID21 . Although sequence modeling in NLP, i.e. prediction of the", "next character or word, is related to our forecasting problem (1), the nature of the sequences is too dissimilar to allow using the same cost functions and architectures. Same applies to the adversarial training proposed by BID28", "for video frame prediciton, as such approach favors most plausible scenarios rather than outputs close to all possible outputs, while the latter is usually required in financial time series due to stochasticity of the considered processes.Literature on deep learning for time series forecasting is still scarce (cf. BID12 for a recent review). Literature on deep learning for financial time series forecasting", "is even scarcer though interest in using neural networks for financial predictions is not new BID30 BID29 . More recent papers include BID36 that used 4-layer perceptrons in", "modeling price change distributions in Limit Order Books, and BID2 who applied more recent WaveNet architecture of van den BID39 to several short univariate and bivariate time-series (including financial ones). Despite claim of applying deep learning, BID17 use autoencoders with", "a single hidden layer to compress multivariate financial data. 
Besides these and claims of secretive hedge funds (it can be marketing", "surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge. In this paper, we investigate the gold standard architectures' (simple", "Convolutional Neural Network (CNN), Residual Network, multi-layer LSTM) capabilities on AR-like artificial asynchronous and noisy time series, and on real financial data from the credit default swap market where some inefficiencies may exist, i.e. time series may not be totally random.", "In this article, we proposed a weighting mechanism that, coupled with convolutional networks, forms a new neural network architecture for time series prediction.", "The proposed architecture is designed for regression tasks on asynchronous signals in the presence of high amount of noise.", "This approach has proved to be successful in forecasting financial and artificially generated asynchronous time series outperforming popular convolutional and recurrent networks.The proposed model can be further extended by adding intermediate weighting layers of the same type in the network structure.", "Another possible generalization that requires further empirical studies can be obtained by leaving the assumption of independent offset values for each past observation, i.e. considering not only 1x1 convolutional kernels in the offset sub-network.Finally, we aim at testing the performance of the proposed architecture on other real-life datasets with relevant characteristics.", "We observe that there exists a strong need for common 'econometric' datasets benchmark and, more generally, for time series (stochastic processes) regression.APPENDIX A NONLINEARITY IN THE ASYNCHRONOUSLY SAMPLED AUTOREGRESSIVE TIME SERIES Lemma 1.", "Let X(t) be an AR(2) time series given by DISPLAYFORM0 where (ε(t)) t=1,2,... are i.i.d. error terms.", "Then DISPLAYFORM1 for any t > k ≥ 2, where a k , b k are rational functions of a and b.Proof.", "The proof follows a simple induction.", "It is sufficient to show that DISPLAYFORM2 where DISPLAYFORM3 and E k (t) is a linear combination of {ε(t − i), i = 0, 1, . . . , k − 2}.", "Basis of the induction is trivially satisfied via 15.", "In the induction step, we assume that 17 holds for k.", "For t > k + 1 we have DISPLAYFORM4 .", "Multiplying sides of this equation by b and adding av k X(t − 1) we obtain DISPLAYFORM5 Since aX(t − 1) + bX(t − 2) = X(t) − ε(t) we get DISPLAYFORM6 As DISPLAYFORM7 is a linear combination of {ε(t − i), i = 0, 1, . . . , k − 1}, the above equation proves 17 for k = k + 1." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.06896550953388214, 0.1538461446762085, 0.25, 0.07999999821186066, 0.1428571343421936, 0.29629629850387573, 0.03999999538064003, 0.12244897335767746, 0.1904761791229248, 0.15789473056793213, 0.10256409645080566, 0.05882352590560913, 0.09090908616781235, 0.10810810327529907, 0, 0.1249999925494194, 0, 0.11764705181121826, 0.14814814925193787, 0.06666666269302368, 0.125, 0.125, 0, 0.11764705926179886, 0.12244897335767746, 0.03389830142259598, 0.0952380895614624, 0.21052631735801697, 0.0555555522441864, 0.08163265138864517, 0.0624999962747097, 0.0952380895614624, 0.12244897335767746, 0.24242423474788666, 0.20689654350280762, 0.16326530277729034, 0.10169491171836853, 0.13636362552642822, 0.13793103396892548, 0.13333332538604736, 0, 0.052631575614213943, 0.09999999403953552, 0.09090908616781235, 0, 0.06896551698446274 ]
rJaE2alRW
true
[ "Convolutional architecture for learning data-dependent weights for autoregressive forecasting of time series." ]
[ "MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients.", "Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained.", "Because samples are preferentially moved in the direction of other classes \\iffalse -- which are typically clustered in input space -- \\fi we refer to this method as directional adversarial training, or DAT.", "We show that under two mild conditions, MixUp asymptotically convergences to a subset of DAT.", "We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients to those of their corresponding samples.", "We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes.", "Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp.", "In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.", "Deep learning applications often require complex networks with a large number of parameters (He et al., 2016; Zagoruyko & Komodakis, 2016; Devlin et al., 2018) .", "Although neural networks perform so well that their ability to generalize is an area of study in itself (Zhang et al., 2017a; Arpit et al., 2017) , their high complexity nevertheless causes them to overfit their training data (Kukacka et al., 2017) .", "For this reason, effective regularization techniques are in high demand.", "There are two flavors of regularization: complexity curtailing and data augmentation 1 .", "Complexity curtailing methods constrain models to learning in a subset of parameter space which has a higher probability of generalizing well.", "Notable examples are weight decay (Krogh & Hertz, 1991) and dropout (Srivastava et al., 2014) .", "Data augmentation methods add transformed versions of training samples to the original training set.", "Conventionally, transformed samples retain their original label, so that models effectively see a larger set of data-label training pairs.", "Commonly applied transformations in image applications include flips, crops and rotations.", "A recently devised family of augmentation schemes called adversarial training has attracted active research interest (Szegedy et al., 2013; Goodfellow et al., 2014; Miyato et al., 2016; Athalye et al., 2018; Shaham et al., 2018; He et al., 2018) .", "Adversarial training seeks to reduce a model's propensity to misclassify minimally perturbed training samples, or adversarials.", "While attack algorithms used for testing model robustness may search for adversarials in unbounded regions of input space, adversarial training schemes generally focus on perturbing training samples within a bounded region, while retaining the sample's original label (Goodfellow et al., 2015; Shaham et al., 2018) .", "Another recently proposed data augmentation scheme is MixUp (Zhang et al., 2017b) , in which new samples are generated by mixing pairs of training samples using linear coefficients.", "Despite its well established generalization performance (Zhang et al., 2017b; Guo et al., 2018; Verma et al., 2018) , the working mechanism of MixUp is not well understood.", "Guo et al. 
(2018) suggest viewing MixUp as imposing local linearity on the model using points outside of the data manifold.", "While this perspective is insightful, we do not believe it paints a full picture of how MixUp operates.", "A recent study (Lamb et al., 2019) provides empirical evidence that MixUp improves adversarial robustness, but does not present MixUp as a form of adversarial training.", "We build a framework to understand MixUp in a broader context: we argue that adversarial training is a central working principle of MixUp.", "To support this contention, we connect MixUp to a MixUplike scheme which does not perform label mixing, and we relate this scheme to adversarial training.", "Without label mixing, MixUp becomes a conventional augmentation scheme: input samples are moved, but their original labels are retained.", "Because samples are moved in the direction of other samples -which are typically clustered in input space -we describe this method as 'directional'.", "Because this method primarily moves training samples in the direction of adversarial classes, this method is analogous to adversarial training.", "We thus refer to MixUp without label mixing as directional adversarial training (DAT).", "We show that MixUp converges to a subset of DAT under mild conditions, and we thereby argue that adversarial training is a working principle of MixUp.", "Inspired by this new understanding of MixUp as a form of adversarial training, and upon realizing that MixUp is (asymptotically) a subset of DAT, we introduce Untied MixUp (UMixUp), a simple enhancement of MixUp which converges to the entire family of DAT schemes, as depicted in Figure 1 .", "Untied Mixup mixes data-label training pairs in a similar way to MixUp, with the distinction that the label mixing ratio is an arbitrary function of the sample mixing ratio.", "We perform experiments to show that UMixUp's classification performance improves upon MixUp.", "In short, this research is motivated by a curiosity to better understand the working of MixUp.", "In-sodoing we aim to:", "1. Establish DAT as analogous to adversarial training.", "This is discussed in section 4.", "2. Establish UMixUp as a superset of MixUp, and as converging to the entire family of DAT schemes.", "In-so-doing,", "a) establish MixUp's convergence to a subset of DAT, and thereby that it operates analogously to adversarial training; and", "b) establish UMixUp as a broader class of MixUp-like schemes that operate analogously to adversarial training.", "This is discussed in 5.", "3. Establish empirically that UMixUp's classification performance improves upon MixUp.", "This is discussed in section 6.", "Finally we note that this paper has another contribution.", "Conventionally, MixUp is only applicable to baseline models that use cross entropy loss.", "All analytical results we develop in this paper are applicable to a wider family of models using any loss function which we term target-linear.", "We define target-linearity and experiment with a new loss function called negative cosine-loss to show its potential.", "Regular (non-calligraphic) capitalized letters such as X will denote random variables, and their lowercase counterparts, e.g., x, will denote realizations of a random variable.", "Any sequence, (a 1 , a 2 , . . . , a n ) will be denoted by a n 1 .", "Likewise (A 1 , A 2 , . . . , A n ) will be denoted by A n 1 , and a sequence of sample pairs ((x 1 , x 1 ), (x 2 , x 2 ), . . . 
, (x n , x n )) denoted by (x, x ) n 1 .", "For any value a ∈ [0, 1], we will use a as a short notation for 1 − a.", "Classification Setting Consider a standard classification problem, in which one wishes to learn a classifier that predicts the class label for a sample.", "Formally, let X be a vector space in which the samples of interest live and let Y be the set of all possible labels associated with these samples.", "The set of training samples will be denoted by D, a subset of X .", "We will use t(x", ") to denote the true label of x. Let", "F be a neural network function, parameterized by θ, which maps X to another vector space Z. Let ϕ : Y → Z be a function that maps a label in Y to an element in Z such that for any y, y ∈ Y, if y = y , then ϕ(y)", "= ϕ(y ).", "In the space Z, we refer to F (x) as the model's prediction.", "With slight abuse of language, we will occasionally refer to both t(x) and ϕ(t(x)) as the \"label\" of x.", "Let : Z ×Z → R be a loss function, using which one defines an overall loss function as", "Here we have taken the notational convention that the first argument of represents the model's prediction and the second represents the target label.", "In this setting, the learning problem is formulated as minimizing L with respect to its characterizing parameters θ." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21739129722118378, 0.09302324801683426, 0.290909081697464, 0.25, 0.21276594698429108, 0.2380952388048172, 0.3333333134651184, 0.9411764740943909, 0.0833333283662796, 0.06779660284519196, 0.05714285373687744, 0.10810810327529907, 0.1818181723356247, 0.04878048226237297, 0.10526315122842789, 0.09090908616781235, 0.0555555522441864, 0.07407406717538834, 0.10256409645080566, 0.11940298229455948, 0.11538460850715637, 0.1249999925494194, 0.17777776718139648, 0.23255813121795654, 0.23999999463558197, 0.31111109256744385, 0.3478260934352875, 0.09302324801683426, 0.13333332538604736, 0.24390242993831635, 0.2631579041481018, 0.3404255211353302, 0.4126984179019928, 0.11999999731779099, 0.1621621549129486, 0.24390242993831635, 0.06896551698446274, 0.24242423474788666, 0, 0.24390242993831635, 0.2380952388048172, 0.2926829159259796, 0, 0.05714285373687744, 0, 0.11764705181121826, 0.10526315122842789, 0.25, 0.1904761791229248, 0.1666666567325592, 0.05128204822540283, 0.11999999731779099, 0.1463414579629898, 0.17391303181648254, 0.1666666567325592, 0.10526315122842789, 0.06896551698446274, 0.11764705181121826, 0.0923076868057251, 0, 0.1621621549129486, 0.23255813121795654, 0.1395348757505417, 0.1395348757505417, 0.1395348757505417 ]
SkgjKR4YwH
true
[ "We present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp" ]
[ "Plan recognition aims to look for target plans to best explain the observed actions based on plan libraries and/or domain models.", "Despite the success of previous approaches on plan recognition, they mostly rely on correct action observations. \n", "Recent advances in visual activity recognition have the potential of enabling applications such as automated video surveillance.", "Effective approaches for such problems would require the ability to recognize the plans of agents from video information.", "Traditional plan recognition algorithms rely on access to detailed planning domain models.", "One recent promising direction involves learning approximate (or shallow) domain models directly from the observed activity sequences.", "Such plan recognition approaches expect observed action sequences as inputs.", "However, visual inference results are often noisy and uncertain, typically represented as a distribution over possible actions.", "In this work, we develop a visual plan recognition framework that recognizes plans with an approximate domain model learned from uncertain visual data." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0, 0.07999999821186066, 0.07999999821186066, 0, 0, 0, 0, 0 ]
ByeOkyrtPS
false
[ "Handling Uncertainty in Visual Perception for Plan Recognition" ]
[ "We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB).", "In particular, we describe a neural module, DrKIT, that traverses textual data like a virtual KB, softly following paths of relations between mentions of entities in the corpus.", "At each step the operation uses a combination of sparse-matrix TFIDF indices and maximum inner product search (MIPS) on a special index of contextual representations.", "This module is differentiable, so the full system can be trained completely end-to-end using gradient based methods, starting from natural language inputs.", "We also describe a pretraining scheme for the index mention encoder by generating hard negative examples using existing knowledge bases.", "We show that DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset, cutting the gap between text-based and KB-based state-of-the-art by 70%.", "DrKIT is also very efficient, processing upto 10x more queries per second than existing state-of-the-art QA systems.", "Large knowledge bases (KBs), such as FreeBase and Wikidata, organize information around entities, which makes it easy to reason over their contents.", "For example, given a query like \"When was Grateful Dead's lead singer born?\", one can identify the entity Grateful Dead and the path of relations LeadSinger, BirthDate to efficiently extract the answer-provided that this information is present in the KB.", "Unfortunately, large KBs are often incomplete (Min et al., 2013) .", "While relation extraction methods can be used to populate KBs, this process is inherently error-prone, and errors in extraction can propagate to downstream tasks.", "Advances in open-domain QA (Moldovan et al., 2002; Yang et al., 2019) suggest an alternativeinstead of performing relation extraction, one could treat a large corpus as a virtual KB by answering queries with spans from the corpus.", "This ensures facts are not lost in the relation extraction process, but also poses challenges.", "One challenge is that it is relatively expensive to answer questions using QA models which encode each document in a query-dependent fashion (Chen et al., 2017; Devlin et al., 2019) -even with modern hardware (Strubell et al., 2019; Schwartz et al., 2019) .", "The cost of QA is especially problematic for certain complex questions, such as the example question above.", "If the passages stating that \"Jerry Garcia was the lead singer of Grateful Dead\" and \"Jerry Garcia was born in 1942\" are far apart in the corpus, it is difficult for systems that retrieve and read a single passage to find an answer-even though in this example, it might be easy to answer the question after the relations were explicitly extracted into a KB.", "More generally, complex questions involving sets of entities or paths of relations may require aggregating information from entity mentions in multiple documents, which is expensive.", "One step towards efficient QA is the recent work of Seo et al. 
(2018; on phrase-indexed question answering (PIQA), in which spans in the text corpus are associated with question-independent contextual representations and then indexed for fast retrieval.", "Natural language questions are then answered by converting them into vectors that are used to perform inner product search (MIPS) against the index.", "This ensures efficiency during inference.", "However, this approach cannot be directly used to answer complex queries, since by construction, the information stored in the index is about the local context around a span-it can only be used for questions where the answer can be derived by reading a single passage.", "This paper addresses this limitation of phrase-indexed question answering.", "We introduce an efficient, end-to-end differentiable framework for doing complex QA over a large text corpus that has been encoded in a query-independent manner.", "Specifically, we consider \"multi-hop\" complex queries which can be answered by repeatedly executing a \"soft\" version of the operation below, defined over a set of entities X and a relation R: Y = X.follow(R) = {x : ∃x ∈ X s.t. R(x, x ) holds} In past work soft, differentiable versions of this operation were used to answer multi-hop questions against an explicit KB (Cohen et al., 2019) .", "Here we propose a more powerful neural module which approximates this operation against an indexed corpus.", "In our module, the input X is a sparse vector representing a weighted set of entities, and the relation R is a dense feature vector, e.g. a vector derived from a neural network over a natural language query.", "The output Y is another sparse vector representing the weighted set of entities, aggregated over entity mentions in the top-k spans retrieved from the index.", "The spans in turn are retrieved using a MIPS query constructed from X and R, and we discuss pretraining schemes for the index in §2.3.", "For multi-hop queries, the output entities Y can be recursively passed as input to the next iteration of the same module.", "The weights of the entities in Y are differentiable w.r.t the MIPS queries, which allows end-to-end learning without any intermediate supervision.", "We discuss an implementation based on sparse matrix-vector products, whose runtime and memory depend only on the number of spans K retrieved from the index.", "This is crucial for scaling up to large corpora, and leads to upto 15x faster inference than existing state-of-the-art multi-hop and open-domain QA systems.", "The system we introduce is called DrKIT (for Differentiable Reasoning over a Knowledge base of Indexed Text).", "We test DrKIT on the MetaQA benchmark for complex question answering, and show that it improves on prior text-based systems by 5 points on 2-hop and 9 points on 3-hop questions, reducing the gap between text-based ad KB-based systems by 30% and 70%, respectively.", "We also test DrKIT on a new dataset of multi-hop slot-filling over Wikipedia articles, and show that it outperforms DrQA (Chen et al., 2017) and PIQA (Seo et al., 2019) adapted to this task.", "We present DrKIT, a differentiable module that is capable of answering multi-hop questions directly using a large entity-linked text corpus.", "DrKIT is designed to imitate traversal in KB over the text corpus, providing ability to follow relations in the \"virtual\" KB over text.", "We achieve state-of-theart results on MetaQA dataset for answering natural language questions, with a 9 point increase in the 3-hop case.", "We also developed an efficient 
implementation using sparse operations and inner product search, which led to a 10x increase in QPS over baseline approaches.", "We use p = 400 dimensional embeddings for the mentions and queries, and 200-dimensional embeddings each for the start and end positions.", "This results in an index of size 750MB.", "When computing A E→M , the entity to mention co-occurrence matrix, we only retain mentions in the top 50 paragraphs matched with an entity, to ensure sparsity.", "Further we initialize the first 4 layers of the question encoder with the Transformer network from pre-training.", "For the first hop, we assign Z 0 as a 1-hot vector for the least frequent entity detected in the question using an exact match.", "The number of nearest neighbors K and the softmax temperature λ were tuned on the dev set of each task, and we found K = 10000 and λ = 4 to work best.", "We pretrain the index on a combination of the MetaQA corpus, using the KB provided with MetaQA for distance data, and the Wikidata corpus.", "Table 3 ." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3448275923728943, 0.15789473056793213, 0.22857142984867096, 0, 0.1249999925494194, 0, 0, 0.11764705181121826, 0.1249999925494194, 0, 0.060606054961681366, 0.08695651590824127, 0, 0.08510638028383255, 0.06896550953388214, 0.09677419066429138, 0.0555555522441864, 0.1666666567325592, 0.05882352590560913, 0, 0.08510638028383255, 0.0952380895614624, 0.05714285373687744, 0.10666666179895401, 0.1428571343421936, 0.09302325546741486, 0.05714285373687744, 0.0555555522441864, 0.19354838132858276, 0.05882352590560913, 0.05714285373687744, 0.11764705181121826, 0.27586206793785095, 0, 0.1818181723356247, 0.19354838132858276, 0.06896550953388214, 0.060606054961681366, 0.1111111044883728, 0, 0.09999999403953552, 0.05405404791235924, 0.07407406717538834, 0.05714285373687744, 0.10526315122842789, 0.1249999925494194, 0 ]
SJxstlHFPH
true
[ "Differentiable multi-hop access to a textual knowledge base of indexed contextual representations" ]
[ "In spite of their great success, traditional factorization algorithms typically do not support features (e.g., Matrix Factorization), or their complexity scales quadratically with the number of features (e.g, Factorization Machine).", "On the other hand, neural methods allow large feature sets, but are often designed for a specific application.", "We propose novel deep factorization methods that allow efficient and flexible feature representation.", "For example, we enable describing items with natural language with complexity linear to the vocabulary size—this enables prediction for unseen items and avoids the cold start problem.", "We show that our architecture can generalize some previously published single-purpose neural architectures.", "Our experiments suggest improved training times and accuracy compared to shallow methods.", "In recent years, predictive tasks that traditionally have been solved with factorization are now being studied within the context of neural networks.", "These solutions often work as black boxes, and many times they are designed specifically for a single task with an arbitrary network that may not have much justification.", "We propose Deep Structured Factorization Machine, a family of general-purpose factorization techniques that can be used stand-alone or as a \"design pattern\" within a larger neural network.", "Our work provides some insight into how to enable general-purpose factorization within neural architectures without losing interpretability and a principled design.Previous factorization methods do not scale to large feature sets and make strong assumptions about their latent structure.", "Our main contribution is that we enable a general-purpose framework that enables efficient factorization of datasets with complex feature sets.", "For example, applications of factorization in natural language scale quadratically in the number of words in the vocabulary.", "Our solution allows inference with linear runtime complexity on the vocabulary size.", "Previous work has explored how to improve factorization's accuracy (see § 3.3) with its current limitations withstanding; alternatively, some have proposed how to make it tractable for a particular domain-for example, text BID22 .", "We believe that we are the first ones to propose an efficient general-purpose method.", "Interestingly, our experiments indicate that Structured Deep Factorization has large improvements in predictive accuracy and runtime compared to some recent ad-hoc models.", "We present a general purpose method for factorizing large feature sets; we demonstrate it in several applications, such as using text to enable prediction for unseen items and circumvent the cold-start problem.", "Future work may soften our requirement of domain knowledge-in general, our methods require feature groups and feature extraction functions defined by experts.", "We did not pursue an exhaustive comparison with previously published methods; for example, there are other algorithms that rely on Bayesian optimization BID3 to infer the item embeddings from text which we did not benchmark.", "Although we apply our methods on six datasets altogether, further experimentation may be able to situate under which conditions our methods are effective.Our methods generalize previously published single-purpose neural networks.", "For example, TagSpace BID20 ) is a very successful method, but it is limited to a single textual feature.", "With the correct feature extraction function, Structured Deep-In Factorization Machine 
can be used to implement a TagSpace model.Compared to previous general-purpose approaches, our work makes less assumptions about the training data and allows more flexibility.", "We provide evidence that the factorization hypothesis may be too restrictive-when relaxed we see higher predictive accuracy with a dramatic improvement of training speed.", "We show experimental results outperforming an algorithm specifically designed for text-even when using the same feature extraction CNN.", "This suggests that the need for ad-hoc networks should be situated in relationship to the improvements over a general-purpose method.", "To the extent of our knowledge, our work is the first to propose a general purpose factorization algorithm that enables efficient inference on arbitrary feature sets." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04999999701976776, 0, 0.0833333283662796, 0.22857142984867096, 0, 0.08695651590824127, 0.060606054961681366, 0, 0.1111111044883728, 0.12765957415103912, 0.13333332538604736, 0.07999999821186066, 0, 0.04651162400841713, 0.1599999964237213, 0.060606054961681366, 0.1428571343421936, 0, 0.045454543083906174, 0.05128204822540283, 0.0714285671710968, 0.08888888359069824, 0.05714285373687744, 0, 0.13333332538604736, 0.11428570747375488 ]
HJsk5-Z0W
true
[ "Scalable general-purpose factorization algorithm-- also helps to circumvent cold start problem." ]
[ "Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of “situated instructions”.", "These instructions can be in the form of videos, pictures, text or guiding animations, where the most helpful media among these is highly dependent on both the user and the nature of the task.", "Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself.", "The presented system, AuthAR reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces.", "Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial.", "This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.", "Physical task guidance can be delivered via Augmented Reality (AR) since assembly often requires both hands and continuous attention to the task [19] .", "Additionally, assembly tutorials have instructions directly associated with physical objects, so AR can reduce the need for excessive context switching between the instructions and the physical structure by projecting those instructions into the environment.", "These benefits have been demonstrated in fields such as Facilities Management [19] , Maintenance [47] , and Internet of Things (IoT) device management [19, 46] .", "Additionally, prior work in AR assembly guidance has shown that these benefits can translate to carrying out assembly tasks [2, 17, 20, 39] .", "While significant previous work has looked at the benefits of following tutorials in AR, much less has looked at how to author these tutorials.", "Beyond the technical requirements of an authoring interface, an ideal tutorial may look different depending on the end user of the tutorial.", "This problem is exacerbated in AR as there are many different modalities in which tutorial content can be presented.", "While one person may appreciate guiding animations in AR, another may prefer static text and images, and yet another may prefer video tutorials from one or multiple perspectives.", "With AuthAR, we present a system for building tutorials for assembly tasks that can accommodate the needs of these different types of end users.", "AuthAR generates video, and pictorial representations semi-automatically while the tutorial author completes the task.", "Furthermore, AuthAR allows tutorial authors to create and refine a tutorial in situ, integrating content authoring into the process of completing the task.", "This approach adds little additional overhead and reduces the need for post-processing of the tutorial.", "This paper presents the AuthAR system for generating mixed media assembly tutorials.", "Informed by prior work on content/tutorial authoring, and tutorial playback and walkthrough, we build the system with an eye toward non-obtrusive content authoring and generation of important components for tutorial playback, summarized in a set of design guidelines.", "We validate the system's ability to create a tutorial by stepping through the process of creating a tutorial to build a laptop stand, automatically generating an XML representation of the tutorial.", "Initial observations suggest the tool will be valuable, and possible ways the system could be extended and refined in future iterations.", "Toward validating AuthAR, we discuss our initial 
observations in testing with tutorial authors, present an example application that parses and displays the generated tutorial for end users, and explain extensibility beyond the presented use case.", "In doing so, we consider improvements to AuthAR, and design considerations for other in situ AR content authoring tools.", "AuthAR enables tutorial authors to generate mixed media tutorials semi-automatically to guide end users through the assembly process.", "We automatically record expert demonstration where possible and allow for in situ editing for refinements and additions.", "We built AuthAR with several design guidelines in mind, validated with the authoring of a tutorial for assembling a laptop stand, and discuss the extensibility to assembly of other tasks by simply loading different virtual models into AuthAR.", "We see AuthAR enabling authoring of tutorials that could reach a widespread population with mixed media tutorials flexible to the preferences of each individual user." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.2800000011920929, 0.14999999105930328, 0.12244897335767746, 0.3243243098258972, 0.2926829159259796, 0.09302324801683426, 0.12244897335767746, 0.13333332538604736, 0.1395348757505417, 0.09756097197532654, 0.15789473056793213, 0.10256409645080566, 0.1860465109348297, 0.2790697515010834, 0.11764705181121826, 0.2857142686843872, 0.17142856121063232, 0.24242423474788666, 0.2545454502105713, 0.1818181723356247, 0.1538461446762085, 0.18867923319339752, 0.19999998807907104, 0.21052631735801697, 0.2222222238779068, 0.29629629850387573, 0.3181818127632141 ]
La2hggaEcW
true
[ "We present a mixed media assembly tutorial authoring system that streamlines creation of videos, images, text and dynamic instructions in situ." ]
[ "Monitoring patients in ICU is a challenging and high-cost task.", "Hence, predicting the condition of patients during their ICU stay can help provide better acute care and plan the hospital's resources.", "There has been continuous progress in machine learning research for ICU management, and most of this work has focused on using time series signals recorded by ICU instruments.", "In our work, we show that adding clinical notes as another modality improves the performance of the model for three benchmark tasks: in-hospital mortality prediction, modeling decompensation, and length of stay forecasting that play an important role in ICU management.", "While the time-series data is measured at regular intervals, doctor notes are charted at irregular times, making it challenging to model them together.", "We propose a method to model them jointly, achieving considerable improvement across benchmark tasks over baseline time-series model.", "With the advancement of medical technology, patients admitted into the intensive care unit (ICU) are monitored by different instruments on their bedside, which measure different vital signals about patient's health.", "During their stay, doctors visit the patient intermittently for check-ups and make clinical notes about the patient's health and physiological progress.", "These notes can be perceived as summarized expert knowledge about the patient's state.", "All these data about instrument readings, procedures, lab events, and clinical notes are recorded for reference.", "Availability of ICU data and enormous progress in machine learning have opened up new possibilities for health care research.", "Monitoring patients in ICU is a challenging and high-cost task.", "Hence, predicting the condition of patients during their ICU stay can help plan better resource usage for patients that need it most in a cost-effective way.", "Prior works (Harutyunyan et al., 2017; BID4 BID18 BID16 BID1 have focused exclusively on modeling the problem using the time series signals from medical instruments.", "Expert knowledge from doctor's notes has been ignored in the literature.In this work, we use clinical notes in addition to the time-series data for improved prediction on benchmark ICU management tasks (Harutyunyan et al., 2017) .", "While the time-series data is measured continuously, the doctor notes are charted at intermittent times.", "This creates a new challenge to model continuous time series and discrete time note events jointly.", "We propose such a multi-modal deep neural network that comprises of recurrent units for the time-series and convolution network for the clinical notes.", "We demonstrate that adding clinical notes improves the AUC-PR scores on in-hospital mortality prediction (+7.8%) and modeling decompensation (+6.1%), and kappa score on length of stay forecasting (+3.4%).", "Identifying the patient's condition in advance is of critical importance for acute care and ICU management.", "Literature has exclusively focused on using time-series measurements from ICU instruments to this end.", "In this work, we demonstrate that utilizing clinical notes along with time-series data can improve the prediction performance significantly.", "In the future, we expect to improve more using advanced models for the clinical notes since text summarizes expert knowledge about a patient's condition." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.10256409645080566, 0.2222222238779068, 0.3214285671710968, 0.1463414579629898, 0.1666666567325592, 0.12765957415103912, 0.15789473056793213, 0.1249999925494194, 0.17142856121063232, 0.15789473056793213, 0.13793103396892548, 0.1818181723356247, 0.1818181723356247, 0.37735849618911743, 0.1818181723356247, 0, 0.25641024112701416, 0.2916666567325592, 0.22857142984867096, 0.24242423474788666, 0.31578946113586426, 0.1904761791229248 ]
B1ep_rBAp4
true
[ "We demostarte that using clinical notes in conjuntion with ICU instruments data improves the perfomance on ICU management benchmark tasks" ]
[ "Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly.", "However, in many cases of the real world, agents are self-interested such as employees in a company and clubs in a league.", "Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination.", "The main difficulties of expensive coordination are that", "i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses and", "ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time.", "In this work, we address this problem through an event-based deep RL approach.", "Our main contributions are threefold.", "(1) We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy.", "(2) We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and make accurate response to their behaviors.", "(3) We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of followers.", "Experiments in resource collections, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.", "Deep Multi-Agent Reinforcement Learning (MARL) has been widely used in coordinating cooperative agents to jointly complete certain tasks where the agent is assumed to be selfless (fully cooperative), i.e., the agent is willing to sacrifice itself to maximize the team reward.", "However, in many cases of the real world, the agents are self-interested, such as taxi drivers in a taxi company (fleets) and clubs in a league.", "For instance, in the example of taxi fleets (Miao et al., 2016) , drivers may prefer to stay in the area with high customer demand to gain more reward.", "It is unfair and not efficient to compel the taxi driver to selflessly contribute to the company, e.g., to stay in the low customer demand area.", "Forcing the drivers to selflessly contribute may increase the income for the company in a short-term but it will finally causes the low efficient and unsustainable of that company in the long run because the unsatisfied drivers may be demotivated and even leave the company.", "Another important example is that the government wants some companies to invest on the poverty area to achieve the fairness of the society, which may inevitably reduce the profits of companies.", "Similar to previous example, the companies may leave when the government forces them to invest.", "A better way to achieve coordination among followers and achieve the leader's goals is that the manager of the company or the government needs to provide bonuses to followers, like the taxi company pays extra bonuses for serving the customers in rural areas and the government provides subsidies for investing in the poverty areas, which we term as expensive coordination.", "In this paper, we solve the large-scale sequential expensive coordination problem with a novel RL training scheme.", "There are several lines of works related to the expensive coordination problem, including mechanism design (Nisan & 
Ronen, 2001 ) and the principal-agent model (Laffont & Martimort, 2009 ).", "However, these works focus more on static decisions (each agent only makes a single decision).", "To consider sequential decisions, the leader-follower MDP game (Sabbadin & Viet, 2013; 2016) and the RL-based mechanism design (Tang, 2017; Shen et al., 2017) are introduced but most of their works only focus on matrix games or small-scale Markov games, which cannot be applied to the case with the large-scale action or state space.", "The most related work is M 3 RL (Shu & Tian, 2019) where the leader assigns goals and bonuses by using a simple attention mechanism (summing/averaging the features together) and mind (behaviors) tracking to predict the followers' behaviors and makes response to the followers' behaviors.", "But they only consider the rule-based followers, i.e., followers with fixed preference, and ignore the followers' behaviors responding to the leader's policy, which significantly simplifies the problem and leads the unreasonability of the model.", "In the expensive coordination problem, there are two critical issues which should be considered: 1) the leader's long-term decision process where the leader has to consider both the long-term effect of itself and long-term behaviors of the followers when determining his action to incentivise the coordination among followers, which is not considered in (Sabbadin & Viet, 2013; Mguni et al., 2019) ; and 2) the complex interactions between the leader and followers where the followers will adapt their policies to maximize their own utility given the leader's policy, which makes the training process unstable and hard, if not unable, to converge in large-scale environment, especially when the leader changes his actions frequently, which is ignored by (Tharakunnel & Bhattacharyya, 2007; Shu & Tian, 2019) .", "In this work, we address these two issues in the expensive coordination problem through an abstraction-based deep RL approach.", "Our main contributions are threefold.", "(1) We model the leader's decision-making process as a semiMarkov Decision Process (semi-MDP) and propose a novel event-based policy gradient to learn the leader's policy considering the long-term effect (leader takes actions at important points rather than at each step to avoid myopic decisions.) 
(Section 4.1).", "(2) A well-performing leader's policy is also highly dependent on how well the leader knows the followers.", "To predict the followers' behaviors precisely, we show the leader-follower consistency scheme.", "Based on the scheme, the follower-aware module, the follower-specific attention module, and the sequential decision module are proposed to capture these followers' behaviors and make accurate response to their behaviors (Section 4.2).", "(3) To accelerate the training process, we propose an action abstraction-based policy gradient algorithm for the followers.", "This approach is able to reduce followers' decision space and thus simplifies the interaction between the leader and followers as well as accelerates the training process of followers (Section 4.3).", "Experiments in resource collections, navigation and predatorprey show that our method outperforms the state-of-the-art methods dramatically.", "This paper proposes a novel RL training scheme for Stackelberg Markov Games with single leader and multiple self-interested followers, which considers the leader's long-term decision process and complicated interaction between followers with three contributions.", "1) To consider the long-term effect of the leader's behavior, we develop an event-based policy gradient for the leader's policy.", "2) To predict the followers' behaviors and make accurate response to their behaviors, we exploit the leader-follower consistency to design a novel follower-aware module and follower-specific attention mechanism.", "3) We propose an action abstraction-based policy gradient algorithm to accelerate the training process of followers.", "Experiments in resource collections, navigation, and predator-prey game reveal that our method outperforms the state-of-the-art methods dramatically.", "We are willing to highlight that SMGs contribute to the RL (especially MARL) community with three key aspects: 1).", "As we mentioned in the Introduction, most of the existing MARL methods assume that all the agents are willing to sacrifice themselves to maximize the total rewards, which is not true in many real-world non-cooperative scenarios.", "On the contrary, our proposed method realistically assumes that agents are self-interested.", "Thus, SMGs provide a new scheme focusing more on the self-interested agents.", "We think this aspect is the most significant contribution to the RL community.", "2).", "The SMGs can be regarded as the multi-agent system with different roles (the leader and the followers) (Wilson et al., 2008) and our method provides a solution to that problem.", "3).", "Our methods also contribute to the hierarchical RL, i.e., it provides a non-cooperative training scheme between the high-level policy (the leaders) and the low-level policy (the followers), which plays an important role when the followers are self-interested.", "Moreover, our EBPG also propose an novel policy gradient method for the temporal abstraction structure.", "There are several directions we would like to investigate to further extend our SMG model:", "i) we will consider multiple cooperative/competitive leaders and multiple self-interested followers, which is the case in the labor market,", "ii) we will consider multi-level leaders, which is the case in the hierarchical organizations and companies and", "iii) we will consider the adversarial attacks to our SMG model, which may induce extra cost to the leader for efficient coordination.", "We believe that our work is a preliminary step towards a deeper 
understanding of the leader-follower scheme in both research and its application to society." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.1621621549129486, 0.1428571343421936, 0, 0.24242423474788666, 0.21621620655059814, 0.13333332538604736, 0, 0.4000000059604645, 0.25, 0.5, 0.22857142984867096, 0.1111111044883728, 0.1538461446762085, 0.13636362552642822, 0.19512194395065308, 0.15686273574829102, 0.0952380895614624, 0.12903225421905518, 0.16393442451953888, 0.05714285373687744, 0.13636362552642822, 0, 0.2028985470533371, 0.145454540848732, 0.1666666567325592, 0.13592232763767242, 0.1621621549129486, 0, 0.2711864411830902, 0.23529411852359772, 0.13793103396892548, 0.13636362552642822, 0.4117647111415863, 0.22727271914482117, 0.1764705777168274, 0.19999998807907104, 0.29411762952804565, 0.1860465109348297, 0.5294117331504822, 0.22857142984867096, 0.1666666567325592, 0.12244897335767746, 0.06666666269302368, 0.06666666269302368, 0.19999998807907104, 0.1702127605676651, 0.23076923191547394, 0.3636363446712494, 0.0624999962747097, 0.17142856121063232, 0.1818181723356247, 0.15789473056793213, 0.2926829159259796 ]
ryeG924twB
true
[ "We propose an event-based policy gradient to train the leader and an action abstraction policy gradient to train the followers in leader-follower Markov game." ]
[ "Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task.", "Of particular interest are the factors that cause language to be compositional---i.e., express meaning by combining words which themselves have meaning.", "Evolutionary linguists have found that in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality.", "In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language.", "We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization.", "Compositionality is an important structure of language that reflects a disentangled understanding of the world -enabling the expression of infinitely many concepts using finitely many elements.", "Agents that have compositional understandings of the world generalize in obviously correct ways even in the face of limited training examples (Lake & Baroni, 2018) .", "For example, an agent with a compositional understanding of blue squares and purple triangles should also understand purple squares without directly observing any of them.", "Developing artificial agents that can ground, understand, and produce compositional (and therefore more interpretable) language could greatly improve generalization to new instances and ease human-AI interactions.", "In building theories of how compositionality emerges in human languages, work in evolutionary linguistics looks to the process of cultural transmission (Kirby, 2001; Kirby et al., 2008) .", "Cultural transmission of language occurs when a group of agents pass their language on to a new group of agents, e.g. parents who teach their children to speak as they do.", "Because this education is incomplete and biased, it allows the language itself to change over time via a process known as cultural evolution.", "This paradigm (Kirby et al., 2014) explains the emergence of compositionality as a result of expressivity and compressibility -i.e. to be most effective, a language should be expressive enough to differentiate between all possible meanings (e.g., objects) and compressible enough to be learned easily.", "Work in the evolutionary linguistics community has shown that over multiple 'generations' these competing pressures result in the emergence of compositional languages both in simulation (Kirby, 2001 ) and with human subjects (Kirby et al., 2008) .", "These studies aim to understand humans whereas we want to understand and design artificial neural networks.", "Approaching the problem from another direction, recent work in AI has studied language emergence in such multi-agent, goal-driven tasks.", "These works have demonstrated that agent languages will emerge to enable coordination-centric tasks to be solved without direct or even indirect language supervision (Foerster et al., 2016; Sukhbaatar et al., 2016; Lazaridou et al., 2017; Das et al., 2017) .", "However, the resulting languages are usually not compositional and are difficult to interpret, even by other machines (Andreas et al., 2017) .", "Some existing work has studied means to encourage compositional language formation (Mordatch & Abbeel, 2018; , but these settings study fixed populations of agents -i.e. 
examining language within a single generation.", "In this work we bridge these two areas -examining the effect of generational cultural transmission on the compositionality of emergent languages in a multi-agent, goal-driven setting.", "We introduce cultural transmission into language emergence between neural agents.", "The starting point of our study is a goal-oriented dialog task (similar to that of ), summarized in Fig. 1a .", "During learning we periodically replace some agents with new ones (gray agents).", "These new agents do not know any language, but instead of creating one they learn it from older agents.", "This creates generations of language that become more compositional over time.", "We study this in the context of a cooperative dialog-based reference game involving two agents communicating in discrete symbols ; an example dialog is shown in Fig. 1a .", "To examine cultural transmission, we extend this setting to a population of agents (Fig. 1b) and introduce a simple mechanism to induce the expressivity and compressibility pressures inherent in cultural transmission.", "Specifically, we periodically re-initialize some subset of the agents in the population.", "In order to perform well at the task, the population's emergent language must be sufficiently expressive to reference all the objects (expressivity) and must be easily learnable by these 'new' agents (compressibility).", "The new agents have a randomized language whereas the surviving agents already know a grounded language.", "This \"knowledge gap\" creates an implicit 'teaching' setting that is analogous to the explicit transmission stage in models of iterative learning (Kirby, 2001 ).", "Through our experiments and analysis, we show that periodic agent replacement is an effective way to induce cultural transmission and yields more compositionally generalizable language in our setting.", "To summarize, our contributions are: -We propose a method for inducing implicit cultural transmission in neural language models.", "-We introduce new metrics to measure the similarity between agent languages and verify cultural transmission has occurred as a result of our periodic agent replacement protocol.", "-We show our cultural transmission procedure induces compositionality in neural language models, going from 13% accuracy on a compositionally novel test set to 46% in the best configuration.", "Further, we show this is complementary with previous priors which encourage compositionality.", "In this work we investigated cultural transmission in deep neural dialog agents, applying it to language emergence.", "The evolutionary linguistics community has long used cultural transmission to explain how compositional languages could have emerged.", "The deep learning community, having recently become interested in language emergence, has not investigated that link until now.", "Instead of explicit models of cultural transmission familiar in evolutionary linguistics, we favor an implicit model where language is transmitted from generation to generation only because it helps agents achieve their goals.", "We show that this does indeed cause cultural transmission and compositionality.", "Future work.", "While our work used an implicit version of cultural transmission, we are interested in investigating the effect of explicit versions of cultural transmission on language structure.", "In another direction, cultural transmission may also provide an appropriate prior for neural representations of non-language information." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.10526315122842789, 0.23255813121795654, 0.23255813121795654, 0.375, 0.052631575614213943, 0.10526315122842789, 0, 0.19512194395065308, 0.2380952388048172, 0.1463414579629898, 0.10256409645080566, 0.1090909019112587, 0.11999999731779099, 0.13333332538604736, 0.11764705181121826, 0.1666666567325592, 0.10810810327529907, 0.12765957415103912, 0.25, 0.4615384638309479, 0.17142856121063232, 0.0714285671710968, 0.11764705181121826, 0.07407406717538834, 0.1428571343421936, 0.23255813121795654, 0.14814814925193787, 0.09302324801683426, 0.06896550953388214, 0.19999998807907104, 0.2380952388048172, 0.23529411852359772, 0.24390242993831635, 0.3255814015865326, 0.1428571343421936, 0.3030303120613098, 0.24242423474788666, 0.11764705181121826, 0.260869562625885, 0.37037035822868347, 0.1538461446762085, 0.1818181723356247 ]
r1gzoaNtvr
true
[ "We use cultural transmission to encourage compositionality in languages that emerge from interactions between neural agents." ]
[ "Based on our observation that there exists a dramatic drop for the singular values of the fully connected layers or a single feature map of the convolutional layer, and that the dimension of the concatenated feature vector almost equals the summation of the dimension on each feature map, we propose a singular value decomposition (SVD) based approach to estimate the dimension of the deep manifolds for a typical convolutional neural network VGG19.", "We choose three categories from the ImageNet, namely Persian Cat, Container Ship and Volcano, and determine the local dimension of the deep manifolds of the deep layers through the tangent space of a target image.", "Through several augmentation methods, we found that the Gaussian noise method is closer to the intrinsic dimension, as by adding random noise to an image we are moving in an arbitrary dimension, and when the rank of the feature matrix of the augmented images does not increase we are very close\n", "to the local dimension of the manifold.", "We also estimate the dimension of the deep manifold based on the tangent space for each of the maxpooling layers.", "Our results show that the dimensions of different categories are close to each other and decline quickly along the convolutional layers and fully connected layers.", "Furthermore, we show that the dimensions decline quickly inside the Conv5 layer.", "Our work provides new insights for the intrinsic structure of deep neural networks and helps unveiling the inner organization of the black box of deep neural networks.", "To have a better understanding of deep neural networks, a recent important trend is to analyze the structure of the high-dimensional feature space.", "Capitalizing on the manifold hypothesis BID1 BID12 , the distribution of the generated data is assumed to concentrate in regions of low dimensionality.", "In other words, it is assumed that activation vectors of deep neural networks lie on different low dimensional manifolds embedded in high dimensional feature space.Note that the rationality of many manifold learning algorithms based on deep learning and autoencoders is that one learns an explicit or implicit coordinate system for leading factors of variation.", "These factors can be thought of as concepts or abstractions that help us understand the rich variability in the data, which can explain most of the structure in the unknown data distribution.", "See BID3 for more information.The dimension estimation is crucial in determining the number of variables in a linear system, or in determining the number of degrees of freedom of a dynamic system, which may be embedded in the hidden layers of neural networks.", "Moreover, many algorithms in manifold learning require the intrinsic dimensionality of the data as a crucial parameter.", "Therefore, the problem of estimating the intrinsic dimensionality of a manifold is of great importance, and it is also a crucial start for manifold learning.Unfortunately, the manifold of interest in AI (especially for deep neural networks), is such a rugged manifold with a great number of twists, ups and downs with strong curvature.", "Thus, there is a fundamental difficulty for the manifold learning, as raised in BID0 , that is, if the manifolds are not very smooth, one may need a considerable number of training examples to cover each one of these variations, and there is no chance for us to generalize to unseen variations.Our work is based on an important characterization of the manifold, namely, the set of its tangent hyperplanes.", 
"For a point p on a d-dimensional manifold, the tangent hyperplane is given by a local basis of d vectors that span the local directions of variations allowed on the manifold.", "As illustrated in Figure 1 , these local directions specify how one can change p infinitesmally while staying on the manifold.", "Figure 1 : A two-dimensional manifold with a small region where data points concentrate, along with a tangent plane and associated tangent directions, forming a basis that specifies the directions of small moves one can make to stay on the manifold.Based on above analysis, our work focuses on a thorough exploration of the local hyperplane dimension of the activation manifold in deep neural networks.", "Creating an artificial data cluster concentrated in regions of the local tangent hyperplane, we apply SVD to the data cluster in different layers or feature maps in neural networks.", "Through thorough analysis, we reach the following fascinating results.•", "There exists a dramatic drop for the singular values of the fully connected layers or a single feature map of the convolutional layer.•", "For convolutional layers, the dimension of the concatenated feature vector almost equals the summation of the dimension on each feature map.•", "The dimensions of different image categories are close and the dimension declines quickly along the layers.To our knowledge this is the first thorough exploration of manifold dimension on very deep neural networks. We", "wish our work sheds light on new understandings and inspires further investigations on the structure of manifolds in deep neural networks.", "Through extensive experiments, we found that there exists a dramatic drop for the singular values of the fully connected layers or a single feature map of the convolutional layer, and the dimension of the concatenated feature vector almost equals the summation of the dimension of each feature map for several feature maps randomly picked.", "Based on the interesting observations we obtained, we developed an efficient and effective SVD based method to estimate the local dimension of deep manifolds in the VGG19 neural network.", "We found that the dimensions are close for different images of the same category and even images of different categories, and the dimension declines quickly along the convolutional layers and fully connected layers.", "Our results supports the lowdimensional manifold hypothesis for deep networks, and our exploration helps unveiling the inner organization of deep networks.", "Our work will also inspire further possibility of observing every feature map separately for the dimension of convolutional layers, rather than directly working on the whole activation feature maps, which is costly or even impossible for the current normal computing power." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.31111109256744385, 0.17543859779834747, 0.5, 0.4117647111415863, 0.14999999105930328, 0.06896550953388214, 0.2631579041481018, 0.31578946113586426, 0.2631579041481018, 0.28125, 0.13636362552642822, 0.2857142686843872, 0.29411762952804565, 0.25925925374031067, 0.18918918073177338, 0.2380952388048172, 0.20512819290161133, 0.3529411852359772, 0.380952388048172, 0.0714285671710968, 0.15789473056793213, 0.1764705777168274, 0.3333333432674408, 0.31578946113586426, 0.145454540848732, 0.5, 0.1904761791229248, 0.2702702581882477, 0.14814814925193787 ]
Sy3nGCYXz
true
[ "We propose a SVD based method to explore the local dimension of activation manifold in deep neural networks." ]
[ "Large pre-trained Transformers such as BERT have been tremendously effective for many NLP tasks.", " However, inference in these large-capacity models is prohibitively slow and expensive", ". Transformers are essentially a stack of self-attention layers which encode each input position using the entire input sequence as its context", ". However, we find that it may not be necessary to apply this expensive sequence-wide self-attention over at all layers", ". Based on this observation, we propose a decomposition to a pre-trained Transformer that allows the lower layers to process segments of the input independently enabling parallelism and caching", ". We show that the information loss due to this decomposition can be recovered in the upper layers with auxiliary supervision during fine-tuning", ". We evaluate de-composition with pre-trained BERT models on five different paired-input tasks in question answering, sentence similarity, and natural language inference", ". Results show that decomposition enables faster inference (up to 4x), significant memory reduction (up to 70%) while retaining most (up to 99%) of the original performance", ". We will release the code at<anonymized url>.", "Inference in large Transformer-based NLP models such as BERT (Devlin et al., 2019) requires prohibitively high-levels of compute, making it expensive to support large volume processing in data centers, and almost infeasible to run on resource constrained mobile devices.", "These Transformer models create effective representations using self-attention, a mechanism that allows them to effectively account for wide textual contexts.", "However, applying self-attention over the entire input for all layers is computationally expensive.", "This raises a natural question: Is self-attention over the entire input necessary in all of the layers?", "Previous studies (Tenney et al., 2019; Hao et al., 2019; Clark et al., 2019b) have shown that lower layers tend to capture syntactic phenomena that mostly depend on local contexts and that higher layers capture more semantic phenomena that are relevant to downstream tasks, which depend on longer global contexts.", "This suggests that considering only local context in lower layers of Transformer and considering full global context in upper layers can provide speedup at a very small cost in terms of effectiveness.", "In this work we focus on paired-input NLP tasks such as reading comprehension, natural language inference and sentence pair similarity.", "These tasks provide a natural boundary for the locality of text (e.g., question vs. 
passage in QA).", "Because of this natural decomposition in two segments, we can compute representations for lower layers with only the local segment as the context and compute representations for upper layers with both segments as the context.", "This decomposition technique has multiple benefits: It allows for parallel processing of each segment, caching of segments that are available offline, and a significant reduction in runtime memory.", "Moreover, since the architecture remains largely same, the original pre-trained weights can be reused in the decomposed model.", "To compensate for the differences in the decomposed setting, we augment the fine-tuning loss on the target task with a distillation loss that minimizes the output-level as well as layer-level divergences.", "We evaluate the decomposition idea using the BERT model on five different pairwise tasks.", "The decomposition achieves substantial speedup (2 to 4.3x) and reduction in memory (51.1% to 76.8%) for only small loss in effectiveness (0.2 to 1.8 points).", "Moreover, we find that with decomposition the larger BERT model can even run faster than the original smaller BERT model, while still being more accurate.", "Transformers have improved the effectiveness of NLP tools by their ability to incorporate large contexts effectively in multiple layers.", "This however imposes a significant complexity cost.", "In this work, we showed that modeling such large contexts may not always be necessary and leverage this insight to build a decomposition of the Transformer model that provides substantial improvements in inference speed, memory reduction, while retaining most of the original model's accuracy.", "This decomposition model provides a simple yet strong starting point for efficient models as NLP moves towards increasingly larger models handling wider contexts." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08888888359069824, 0.190476194024086, 0.2745097875595093, 0.19999998807907104, 0.25, 0.38461539149284363, 0.11538460850715637, 0.2641509473323822, 0.10526315122842789, 0.23529411852359772, 0.15686273574829102, 0.22727271914482117, 0.21276594698429108, 0.08955223113298416, 0.21052631735801697, 0.039215680211782455, 0.1599999964237213, 0.24561403691768646, 0.24137930572032928, 0.21276594698429108, 0.178571417927742, 0.1818181723356247, 0.10344827175140381, 0.25925925374031067, 0.3199999928474426, 0.052631575614213943, 0.28169015049934387, 0.22641508281230927 ]
B1gKVeBtDH
true
[ "Inference in large Transformers is expensive due to the self-attention in multiple layers. We show a simple decomposition technique can yield a faster, low memory-footprint model that is just as accurate of the original models." ]
[ "Exploration while learning representations is one of the main challenges Deep\n", "Reinforcement Learning (DRL) faces today.", "As the learned representation is dependant in the observed data, the exploration strategy has a crucial role.", "The popular DQN algorithm has improved significantly the capabilities of Reinforcement\n", "Learning (RL) algorithms to learn state representations from raw data, yet, it uses\n", "a naive exploration strategy which is statistically inefficient.", "The Randomized\n", "Least Squares Value Iteration (RLSVI) algorithm (Osband et al., 2016), on the\n", "other hand, explores and generalizes efficiently via linearly parameterized value\n", "functions.", "However, it is based on hand-designed state representation that requires\n", "prior engineering work for every environment.", "In this paper, we propose a Deep\n", "Learning adaptation for RLSVI.", "Rather than using hand-design state representation, we use a state representation that is being learned directly from the data by a\n", "DQN agent.", "As the representation is being optimized during the learning process,\n", "a key component for the suggested method is a likelihood matching mechanism,\n", "which adapts to the changing representations.", "We demonstrate the importance of\n", "the various properties of our algorithm on a toy problem and show that our method\n", "outperforms DQN in five Atari benchmarks, reaching competitive results with the\n", "Rainbow algorithm.", "In Reinforcement Learning (RL), an agent seeks to maximize the cumulative rewards obtained from interactions with an unknown environment (Sutton et al., 1998) .", "Since the agent can learn only by its interactions with the environment, it faces the exploration-exploitation dilemma: Should it take actions that will maximize the rewards based on its current knowledge or instead take actions to potentially improve its knowledge in the hope of achieving better future performance.", "Thus, to find the optimal policy the agent needs to use an appropriate exploration strategy.", "Classic RL algorithms were designed to face problems in the tabular settings where a table containing a value for each state-action pair can be stored in the computer's memory.", "For more general settings, where generalization is required, a common practice is to use hand-designed state representation (or state-action), upon which a function approximation can be learned to represent the value for each state and action.", "RL algorithms based on linear function approximation have demonstrated stability, data efficiency and enjoys convergence guarantees under mild assumptions (Tsitsiklis & Van Roy, 1997; Lagoudakis & Parr, 2003) .", "They require that the desired learned function, e.g. 
Qfunction, will be a linear combination of the state representation.", "This is, of course, a hard constraint as the representation is hand-designed, where the designer often does not know how the optimal value-function will look like.", "Furthermore, hand-designed representation is environment-specific and requires re-designing for every new environment.", "The DQN algorithm (Mnih et al., 2015) has changed RL.", "Using Deep Neural Networks (DNN) as function approximators, the DQN algorithm enabled the learning of policies directly from raw highdimensional data and led to unprecedented achievements over a wide variety of domains (Mnih et al., 2015) .", "Over the years, many improvements to DQN were presented, suggesting more fitting network architectures (Wang et al., 2015) , reducing overestimation (Van Hasselt et al., 2016; Anschel et al., 2017) or improving its data efficiency .", "Despite its great success, DQN uses the overly simple -greedy strategy for exploration.", "This strategy is one of the simplest exploration strategies that currently exist.", "The agent takes random action with probability and takes the optimal action according to its current belief with probability 1 − .", "This strategy is commonly used despite its simplicity and proven inefficiency (Osband et al., 2016) .", "The main shortcoming of -greedy and similar strategies derives from the fact that they do not use observed data to improve exploration.", "To explore, it takes a completely random action, regardless of the experience obtained by the agent.", "Thompson Sampling (TS) (Thompson, 1933) , is one of the oldest heuristics to address the 'exploration/exploitation' trade-off in sequential decision-making problems.", "Its variations were proposed in RL (Wyatt, 1998; Strens, 2000) and various bandits settings (Chapelle & Li, 2011; Scott, 2010) .", "For Multi-Armed Bandit (MAB) problems, TS is very effective both in theory (Agrawal & Goyal, 2012; and practice (Chapelle & Li, 2011) .", "Intuitively, TS randomly takes actions according to the probability it believes to be optimal.", "In practice, a prior distribution is assumed over the model's parameters p(w), and a posterior distribution p(w|D) is computed using the Bayes theorem, where D is the observed data.", "TS acts by sampling models from the posterior distribution, and plays the best action according to these samples.", "Randomized Least Squares Value Iteration (Osband et al., 2016) is an RL algorithm which uses linear function approximation and is inspired by Thompson Sampling.", "It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models.", "This algorithm was proven to be efficient in tabular settings, with a bound on the expected regret that match the worst-case lower bound up to logarithmic factors.", "More importantly, it demonstrates efficiency even when generalization is required.", "Alas, as it assumes a linearly parametrized value function on a hand-designed state representation, the success of this algorithm crucially depends on the quality of the given state representation.", "In this paper, we present a new DRL algorithm that combines the exploration mechanism of RLSVI with the representation learning mechanism of DQN; we call it the Deep Randomized Least Squares Value Iteration (DRLSVI) algorithm.", "We use standard DQN to learn state representation and explores by using the last layer's activations of DQN as state representation for RLSVI.", "To compensate for the constantly 
changing representation and the finite memory of DQN, we use a likelihood matching mechanism, which allows the transfer of information held by an old representation regarding past experience.", "We evaluate our method on a toy-problem -the Augmented Chain environment -for a qualitative evaluation of our method on a small MDP with a known optimal value function.", "Then, we compare our algorithm to the DQN and Rainbow algorithms on several Atari benchmarks.", "We show that it outperforms DQN both in learning speed and performance.", "A Deep Learning adaptation to RLSVI was presented which learn the state representation directly from the data.", "We demonstrated the different properties of our method in experiments and showed the promise of our method.", "We hope to further reduce the complexity and running time of our algorithm in future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.13333332538604736, 0, 0.0952380895614624, 0.08695651590824127, 0, 0.3478260934352875, 0, 0, 0, 0.11764705181121826, 0.2857142686843872, 0, 0, 0, 0, 0.13333332538604736, 0.0833333283662796, 0, 0.060606054961681366, 0.0416666641831398, 0, 0, 0, 0, 0.0714285671710968, 0.05882352590560913, 0, 0, 0.08888888359069824, 0, 0, 0.09090908616781235, 0, 0, 0.0624999962747097, 0.07999999821186066, 0.06666666269302368, 0, 0, 0, 0, 0, 0.29411762952804565, 0, 0, 0, 0.060606054961681366, 0.3589743673801422, 0.06666666269302368, 0.05128204822540283, 0.0624999962747097, 0, 0, 0.307692289352417, 0.08695651590824127, 0.07692307233810425 ]
Syetja4KPH
true
[ "A Deep Learning adaptation of Randomized Least Squares Value Iteration" ]
[ "The complexity of large-scale neural networks can lead to poor understanding of their internal details.", "We show that this opaqueness provides an opportunity for adversaries to embed unintended functionalities into the network in the form of Trojan horse attacks.", "Our novel framework hides the existence of a malicious network within a benign transport network.", "Our attack is flexible, easy to execute, and difficult to detect.", "We prove theoretically that the malicious network's detection is computationally infeasible and demonstrate empirically that the transport network does not compromise its disguise.", "Our attack exposes an important, previously unknown loophole that unveils a new direction in machine learning security.", "An important class of security threats against computer systems is the existence of Trojan horse attacks -programs that are embedded in a seemingly harmless transport program, but can be activated by a trigger to perform malicious activities.", "This threat is common in software, where the malicious program may steal user information or modify the underlying system's behavior (Felt et al., 2011) .", "Similar attacks have also been studied in depth for hardware circuits (Chakraborty et al., 2009) .", "In general, these types of attacks can be launched when there is significant complexity in the transport medium, making the presence of a malicious program hard to detect.", "Due to the complex architecture of modern neural networks, both the model and their behavior are arguably obscure to humans (Ribeiro et al., 2016; Selvaraju et al., 2017; Koh & Liang, 2017) .", "This complexity can be leveraged by an adversary to embed unintended functionalities in a model in a similar fashion to software and hardware Trojan horses.", "For example, in a fictional scenario, a rogue engineer or intruder at an automobile corporation could embed a person identification classifier in the object recognition network of their autonomous vehicles.", "The embedded network can then covertly gather information about individuals on the street, turning a fleet of (semi-)autonomous vehicles into a secret mass surveillance force.", "Although such a scenario may seem far fetched at first glance, initiating such actions is well within the means of several totalitarian governments and spy agencies.", "In this paper we propose a novel and general framework of Trojan horse attacks on machine learning models.", "Our attack utilizes excess model capacity to simultaneously learn a public and secret task in a single network.", "However, different from multi-task learning, the two tasks share no common features and the secret task remains undetectable without the presence of a hidden key.", "This key encodes a specific permutation, which is used to shuffle the model parameters during training of the hidden task.", "The gradient updates for the concealed model act similar to benign additive noise with respect to the gradients of the public model (Abadi et al., 2016) , which behaves indistinguishable to a standard classifier on the public task.", "We demonstrate empirically and prove theoretically that the identity and presence of a secret task cannot be detected without knowledge of the secret permutation.", "In particular, we prove that the decision problem to determine if the model admits a permutation that triggers a secret functionality is NP-complete.", "We experimentally validate our method on a standard ResNet50 network (He et al., 2016) and show that, without any increase in parameters, 
the model can achieve the same performance on the intended and on the secret tasks as if it was trained exclusively on only one of them.", "Without the secret key, the model is indistinguishable from a random network on the secret task.", "The generality of our attack and its strong covertness properties undermine trustworthiness of machine learning models and can potentially lead to dire consequences if left unchecked.", "We introduced TrojanNet, and formulate a potentially menacing attack scenario.", "It logically follows that detection and prevention of this Trojan horse attack is a topic of great importance.", "However, this may be a daunting task, as we show theoretically that the detection problem can be formulated as an NP-complete decision problem, and is therefore computationally infeasible in its general form.", "While strategies such as Markov Chain Monte Carlo have been used in similar contexts to efficiently reduce the search space (Diaconis, 2009) , the number of candidate permutations may be too large in our case.", "In fact, the number of permutations for a single convolutional layer of ResNet50 can be upwards of (64 × 64 × 3 × 3)!", "≈ 1.21 × 10 152336 !", "While our paper focuses on malicious uses of the TrojanNet framework, it can potentially be utilized for improving the security of neural networks as well.", "Our framework has striking resemblance to symmetric key encryption in cryptography (Katz & Lindell, 2014) .", "This enables the sharing of neural networks across an insecure, monitored communication channel in a similar fashion as steganography (Petitcolas et al., 1999) -the hiding of structured signals in files such as images, audio or text.", "We hope to explore benevolent uses of TrojanNet in future work.", "A APPENDIX" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25641024112701416, 0.2916666567325592, 0.21052631735801697, 0.05714285373687744, 0.08695651590824127, 0.0476190410554409, 0.2666666507720947, 0.04081632196903229, 0.04878048226237297, 0.23529411852359772, 0.18518517911434174, 0.25531914830207825, 0.1538461446762085, 0.20408162474632263, 0.11999999731779099, 0.1860465109348297, 0.1904761791229248, 0.1666666567325592, 0.22727271914482117, 0.2142857164144516, 0.17777776718139648, 0.17777776718139648, 0.20895521342754364, 0.21052631735801697, 0.12244897335767746, 0.05714285373687744, 0.1904761791229248, 0.1818181723356247, 0.13793103396892548, 0.2666666507720947, 0, 0.2916666567325592, 0.04999999701976776, 0.16949151456356049, 0.1111111044883728 ]
BJeGA6VtPS
true
[ "Parameters of a trained neural network can be permuted to produce a completely separate model for a different task, enabling the embedding of Trojan horse networks inside another network." ]
[ "In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) --- an alternative scheme of GANs that can serve as a tool for generative model analysis.", "While the latent space of a typical GAN consists of input vectors, randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in a generator network.", "As we show, this design allows to associate different layers of the generator with different regions of the latent space, providing their natural interpretability.", "With experiments on standard benchmarks, we demonstrate that RPGAN reveals several interesting insights about roles that different layers play in the image generation process.", "Aside from interpretability, the RPGAN model also provides competitive generation quality and allows efficient incremental learning on new data.", "Nowadays, deep generative models are an active research direction in the machine learning community.", "The dominant methods for generative modeling, such as Generative Adversarial Networks (GANs), are currently able to produce diverse photorealistic images (Brock et al., 2019; Karras et al., 2019) .", "These methods are not only popular among academicians, but are also a crucial component in a wide range of applications, including image editing (Isola et al., 2017; , super-resolution (Ledig et al., 2017) , video generation (Wang et al., 2018) and many others.", "Along with practical importance, a key benefit of accurate generative models is a more complete understanding of the internal structure of the data.", "Insights about the data generation process can result both in the development of new machine learning techniques as well as advances in industrial applications.", "However, most state-of-the-art generative models employ deep multi-layer architectures, which are difficult to interpret or explain.", "While many works investigate interpretability of discriminative models (Zeiler & Fergus, 2014; Simonyan et al., 2013; Mahendran & Vedaldi, 2015) , only a few (Chen et al., 2016; Bau et al., 2019) address the understanding of generative ones.", "In this work, we propose the Random Path GAN (RPGAN) -an alternative design of GANs that allows natural interpretability of the generator network.", "In traditional GAN generators, the stochastic component that influences individual samples is a noisy input vector, typically sampled from the standard Gaussian distribution.", "In contrast, RPGAN generators instead use stochastic routing during the forward pass as their source of stochasticity.", "In a nutshell, the RPGAN generator contains several instances of the corresponding layer.", "For each sample, only one random instance of each layer is activated during generation.", "The training of the RPGAN can then be performed in the same adversarial manner as in traditional GANs.", "In the sections below, we show how RPGAN allows to understand the factors of variation captured by the particular layer and reveals several interesting findings about the image generation process, e.g. 
that different layers are \"responsible for\" coloring or object location.", "As a practical advantage, RPGANs can be efficiently updated to new data via the simple addition of new instances to the bucket, avoiding re-training the full model from scratch.", "Finally, we observe that RPGANs allow the construction of generative models without nonlinearities, which can significantly speed up the generation process for fully-connected layers.", "In summary, the main contributions of our paper are the following:", "• We introduce RPGAN -GAN with an alternative source of stochasticity, based on random routing.", "While being close to traditional GANs in terms of generation quality, RPGAN allows natural interpretability and efficient model updates with new data.", "• With extensive experiments on standard benchmarks we reveal several insights about the image generation process.", "Many of our insights confirm and extend recent findings from Bau et al. (2019).", "Note that our scheme is more general compared to the technique from Bau et al. (2019), as RPGAN does not require labeled datasets or pretrained segmentation models.", "• We open-source the PyTorch implementation of RPGAN with common generator architectures.", "The rest of this paper is organized as follows.", "In Section 2 we review relevant ideas from prior art.", "The proposed Random Path GAN design is described in Section 3 and experimentally evaluated in Section 4.", "Section 5 concludes the paper and discusses possible directions for future work.", "In this paper, we address the interpretability of generative models.", "In particular, we have introduced RPGAN, an alternative design of generative adversarial networks, which allows natural interpretation of different generator layers by using random routing as a source of stochasticity.", "With experiments on several datasets, we provide evidence that different layers are responsible for the different factors of variation in generated images, which is consistent with findings from previous work.", "As a possible direction of future research, one can use the RPGAN analysis to construct efficient models, e.g., via identification of redundant parts of the generator for pruning or inference speedup.", "If the number of blocks is too low, the resulting latent space appears to have insufficient cardinality to cover the dataset.", "On the other hand, too high a number of blocks results in a difficult training procedure and also fails." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.40816324949264526, 0.17777776718139648, 0.09302324801683426, 0.08888888359069824, 0.04878048226237297, 0.2222222238779068, 0.12244897335767746, 0.06896550953388214, 0.1463414579629898, 0.1395348757505417, 0.15789473056793213, 0.14814814925193787, 0.1860465109348297, 0.09090908616781235, 0.05128204822540283, 0.05882352590560913, 0.05714285373687744, 0.15789473056793213, 0, 0.08510638028383255, 0.2222222238779068, 0, 0.37837836146354675, 0.09090908616781235, 0.052631575614213943, 0, 0.08163265138864517, 0.05714285373687744, 0.06451612710952759, 0, 0.1621621549129486, 0.05882352590560913, 0.1875, 0.3199999928474426, 0.15686273574829102, 0.11538460850715637, 0, 0.09999999403953552 ]
BJgctpEKwr
true
[ "We introduce an alternative GAN design based on random routes in generator, which can serve as a tool for generative models interpretability." ]
[ "Deep artificial neural networks can achieve an extremely small difference between training and test accuracies on identically distributed training and test sets, which is a standard measure of generalization.", "However, the training and test sets may not be sufficiently representative of the empirical sample set, which consists of real-world input samples.", "When samples are drawn from an underrepresented or unrepresented subset during inference, the gap between the training and inference accuracies can be significant.", "To address this problem, we first reformulate a classification algorithm as a procedure for searching for a source code that maps input features to classes.", "We then derive a necessary and sufficient condition for generalization using a universal cognitive similarity metric, namely information distance, based on Kolmogorov complexity.", "Using this condition, we formulate an optimization problem to learn a more general classification function.", "To achieve this end, we extend the input features by concatenating encodings of them, and then train the classifier on the extended features.", "As an illustration of this idea, we focus on image classification, where we use channel codes on the input features as a systematic way to improve the degree to which the training and test sets are representative of the empirical sample set.", "To showcase our theoretical findings, considering that corrupted or perturbed input features belong to the empirical sample set, but typically not to the training and test sets, we demonstrate through extensive systematic experiments that, as a result of learning a more general classification function, a model trained on encoded input features is significantly more robust to common corruptions, e.g., Gaussian and shot noise, as well as adversarial perturbations, e.g., those found via projected gradient descent, than the model trained on uncoded input features.", "Generalization error in deep learning is typically defined as the difference between training and test errors measured on identically distributed training and test sets.", "This traditional approach fails to take into account how representative these sets are of the empirical sample set from which input samples are drawn at inference time.", "When the training and test sets are not sufficiently representative of the empirical sample set, the difference between training and inference errors can be significant, thus rendering the learned classification function ineffective.", "The lack of the latter kind of generalization results in unreliable decisions, raising questions about how robust, fair, and safe a learned classification function is (Varshney & Alemzadeh, 2017) .", "A natural question then arises: is there a necessary and sufficient condition ensuring that deep learning classifiers generalize in this broader sense?", "If so, how can this condition be satisfied in a real-world setting?", "To answer these questions, we draw on algorithmic information theory, which proposes a complexity measure, Kolmogorov complexity, as the absolute information content of any object, e.g., a computer program, function, or set.", "After deriving a necessary and sufficient condition for generalization using the information distance (Bennett et al., 1998) , which is a universal cognitive similarity metric based on Kolmogorov complexity, and formulating an optimization problem for generalization, we turn our attention to coding theory in order to learn a more general classification function by 
extending the input features to a classifier with systematically generated encodings of the original features.", "We presented a theoretical and experimental framework for defining and understanding generalization in deep learning, defined as the difference between training and inference errors.", "The theoretical findings and experimental results show that a learned classification function must be sufficiently complex for a classification task in order to be closer to the true classification function.", "Another insight from this study is that concatenating encodings of input features to the original input features helps to achieve generalization in deep learning by enabling the classifier to learn relations between features not captured by the original inputs.", "Experiments demonstrate that a model trained on arbitrarily encoded input features is more robust to common corruptions and adversarial perturbations and that using more encodings may be beneficial to minimize the generalization error.", "Designing input codes to help a DNN learn a more general classification function with a minimum number of encodings is an intriguing research direction to achieve reliability in machine learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.04444443807005882, 0.04255318641662598, 0.12765957415103912, 0.25531914830207825, 0.04999999701976776, 0.13333332538604736, 0.13333332538604736, 0.15217390656471252, 0.260869562625885, 0, 0.038461532443761826, 0.11320754140615463, 0.21276594698429108, 0.10810810327529907, 0.17543859779834747, 0.24096384644508362, 0.42553192377090454, 0.2448979616165161, 0.145454540848732, 0.1111111044883728, 0.11538460850715637 ]
Bke7MANKvS
true
[ "We present a theoretical and experimental framework for defining, understanding, and achieving generalization, and as a result robustness, in deep learning by drawing on algorithmic information theory and coding theory." ]
[ "Many approaches to causal discovery are limited by their inability to discriminate between Markov equivalent graphs given only observational data.", "We formulate causal discovery as a marginal likelihood based Bayesian model selection problem.", "We adopt a parameterization based on the notion of the independence of causal mechanisms which renders Markov equivalent graphs distinguishable.", "We complement this with an empirical Bayesian approach to setting priors so that the actual underlying causal graph is assigned a higher marginal likelihood than its alternatives.", "Adopting a Bayesian approach also allows for straightforward modeling of unobserved confounding variables, for which we provide a variational algorithm to approximate the marginal likelihood, since this desirable feat renders the computation of the marginal likelihood intractable.", "We believe that the Bayesian approach to causal discovery both allows the rich methodology of Bayesian inference to be used in various difficult aspects of this problem and provides a unifying framework to causal discovery research.", "We demonstrate promising results in experiments conducted on real data, supporting our modeling approach and our inference methodology.", "Causal networks (CNs) are special Bayesian networks where all edges reflect causal relations (Pearl, 2009 ).", "The aim of causal structure learning is identifying the CN underlying the observed data.", "In this paper, we focus on the problem of scoring causal graphs using marginal likelihood in a way that identifies the unique causal generative graph.", "Succeeding to do so is very valuable, since once the correct CN is selected, various causal inference tasks such as estimating causal effects or examining confounder distributions becomes straightforward in a Bayesian framework.", "A central challenge in such an attempt, however, is adopting a prior selection policy that not only allows discriminating between Markov equivalent graphs but also assigns higher marginal likelihood score to the actual underlying CN.", "The key notion underlying our solution to first part of this challenge is the widely accepted principle of independence of the cause-effect mechanisms (Janzing et al., 2012) , that is, the natural mechanisms that generate the cause and the effect (based on cause) must be independent of each other.", "We embody this assumption by assuming the mutual independence of the parameters pertaining to cause and effect distributions in a Bayesian model, a line of reasoning that is natural to this modeling perspective, where parameters are modeled as random variables (Spiegelhalter et al., 1993; Heckerman et al., 1995; Geiger et al., 1997; Blei et al., 2003) .", "By assigning independent priors to the cause and effect variables, we render them statistically independent.", "Critically, this assignment of independent priors also breaks the likelihood equivalence between Markov equivalent graphs.", "This is contrast to other ways of selecting independent priors such as the BDeu prior, which leads to assigning equal marginal likelihood to Markov equivalent graphs (Heckerman et al., 1995) .", "As mentioned above, though breaking likelihood equivalence does not necessarily lead to assigning a higher marginal likelihood to the actual underlying CN, it is a prerequisite for doing so 1 .", "The second part of the problem is adapting a prior selection policy that leads to assigning a higher marginal likelihood to the actual CN compared to its alternatives.", "In this work, we use 
an empirical Bayesian approach in selecting the hyperparameters of the independent priors described above, as we learn the priors that lead to assigning higher marginal likelihood to the actual CN from labeled data.", "The current approach is in the intersection of various other approaches in the literature, thereby combining many of their respective advantages (Spirtes and Zhang, 2016; Glymour et al., 2019) .", "It is based on the notion of mechanism independence similar to Janzing et al. (2012) ; Zhang et al. (2015) , does not assume causal sufficiency similar to Silva et al. (2006) ; Shimizu et al. (2009) ; Janzing et al. ( , 2012 ; Zhang et al. (2015) ; Schölkopf et al. (2016) , can theoretically work on arbitrary graph structures that possibly include latent variables similar to Spirtes et al. (1993) , and can discriminate between Markov equivalent structures similar to Shimizu et al. (2006) ; Zhang and Hyvärinen (2008); Hoyer et al. (2009); Janzing et al. (2012); Zhang et al. (2015) .", "Our approach diverges from other Bayesian methods (Stegle et al., 2010; Shimizu and Bollen, 2014; Zhang et al., 2016) in various dimensions such as by being able to distinguish between Markov equivalent causal graphs, using marginal likelihood (or approximations thereof) instead of surrogate scores such as BIC, or being able to model non-linear relationships.", "In Section 2, we introduce an example model for continuous observations and latent categorical confounders.", "To approximate the marginal likelihood in graphs which include latent confounders, we present a variational inference algorithm in Section 3.", "After testing our approach on various real data sets in Section 4, we present our conclusions and further avenues of research in Section 5.", "Overall, we show that Bayesian model selection is a promising framework that can facilitate causal research significantly both through conceptual unification and increased performance.", "Given that Bayesian modeling is agnostic to specific variable types, conditional distributions, and to approximate inference methodology, the value of a successful Bayesian modeling approach for causal research is immense.", "Though our empirical Bayesian approach to setting priors can be useful in various contexts (e.g. in data sets where only some of the bivariate causal directions are known), finding other principled ways of assigning (or integrating out) priors that do not require labeled data is an important direction for future research.", "Conducting causal discovery with different variable types, and/or different distributions would also be beneficial for demonstrating current approach's viability in various contexts." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3636363446712494, 0.42105263471603394, 0.3255814015865326, 0.307692289352417, 0.178571417927742, 0.37037035822868347, 0.0952380895614624, 0.09999999403953552, 0.15789473056793213, 0.375, 0.25, 0.36666667461395264, 0.0923076868057251, 0.22857142984867096, 0.10256409645080566, 0.25, 0.2222222238779068, 0.11538460850715637, 0.20408162474632263, 0.21052631735801697, 0.07692307233810425, 0.22499999403953552, 0.24657534062862396, 0.04999999701976776, 0.1818181723356247, 0.04347825422883034, 0.25, 0.23529411852359772, 0.16438356041908264, 0.1304347813129425 ]
ByeB5k2Etr
true
[ "We cast causal structure discovery as a Bayesian model selection in a way that allows us to discriminate between Markov equivalent graphs to identify the unique causal graph." ]
[ " The goal of compressed sensing is to learn a structured signal $x$\n from a limited number of noisy linear measurements $y", "\\approx Ax$. In\n traditional compressed sensing, ``structure'' is represented by\n sparsity in some known basis. ", "Inspired by the success of deep\n learning in modeling images, recent work starting with~\\cite{BDJP17}\n has instead considered structure to come from a generative model\n $G: \\R^k \\to \\R^n$. We present two results establishing the\n difficulty of this latter task, showing that existing bounds are\n tight.\n\n ", "First, we provide a lower bound matching the~\\cite{BDJP17} upper\n bound for compressed sensing from $L$-Lipschitz generative models\n $G$. In particular, there exists such a function that requires\n roughly $\\Omega(k \\log L)$ linear measurements for sparse recovery\n to be possible. ", "This holds even for the more relaxed goal of\n \\emph{nonuniform} recovery.\n\n ", "Second, we show that generative models generalize sparsity as a\n representation of structure. ", "In particular, we construct a\n ReLU-based neural network $G: \\R^{2k} \\to \\R^n$ with $O(1)$ layers\n and $O(kn)$ activations per layer, such that the range of $G$\n contains all $k$-sparse vectors.\n", "In compressed sensing, one would like to learn a structured signal x ∈ R n from a limited number of linear measurements y ≈ Ax.", "This is motivated by two observations: first, there are many situations where linear measurements are easy, in settings as varied as streaming algorithms, single-pixel cameras, genetic testing, and MRIs.", "Second, the unknown signals x being observed are structured or \"compressible\": although x lies in R n , it would take far fewer than n words to describe x.", "In such a situation, one can hope to estimate x well from a number of linear measurements that is closer to the size of the compressed representation of x than to its ambient dimension n.", "In order to do compressed sensing, you need a formal notion of how signals are expected to be structured.", "The classic answer is to use sparsity.", "Given linear measurements 1 y = Ax of an arbitrary vector x ∈ R n , one can hope to recover an estimate x * of x satisfying", "for some constant C and norm · .", "In this paper, we will focus on the 2 norm and achieving the guarantee with 3/4 probability.", "Thus, if x is well-approximated by a k-sparse vector x , it should be accurately recovered.", "Classic results such as [CRT06] show that (1) is achievable when A consists of m = O(k log n k ) independent Gaussian linear measurements.", "This bound is tight, and in fact no distribution of matrices with fewer rows can achieve this guarantee in either 1 or 2 [DIPW10] .", "Although compressed sensing has had success, sparsity is a limited notion of structure.", "Can we learn a richer model of signal structure from data, and use this to perform recovery?", "In recent years, deep convolutional neural networks have had great success in producing rich models for representing the manifold of images, notably with generative adversarial networks (GANs) [GPAM + 14] and variational autoencoders (VAEs) [KW14] .", "These methods produce generative models G : R k → R n that allow approximate sampling from the distribution of images.", "So a natural question is whether these generative models can be used for compressed sensing.", "In [BJPD17] it was shown how to use generative models to achieve a guarantee analogous to (1): for any L-Lipschitz G : R k → R n , one can achieve", "where r, δ > 0 are 
parameters, B k (r) denotes the radius-r 2 ball in R k and Lipschitzness is defined with respect to the 2 -norms, using only m = O(k log Lr δ ) measurements.", "Thus, the recovered vector is almost as good as the nearest point in the range of the generative model, rather than in the set of k-sparse vectors.", "We will refer to the problem of achieving the guarantee in (2) as \"functionsparse recovery\".", "Our main theorem is that the [BJPD17] result is tight: for any setting of parameters n, k, L, r, δ, there exists an L-Lipschitz function G : R k → R n such that any algorithm achieving (2) with 3/4 probability must have Ω(min(k log Lr δ , n)) linear measurements.", "Notably, the additive error δ that was unnecessary in sparse recovery is necessary for general Lipschitz generative model recovery.", "A concurrent paper [LS19] proves a lower bound for a restricted version of (2).", "They show a lower bound when the vector that x lies in the image of G and for a particular value of δ.", "Our results, in comparison, apply to the most general version of the problem and are proven using a simpler communication complexity technique.", "The second result in this paper is to directly relate the two notions of structure: sparsity and generative models.", "We produce a simple Lipschitz neural network G sp : R 2k → R n , with ReLU activations, 2 hidden layers, and maximum width O(kn), so that the range of G contains all k-sparse vectors.", "A second result of [BJPD17] is that for ReLU-based neural networks, one can avoid the additive δ term and achieve a different result from (2):", "using O(kd log W ) measurements, if d is the depth and W is the maximum number of activations per layer.", "Applying this result to our sparsity-producing network G sp implies, with O(k log n) measurements, recovery achieving the standard sparsity guarantee (1).", "So the generative-model representation of structure really is more powerful than sparsity." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.13793103396892548, 0.10526315122842789, 0.3199999928474426, 0.07999999821186066, 0.2222222238779068, 0.045454543083906174, 0.05405404791235924, 0, 0, 0.09756097197532654, 0.06451612710952759, 0, 0, 0.09999999403953552, 0, 0, 0.052631575614213943, 0.0555555522441864, 0.1538461446762085, 0, 0.12765957415103912, 0.1818181723356247, 0.3571428656578064, 0.14999999105930328, 0, 0.060606054961681366, 0, 0.06666666269302368, 0.19354838132858276, 0.1538461446762085, 0.1818181723356247, 0, 0.1249999925494194, 0.04255318641662598, 0.10810810327529907, 0, 0, 0 ]
BkxP2mnq8S
true
[ "Lower bound for compressed sensing w/ generative models that matches known upper bounds" ]
[ "We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn ‘distributional similarity’ in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space.", "To validate our hypothesis, we focus on the ‘image’ side of image captioning, and vary the input image representation but keep the RNN text generation model of a CNN-RNN constant.", "We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis.", "We found that image captioning models", "(i) are capable of separating structure from noisy input representations;", "(ii) experience virtually no significant performance loss when a high dimensional representation is compressed to a lower dimensional space;", "(iii) cluster images with similar visual and linguistic information together;", "(iv) are heavily reliant on test sets with a similar distribution as the training set;", "(v) repeatedly generate the same captions by matching images and ‘retrieving’ a caption in the joint visual-textual space.", "Our experiments all point to one fact: that our distributional similarity hypothesis holds.", "We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace.\n", "Image description generation, or image captioning (IC), is the task of automatically generating a textual description for a given image.", "The generated text is expected to describe, in a single sentence, what is visually depicted in the image, for example the entities/objects present in the image, their attributes, the actions/activities performed, entity/object interactions (including quantification), the location/scene, etc. (e.g. \"a man riding a bike on the street\").Significant", "progress has been made with end-to-end approaches to tackling this problem, where large-scale parallel image-description datasets such as Flickr30k BID41 and MSCOCO BID4 are used to train a CNN-RNN based neural network IC system BID36 BID17 . Such systems", "have demonstrated impressive performance in the COCO captioning challenge 1 according to automatic metrics, seemingly even surpassing human performance in many instances (e.g. CIDEr score > 1.0 vs. human's 0.85) BID3 . However, in", "reality, the performance of end-to-end systems is still far from satisfactory according to metrics based on human judgement 2 . Thus, despite", "the progress, this task is currently far from being a solved problem.In this paper, we challenge the common assumption that end-to-end IC systems are able to achieve strong performance because they have learned to 'understand' and infer semantic information from visual representations, i.e. they can for example deduce that \"a boy is playing football\" purely by learning directly from mid-level image features and the corresponding textual descriptions in an implicit manner, without explicitly modeling the presence of boy, ball, green field, etc. in the image. It is believed", "that the IC system has managed to infer that the phrase green field is associated with some 'green-like' area in the image and is thus generated in the output description, or that the word boy is generated because of some CNN activations corresponding to a young person. However, there", "seems to be no concrete evidence that this is the case. 
Instead, we hypothesize", "that the apparently strong performance of end-to-end systems is attributed to the fact that they are exploiting the distributional similarity in the multimodal feature space. To our best knowledge,", "our paper gives the first empirical analysis on visual representations for the task of image captioning.What we mean by 'distributional similarity' is that IC systems essentially attempt to match images from the training set that is most similar to a test image, and generate a caption from the most similar training instances (or generate a 'novel' description from a combination of training instances, for example by 'averaging' the descriptions). Previous work has alluded", "to this observation BID16 BID36 , but it has not been thoroughly investigated. This phenomena could also", "be in part attributed to the fact that the datasets are repetitive and simplistic, with an almost constant and predictable linguistic structure BID18 BID7 BID36 .In this paper we investigate", "the hypothesis of distributional similarity in IC by focusing on the image side of image captioning. Most previous work has concentrated", "on the text side of image captioning, e.g. by optimizing the language modelling capabilities of the RNN BID27 BID19 to improve its performance on automatic metrics. While there have been efforts on improving", "IC by utilizing or modeling images more effectively, for example by using attention over mid-level image features and high-level object proposals BID1 , in this work we are specifically interested in interpretability and we focus on using a simpler (and faster) model for empirical evaluation. We explore the basic yet effective CNN-RNN", "model BID17 , and investigate the representational contributions while keeping the RNN generator constant. More advanced models can be considered specific", "variants of BID17 .It is worth noting that we are interested in demonstrating", "the phenomenon of distributional similarity in IC, rather than achieving or improving state-of-the-art performance, As such, we do not resort to fine-tuning or extensive hyperparameter optimization or ensembles. Therefore, our model is not comparable to state-of-the-art", "models such as BID36 , which optimize IC by fine-tuning the image representations, exploring beam size, scheduled sampling, and using ensemble models. Instead, we vary only the image representation to demonstrate", "that end-to-end IC systems utilize distributional similarity on the image side to generate captions, regardless of the image representation used.Our main contributions are:• An IC experiment where we vary the input image representation but keep the RNN text generation model constant (Section 3). This experiment demonstrates that regardless of the image representation", "(a continuous image embedding or a sparse, low-dimensional vector), end-to-end IC systems seem to utilize a visual-semantic subspace for IC.• The introduction of a simple, sparse bag-of-objects representation that", "contains information about the presence of objects in the images. We use this as a tool to investigate the contribution of images in the image", "captioning framework.• The introduction of pseudo-random vectors derived from object-level representations", "as a means to evaluate IC systems. 
Our results show that end-to-end models in this framework are remarkably capable of separating", "structure from noisy input representations.• An experiment where IC models are conditioned on image representations factorized and compresssed", "to a lower dimensional space (Section 4.1). We show that high dimensional image embeddings that are factorized to a lower dimensional representation", "and used as input to an IC model result in virtually no significant loss in performance, further strengthening our claim that IC models perform similarity matching rather than image understanding.• An analysis of different image representations and their transformed representations (Sections 4.2 and", "4.3). We visualize the initial visual subspace and the learned joint visual semantic subspace and observe that", "the visual semantic subspace has learned to cluster images with similar visual and linguistic information together, further validating our claims of distributional similarity.• An experiment where the IC model is tested on an out-of-domain dataset (Section 4.4), which has a slightly", "different image distribution. We observe that models, including the state-of-the-art models, show a better performance on test sets that have", "a similar distribution as the training. However, their performance deteriorates when the distributions are slightly different.• An analysis on the uniqueness", "of captions generated by IC models using different image representations (Section 4.5)", ". We hypothesize that the captions are often repeated as they are usually generated by matching images in the joint space", "and retrieving a relevant caption. Our experiments validate this claim.Overall, the study suggests that regardless of the representation used, end-to-end", "IC models implicitly learn and exploit multimodal similarity spaces rather than performing actual image understanding.This study is in line with the recent work that explore understanding of deep learning models and the representational interpretations BID23 BID32 BID30 and works that have tried to delve into the image captioning task BID7 BID36 . 
To the best of our knowledge, ours is the first work that investigates IC focusing specifically on image representations", "and their effects.", "We hypothesized that IC systems essentially exploit a distributional similarity space to 'generate' image captions, by attempting to match a test image to similar training image(s) and generate an image caption from these similar images.", "Our study focused on the image side of image captioning:We varied the image representations while keeping the text generation component of an end-toend CNN-RNN model constant.", "We found that regardless of the image representation, end-to-end IC systems seem to match images and generate captions in a visual-semantic subspace for IC.", "We conclude that:• A sparse, low-dimensional bags-of-objects representation can be used as a tool to investigate the contribution of images in IC; we demonstrated that such a vector is sufficient for generating good image captions; • End-to-end IC models are remarkably capable of separating structure from noisy input representations, as demonstrated by pseudo-random vectors; • End-to-end IC models suffer virtually no significant loss in performance when a high dimensional representation is factorized to a lower dimensional space; • End-to-end IC models have learned a joint visual-textual semantic subspace by clustering images with similar visual and linguistic information together; • End-to-end IC models rely on test sets with a similar distribution as the training set for generating good captions; • End-to-end IC models repeatedly generate the same captions by matching images in the joint visual-textual space and 'retrieving' a caption in the learned joint space.All the observations above strengthen our distributional similarity hypothesis -that end-to-end IC performs image matching and generates captions for a test image from similar image(s) from the training set -rather than performing actual image understanding.", "Our findings provide novel insights into what end-to-end IC systems are actually doing, which previous work only suggests or hints at without concretely demonstrating the distributional similarity hypothesis.", "We believe our findings are important for the IC community to further advance image captioning in a more informed manner.", "(c) Bag of objects: person (1), tie (1) Figure 5 : Example outputs from our system with different representations, the sub-captions indicate the annotation along with the frequency in braces.", "We also show the CIDEr score and the difference in CIDEr score relative to the Bag of Objects representation.A ANALYSIS ON GENERATED CAPTIONSHere, we provide a qualitative analysis of different image representations presented and gain some insights into how they contribute to the the IC task.", "The Bag of Objects representation led to a strong performance in IC despite being extremely sparse and low-dimensional (80 dimensions).", "Analyzing the test split, we found that each vector consists of only 2.86 non-zero entries on average (standard deviation 1.8, median 2).", "Thus, with the minimal information being provided to the generator RNN, we find it surprising that it is able to perform so well.We compare the output of the remaining models against the Bag of Objects representation by investigating what each representation adds to or subtracts from this simple, yet strong model.", "We start by selecting images (from the test split) annotated with the exact same Bag of Objects representation -which should result in the same caption.", "For 
our qualitative analysis, several sets of one to three MSCOCO categories were manually chosen.", "For each set, images were selected such that there is exactly one instance of each category in the set and zero for others.", "We then shortlisted images where the captions generated by the Bag of Objects model produced the five highest and five lowest CIDEr scores (ten images per set).", "We then compare the captions sampled for each of the other representations.", "Figure 5 shows some example outputs from this analysis.", "In Figure 5a , Bag of Objects achieved a high CIDEr score despite only being given \"bird\" as input, mainly by 'guessing' that the bird will be perching/sitting on a branch.", "The object-based Softmax (VGG and ResNet) models gave an even more accurate description as \"owl\" is the top-1 prediction of both representations (96% confidence for VGG, 77% for ResNet).", "Places365 predicted \"swamp\" and \"forest\".", "The Penultimate features on the other hand struggled with representing the images correctly.", "In Figure 5b , Bag of Objects struggled with lack of information (only \"airplane\" is given), the Softmax features mainly predicted \"chainlink fence\", Places365 predicted \"kennel\" (hence the dog description), and it most likely that Penultimate has captured the fence-like features in the image rather than the plane.", "In Figure 5c , the Softmax features generally managed to generate a caption describing a woman despite not explicitly containing the 'woman' category.", "This is because other correlated categories were predicted, such as \"mask\", \"wig\", \"perfume\", \"hairspray\" and in the case of Places365 \"beauty salon\" and \"dressing room\"." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.21276594698429108, 0.05405404791235924, 0.14814814925193787, 0.06451612710952759, 0, 0.06451612710952759, 0.1111111044883728, 0.10526315122842789, 0, 0.21739129722118378, 0.31578946113586426, 0.09999999403953552, 0.03448275476694107, 0.07692307233810425, 0.1428571343421936, 0.1458333283662796, 0.13793103396892548, 0.05714285373687744, 0.08695651590824127, 0.3199999928474426, 0.052631575614213943, 0.16326530277729034, 0.25641024112701416, 0.1599999964237213, 0.17910447716712952, 0.09756097197532654, 0.05882352590560913, 0.07692307233810425, 0.12244897335767746, 0.1666666567325592, 0.1249999925494194, 0.14999999105930328, 0.1818181723356247, 0.0476190410554409, 0.20512819290161133, 0.052631575614213943, 0.23333333432674408, 0.11428570747375488, 0.16393442451953888, 0.20512819290161133, 0.19512194395065308, 0.23529411852359772, 0.051282044500112534, 0.1463414579629898, 0.2368420958518982, 0.0833333283662796, 0.1599999964237213, 0.2857142686843872, 0.22727271914482117, 0.0923076868057251, 0.04081632196903229, 0.19512194395065308, 0.1249999925494194, 0.2711864411830902, 0.09756097197532654, 0.13333332538604736, 0.0624999962747097, 0.09302324801683426, 0.0555555522441864, 0.1860465109348297, 0.13636362552642822, 0.25, 0.06666666269302368, 0.11764705181121826, 0.25, 0.07692307233810425, 0.12121211737394333, 0.12903225421905518, 0.0476190410554409, 0.17777776718139648 ]
HJNGGmZ0Z
true
[ "This paper presents an empirical analysis on the role of different types of image representations and probes the properties of these representations for the task of image captioning." ]
[ "We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory.", "To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles.", "We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query.", "When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.", "Many mission-critical applications in health care, financial markets, and business process management store their information in relational databases BID10 BID22 BID16 .", "Users access that information using a query language such as SQL.", "Although expressive and powerful, SQL is difficult to master for non-technical users.", "Even for an expert, writing SQL queries can be challenging as it requires knowing the exact schema of the database and the roles of various entities in the query.", "Hence, a long-standing goal has been to allow users to interact with the database through natural language BID0 BID24 .The", "key to achieving this goal is understanding the semantics of the natural language statements and mapping them to the intended SQL. This", "problem, also known as NL2SQL, was previously understudied largely due to the availability of annotation. Without", "paired natural language statement and SQL query, a weak supervision approach may be adopted which reduces supervision from annotated SQL queries to answers BID19 . This is", "a more difficult learning problem. Therefore", "only with recent release of a number of large-scale annotated NL2SQL datasets BID36 BID6 , we start to see a surge of interest in solving this problem.Existing NL2SQL approaches largely fall into two categories: sequence-to-sequence style neural \"machine translation \" systems BID36 BID5 and sets of modularized models with each predicting a specific part of the SQL queries BID32 BID34 . The former", "class suffer from the requirement of labeling a single ground truth query while multiple semantically equivalent queries exist for each intent. For example", ", as noticed by BID36 , the ordering of filtering conditions in a query does not affect execution but affects generation. To account", "for this, techniques such as reinforcement learning have been used on top of those sequenceto-sequence models. The second", "class of models employ a sequence-to-set approach: they first predict table columns present in the query and then independently predict the rest for each column. This avoids", "the ordering issue, but makes it harder to leverage inter-dependencies among conditions.In this work, we develop a sequence-to-action parsing approach (Section 3) for the", "NL2SQL problem. It incrementally", "fills the slots of a SQL query with actions from an inventory designed for this task. Taking inspiration", "from training oracles in incremental syntactic parsing BID8 , we further propose to use non-deterministic oracles (Section 4) for training the", "incremental parsers. 
These oracles permit", "multiple correct action continuations from a partial parse, thus are able to account for the logical form variations. Our model combines the", "advantage of a sequence-to-sequence model that captures inter-dependencies within sequence of predictions and a SELECT`Height (ft)Ẁ HERE Name=\"Willis Tower\" AND Location=\"Chicago\" DISPLAYFORM0 What is the height of Willis Tower in Chicago?Figure 1: Our running example", ". The input is a natural language", "question and a table schema, and the output is an executable SQL query. Table contents are shown here,", "but unknown to our models.modularized model that avoids any standarized linearization of the logical forms. We evaluate our models on the", "WikiSQL dataset and observe a performance improvement of 2.1% when comparing non-deterministic oracles with traditional static oracles. We further combine our approach", "and the execution-guided decoding strategy ) and achieve a new state-of-the-art performance with 87.1% test execution accuracy. Experiments on a filtered ATIS", "dataset in addition confirm that our models can be applied to other NL2SQL datasets.", "In this paper, we introduce a sequence-to-action incremental parsing approach for the NL2SQL task.", "With the observation that multiple SQL queries can have the same or very similar semantics corresponding to a given natural language question, we propose to use non-deterministic oracles during training.", "On the WikiSQL dataset, our model trained with the non-deterministic oracles achieves an execution accuracy of 83.7%, which is 2.3% higher than the current state of the art.", "We also discuss using execution-guided decoding in combination with our model.", "This leads to a further improvement of 3.4%, achieving a new state-of-the-art 87.1% execution accuracy on the test set.To the best of our knowledge, our work is the first to use non-deterministic oracles for training incremental semantic parsers.", "Designing such non-deterministic oracles requires identification of multiple correct transition sequences for a given training instance, and an algorithm that decides the possible continuations for any intermediate state that will lead to one of the desired terminal states.", "We have shown promising results for WikiSQL and filtered ATIS dataset and it would be interesting to extend our work to other more complex NL2SQL tasks and to other semantic parsing domains." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.2666666507720947, 0.1666666567325592, 0.04255318641662598, 0.04444443807005882, 0.0555555522441864, 0.1621621549129486, 0.07999999821186066, 0.09090908616781235, 0.09090908616781235, 0.04878048226237297, 0.08163265138864517, 0, 0.05063290521502495, 0.0833333283662796, 0.04255318641662598, 0.04651162400841713, 0.07999999821186066, 0.12244897335767746, 0, 0.09302324801683426, 0.27272728085517883, 0.20000000298023224, 0.21739129722118378, 0.035087715834379196, 0, 0.0476190410554409, 0.09090908616781235, 0.21276594698429108, 0.08695651590824127, 0.05128204822540283, 0.20512820780277252, 0.15094339847564697, 0.07843136787414551, 0.1111111044883728, 0.23333333432674408, 0.2711864411830902, 0.19230768084526062 ]
B1eZCjA9KX
true
[ "We design incremental sequence-to-action parsers for text-to-SQL task and achieve SOTA results. We further improve by using non-deterministic oracles to allow multiple correct action sequences. " ]
[ "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods.", "In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model.", "The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set.", "Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss.", "On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank.", "On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017).", "Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.", "Neural architecture search (NAS) has been applied successfully to design model architectures for image classification and language modeling BID0 BID3 BID6 .", "NAS however is computationally expensive and time consuming: for example, use 450 GPUs and train for 3-4 days.", "Meanwhile, using less resources tends to produce less compelling results BID31 BID0 .The", "main computational bottleneck of NAS is the training of each child model to convergence to measure its accuracy. We", "believe that it is very inefficient and wasteful to train every child model and then throw away all the trained weights even though the child models have much in common. The", "graph represents the entire search space while the red arrows define a model in the search space, which is decided by a controller. Here", "we assume that node 1 is the input to the model whereas nodes 3, 5, and 6 are the outputs of the model.The goal of this work is to remove this inefficiency by enabling more sharing between the child models. This", "idea is similar to the concept of weight inheritance in neuro-evolution (e.g., BID33 ). To understand", "our method, we first need to understand the standard NAS. In standard NAS", "BID0 , an RNN controller is trained by policy gradient to search for a good architecture, which is basically a computational graph. Our observation", "is that all of the graphs, that NAS has iterated over, can be viewed as sub-graphs of a larger graph. In other words,", "we can represent the space of these graphs as a single directed acyclic graph (DAG) . As illustrated", "in FIG0 , a neural network architecture can be found by taking a subset of edges in this DAG. 
This design is", "advantageous because it enables sharing parameters among all architectures in the search space.", "Neural Architecture Search (NAS) is an important advance that allows faster architecture design for neural networks.", "However, the computational expense of NAS prevents it from being widely adopted.", "In this paper, we presented ENAS, an alternative method to NAS, that requires three orders of magnitude less resources×time.", "The key insight of our method is to share parameters across child models during architecture search.", "This insight is implemented by having NAS search for a path within a larger model.", "We demonstrate empirically that the method works well on both CIFAR-10 and Penn Treebank datasets.The shared parameters ω between different recurrent cells thus consist of all the matrices DISPLAYFORM0 ,j , and W (h),j .", "The controller decides the connection j and the activation function f , for each ∈ {2, 3, ..., N }.", "The layers that are never selected by any subsequent layers are averaged and sent to a softmax head, or to higher recurrent layers.", "As in the case of convolutional models, to stabilize the training of ω, we add a batch normalization layer after the average of the layers that are not selected.B Details for CIFAR-10 Search Spaces B.1", "Details on Search Space 1: ChannelsWe use a block size of S = 32, resulting in C/S = 256/32 = 8 blocks per branch per layer.", "Each branch configuration has its own embedding and softmax head.", "To elaborate, this means that a time step in the controller RNN that predicts the configuration for any branch should have a softmax matrix of size H × (2 C/S − 1), where H = 64 is the hidden dimension of the RNN, and 2 C/S − 1 = 255 is the number of possible binary masks for that branch.", "Each branch also has an embedding matrix of size (2 C/S − 1) × H, from which the row corresponding to the sampled binary mask is selected and sent to the next time step.Layers 4 and 8 of our 12-layer network are max pooling layers with a kernel size of 2 × 2 and a stride of 2, and reduce each spatial dimension of the layers' outputs by a factor of 2.", "Within each group of 3 layers where the spatial dimensions of the layers remain constant, we connect each layer to all layers before it BID17 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.11428570747375488, 0.05714285373687744, 0, 0.04651162400841713, 0.08888888359069824, 0.1818181723356247, 0.10810810327529907, 0, 0.1428571343421936, 0, 0.045454539358615875, 0.1111111044883728, 0.08163265138864517, 0, 0, 0.10526315122842789, 0.05405404791235924, 0, 0.1666666567325592, 0.06896550953388214, 0.1875, 0, 0.11428570747375488, 0.1249999925494194, 0.13333332538604736, 0.04081632196903229, 0, 0.11428570747375488, 0.04255318641662598, 0, 0, 0.03333332762122154, 0.02816900983452797, 0 ]
ByQZjx-0-
true
[ "An approach that speeds up neural architecture search by 10x, whilst using 100x less computing resource." ]
[ "Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech.", "Meanwhile, in an important case of heterogenous tabular data, the advantage of DNNs over shallow counterparts remains questionable.", "In particular, there is no sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems.", "In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data.", "In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning.", "With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks.", "We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data.", "The recent rise of deep neural networks (DNN) resulted in a substantial breakthrough for a large number of machine learning tasks in computer vision, natural language processing, speech recognition, reinforcement learning (Goodfellow et al., 2016) .", "Both gradient-based optimization via backpropagation (Rumelhart et al., 1985) and hierarchical representation learning appear to be crucial in increasing the performance of machine learning for these problems by a large margin.", "While the superiority of deep architectures in these domains is undoubtful, machine learning for tabular data still did not fully benefit from the DNN power.", "Namely, the state-of-the-art performance in problems with tabular heterogeneous data is often achieved by \"shallow\" models, such as gradient boosted decision trees (GBDT) (Friedman, 2001; Chen & Guestrin, 2016; Ke et al., 2017; Prokhorenkova et al., 2018) .", "While the importance of deep learning on tabular data is recognized by the ML community, and many works address this problem (Zhou & Feng, 2017; Miller et al., 2017; Lay et al., 2018; Feng et al., 2018; Ke et al., 2018) , the proposed DNN approaches do not consistently outperform the state-of-the-art shallow models by a notable margin.", "In particular, to the best of our knowledge, there is still no universal DNN approach that was shown to systematically outperform the leading GBDT packages (e.g., XGBoost (Chen & Guestrin, 2016) ).", "As additional evidence, a large number of Kaggle ML competitions with tabular data are still won by the shallow GBDT methods (Harasymiv, 2015) .", "Overall, at the moment, there is no dominant deep learning solution for tabular data problems, and we aim to reduce this gap by our paper.", "We introduce Neural Oblivious Decision Ensembles (NODE), a new DNN architecture, designed to work with tabular problems.", "The NODE architecture is partially inspired by the recent CatBoost package (Prokhorenkova et al., 2018) , which was shown to provide state-of-the-art performance on a large number of tabular datasets.", "In a nutshell, CatBoost performs gradient boosting on oblivious decision trees (decision tables) (Kohavi, 1994; Lou & Obukhov, 2017) , which makes inference very efficient, and the method is quite resistant 
to overfitting.", "In its essence, the proposed NODE architecture generalizes CatBoost, making the splitting feature choice and decision tree routing differentiable.", "As a result, the NODE architecture is fully differentiable and could be incorporated in any computational graph of existing DL packages, such as TensorFlow or PyTorch.", "Furthermore, NODE allows constructing multi-layer architectures, which resembles \"deep\" GBDT that is trained end-to-end, which was never proposed before.", "Besides the usage of oblivious decision tables, another important design choice is the recent entmax transformation (Peters et al., 2019) , which effectively performs a \"soft\" splitting feature choice in decision trees inside the NODE architecture.", "As discussed in the following sections, these design choices are critical to obtain state-of-the-art performance.", "In a large number of experiments, we compare the proposed approach with the leading GBDT implementations with tuned hyperparameters and demonstrate that NODE outperforms competitors consistently on most of the datasets.", "Overall, the main contributions of our paper can be summarized as follows:", "1. We introduce a new DNN architecture for machine learning on tabular data.", "To the best of our knowledge, our method is the first successful example of deep architectures that substantially outperforms leading GBDT packages on tabular data.", "2. Via an extensive experimental evaluation on a large number of datasets, we show that the proposed NODE architecture outperforms existing GBDT implementations.", "3. The PyTorch implementation of NODE is available online 1 .", "The rest of the paper is organized as follows.", "In Section 2 we review prior work relevant to our method.", "The proposed Neural Oblivious Decision Ensembles architecture is described in Section 3 and experimentally evaluated in Section 4.", "Section 5 concludes the paper.", "In this paper, we introduce a new DNN architecture for deep learning on heterogeneous tabular data.", "The architecture is differentiable deep GBDTs, trained end-to-end via backpropagation.", "In extensive experiments, we demonstrate the advantages of our architecture over existing competitors with the default and tuned hyperparameters.", "A promising research direction is incorporating the NODE layer into complex pipelines trained via back-propagation.", "For instance, in multi-modal problems, the NODE layer could be employed as a way to incorporate the tabular data, as CNNs are currently used for images, or RNNs are used for sequences.", "library to optimize Catboost, XGBoost, and FCNN hyperparameters.", "For each method, we perform 50 steps of Tree-structured Parzen Estimator (TPE) optimization algorithm.", "As a final configuration, we choose the set of hyperparameters, corresponding to the smallest loss on the validation set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0.06896550953388214, 0.19512194395065308, 0.3529411852359772, 0.15789473056793213, 0.14999999105930328, 0.4117647111415863, 0.1818181723356247, 0.1395348757505417, 0.3333333432674408, 0.0833333283662796, 0.24137930572032928, 0.045454543083906174, 0.17142856121063232, 0.2702702581882477, 0.3448275923728943, 0.1904761791229248, 0.08888888359069824, 0.06666666269302368, 0.10526315122842789, 0, 0.09090908616781235, 0, 0.10256409645080566, 0, 0.800000011920929, 0.23529411852359772, 0.17142856121063232, 0, 0, 0, 0.0714285671710968, 0, 0.7142857313156128, 0.1818181723356247, 0.06666666269302368, 0, 0.1538461446762085, 0, 0, 0.1428571343421936 ]
r1eiu2VtwH
true
[ "We propose a new DNN architecture for deep learning on tabular data" ]
[ "Person re-identification (re-ID) aims at identifying the same persons' images across different cameras.", "However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one.", "State-of-the-art unsupervised domain adaptation methods for person re-ID transferred the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain.", "Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored.", "Such noisy pseudo labels substantially hinders the model's capability on further improving feature representations on the target domain.", "In order to mitigate the effects of noisy pseudo labels, we propose to softly refine the pseudo labels in the target domain by proposing an unsupervised framework, Mutual Mean-Teaching (MMT), to learn better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternative training manner. ", "In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performances in person re-ID models.", "However, conventional triplet loss cannot work with softly refined labels.", "To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance.", "The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks.", "In this work, we propose an unsupervised Mutual Mean-Teaching (MMT) framework to tackle the problem of noisy pseudo labels in clustering-based unsupervised domain adaptation methods for person re-ID.", "The key is to conduct pseudo label refinery to better model inter-sample relations in the target domain by optimizing with the off-line refined hard pseudo labels and on-line refined soft pseudo labels in a collaborative training manner.", "Moreover, a novel soft softmax-triplet loss is proposed to support learning with softly refined triplet labels for optimal performances.", "Our method significantly outperforms all existing person re-ID methods on domain adaptation task with up to 18.2% improvements.", "Two temporal average models are introduced in our proposed MMT framework to provide more complementary soft labels and avoid training error amplification.", "Such average models are more de-coupled by ensembling the past parameters and provide more independent predictions, which is ignored by previous methods with peer-teaching strategy (Han et al., 2018; Zhang et al., 2018b ).", "Despite we have verified the effectiveness of such design in Table 2 by removing the temporal average model, denoted as \"Baseline+MMT-500 (w/o E[θ])\", we would like to visualize the training process by plotting the KL divergence between peer networks' predictions for further comparison.", "As illustrated in Figure 3 , the predictions by two temporal average models (\"Proposed MMT-500\") always keep a larger distance than predictions by two ordinary networks (\"Proposed MMT-500 (w/o E[θ])\"), which indicates that the temporal average models could prevent the two networks in our MMT from converging to each other soon under the collaborative training strategy.", "We utilize weighting factors of λ t tri 
= 0.8, λ^t_id = 0.5 in all our experiments by tuning on the Duke-to-Market task with an IBN-ResNet-50 backbone and 500 pseudo identities.", "To further analyse the impact of different λ^t_tri and λ^t_id on different tasks, we conduct comparison experiments by varying the value of one parameter and keeping the others fixed.", "Our MMT framework is robust and insensitive to different parameters except when the hard classification loss is eliminated with λ^t_id = 1.0.", "The weighting factor of hard and soft triplet losses λ^t_tri.", "In Figure 4 (a-b), we investigate the effect of the weighting factor λ^t_tri in equation 9, where the weight for the soft softmax-triplet loss is λ^t_tri and the weight for the hard triplet loss is (1 − λ^t_tri).", "We test our proposed MMT-500 with both ResNet-50 and IBN-ResNet-50 backbones when λ^t_tri varies over 0.0, 0.3, 0.5, 0.8 and 1.0.", "Specifically, the soft softmax-triplet loss is removed from the final training objective (equation 9) when λ^t_tri is equal to 0.0, and the hard triplet loss is eliminated when λ^t_tri is set to 1.0.", "We observe" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05714285373687744, 0.1395348757505417, 0.3913043439388275, 0, 0.21052631735801697, 0.1904761791229248, 0.13636362552642822, 0.1875, 0.4680851101875305, 0.2448979616165161, 0.36734694242477417, 0.23076923191547394, 0.39024388790130615, 0.24390242993831635, 0.13636362552642822, 0.037735845893621445, 0.06666666269302368, 0.0615384578704834, 0.15686273574829102, 0.0833333283662796, 0.1304347813129425, 0.11764705181121826, 0.18867923319339752, 0.04347825422883034, 0.12244897335767746 ]
rJlnOhVYPS
true
[ "A framework that conducts online refinement of pseudo labels with a novel soft softmax-triplet loss for unsupervised domain adaptation on person re-identification." ]
[ "We present the first end-to-end verifier of audio classifiers.", "Compared to existing methods, our approach enables analysis of both, the entire audio processing stage as well as recurrent neural network architectures (e.g., LSTM).", "The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., Fast Fourier Transform) while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update.", "We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (only prior scalable method), for a perturbation of -90dB.", "Recent advances in deep learning have enabled replacement of traditional voice recognition systems with a single neural network trained from data (Graves et al., 2013; Hannun et al., 2014; Amodei et al., 2016) .", "Wide adoption of these networks in consumer devices poses a threat to their safety when exposed to a malicious adversary.", "Indeed, it was recently shown that an adversary can inject noise unrecognizable to a human and force the network to misclassify (Szegedy et al., 2013; Goodfellow et al., 2014; Zhang et al., 2017; Carlini & Wagner, 2018; Carlini et al., 2016; Qin et al., 2019; Neekhara et al., 2019; Yang et al., 2019; Esmaeilpour et al., 2019) , exposing a serious security flaw.", "Ideally, when deploying an automated speech recognition system we would like to guarantee that the system is robust against noise injected by an adversary.", "There has been substantial recent work on certifying robustness of computer vision models (Katz et al., 2017; Ehlers, 2017; Ruan et al., 2018; Tjeng et al., 2019; Anderson et al., 2018; Wong et al., 2018; Raghunathan et al., 2018; Dvijotham et al., 2019; Weng et al., 2018; Zhang et al., 2018; Salman et al., 2019; Gehr et al., 2018; Singh et al., 2018; 2019a; Wang et al., 2018; Singh et al., 2019b) .", "However, the audio domain poses unique challenges not addressed by prior certification work for vision.", "Differences between audio and vision models Concretely, while an input to a vision model is a raw image, audio models typically come with a complex preprocessing stage (that involves non-trivial non-linear operations such as logarithm) which extracts relevant features from the signal.", "Additionally, audio systems typically use recurrent architectures (Chiu et al., 2017) which computer vision verifiers do not handle as they focus on fully-connected, convolutional and residual architectures.", "This work We address both of these challenges and propose an end-to-end verification method for neural network based audio classifiers and an implementation of this method in a system called DAC (Deep Audio Certifier).", "Our threat model assumes an attacker can introduce a noise-based perturbation to the raw audio input signal.", "The goal then is to certify that, for any signal that the attacker can produce, the neural network classifies the signal to the correct label.", "We perform verification of this property using the framework of abstract interpretation (Gehr et al., 2018) .", "At a high level, the idea is to maintain an abstraction capturing all possible behaviors of both the audio processing stage and the neural network.", "The flow of DAC is shown in Fig. 
1 where all abstractions are dark blue shapes.", "Here, all possible signals an attacker can obtain are captured using an abstraction s (i) (a convex relaxation).", "This abstraction is then propagated through the audio processing stage (shown in green boxes).", "The key components of this step are abstract transformers.", "For each audio processing operation (e.g. FFT) we create an abstract transformer which receives an abstraction representing an approximation of all possible inputs to the operation and outputs a new abstraction which approximates all possible outputs of the operation.", "The result of the audio processing stage is the abstraction x (i) .", "The shape x (i) is then used as input to the recurrent LSTM unit (light blue) which maintains an abstraction of a hidden state h (i−1) .", "LSTM consists of multiple operations and we create a custom abstract transformer for each of those.", "The result of the transformers in LSTM is a new hidden state h (i) .", "If this was the last frame in the signal (meaning i = T ), then hidden state h (T ) is passed through the fully connected layer of the neural network and, again using the abstract transformer, the final abstract shape a is obtained at the output (at the right of Fig. 1 ).", "Finally, to certify the property we check if each concrete output in the abstraction a classifies to the correct label (this is typically easy).", "If this is true, the output of the network is correct for all inputs that the attacker can create.", "Related work on RNN certification The work of (Ko et al., 2019) proposes the POPQORN verifier for recurrent neural networks (RNN).", "We note that POPQORN does not handle the audio preprocessing pipeline.", "Even though POPQORN cannot directly verify audio classifiers, their approximations for LSTM non-linearities can be integrated in DAC.", "This results in ≈ 200× slowdown with small decrease in the volume of the approximation.", "The massive slowdown makes their approximations unsuitable for certifying audio classifiers.", "In contrast, using our custom abstract transformers for LSTM non-linearities, DAC can precisely certify end-to-end robustness of challenging audio classifiers in few minutes.", "Our main contributions are:", "1. A novel and efficient method to certify robustness of neural network audio classifiers to noise-based perturbations.", "The method is based on new abstract transformers which handle non-linear operations used in both audio processing and recurrent architectures.", "2. 
An implementation of both verification and provably robust training in a system called DAC.", "We evaluated DAC on common audio classification benchmarks, showing it scales to realistic networks and is far more precise (97% to 2%) than the next best scalable method.", "We presented the first verifier for certifying audio classifiers.", "The key idea was to create abstract transformers for non-linear operations used in the audio processing stage and the recurrent network.", "These transformers compute an optimal (area-wise) approximation under assumptions representable in the underlying convex relaxation and enable sound handling of the entire pipeline.", "Our evaluation shows that DAC is practically effective and achieves high verification rates on different datasets.", "by the smaller volume under each plane.", "Then for any x, y, f(x, y_1) < f(x, y_2) and f(x_1, y) < f(x_2, y).", "Thus, since z^u_x is independent of y, it is sufficient to show z", "We can easily see that f(x, u_y) is concave at x ≥ 0 and convex at x ≤ 0 by the second derivative of f.", "(a) Consider the case of u_x > 0.", "Let x_0 be the x coordinate of the crossing of f(x, u_y) and .", "Again, by convexity of .", "Again, by convexity of", "With analogous steps, z^l_y can be shown to lie under the curve.", "Choosing the plane with the larger volume underneath it allows us to minimize the expected difference between the true curve and the lower-bound plane over the randomly chosen domain.", "The proof of the upper bounds follows the same steps as the first case.", "z^u_x in this case is exactly the same as before, but since f(x, y) goes below 0 when y < 0, z^u_y has to anchor at (l_x, l_y) instead of (u_x, l_y), since f(l_x, l_y) ≥ f(u_x, l_y) and by convexity of f in the region.", "The proof steps do not differ much from the previous proofs.", "Again, the proof for the lower bound is similar to before, but note that z^l_x needs to choose the maximum between the two slopes.", "This is due to the sign of the values.", "Since f(u_x, l_y) < 0 is the minimum in the region and it grows as x gets smaller, both D_i f(u_x, l_y) and (f(u_x, l_y) − f(l_x, l_y)) / (u_x − l_x) are less than zero." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4615384638309479, 0.2857142686843872, 0.1538461446762085, 0.23728813230991364, 0.12765957415103912, 0.22857142984867096, 0.06557376682758331, 0.1538461446762085, 0.07692307233810425, 0.1875, 0.1111111044883728, 0.045454539358615875, 0.21276594698429108, 0.23529411852359772, 0.21621620655059814, 0.1818181723356247, 0.25, 0.12121211737394333, 0, 0.19354838132858276, 0.07692307233810425, 0.17391303181648254, 0.2142857164144516, 0.1395348757505417, 0.0624999962747097, 0.19354838132858276, 0.13333332538604736, 0.21052631735801697, 0.12121211737394333, 0.21052631735801697, 0.2142857164144516, 0.11428570747375488, 0.19999998807907104, 0.0714285671710968, 0.25, 0, 0.4848484694957733, 0.10810810327529907, 0.1249999925494194, 0.22727271914482117, 0.307692289352417, 0.21621620655059814, 0.1538461446762085, 0, 0.0833333283662796, 0, 0.06896550953388214, 0.1463414579629898, 0.1538461446762085, 0.12903225421905518, 0.0952380895614624, 0.0952380895614624, 0.12903225421905518, 0.14999999105930328, 0.19999998807907104, 0.1428571343421936, 0.0714285671710968, 0.09999999403953552, 0.23999999463558197, 0.0833333283662796 ]
HJxkvlBtwH
true
[ "We present the first approach to certify robustness of neural networks against noise-based perturbations in the audio domain." ]
[ "Since deep neural networks are over-parameterized, they can memorize noisy examples.", "We address such memorizing issue in the presence of annotation noise.", "From the fact that deep neural networks cannot generalize neighborhoods of the features acquired via memorization, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation.", "Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting noisy examples by eliminating them using the consensus of an ensemble of perturbed networks.", "One of the proposed LECs, LTEC outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner.", "Deep neural networks (DNNs) have shown excellent performance (Krizhevsky et al., 2012; He et al., 2016) on visual recognition datasets (Deng et al., 2009) .", "However, it is difficult to obtain highquality labeled datasets in practice (Wang et al., 2018a) .", "Even worse, DNNs could not generalize the training data in the presence of noisy examples .", "Therefore, there is an increasing demand for robust training methods.", "In general, DNNs optimized with SGD first generalize clean examples under label noise .", "Based on this, recent studies consider examples that incur small losses on the network that does not overfit noisy examples as being clean (Han et al., 2018; Shen & Sanghavi, 2019) .", "However, such small-loss examples may be corrupted, particularly under a high level of noise.", "Hence, choosing safe examples from the noisy dataset with small-loss criteria may be impractical.", "To address this, we find the method of screening out noisy examples among small-loss examples by focusing on well-known observations:", "(i) noisy examples are learned via memorization rather than via generalization and", "(ii) under a certain perturbation, network predictions for memorized features easily fluctuate, while those for generalized features do not.", "Based on these two observations, we hypothesize that out of small-loss examples, training losses of noisy examples would increase by injecting certain perturbation to network parameters, while those of clean examples would not.", "This suggests that examples that consistently incur small losses under multiple perturbations can be considered as being clean.", "Since this idea comes from an artifact of SGD optimization, it can be applied to any architecture optimized with SGD.", "In this work, we introduce a method of perturbing parameters to filter noisy examples out of smallloss examples.", "By embedding the filtering into training, we propose a new robust training scheme termed learning with ensemble consensus (LEC).", "In LEC, the network is first trained on the entire training set for a while and then trained on the intersection of small-loss examples of the ensemble of perturbed networks.", "We present three LECs with different perturbations and evaluate their effectiveness on three benchmark datasets with random label noise (Goldberger & Ben-Reuven, 2016; Ma et al., 2018) , open-set noise (Wang et al., 2018b) , and semantic noise.", "The proposed LEC outperforms existing robust training methods by efficiently removing noisy examples from training batches.", "Generalization of DNNs.", "Although DNNs are over-parameterized, they have impressive generalization ability (Krizhevsky et al., 2012; He et al., 2016) .", "Some studies argue that gradient-based optimization plays an important role 
in regularizing DNNs (Neyshabur et al., 2014; .", "show that DNNs optimized with gradient-based methods generalize clean examples in the early stage of training.", "Since mislabeling reduces the correlation with other training examples, it is likely that noisy examples are learned via memorization.", "Therefore, we analyze the difference between generalized and memorized features to discriminate clean and noisy examples.", "Training DNNs with Noisy datasets.", "Label noise issues can be addressed by reducing negative impact of noisy examples.", "One direction is to train with a modified loss function based on the noise distribution.", "Most studies of this direction estimate the noise distribution prior to training as it is not accessible in general (Sukhbaatar et al., 2014; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Hendrycks et al., 2018) .", "Another direction is to train with modified labels using the current model prediction (Reed et al., 2014; Ma et al., 2018) .", "Aside from these directions, recent work suggests a method of exploiting small-loss examples (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019) based on the generalization ability of DNNs.", "However, it is still hard to find clean examples by relying on training losses.", "This study presents a simple method to overcome such a problem of small-loss criteria.", "This work presents the method of generating and using the ensemble for robust training.", "We explore three simple perturbation methods to generate the ensemble and then develop the way of identifying noisy examples through ensemble consensus on small-loss examples.", "Along with growing attention to the use of small-loss examples for robust training, we expect that our ensemble method will be useful for such training methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1249999925494194, 0.375, 0.19230768084526062, 0.26923075318336487, 0.24390242993831635, 0, 0.10810810327529907, 0.34285715222358704, 0, 0.11764705181121826, 0.11999999731779099, 0.22857142984867096, 0.17142856121063232, 0.25, 0.1875, 0.052631575614213943, 0.1599999964237213, 0.10526315122842789, 0.09999999403953552, 0.3243243098258972, 0.09999999403953552, 0.22727271914482117, 0.07843136787414551, 0.1111111044883728, 0.0833333283662796, 0, 0.051282044500112534, 0.21621620655059814, 0.14999999105930328, 0.277777761220932, 0, 0.23529411852359772, 0.2222222238779068, 0.18867923319339752, 0.1463414579629898, 0.22641508281230927, 0.11428570747375488, 0.3529411852359772, 0.5294117331504822, 0.2790697515010834, 0.21739129722118378 ]
ryxOUTVYDH
true
[ "This work presents a method of generating and using ensembles effectively to identify noisy examples in the presence of annotation noise. " ]
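A minimal sketch of the consensus-filtering step that this record describes (warm up on all data, then keep only examples that remain small-loss under every member of an ensemble of perturbed copies of the network). The generic per-example `loss_fn`, the Gaussian weight perturbation, and the fixed keep ratio are illustrative assumptions, not the authors' code; the paper also considers other perturbation types and noise-rate-dependent schedules.

```python
import numpy as np

def consensus_filter(loss_fn, params, batch_x, batch_y,
                     n_perturb=5, noise_std=1e-3, keep_ratio=0.8, rng=None):
    """Keep only examples that are small-loss under every perturbed copy of
    the network (ensemble consensus).  `loss_fn(params, x, y)` must return a
    per-example loss vector; `params` is a list of weight arrays."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(batch_y)
    n_keep = int(keep_ratio * n)
    selected = np.ones(n, dtype=bool)
    for _ in range(n_perturb):
        # One ensemble member: parameters with additive Gaussian noise.
        noisy = [p + noise_std * rng.standard_normal(p.shape) for p in params]
        losses = loss_fn(noisy, batch_x, batch_y)
        small_loss = np.zeros(n, dtype=bool)
        small_loss[np.argsort(losses)[:n_keep]] = True
        # Intersection over the ensemble: an example survives only if it is
        # small-loss under every perturbation.
        selected &= small_loss
    return np.where(selected)[0]
```

Training then proceeds only on `batch_x[idx], batch_y[idx]` for the returned indices, which is how the described scheme avoids memorizing noisy labels.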
[ "Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications.", "A popular formulation of the problem is an $\\ell_1$ regularized maximum likelihood estimation.", "Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure.", "Recently, there is a surge of interest to learn algorithms directly based on data, and in this case, learn to map empirical covariance to the sparse precision matrix.", "However, it is a challenging task in this case, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters.", "We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning.", "We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.", "Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis, and it has found applications in diverse areas.", "In computational biology, a sparse graph structure between gene expression data may be used to understand gene regulatory networks; in finance, a sparse graph structure between financial timeseries may be used to understand the relationship between different financial assets.", "A popular formulation of the problem is an 1 regularization log-determinant estimation of the precision matrix.", "Based on this convex formulation, many algorithms have been designed to solve this problem efficiently, and one can formally prove that under a list of conditions, the solution of the optimization problem is guaranteed to recover the graph structure with high probability.", "However, convex optimization based approaches have their own limitations.", "The hyperparameters, such as the regularization parameters and learning rate, may depend on unknown constants, and need to be tuned carefully to achieve the recovery results.", "Furthermore, the formulation uses a single regularization parameter for all entries in the precision matrix, which may not be optimal.", "It is intuitive that one may obtain better recovery results by allowing the regularization parameters to vary across the entries in the precision matrix.", "However, such flexibility will lead to a quadratic increase in the number of hyperparameters, but it is hard for traditional approaches to search over a large number of hyperparameters.", "Thus, a new paradigm may be needed for designing more effective sparse recovery algorithms.", "Recently, there has been a surge of interest in a new paradigm of algorithm design, where algorithms are augmented with learning modules trained directly with data, rather than prescribing every step of the algorithms.", "This is meaningful because very often a family of optimization problems needs to be solved again and again, similar in structures but different in data.", "A data-driven algorithm may be able to leverage this distribution of problem instances, and learn an algorithm which performs better than traditional convex formulation.", "In our case, the sparse graph recovery problem may also need to be solved again and again, where the underlying graphs are different but have similar degree distribution, the magnitude of the precision matrix entries, etc.", "For instance, 
gene regulatory networks may be rewiring depending on the time and conditions, and we want to estimate them from gene", "In our experiments, we show that the AM architecture provides very good inductive bias, allowing the model to learn very effective sparse graph recovery algorithm with a small amount of training data.", "In all cases, the learned algorithm can recover sparse graph structures with much fewer data points from a new problem, and it also works well in recovering gene regulatory networks based on realistic gene expression data generators.", "Related works.", "Belilovsky et al. (2017) considers CNN based architecture that directly maps empirical covariance matrices to estimated graph structures.", "Previous works have parameterized optimization algorithms as recurrent neural networks or policies in reinforcement learning.", "For instance, Andrychowicz et al. (2016) considered directly parameterizing optimization algorithm as an RNN based framework for learning to learn.", "Li & Malik (2016) approach the problem of automating algorithm design from reinforcement learning perspective and represent any particular optimization algorithm as a policy.", "Khalil et al. (2017) learn combinatorial optimzation over graph via deep Q-learning.", "These works did not consider the structures of our sparse graph recovery problem.", "Another interesting line of approach is to develop deep neural networks based on unfolding an iterative algorithm Gregor & LeCun (2010) ; ; .", "developed ALISTA which is based on unrolling the Iterative Shrinkage Thresholding Algorithm (ISTA).", "Sun et al. (2016) developed 'ADMM-Net', which is also developed for compressive sensing of MRI data.", "Though these seminal works were primarily developed for compressive sensing applications, they alluded to the general theme of using unrolled algorithms as inductive biases.", "We thus identify a suitable unrolled algorithm and leverage its inductive bias to solve the sparse graph recovery problem.", "We presented a novel neural network, GLAD, for the sparse graph recovery problem based on an unrolled Alternating Minimization algorithm.", "We theoretically prove the linear convergence of AM algorithm as well as empirically show that learning can further improve the sparse graph recovery.", "The learned GLAD model is able to push the sample complexity limits thereby highlighting the potential of using algorithms as inductive biases for deep learning architectures.", "Further development of theory is needed to fully understand and realize the potential of this new direction.", "Alternating Minimization is performing", "Taking the gradient of the objective function with respect to Θ to be zero, we have", "Taking the gradient of the objective function with respect to Z to be zero, we have", "where", "Solving the above two equations, we obtain:", "where", "B LINEAR CONVERGENCE RATE ANALYSIS m , where ρ is the l 1 penalty, d is the dimension of problem and m is the number of samples, the Alternate Minimization algorithm has linear convergence rate for optimization objective defined in (6).", "The k th iteration of the AM algorithm satisfies,", "where 0 < C λ < 1 is a constant depending on λ.", "We will reuse the following notations in the appendix:", "The update rules for Alternating Minimization are:", "Assumptions: With reference to the theory developed in Rothman et al. 
(2008), we make the following assumptions about the true model.", "(O P (·) is used to denote bounded in probability.", ")", "We now proceed towards the proof: Lemma 2.", "For any x, y, k ∈ R, k > 0, x = y,", "Proof.", "where", "is the largest eigenvalue of X in absolute value.", "Proof.", "First we factorize X using eigen decomposition, X = Q X D X Q X , where Q X and D X are orthogonal matrix and diagonal matrix, respectively.", "Then we have,", "Similarly, the above equation holds for Y .", "Therefore,", "where we define Q := Q Y Q X .", "Similarly, we have,", "Then the i-th entry on the diagonal of", "ji .", "Using the fact that D X and D Y are diagonal, we have,", "The last step makes use of", "Similarly, using (42), we have,", "Assuming X − Y F > 0 (otherwise (37) trivially holds), using (52) and (50), we have,", "Using lemma (2), we have,", "Therefore,", "Lemma 4.", "Under assumption (2), the output of the k-th and", "where 0 < C λ < 1 is a constant depending on λ.", "Proof.", "The first part is easy to show, if we observe that in the second update step of AM (8), η ρ/λ is a contraction under metric d(X, Y ) = X − Y F .", "Therefore we have,", "Next we will prove the second part.", "To simplify notation, we let A(X) = X X + 4 λ I. Using the first update step of AM (7), we have,", "where", "The last derivation step makes use of the triangle inequality.", "Using lemma (3), we have,", "Therefore", "where", "Λ max (X) is the largest eigenvalue of X in absolute value.", "The rest is to show that both Λ max (Y λ ) and Λ max (Y k+1 ) are bounded using assumption (2).", "For Λ max (Y k+1 ), we have,", "Combining (62) and (68), we have,", "Therefore,", "Continuing with (73), we have,", "Since Z λ is the minimizer of a strongly convex function, its norm is bounded.", "And we also have", "Therefore both Λ max (Y λ ) and Λ max (Y k+1 ) are bounded in (70), i.e. 0 < C λ < 1 is a constant only depending on λ.", "m , where ρ is the l 1 penalty, d is the dimension of problem and m is the number of samples, the Alternate Minimization algorithm has linear convergence rate for optimization objective defined in (6).", "The k th iteration of the AM algorithm satisfies,", "where 0 < C λ < 1 is a constant depending on λ.", "Proof.", "(1) Error between Θ λ and Θ G Combining the following two equations:", "Note that by the optimality condition, ∇ z f ( Θ λ , Z λ , ρ, λ) = 0, we have the fixed point equation", "λ and we have:", "Since G is σ G -strongly convex, where σ G is independent of the sample covariance matrix Σ * as the hessian of G is independent of Σ * .", "Therefore,", "Proof.", "(2) Error between Θ G and Θ * Corollary 5 (Theorem 1.", "of Rothman et al. (2008)).", "Let Θ G be the minimizer for the optimization", "C EXPERIMENTAL DETAILS This section contains the detailed settings used in the experimental evaluation section." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.1428571343421936, 0.19999998807907104, 0.19999998807907104, 0.03999999538064003, 0.25, 0.1249999925494194, 0.04999999701976776, 0.1463414579629898, 0.13793103396892548, 0.15686273574829102, 0.1666666567325592, 0.21052631735801697, 0.11764705181121826, 0.10810810327529907, 0.09999999403953552, 0.20689654350280762, 0.13636362552642822, 0.05128204822540283, 0.15789473056793213, 0.1666666567325592, 0.11428570747375488, 0.2222222238779068, 0.23999999463558197, 0.12121211737394333, 0.13333332538604736, 0.2857142686843872, 0.21052631735801697, 0.07407406717538834, 0.2857142686843872, 0.1621621549129486, 0.2857142686843872, 0.06666666269302368, 0.10256409645080566, 0.29411762952804565, 0.5714285373687744, 0.3333333432674408, 0.14999999105930328, 0.06451612710952759, 0.21052631735801697, 0.06896550953388214, 0.06896550953388214, 0.09090908616781235, 0.20408162474632263, 0.1666666567325592, 0.07692307233810425, 0.08695651590824127, 0.27272728085517883, 0.05882352590560913, 0, 0.08695651590824127, 0, 0.0833333283662796, 0, 0, 0.1818181723356247, 0, 0, 0.1818181723356247, 0.07407406717538834, 0, 0, 0, 0, 0.08695651590824127, 0.07692307233810425, 0.04255318641662598, 0, 0.09090908616781235, 0.0555555522441864, 0.07999999821186066, 0, 0.07407406717538834, 0, 0, 0, 0, 0.06896550953388214, 0, 0.04999999701976776, 0.22727271914482117, 0.1666666567325592, 0.07692307233810425, 0.07407406717538834, 0.052631575614213943, 0, 0.0624999962747097, 0, 0, 0.260869562625885, 0.0714285671710968 ]
BkxpMTEtPB
true
[ "A data-driven learning algorithm based on unrolling the Alternating Minimization optimization for sparse graph recovery." ]
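A sketch of the two Alternating Minimization updates that this record's unrolled architecture is built on: the soft-thresholding Z-step (the η_{ρ/λ} operator in the proof sketch) and a closed-form Θ-step obtained from the stationarity condition (the X·X + (4/λ)·I term above). The fixed scalar λ and ρ are simplifications — in the learned model they are produced by small networks at every unrolled step — and the exact constants in the Θ-step are reconstructed from the extraction-damaged appendix text, so treat them as assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def soft_threshold(x, tau):
    # eta_tau(x): elementwise shrinkage used for the Z-update.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def am_unrolled(cov, lam=1.0, rho=0.1, n_steps=20):
    """Unrolled Alternating Minimization for the l1-penalized log-det
    objective.  This is only the inductive-bias skeleton: lam and rho are
    fixed here instead of being predicted by learned components."""
    d = cov.shape[0]
    theta = np.linalg.inv(cov + lam * np.eye(d))   # rough initialization
    z = theta.copy()
    for _ in range(n_steps):
        # Theta-step: stationary point of
        #   -logdet(Theta) + <cov, Theta> + (lam/2)||Theta - Z||_F^2,
        # solved in closed form via a matrix square root.
        y = cov / lam - z
        theta = 0.5 * (-y + np.real(sqrtm(y @ y + (4.0 / lam) * np.eye(d))))
        # Z-step: proximal soft-threshold with threshold rho / lam.
        z = soft_threshold(theta, rho / lam)
    return z  # sparse precision-matrix estimate
```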
[ "Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving Imperfect Information Games (IIG).", "However, the original CFR algorithm only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation.", "Such tabular representation limits the method from being directly applied to large games.", "In this paper, we propose a double neural representation for the IIGs, where one neural network represents the cumulative regret, and the other represents the average strategy. ", "Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization.", "To make the learning efficient, we also developed several novel techniques including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization (MCCFR) method, which may be of independent interests. ", "Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts, and significantly outperform those based on deep reinforcement learning. ", "On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR.", "On head-to-head matches of hands-up no-limit texas hold'em, our neural agent beat the strong agent ABS-CFR by $9.8\\pm4.1$ chips per game.", "It's a successful application of neural CFR in large games.\n", "While significant advance has been made in addressing large perfect information games, such as Go (Silver et al., 2016) , solving imperfect information games remains a challenging task.", "For Imperfect Information Games (IIG), a player has only partial knowledge about her opponents before making a decision, so that she has to reason under the uncertainty about her opponents' information while exploiting the opponents' uncertainty about herself.", "Thus, IIGs provide more realistic modeling than perfect information games for many real-world applications, such as trading, traffic routing, and politics.", "Nash equilibrium is a typical solution concept for a two-player perfect-recall IIG.", "One of the most effective approaches is CFR (Zinkevich et al., 2007) , which minimizes the overall counterfactual regret so that the average strategies converge to a Nash equilibrium.", "However the original CFR only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation.", "Such tabular representation limits the method from being directly applied to large games.", "To tackle this challenge, one can simplify the game by grouping similar states together to solve the simplified (abstracted) game approximately via tabular CFR (Zinkevich et al., 2007; Lanctot et al., 2009) .", "Constructing an effective abstraction, however, demands rich domain knowledge and its solution may be a coarse approximation of true equilibrium.", "Function approximation can be used to replace the tabular representation.", "Waugh et al. (2015) combines regression tree function approximation with CFR based on handcrafted features, which is called Regression CFR (RCFR).", "However, since RCFR uses full traversals of the game tree, it is still impractical for large games.", "Moravcik et al. (2017) propose a seminal approach DeepStack, which uses fully connected neural networks to represent players' counterfactual values, tabular CFR however was used in the subgame solving.", "Jin et al. 
(2017) use deep reinforcement learning to solve regret minimization problem for single-agent settings, which is different from two-player perfect-recall IIGs.", "To learn approximate Nash equilibrium for IIGs in an end-to-end manner, Heinrich et al. (2015) and Heinrich & Silver (2016) propose eXtensive-form Fictitious Play (XFP) and Neural Fictitious Self-Play (NFSP), respectively, based on deep reinforcement learning.", "In a NFSP model, the neural strategies are updated by selecting the best responses to their opponents' average strategies.", "These approaches are advantageous in the sense that they do not rely on abstracting the game, and accordingly their strategies can improve continuously with more optimization iterations.", "However fictitious play empirically converges much slower than CFR-based approaches.", "Srinivasan et al. (2018) use actor-critic policy optimization methods to minimize regret and achieve performance comparable to NFSP.", "Thus it remains an open question whether a purely neural-based end-to-end approach can achieve comparable performance to tabular based CFR approach.", "In the paper, we solve this open question by designing a double neural counterfactual regret minimization (DNCFR) algorithm 2 .", "To make a neural representation, we modeled imperfect information game by a novel recurrent neural network with attention.", "Furthermore, in order to improve the convergence of the neural algorithm, we also developed a new sampling technique which converged much more efficient than the outcome sampling, while being more memory efficient than the external sampling.", "In the experiment, we conducted a set of ablation studies related to each novelty.", "The experiments showed DNCRF converged to comparable results produced by its tabular counterpart while performing much better than NFSP.", "In addition, we tested DNCFR on extremely large game, heads-up no-limit Texas Hold'em (HUNL).", "The experiments showed that DNCFR with only a few number of parameters achieved strong neural strategy and beat ABS-CFR.", "h∈H denotes a possible history (or state), which consists of each player's hidden variable and actions taken by all players including chance.", "The empty sequence ∅ is a member of H. h j h denotes h j is a prefix of", "h. Z ⊆ H denotes the terminal histories and any member z ∈Z is not a prefix of any other sequences.", "A(h)={a:ha∈H} is the set of available actions after non-terminal history h ∈ H \\Z.", "A player function P assigns a member of N ∪{c} to each non-terminal history, where c is the chance ( we set c=−1).", "P (h) is the player who takes an action after history", "h. For each player i, imperfect information is denoted by information set (infoset) I i .", "All states h∈I i are indistinguishable to", "i. I i refers to the set of infosets of", "i. The utility function u i (z) defines the payoff of i at state z.", "See appendix B.1 for more details.", "Solving IIGs via function approximation methods is an important and challenging problem.", "Neural Fictitious Self-Play (NFSP) (Heinrich & Silver, 2016 ) is a function approximation method based on deep reinforcement learning, which is a prior leading method to solve IIG.", "However, fictitious play empirically converges slower than CFR-based approaches in many settings.", "Recently, Lockhart et al. 
(2019) propose a new framework to directly optimize the final policy against worst-case opponents.", "However, the authors consider only small games.", "Regression CFR (RCFR) (Waugh et al., 2015) is a function approximation method based on CFR.", "However, RCFR needs to traverse the full game tree.", "Such traversal is intractable in large games.", "In addition, RCFR uses hand-crafted features and regression tree to estimate cumulative regret rather than learning features from data.", "Deep learning empirically performs better than regression tree in many areas, such as the Transformer and BERT in natural language models (Ashish Vaswani, 2017; Jacob Devlin, 2018) .", "In the past year, concurrent works deep CFR (DCFR) (Brown et al., 2018) and single deep CFR (SD-CFR) (Steinberger, 2019) have been proposed to address this problem via deep learning.", "DCFR, SDCFR, RCFR and our DNCFR are based on the framework of counterfactual regret minimization.", "However, there are many differences in several important aspects, which are listed as follows.", "(1) We represent the extensive-form game by recurrent neural network.", "The proposed LSTM with attention performs better than fully connected network (see details in Section 3.2).", "(2) DNCFR updates the cumulative regret only based on the additionally collected samples in current iteration rather than using the samples in a big reservoir (see details in Section 3.3.1).", "(3) It's important to use squared-loss for the average strategies rather than log loss.", "Because the log loss is based on the big reservoir samples up to T -th iteration, it is very memory-expensive (see details in Section 3.3.2).", "(4) Another important aspect to make deep learning model work is that we divide regret by √ T and renormalize the regret, because the cumulative regret can grow unboundedly (see details in Section 3.3.1).", "(5) Also, DNCFR collects data by an efficiently unbiased mini-batch robust sampling method, which may be of independent interests to the IIG communities (see details in Section 4).", "There are also big differences in the experimental evaluations.", "In our method, we conduct a set of ablation studies in various settings.", "We believe that our ablation studies are informative and could have a significant impact on these kinds of algorithms.", "Also, we evaluate DNCFR on extremely large games while RCFR and SDCFR are only evaluated on small toy games.", "We proposed a novel double neural counterfactual regret minimization approach to solve large IIGs by combining many novel techniques, such as recurrent neural representation, attention, robust sampling, and mini-batch MCCFR.", "We conduct a set of ablation studies and the results show that these techniques may be of independent interests.", "This is a successful application of applying deep learning into large IIG.", "We believe DNCFR and other related neural methods open up a promising direction for future work.", "A GAME RULES" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06896550953388214, 0.05882352590560913, 0.07692307233810425, 0.2222222238779068, 0.2142857164144516, 0.08888888359069824, 0.1538461446762085, 0, 0.11428570747375488, 0.25, 0.14999999105930328, 0.1395348757505417, 0.05882352590560913, 0.0833333283662796, 0.09999999403953552, 0.060606054961681366, 0.07692307233810425, 0.1428571343421936, 0.060606054961681366, 0.08695651590824127, 0, 0.06666666269302368, 0.1428571343421936, 0.1111111044883728, 0, 0.19999998807907104, 0, 0, 0.06666666269302368, 0.12121211737394333, 0.25, 0.3448275923728943, 0.1428571343421936, 0.14814814925193787, 0.0624999962747097, 0, 0.1249999925494194, 0.05714285373687744, 0.07692307233810425, 0.060606054961681366, 0, 0.1111111044883728, 0, 0.14814814925193787, 0.09999999403953552, 0.0952380895614624, 0, 0, 0, 0.15789473056793213, 0, 0.19354838132858276, 0, 0.0714285671710968, 0.1818181723356247, 0, 0.06451612710952759, 0, 0.09756097197532654, 0.0714285671710968, 0, 0.260869562625885, 0.06666666269302368, 0.05128204822540283, 0.07407406717538834, 0.05405404791235924, 0.043478257954120636, 0.04878048226237297, 0, 0.07692307233810425, 0.1249999925494194, 0, 0.3414634168148041, 0.12903225421905518, 0.07999999821186066, 0.20689654350280762, 0 ]
ByedzkrKvH
true
[ "We proposed a double neural framework to solve large-scale imperfect information games. " ]
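The CFR family that this record builds on converts cumulative counterfactual regrets into a strategy by regret matching at each infoset; the double neural approach replaces the tabular cumulative regrets and average strategy with networks but keeps this conversion. A tabular, one-infoset sketch (the regret values are made up for illustration):

```python
import numpy as np

def regret_matching(cum_regret):
    """Map cumulative counterfactual regrets at one infoset to a strategy:
    normalize the positive part, or play uniformly if no regret is positive."""
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    return np.full(len(cum_regret), 1.0 / len(cum_regret))

# e.g. regrets (2.0, -1.0, 0.5) over three actions -> strategy (0.8, 0.0, 0.2)
print(regret_matching(np.array([2.0, -1.0, 0.5])))
```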
[ "We present the first verification that a neural network for perception tasks produces\n", "a correct output within a specified tolerance for every input of interest.", "We define\n", "correctness relative to a specification which identifies 1) a state space consisting of\n", "all relevant states of the world and 2) an observation process that produces neural\n", "network inputs from the states of the world.", "Tiling the state and input spaces with\n", "a finite number of tiles, obtaining ground truth bounds from the state tiles and\n", "network output bounds from the input tiles, then comparing the ground truth and\n", "network output bounds delivers an upper bound on the network output error for\n", "any input of interest.", "Results from two case studies highlight the ability of our\n", "technique to deliver tight error bounds for all inputs of interest and show how the\n", "error bounds vary over the state and input spaces.", "Neural networks are now recognized as powerful function approximators with impressive performance across a wide range of applications, especially perception tasks (e.g. vision, speech recognition).", "Current techniques, however, provide no correctness guarantees on such neural perception systemsthere is currently no way to verify that a neural network provides correct outputs (within a specified tolerance) for all inputs of interest.", "The closest the field has come is robustness verification, which aims to verify if the network prediction is stable for all inputs in some neighborhood around a selected input point.", "But robustness verification does not verify for all inputs of interest -it only verifies around local regions.", "Besides, it does not guarantee that the output, even if stable, is actually correct -there is no specification that defines the correct output for any input except for the manually-labeled center point of each region.", "We present the first correctness verification of neural networks for perception -the first verification that a neural network produces a correct output within a specified tolerance for every input of interest.", "Neural networks are often used to predict some property of the world given an observation such as an image or audio recording.", "We therefore define correctness relative to a specification which identifies", "1) a state space consisting of all relevant states of the world and", "2) an observation process that produces neural network inputs from the states of the world.", "Then the inputs of interest are all inputs that can be observed from the state space via the observation process.", "We define the set of inputs of interest as the feasible input space.", "Because the quantity of interest that the network predicts is some property of the state of the world, the state defines the ground truth output (and therefore defines the correct output for each input to the neural network).", "We present Tiler, the algorithm for correctness verification of neural networks.", "Evaluating the correctness of the network on a single state is straightforward -use the observation process to obtain the possible inputs for that state, use the neural network to obtain the possible outputs, then compare the outputs to the ground truth from the state.", "To do correctness verification, we generalize this idea to work with tiled state and input spaces.", "We cover the state and input spaces with a finite number of tiles: each state tile comprises a set of states; each input tile is the image of the corresponding state tile under 
the observation process.", "The state tiles provide ground truth bounds for the corresponding input tiles.", "We use recently developed techniques from the robustness verification literature to obtain network output bounds for each input tile (Xiang et al., 2018; Gehr et al., 2018; Weng et al., 2018; Bastani et al., 2016; Lomuscio and Maganti, 2017; Tjeng et al., 2019) .", "A comparison of the ground truth and output bounds delivers an error upper bound for that region of the state space.", "The error bounds for all the tiles jointly provide the correctness verification result.", "We present two case studies.", "The first involves a world with a (idealized) fixed road and a camera that can vary its horizontal offset and viewing angle with respect to the centerline of the road (Section 5).", "The state of the world is therefore characterized by the offset δ and the viewing angle θ.", "A neural network takes the camera image as input and predicts the offset and the viewing angle.", "The state space includes the δ and θ of interest.", "The observation process is the camera imaging process, which maps camera positions to images.", "This state space and the camera imaging process provide the specification.", "The feasible input space is the set of camera images that can be observed from all camera positions of interest.", "For each image, the camera positions of all the states that can produce the image give the possible ground truths.", "We tile the state space using a grid on (δ, θ).", "Each state tile gives a bound on the ground truth of δ and θ.", "We then apply the observation process to project each state tile into the image space.", "We compute a bounding box for each input tile and apply techniques from robustness verification (Tjeng et al., 2019) to obtain neural network output bounds for each input tile.", "Comparing the ground truth bounds and the network output bounds gives upper bounds on network prediction error for each tile.", "We verify that our trained neural network provides good accuracy across the majority of the state space of interest and bound the maximum error the network will ever produce on any feasible input.", "The second case study verifies a neural network that classifies a LiDAR measurement of a sign in an (idealized) scene into one of three shapes (Section 6).", "The state space includes the position of the LiDAR sensor and the shape of the sign.", "We tile the state space, project each tile into the input space via the LiDAR observation process, and again apply techniques from robustness verification to verify the network, including identifying regions of the input space where the network may deliver an incorrect classification.", "The techniques presented in this paper work with specifications provided by the combination of a state space of the world and an observation process that converts states into neural network inputs.", "Results from the case studies highlight how well the approach works for a state space characterized by several attributes and a camera imaging or LiDAR measurement observation process.", "We anticipate that the technique will also work well for other problems that have a low dimensional state space (but potentially a high dimensional input space).", "For higher dimensional state spaces, the framework makes it possible to systematically target specific regions of the input space to verify.", "Potential applications include targeted verification, directed testing, and the identification of illegal inputs for which the network is not 
expected to work on." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7222222089767456, 0.6470588445663452, 0.11428570747375488, 0.2702702581882477, 0.20000000298023224, 0.13333332538604736, 0.1621621549129486, 0.22857142984867096, 0.23529411852359772, 0.2222222238779068, 0.12121211737394333, 0.21052631735801697, 0.1249999925494194, 0.16326530277729034, 0.37037035822868347, 0.19607841968536377, 0.19999998807907104, 0.26923075318336487, 0.8936170339584351, 0.09090908616781235, 0.12121211737394333, 0.17142856121063232, 0.3243243098258972, 0.19999998807907104, 0.29411762952804565, 0.40816324949264526, 0.4117647111415863, 0.2641509473323822, 0.05128204822540283, 0.21276594698429108, 0.1764705777168274, 0.24561403691768646, 0.2380952388048172, 0.17142856121063232, 0.1428571343421936, 0.20408162474632263, 0.10526315122842789, 0.21621620655059814, 0.1818181723356247, 0.0555555522441864, 0.060606054961681366, 0.24390242993831635, 0.14999999105930328, 0.1764705777168274, 0.1621621549129486, 0.10810810327529907, 0.3265306055545807, 0.20512819290161133, 0.31372547149658203, 0.21276594698429108, 0.11428570747375488, 0.20689654350280762, 0.23076923191547394, 0.12244897335767746, 0.260869562625885, 0.1428571343421936, 0.17777776718139648 ]
B1gtK0NKwr
true
[ "We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest. " ]
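A sketch of the verification loop this record describes: grid the (δ, θ) state space, read ground-truth bounds off each state tile, bound the network output over the corresponding input tile, and take the worst disagreement as the per-tile error bound. Here `net_bounds` stands in for a sound output-bounding routine (the robustness-verification machinery cited in the record), and the input tile is approximated by rendering only the tile corners — both are assumptions made to keep the sketch short and runnable, and the corner approximation is not sound.

```python
import itertools
import numpy as np

def verify_tiles(render, net_bounds, deltas, thetas):
    """Upper-bound the prediction error of a network mapping an image to
    (delta, theta) on every tile of a grid over the state space.
    `render(delta, theta)` is the observation process; `net_bounds(images)`
    must return (lower, upper) elementwise bounds on the network output
    over the input tile represented by `images`."""
    errors = np.zeros((len(deltas) - 1, len(thetas) - 1))
    for i, j in itertools.product(range(errors.shape[0]), range(errors.shape[1])):
        gt_lo = np.array([deltas[i], thetas[j]])          # ground-truth bounds
        gt_hi = np.array([deltas[i + 1], thetas[j + 1]])  # from the state tile
        # Input tile, approximated here by the images at the four tile corners.
        images = np.stack([render(d, t)
                           for d in (gt_lo[0], gt_hi[0])
                           for t in (gt_lo[1], gt_hi[1])])
        out_lo, out_hi = net_bounds(images)
        # Worst |prediction - ground truth| over the output and state intervals.
        errors[i, j] = np.max(np.maximum(out_hi - gt_lo, gt_hi - out_lo))
    return errors  # the max over this array bounds the error on any feasible input
```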
[ "Deep generative models have achieved remarkable progress in recent years.", "Despite this progress, quantitative evaluation and comparison of generative models remains as one of the important challenges.", "One of the most popular metrics for evaluating generative models is the log-likelihood.", "While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models such as variational autoencoders (VAE) or generative adversarial networks (GAN) can be efficiently estimated using annealed importance sampling (AIS).", "In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models.", "We show that we can approximate the entire rate distortion curve using one single run of AIS for roughly the same computational cost as a single log-likelihood estimate.", "We evaluate lossy compression rates of different deep generative models such as VAEs, GANs (and its variants) and adversarial autoencoders (AAE) on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone.", "Generative models of images represent one of the most exciting areas of rapid progress of AI (Brock et al., 2019; Karras et al., 2018b; a) .", "However, evaluating the performance of generative models remains a significant challenge.", "Many of the most successful models, most notably Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) , are implicit generative models for which computation of log-likelihoods is intractable or even undefined.", "Evaluation typically focuses on metrics such as the Inception score (Salimans et al., 2016) or the Fréchet Inception Distance (FID) score (Heusel et al., 2017) , which do not have nearly the same degree of theoretical underpinning as likelihood-based metrics.", "Log-likelihoods are one of the most important measures of generative models.", "Their utility is evidenced by the fact that likelihoods (or equivalent metrics such as perplexity or bits-per-dimension) are reported in nearly all cases where it's convenient to compute them.", "Unfortunately, computation of log-likelihoods for implicit generative models remains a difficult problem.", "Furthermore, log-likelihoods have important conceptual limitations.", "For continuous inputs in the image domain, the metric is often dominated by the fine-grained distribution over pixels rather than the high-level structure.", "For models with low-dimensional support, one needs to assign an observation model, such as (rather arbitrary) isotropic Gaussian noise (Wu et al., 2016) .", "Lossless compression metrics for GANs often give absurdly large bits-per-dimension (e.g. 10 14 ) which fails to reflect the true performance of the model (Grover et al., 2018; Danihelka et al., 2017) .", "See Theis et al. 
(2015) for more discussion of limitations of likelihood-based evaluation.", "Typically, one is not interested in describing the pixels of an image directly, and it would be sufficient to generate images close to the true data distribution in some metric such as Euclidean distance.", "For this reason, there has been much interest in Wasserstein distance as a criterion for generative models, since the measure exploits precisely this metric structure Gulrajani et al., 2017; Salimans et al., 2018) .", "However, Wasserstein distance remains difficult to approximate, and hence it is not routinely used to evaluate generative models.", "We aim to achieve the best of both worlds by measuring lossy compression rates of deep generative models.", "In particular, we aim to estimate the rate distortion function, which measures the number of bits required to match a distribution to within a given distortion.", "Like Wasserstein distance, it can exploit the metric structure of the observation space, but like log-likelihoods, it connects to the rich literature of probabilistic and information theoretic analysis of generative models.", "By focusing on different parts of the rate distortion curve, one can achieve different tradeoffs between the description length and the fidelity of reconstruction -thereby fixing the problem whereby lossless compression focuses on the details at the expense of high-level structure.", "It has the further advantage that the distortion metric need not have a probabilistic interpretation; hence, one is free to use more perceptually valid distortion metrics such as structural similarity (SSIM) (Wang et al., 2004) or distances between hidden representations of a convolutional network (Huang et al., 2018) .", "Algorithmically, computing rate distortion functions raises similar challenges to estimating loglikelihoods.", "We show that the rate distortion curve can be computed by finding the normalizing constants of a family of unnormalized probability distributions over the noise variables z.", "Interestingly, when the distortion metric is squared error, these distributions correspond to the posterior distributions of z for Gaussian observation models with different variances; hence, the rate distortion analysis generalizes the evaluation of log-likelihoods with Gaussian observation models.", "Annealed Importance Sampling (AIS) (Neal, 2001 ) is currently the most effective general-purpose method for estimating log-likelihoods of implicit generative models, and was used by Wu et al. 
(2016) to compare log-likelihoods of a variety of implicit generative models.", "The algorithm is based on gradually interpolating between a tractable initial distribution and an intractable target distribution.", "We show that when AIS is used to estimate log-likelihoods under a Gaussian observation model, the sequence of intermediate distributions corresponds to precisely the distributions needed to compute the rate distortion curve.", "Since AIS maintains a stochastic lower bound on the normalizing constants of these distributions, it automatically produces an upper bound on the entire rate distortion curve.", "Furthermore, the tightness of the bound can be validated on simulated data using bidirectional Monte Carlo (BDMC) (Grosse et al., 2015; Wu et al., 2016) .", "Hence, we can approximate the entire rate distortion curve for roughly the same computational cost as a single log-likelihood estimate.", "We use our rate distortion approximations to study a variety of variational autoencoders (VAEs) (Kingma & Welling, 2013) , GANs and adversarial autoencoders (AAE) (Makhzani et al., 2015) , and arrive at a number of insights not obtainable from log-likelihoods alone.", "For instance, we observe that VAEs and GANs have different rate distortion tradeoffs: While VAEs with larger code size can generally achieve better lossless compression rates, their performances drop at lossy compression in the low-rate regime.", "Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion at the high-rate regime without any corresponding deterioration in quality in the low-rate regime.", "We find that increasing the capacity of GANs by increasing the code size (width) has a qualitatively different effect on the rate distortion tradeoffs than increasing the depth.", "We also find that that different GAN variants with the same code size achieve nearly identical RD curves, and that the code size dominates the performance differences between GANs.", "In this work, we studied rate distortion approximations for evaluating different generative models such as VAEs, GANs and AAEs.", "We showed that rate distortion curves provide more insights about the model than the log-likelihood alone while requiring roughly the same computational cost.", "For instance, we observed that while VAEs with larger code size can generally achieve better lossless compression rates, their performances drop at lossy compression in the low-rate regime.", "Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion at the high-rate regime without any corresponding deterioration in quality in the low-rate regime.", "This may help explain the success of large GAN architectures (Brock et al., 2019; Karras et al., 2018a; b) .", "We also discovered that increasing the capacity of GANs by increasing the code size (width) has a very different effect than increasing the depth.", "The former extends the rate distortion curves leftwards, while the latter pushes the curves down.", "We also found that different GAN variants with the same code size has almost similar rate distortion curves, and that the code size dominates the algorithmic differences of GANs.", "Overall, lossy compression yields a richer and more complete picture of the distribution modeling performance of generative models.", "The ability to quantitatively measure performance tradeoffs should lead to algorithmic insights which can improve these models." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05128204822540283, 0.13333332538604736, 0.24390242993831635, 0.1230769157409668, 0.3333333134651184, 0.4363636374473572, 0.1904761791229248, 0.03999999538064003, 0.14999999105930328, 0.13793103396892548, 0.06557376682758331, 0.10256409645080566, 0.06896550953388214, 0.09756097197532654, 0, 0.08163265138864517, 0, 0.09999999403953552, 0.09756097197532654, 0.06666666269302368, 0.13333332538604736, 0.08695651590824127, 0.17391303181648254, 0.11999999731779099, 0.1090909019112587, 0.13114753365516663, 0.10958903282880783, 0.09999999403953552, 0.22641508281230927, 0.14035087823867798, 0.158730149269104, 0.04444443807005882, 0.2142857164144516, 0.11538460850715637, 0.038461532443761826, 0.375, 0.2461538463830948, 0.158730149269104, 0.07843136787414551, 0.23076923191547394, 0.19230768084526062, 0.2916666567325592, 0.800000011920929, 0.1071428507566452, 0.07843136787414551, 0.04255318641662598, 0.16326530277729034, 0.24390242993831635, 0.2641509473323822, 0.17391303181648254, 0.04444443807005882 ]
ryga2CNKDH
true
[ "We study rate distortion approximations for evaluating deep generative models, and show that rate distortion curves provide more insights about the model than the log-likelihood alone while requiring roughly the same computational cost." ]
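The key object in this record is the family of unnormalized distributions over the code z whose normalizing constants trace out the rate-distortion curve; with squared-error distortion they coincide with the posteriors under Gaussian observation models of different variances, which is why a single AIS run covers the whole curve. A sketch of one such unnormalized log-density, assuming a standard-normal prior over z (the decoder and distortion metric are whatever the evaluated model uses):

```python
import numpy as np

def log_unnormalized_target(z, x, decoder, beta):
    """log[ p(z) * exp(-beta * d(x, g(z))) ] with a standard-normal prior and
    squared-error distortion.  Up to a constant this is the posterior over z
    under a Gaussian observation model with variance 1 / (2 * beta); AIS
    anneals beta upward, and its weights bound the normalizing constants of
    this family, hence the rate-distortion curve."""
    log_prior = -0.5 * np.dot(z, z) - 0.5 * z.size * np.log(2.0 * np.pi)
    distortion = np.sum((x - decoder(z)) ** 2)
    return log_prior - beta * distortion
```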
[ "Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time.", "Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks.", "To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning.", "Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context.", "Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents.", "We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments.", "We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads.", "Both model-based and model-free reinforcement learning (RL) methods generally operate in one of two regimes: all training is performed in advance, producing a model or policy that can be used at test-time to make decisions in settings that approximately match those seen during training; or, training is performed online (e.g., as in the case of online temporal-difference learning), in which case the agent can slowly modify its behavior as it interacts with the environment.", "However, in both of these cases, dynamic changes such as failure of a robot's components, encountering a new terrain, environmental factors such as lighting and wind, or other unexpected perturbations, can cause the agent to fail.", "In contrast, humans can rapidly adapt their behavior to unseen physical perturbations and changes in their dynamics BID6 : adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children that can walk on carpet and grass can quickly figure out how to walk on ice without having to relearn how to walk.", "How is this possible?", "If an agent has encountered a large number of perturbations in the past, it can in principle use that experience to learn how to adapt.", "In this work, we propose a meta-learning approach for learning online adaptation.Motivated by the ability to tackle real-world applications, we specifically develop a model-based meta-reinforcement learning algorithm.", "In this setting, data for updating the model is readily available at every timestep in the form of recent experiences.", "But more crucially, the meta-training process for training such an adaptive model can be much more sample efficient than model-free meta-RL approaches BID11 BID55 .", "Further, our approach foregoes the episodic framework on which model-free meta-RL approaches rely on, where tasks are pre-defined to be different rewards or environments, and tasks exist at the trajectory level only.", "Instead, our method considers each timestep to potentially be a new 
\"task, \" where any detail or setting could have changed at any timestep.", "This view induces a more general meta-RL problem setting by allowing the notion of a task to represent anything from existing in a different part of the state space, to experiencing disturbances, or attempting to achieve a new goal.Learning to adapt a model alleviates a central challenge of model-based reinforcement learning: the problem of acquiring a global model that is accurate throughout the entire state space.", "Furthermore, even if it were practical to train a globally accurate dynamics model, the dynamics inherently change as a function of uncontrollable and often unobservable environmental factors, such as those mentioned above.", "If we have a model that can adapt online, it need not be perfect everywhere a priori.", "This property has previously been exploited by adaptive control methods BID2 BID45 BID38 ; but, scaling such methods to complex tasks and nonlinear systems is exceptionally difficult.", "Even when working with deep neural networks, which have been used to model complex nonlinear systems BID21 , it is exceptionally difficult to enable adaptation, since such models typically require large amounts of data and many gradient steps to learn effectively.", "By specifically training a neural network model to require only a small amount of experience to adapt, we can enable effective online adaptation in complex environments while putting less pressure on needing a perfect global model.The primary contribution of our work is an efficient meta reinforcement learning approach that achieves online adaptation in dynamic environments.", "To the best knowledge of the authors, this is the first meta-reinforcement learning algorithm to be applied in a real robotic system.", "Our algorithm efficiently trains a global model that is capable to use its recent experiences to quickly adapt, achieving fast online adaptation in dynamic environments.", "We evaluate two versions of our approach, recurrence-based adaptive learner (ReBAL) and gradient-based adaptive learner (GrBAL) on stochastic and simulated continuous control tasks with complex contact dynamics (Fig. 
2) .", "In our experiments, we show a quadrupedal \"ant\" adapting to the failure of different legs, as well as a \"half-cheetah\" robot adapting to the failure off different joints, navigating terrains with different slopes, and walking on floating platforms of varying buoyancy.", "Our model-based meta RL method attains substantial improvement over prior approaches, including standard model-based methods, online model-adaptive methods, model-free methods, and prior meta-reinforcement learning methods, when trained with similar amounts of data.", "In all experiments, meta-training across multiple tasks is sample efficient, using only the equivalent of 1.5 − 3 hours of real-world experience, roughly 10× less than what model-free methods require to learn a single task.", "Finally, we demonstrate GrBAL on a real dynamic legged millirobot (see Fig 2) .", "To highlight not only the sample efficiency of our meta model-based reinforcement learning approach, but also the importance of fast online adaptation in the real world, we show the agent's learned ability to adapt online to tasks such as a missing leg, novel terrains and slopes, miscalibration or errors in pose estimation, and new payloads to be pulled.", "In this work, we present an approach for model-based meta-RL that enables fast, online adaptation of large and expressive models in dynamic environments.", "We show that meta-learning a model for online adaptation results in a method that is able to adapt to unseen situations or sudden and drastic changes in the environment, and is also sample efficient to train.", "We provide two instantiations of our approach (ReBAL and GrBAL), and we provide a comparison with other prior methods on a range of continuous control tasks.", "Finally, we show that (compared to model-free meta-RL approaches), our approach is practical for real-world applications, and that this capability to adapt quickly is particularly important under complex real-world dynamics." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.11320754140615463, 0.260869562625885, 0.19999998807907104, 0.09999999403953552, 0.06666666269302368, 0.1818181723356247, 0.25, 0.15789473056793213, 0.1702127605676651, 0.15625, 0, 0.2631579041481018, 0.25, 0.05882352590560913, 0.052631575614213943, 0.08888888359069824, 0.10810810327529907, 0.21875, 0.09090908616781235, 0.19354838132858276, 0.04878048226237297, 0.03703703358769417, 0.22580644488334656, 0.2857142686843872, 0.41025641560554504, 0, 0.12765957415103912, 0.0952380895614624, 0.07999999821186066, 0.2142857164144516, 0.21875, 0.42105263471603394, 0.27272728085517883, 0.05405404791235924, 0.19512194395065308 ]
HyztsoC5Y7
true
[ "A model-based meta-RL algorithm that enables a real robot to adapt online in dynamic environments" ]
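The gradient-based variant described in this record meta-trains dynamics-model parameters so that one gradient step on the most recent M transitions gives accurate predictions on the following K. The sketch below substitutes a linear dynamics model and a first-order (FOMAML-style) outer update for the paper's neural network and full meta-gradient; both substitutions, and all names, are simplifications for illustration.

```python
import numpy as np

def adapt(theta, X_recent, Y_recent, alpha):
    """One inner gradient step on the most recent transitions.
    Model: next-state delta ~ X @ theta (linear, for illustration)."""
    grad = 2.0 * X_recent.T @ (X_recent @ theta - Y_recent) / len(Y_recent)
    return theta - alpha * grad

def meta_train(trajs, dim_in, dim_out, alpha=0.01, lr=1e-3,
               M=16, K=16, iters=1000, seed=0):
    """First-order meta-training: sample a (recent, future) window pair,
    adapt on the recent window, and update theta with the gradient of the
    post-adaptation loss on the future window.  `trajs` is a list of
    (X, Y) arrays assumed longer than M + K steps."""
    rng = np.random.default_rng(seed)
    theta = 0.01 * rng.standard_normal((dim_in, dim_out))
    for _ in range(iters):
        X, Y = trajs[rng.integers(len(trajs))]
        t = rng.integers(M, len(X) - K)
        theta_adapt = adapt(theta, X[t - M:t], Y[t - M:t], alpha)
        # Outer update, first-order approximation: gradient taken at the
        # adapted parameters rather than differentiating through `adapt`.
        Xf, Yf = X[t:t + K], Y[t:t + K]
        outer_grad = 2.0 * Xf.T @ (Xf @ theta_adapt - Yf) / K
        theta -= lr * outer_grad
    return theta
```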
[ "Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc).", "During training, these approaches often implicitly construct a latent space that contains key information for decision making.", "In this paper, we learn a forward model on this latent space and apply it to model-based planning in miniature Real-time Strategy game with incomplete information (MiniRTS).", "We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models.", "We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents.", "Model-free deep reinforcement learning (DRL) approaches (e.g., deep Q-learning BID14 ], DDPG BID12 ], A3C BID16 ], etc) have been applied extensively in many simulated environments with complete information and relatively simple game dynamics (e.g., Atari games, Go ], Doom, etc).", "The learned agent, which acts reactively based on the current game situation, can even achieve superhuman performance.However, for complicated environments, planning ahead (or \"predicting the future\") before making an actual decision is important.", "Such a planning procedure requires a forward model that estimates the next state s t+1 given the current state s t and action a t , which is in general non-trivial to construct and estimate from the high-dimensional raw input.", "For partially observable environments (e.g., Real-time Strategy Games like StarCraft), constructing a forward model is more difficult even with a perfect domain knowledge of the game, due to the deliberate concealing of information and the additional requirement to capture the belief of the unknown for the agent.A natural question now arises.", "Could we borrow the success of model-free approach to learn a forward model?", "Note that in model-free approaches, a single shared network (called \"trunk\") is often used to extract features from the input game situation to obtain a latent representation.", "From the latent space, multiple reinforcement learning quantities (Q-function, value function V , advantage function A, etc) are predicted via simple linear transformations and used for decision making.", "Strong performance of these approaches indicates that the learned latent space must have captured key ingredients of the input situation and remains low-dimensional.", "Therefore, it is an excellent candidate for the state representation of a forward model.In this paper, we study whether it is possible to use the latent space learned by model-free approaches to construct forward models.", "We use MiniRTS ], an efficient and simple twoplayer Real-time Strategy (RTS) game.", "MiniRTS captures the basic dynamics of its kind: the agent builds units (workers and troops) that consume resources, gathers resources, explores regions out of sights (\"fog of war\"), defends enemy's attack, and invades enemy's base.", "This game is incomplete information, because the agent can only see within its sight, and does not know the action of its opponent by default.", "Rather than unit based control as in ; ; ], the agent uses 9 discrete actions to control the overall strategy (e.g., build a particular kind of troops, attack or defend).Our", "contributions are three-fold: First, we propose to study the relationship between the latent space 
learned by model-free approaches and the state representation of forward models. Very", "few works (e.g, DARLA BID10 ], DQN BID15 ]) in model-free RL study these properties in depth, let alone using the latent state in model-based approaches for incomplete information game. To our", "knowledge, we are one of the first works to explore such directions. Second", ", we improve the performance of model-based agent in MiniRTS by input feature design and show that the latent space learned from actor-critic models BID16 ] can reconstruct critical information of the game, e.g., Hit Point of the base and available resources. Finally", ", we propose novel algorithms that learn a forward model that maps a latent state h t to its future counterpart h t (t > t) with reduced drifting. Such a", "forward model enables us to use model-based planning such as Monte-Carlo Tree Search (MCTS) in incomplete information games. We show", "positive performance (8% higher than random planning) in terms of win rates against rule-based agents.", "Latent space learned by model-free reinforcement learning encodes important information for an agent to make sensible decisions to maximize the reward in a complicated simulated environment.", "In this paper, we verify the power of latent space of successfully trained model-free agent, and propose several methods to learn forward models on this space, in a real-time strategy game with incomplete information.", "Despite an extremely hard problem, we learn forward models that make it possible to use planning approaches such as Monte Carlo Tree Search, and show consistently positive gains over baselines.A lot of future works follow.", "As a first step, although we show that it is possible to learn a forward model for incomplete information Real-time Strategy games to enable model-based planning in the latent space, it remains an open problem how to improve its performance.", "It is possible that despite a good forward model is learned, the value function is not good enough, e.g., putting too much focus on the on-policy trajectory, for Monte-Carlo Tree Search.", "Also, in this paper we use predefined 9 global actions for the game.", "How to automatically learn global actions from unit-based commands that are exponentially large, is still an challenging issue to solve." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.21739129722118378, 0.4727272689342499, 0.30188679695129395, 0.29999998211860657, 0.12121211737394333, 0.12903225421905518, 0.2295081913471222, 0.21621620655059814, 0.2380952388048172, 0.2222222238779068, 0.1071428507566452, 0.2800000011920929, 0.4000000059604645, 0.0476190447807312, 0.06896550953388214, 0.1538461446762085, 0.13333332538604736, 0.37735849618911743, 0.23333333432674408, 0.0952380895614624, 0.2985074520111084, 0.1818181723356247, 0.3265306055545807, 0.13636362552642822, 0.3333333134651184, 0.3606557250022888, 0.2461538463830948, 0.3384615480899811, 0.20689654350280762, 0.1428571343421936, 0.0416666604578495 ]
H1LAqMbRW
true
[ "The paper analyzes the latent space learned by model-free approaches in a miniature incomplete information game, trains a forward model in the latent space and apply it to Monte-Carlo Tree Search, yielding positive performance." ]
[ "Several state of the art convolutional networks rely on inter-connecting different layers to ease the flow of information and gradient between their input and output layers.", "These techniques have enabled practitioners to successfully train deep convolutional networks with hundreds of layers.", "Particularly, a novel way of interconnecting layers was introduced as the Dense Convolutional Network (DenseNet) and has achieved state of the art performance on relevant image recognition tasks.", "Despite their notable empirical success, their theoretical understanding is still limited.", "In this work, we address this problem by analyzing the effect of layer interconnection on the overall expressive power of a convolutional network.", "In particular, the connections used in DenseNet are compared with other types of inter-layer connectivity.", "We carry out a tensor analysis on the expressive power inter-connections on convolutional arithmetic circuits (ConvACs) and relate our results to standard convolutional networks.", "The analysis leads to performance bounds and practical guidelines for design of ConvACs.", "The generalization of these results are discussed for other kinds of convolutional networks via generalized tensor decompositions.", "Recently, densely connected networks such as FractalNet BID8 , ResNet BID6 , and DenseNet BID7 , have obtained state of the art performance on large problems where highly deep network configurations are used.", "Adding dense connections between different layers of a network virtually shortens its depth, thus allowing a better flow of information and gradient through the network.", "This makes possible the training of highly deep models.", "Models with these types of connections have been successfully trained with hundreds of layers.", "More specifically, DenseNets have achieved state of the art performance on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets, using models of up to 1 thousand layers in depth.", "Nevertheless, whether these connections provide a fundamental enhancement on the expressive power of a network, or just improve the training of the model, is still an open question.", "In BID7 , DenseNet models with 3 times less parameters than its counterpart (ResNets) were able to achieve the same performance on the ImageNet challenge.", "Moreover, a theoretical understanding of why the connections used by DenseNets lead to better performance compared with FractalNets or ResNets is still pending.Despite the popularity of these models, there are few theoretical frameworks explaining the power of these models and providing insights to their performance.", "In , the authors considered convolutional networks with linear activations and product pooling layers, called convolutional arithmetic circuits (ConvACs), and argued for the expressiveness of deep networks using a tensor based analysis.", "This analysis has been extended to rectifier based convolutional networks via generalization of the tensor product .", "In , it was shown that ConvACs enjoy a greater expressive power than rectifier based models despite the popularity of rectifier based networks in practice.", "Indeed the empirical relevance of ConvAC was demonstrated through an architecture called SimNets .", "In addition, the generative ConvAC of BID11 achieved state of the art performance in classification of images with missing pixels.", "These results served as motivation for the works of ; ; BID9 ; BID10 , where different aspects of ConvACs were studied from a theoretical 
perspective.In the inductive bias introduced by pooling geometries was studied.", "Later, BID9 makes use of the quantum entanglement measure to analyze the inductive bias introduced by the correlations among the channels of ConvACs.", "Moreover, BID10 generalizes the convolutional layer of ConvACs by allowing overlapping receptive fields, in other words permitting stride values lower than the convolution patch size.", "These locally overlapping connections led to an enhancement on the expressive capacity of ConvACs.", "The notion of inter-layer connectivity for ConvACs was addressed by in the context of sequential data processing, such as audio and text related tasks.", "In that work, the expressive capabilities of interconnecting processing blocks from a sequence was studied.", "Nevertheless, these types of interconnections are related to the sequential nature of the problem and different from the ones used in ResNet, FractalNet and DenseNet.In this work, we extend the tensor analysis framework of to obtain insightful knowledge about the effect of dense connections, from the kind used in DenseNets, FractalNet and ResNet, on the expressiveness of deep ConvACs.", "We study the expressive capabilities provided by different types of dense connections.", "Moreover, from these results we derive performance bounds and practical guidelines for selection of the hyperparameters of a deep ConvAC, such as layer widths and the topology of dense connections.", "These results serve as the first step into understanding dense connectivity in rectifier networks as well, since they can be further extended to include rectifier linear units, in the same spirit as the generalization of the tensor products done by .The", "remainder of this paper is organized as follows. In", "Section 2, we introduce the notation and basic concepts from tensor algebra. In", "Section 3, we present the tensor representation of ConvACs as introduced by , and later in Section 4, we obtain tensor representations for densely connected ConvACs. In", "Section 5, performance bounds and design guidelines are derived for densely connected ConvACs." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.0714285671710968, 0.10256409645080566, 0, 0.24242423474788666, 0.3571428656578064, 0.2857142686843872, 0.07692307233810425, 0.27586206793785095, 0.13636362552642822, 0.17142856121063232, 0.1818181723356247, 0.1599999964237213, 0.20512820780277252, 0.2702702581882477, 0.05405404791235924, 0.23529411852359772, 0.1463414579629898, 0.27586206793785095, 0.277777761220932, 0.1538461446762085, 0.19999998807907104, 0.09090908616781235, 0.1875, 0.1621621549129486, 0.29629629850387573, 0.1666666567325592, 0.2142857164144516, 0.1818181723356247, 0.4000000059604645, 0.1538461446762085, 0.1702127605676651, 0.09090908616781235, 0.1538461446762085, 0.2222222238779068, 0 ]
Byj54-bAW
true
[ "We analyze the expressive power of the connections used in DenseNets via tensor decompositions." ]
[ "We consider the following central question in the field of Deep Reinforcement Learning (DRL):", "How can we use implicit human feedback to accelerate and optimize the training of a DRL algorithm?", "State-of-the-art methods rely on any human feedback to be provided explicitly, requiring the active participation of humans (e.g., expert labeling, demonstrations, etc.).", "In this work, we investigate an alternative paradigm, where non-expert humans are silently observing (and assessing) the agent interacting with the environment.", "The human's intrinsic reactions to the agent's behavior is sensed as implicit feedback by placing electrodes on the human scalp and monitoring what are known as event-related electric potentials.", "The implicit feedback is then used to augment the agent's learning in the RL tasks.", "We develop a system to obtain and accurately decode the implicit human feedback (specifically error-related event potentials) for state-action pairs in an Atari-type environment.", "As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games using an electroencephalogram (EEG) cap, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm with the intent of accelerating its learning of the game.", "Building atop the baseline, we then make the following novel contributions in our work:\n(i) We argue that the definition of error-potentials is generalizable across different environments; specifically we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials. 
\n", "(ii) We propose two different frameworks to combine recent advances in DRL into the error-potential based feedback system in a sample-efficient manner, allowing humans to provide implicit feedback while training in the loop, or prior to the training of the RL agent.\n", "(iii) Finally, we scale the implicit human feedback (via ErrP) based RL to reasonably complex environments (games) and demonstrate the significance of our approach through synthetic and real user experiments.\n", "Deep Reinforcement Learning (DRL) algorithms have now beaten human experts in Go (Silver et al., 2017) , taught robots to become parkour masters , and enabled truly autonomous vehicles (Wang et al., 2018) .", "However, current state-of-the-art RL agents equipped with deep neural networks are inherently complex, difficult and time-intensive to train.", "Particularly in complex environments with sparse reward functions (e.g., maze navigation), the DRL agents need an inordinate amount of interaction with the environment to learn the optimal policy.", "Human participation can potentially help DRL algorithms by accelerating their training and reducing the learning costs without compromising final performance.", "This potential has inspired a several research efforts where either an alternative (or supplementary) feedback is obtained from the human participant (Knox, 2012) .", "Such approaches despite being highly effective, severely burden the human-in-the-loop demanding either expert demonstrations (Ross et al., 2011) or explicit feedback (Christiano et al., 2017) .", "In this paper, we investigate an alternative paradigm that substantially increases the richness of the reward functions, while not severely burdening the human-in-the-loop.", "We study the use of electroencephalogram (EEG) based brain waves of the human-in-the-loop to generate the reward functions that can be used by the DRL algorithms.", "Such a model will benefit from the natural rich activity of a powerful sensor (the human brain), but at the same time not burden the human if the activity being relied upon is intrinsic.", "This paradigm is inspired by a high-level error-processing system in humans that generates error-related potential/negativity (ErrP or ERN) (Scheffers et al., 1996) .When", "a human recognizes an error made by an agent, the elicited ErrP can be captured through EEG to inform agent about the sub-optimality of the taken action in the particular state.", "As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm.", "We show that a full access approach to obtain feedback on every state-action pair while RL agent is learning, can significantly speedup the training convergence of RL agent.", "We contend that while obtaining such implicit human feedback through EEG is less burdensome, it is still a time-intensive task for the subject and the experimenter alike.", "This, combined with the noisy EEG signals and stochasticity in inferring error-potentials, raises significant challenges in terms of the practicality of the solution.", "In this context, we first argue that the definition of ErrPs is generalizable across different environments.", "We show that ErrPs of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the ErrP.", "This is 
notably different from previous approaches (Chavarriaga & Millán, 2010; Salazar-Gomez et al., 2017) , where the labeled ErrPs are obtained in the same environment (where the RL task is performed).", "For any new and unseen environment, it does not require the human to go through the training phase again, and assumes no prior knowledge about the optimal state-action pairs of the environment.", "We present two different frameworks to combine recent advances in DRL into the implicit human feedback mechanism (via ErrP) in a practical, sample-efficient manner.", "This reduces the cost of human supervision sufficiently allowing the DRL systems to train.", "Relying on Active Learning (AL) methods, our first framework allows humans to provide implicit feedback in the loop, while an RL agent is being trained.", "An uncertainty based acquisition function is modeled to select the samples state-action pairs for querying the implicit human feedback.", "However, as a human is always required to be in the loop, our second framework allows humans to provide their feedback implicitly before the agent starts training.", "Based on the human feedback obtained during pre-training, a quality (Q) function is learned over these imperfect demonstrations to provide the supplementary reward to the RL agent.", "We present results from real ErrP experiments to evaluate the acceleration in learning, and sample efficiency, in both frameworks.", "In summary, the novel contributions of our work are,", "1. We demonstrate the generalizability of error-potentials over various Atari-like environments (discrete grid-based navigation games, studied in this work), enabling the estimation of implicit human feedback in new and unseen environments.", "2. We propose two different frameworks to combine recent advances in DRL into ErrP based feedback system in a practical, sample-efficient manner.", "The first framework allows humans to provide implicit feedback while training in the loop.", "Taking advantage of recent approaches in learning from imperfect demonstrations, in the second framework, the implicit human feedback is obtained prior to the training of the RL agent.", "3. We scale the implicit human feedback (via ErrP) based RL to reasonably complex environments and demonstrate the significance of our approach through synthetic and real user experiments.", "Daniel et al. (2015) ; El Asri et al. (2016); Wang et al. 
(2016) studied RL from human rankings or ratings, however rely on explicit human feedback, and assume that the feedback is noiseless.", "Demonstrations have been commonly used to improve the efficiency of RL (Kim et al., 2013; Chemali & Lazaric, 2015; Piot et al., 2014) , and a common paradigm is to initialize RL algorithms with good policy or Q function (Nair et al., 2018; Hester et al., 2018; Gao et al., 2018) .", "In this work, we use rely on implicit feedback from non-expert humans (via ErrPs) which is inherently noisy.", "(Chavarriaga & Millán, 2010; Iturrate et al., 2010; Salazar-Gomez et al., 2017) demonstrate the benefit of ErrPs in a very simple setting (i.e., very small state-space), and use ErrP-based feedback as the only reward.", "Moreover, in all of these works, the ErrP decoder is trained on a similar game (or robotic task), essentially using the knowledge that is supposed to be unknown in the RL task.", "In our work, we use labeled ErrPs examples of very simple and known environments to train the ErrP decoder, and combine with the recent advances in DRL in a sample-efficient manner for reasonably complex environments.", "Consider a Markov Decision Process (MDP) problem M , as a tuple < X , A, P, P 0 , R, γ >, with state-space X , action-space A, transition kernel P , initial state distribution P 0 , accompanied with reward function R, and discounting factor 0 ≤ γ ≤ 1. Here the random variable Z(s, a) denotes the accumulated discounted future rewards starting from state s and action a.", "We first demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm.", "Then we argue that the definition of ErrPs is generalizable across different environment.", "In the ideal approach, we validate the augmentation effect of ErrP labels on RL algorithms by the full access method.", "Then, in the practical approach, we propose two augmentation frameworks for RL agent, applicable to different situations.", "The first is to integrate human into the training loop of RL agent based on active learning, while the second is to learn a reward function from imperfect demonstrations labeled by ErrP.", "The demonstration of the generalizability of error-potentials is limited across the environments presented in the paper.", "We have considered discrete grid-based reasonably complex navigation games.", "The validation of the generalization to a variety of Atari and Robotic environments is the subject of the future work.", "We also plan to test our framework of integrating implicit human feedback (via ErrPs) over robotic environments, and text the generalization capability of error-potentials between virtual and physical worlds.", "As future work, we plan to investigate as to how machines can be assisted in RL by using intrinsic EEG-based cooperations among humans and machines.", "are bandpass filtered in [0.5, 40] Hz.", "Epochs of 800ms were extracted relative to pre-stimulus 200ms baseline, and were subjected to spatial filtering.", "In spatial filtering, prototype responses of each class, i.e., \"correct\" and \"erroneous\", are computed by averaging all training trials in the corresponding classes(\"xDAWN Spatial Filter\" (Rivet et al., 2009; Barachant & Congedo, 2014; ).", "\"xDAWN filtering\" projects the EEG signals from sensor space (i.e., electrode space) to the source space (i.e., a low-dimensional 
space constituted by the actual neuronal ensembles in brain firing coherently).", "The covariance matrix of each epoch is computed, and concatenated with the prototype responses of the class.", "Further, dimensionality reduction is achieved by selecting relevant channels through backward elimination .", "The filtered signals are projected to the tangent space for feature extraction.", "The obtained feature vector is first normalized (using L1 norm) and fed to a regularized regression model.", "A threshold value is selected for the final decision by maximizing accuracy offline on the training set.", "We present the algorithm to decode the ErrP signals in Algorithm 2.", "Algorithm 2: Riemannian Geometry based ErrP classification algorithm Input : raw EEG signals EEG 1 Pre-process raw EEG signals ; 2 Spatial Filtering: xDAWN Spatial Filter (nf ilter) ; 3 Electrode Selection: ElectrodeSelect (nelec, metric='riemann') ; 4 Tangent Space Projection : TangentSpace(metric = \"logeuclid\") Normalize using L1 norm ; 5 Regression: ElasticNet ; 6 Select decision threshold by maximizing accuracy" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23529411852359772, 0.6842105388641357, 0.21739129722118378, 0.0476190410554409, 0.25, 0.2857142686843872, 0.4000000059604645, 0.21875, 0.1818181723356247, 0.3636363446712494, 0.3199999928474426, 0.15686273574829102, 0.10256409645080566, 0.2083333283662796, 0.19512194395065308, 0.1818181723356247, 0.08888888359069824, 0.0952380895614624, 0.2790697515010834, 0.16326530277729034, 0.08888888359069824, 0.25, 0.25, 0.2978723347187042, 0.30434781312942505, 0.25, 0.10810810327529907, 0.21739129722118378, 0.07999999821186066, 0.2448979616165161, 0.5, 0.29411762952804565, 0.21739129722118378, 0.25641024112701416, 0.30434781312942505, 0.2222222238779068, 0.25641024112701416, 0.13333332538604736, 0.3333333432674408, 0.3333333134651184, 0.34285715222358704, 0.3636363446712494, 0.3829787075519562, 0.1599999964237213, 0.16393442451953888, 0.20512819290161133, 0.26923075318336487, 0.20408162474632263, 0.3461538553237915, 0.0833333283662796, 0.29629629850387573, 0.11764705181121826, 0.10256409645080566, 0.21052631735801697, 0.23999999463558197, 0.1764705777168274, 0.06666666269302368, 0.2702702581882477, 0.375, 0.13636362552642822, 0.06896550953388214, 0.17142856121063232, 0.17543859779834747, 0.1666666567325592, 0.1666666567325592, 0, 0.12121211737394333, 0.15789473056793213, 0.10810810327529907, 0.25, 0 ]
rJgDT04twH
true
[ "We use implicit human feedback (via error-potentials, EEG) to accelerate and optimize the training of a DRL algorithm, in a practical manner." ]
[ "Deep learning has demonstrated abilities to learn complex structures, but they can be restricted by available data.", "Recently, Consensus Networks (CNs) were proposed to alleviate data sparsity by utilizing features from multiple modalities, but they too have been limited by the size of labeled data.", "In this paper, we extend CN to Transductive Consensus Networks (TCNs), suitable for semi-supervised learning.", "In TCNs, different modalities of input are compressed into latent representations, which we encourage to become indistinguishable during iterative adversarial training.", "To understand TCNs two mechanisms, consensus and classification, we put forward its three variants in ablation studies on these mechanisms.", "To further investigate TCN models, we treat the latent representations as probability distributions and measure their similarities as the negative relative Jensen-Shannon divergences.", "We show that a consensus state beneficial for classification desires a stable but imperfect similarity between the representations.", "Overall, TCNs outperform or align with the best benchmark algorithms given 20 to 200 labeled samples on the Bank Marketing and the DementiaBank datasets.", "Deep learning has demonstrated impressive capacities to learn complicated structures from massive data sets.", "However, acquiring sufficient labeled data can be expensive or difficult (e.g., for specific pathological populations BID10 ).", "Transductive learning (a set of semi-supervised algorithms) uses intrinsic structures among unlabeled data to boost classifier performance.", "In the real world, data can spread across multiple modalities (e.g., visual, acoustic, and text) in typical tasks, although many existing transductive algorithms do not exploit the structure across these modalities.", "Co-training [3] and tri-training BID23 use one classifier per modality to supervise each other, but they can only apply to two and three modalities respectively.Recently, Consensus Networks (CNs) BID24 incorporated the idea of co-training.", "Not limited by the number of modalities, CNs showed promising results on detecting cognitive impairments from multi-modal datasets of speech.", "A consensus network contains several interpreters (one per modality), a discriminator, and a classifier.", "The interpreters try to produce low-dimensional representations of input data that are indistinguishable by the discriminator.", "The classifier makes predictions based on these representation vectors.Despite promising results, CN is limited by the amount of available training data.", "This motivates our extension into semi-supervised learning with our Transductive Consensus Network (TCN).TCNs", "operate in two mechanisms: as consensus or classifier. The", "consensus mechanism urges the modality representations to resemble each other (trained on the whole dataset without using labels), and the classifier mechanism optimizes the networks to retain information useful for classification (trained on the labeled dataset). To", "illustrate the importance of these two mechanisms in an ablation study, we also put forward its three variants: TCN-embed, TCN-svm, and TCN-AE in §3. By", "this ablation study, we show that both mechanisms should function together via iterative training.To further reveal the mechanisms of TCN, we formulate in §3.5 the similarity between latent representations using negative Jensen-Shannon divergences. 
By", "monitoring their similarities, we show that a meaningful consensus state prefers representations to have suboptimal similarities.In experiments ( §4), we compare TCN to its three variants, TCN's multimodal supervised learning counterpart (CN), and several other semi-supervised learning benchmark algorithms on two datasets: Bank Marketing (from the UCI repository) and DementiaBank (a dataset of pathological speech in multiple modalities). On", "both datasets, the F-scores of TCN align with the best benchmark models when there are more labeled data available, and outperform benchmarks (including tri-training) given as few as 20 labeled points.", "In this paper, we present Transductive Consensus Networks (TCNs) that extend consensus networks with semi-supervised learning.", "We identify two mechanisms in which TCNs function, i.e., the consensus and classifier mechanisms.", "With three TCN variants in an ablation study, we show the importance of both mechanisms.", "Moreover, by treating the representations as probability distributions and defining their similarity as negative relative JS divergences, we show that although the consensus mechanism urges high similarities, a good consensus state might not need perfect similarities between modality representations.In the future, several avenues may be considered.", "To start with, building consensus networks using other types of neural networks may be considered.", "In addition, more exploration could be done to find a more explainable metric to describe the extent of agreement.", "Currently, we use −" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.06451612710952759, 0.04999999701976776, 0.20689654350280762, 0.11428570747375488, 0.1764705777168274, 0.17142856121063232, 0.12903225421905518, 0, 0.0714285671710968, 0.060606054961681366, 0.19354838132858276, 0, 0.04255318641662598, 0.060606054961681366, 0, 0.13333332538604736, 0.0555555522441864, 0.14814814925193787, 0, 0.09302324801683426, 0.21052631735801697, 0.21276594698429108, 0.20000000298023224, 0.0952380895614624, 0.13333332538604736, 0.06896550953388214, 0.27586206793785095, 0.0357142835855484, 0.0714285671710968, 0.06451612710952759, 0 ]
H1e_Qy_toX
true
[ "TCN for multimodal semi-supervised learning + ablation study of its mechanisms + interpretations of latent representations" ]
[ "Several first order stochastic optimization methods commonly used in the Euclidean domain such as stochastic gradient descent (SGD), accelerated gradient descent or variance reduced methods have already been adapted to certain Riemannian settings.", "However, some of the most popular of these optimization tools - namely Adam, Adagrad and the more recent Amsgrad - remain to be generalized to Riemannian manifolds.", "We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the cartesian product.", "Our generalization is tight in the sense that choosing the Euclidean space as Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms.", "Experimentally, we show faster convergence and to a lower train loss value for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincare ball.", "Developing powerful stochastic gradient-based optimization algorithms is of major importance for a variety of application domains.", "In particular, for computational efficiency, it is common to opt for a first order method, when the number of parameters to be optimized is great enough.", "Such cases have recently become ubiquitous in engineering and computational sciences, from the optimization of deep neural networks to learning embeddings over large vocabularies.This new need resulted in the development of empirically very successful first order methods such as ADAGRAD BID5 , ADADELTA BID29 , ADAM BID9 or its recent update AMSGRAD BID18 .Note", "that these algorithms are designed to optimize parameters living in a Euclidean space R n , which has often been considered as the default geometry to be used for continuous variables. However", ", a recent line of work has been concerned with the optimization of parameters lying on a Riemannian manifold, a more general setting allowing non-Euclidean geometries. This family", "of algorithms has already found numerous applications, including for instance solving Lyapunov equations BID27 , matrix factorization BID23 , geometric programming BID22 , dictionary learning BID2 or hyperbolic taxonomy embedding BID15 BID6 BID4 BID14 .A few first", "order stochastic methods have already been generalized to this setting (see section 6), the seminal one being Riemannian stochastic gradient descent (RSGD) BID1 , along with new methods for their convergence analysis in the geodesically convex case . However, the", "above mentioned empirically successful adaptive methods, together with their convergence analysis, remain to find their respective Riemannian counterparts.Indeed, the adaptivity of these algorithms can be thought of as assigning one learning rate per coordinate of the parameter vector. However, on", "a Riemannian manifold, one is generally not given an intrinsic coordinate system, rendering meaningless the notions sparsity or coordinate-wise update.Our contributions. 
In this work", "we (i) explain", "why generalizing these adaptive schemes to the most agnostic Riemannian setting in an intrinsic manner is compromised, and (ii) propose", "generalizations of the algorithms together with their convergence analysis in the particular case of a product of manifolds where each manifold represents one \"coordinate\" of the adaptive scheme. Finally, we", "(iii) empirically", "support our claims on the realistic task of hyperbolic taxonomy embedding.Our initial motivation. The particular application", "that motivated us in developing Riemannian versions of ADAGRAD and ADAM was the learning of symbolic embeddings in non-Euclidean spaces. As an example, the GloVe algorithm", "BID17 ) − an unsupervised method for learning Euclidean word embeddings capturing semantic/syntactic relationships − benefits significantly from optimizing with ADAGRAD compared to using SGD, presumably because different words are sampled at different frequencies. Hence the absence of Riemannian adaptive", "algorithms could constitute a significant obstacle to the development of competitive optimization-based Riemannian embedding methods. In particular, we believe that the recent", "rise of embedding methods in hyperbolic spaces could benefit from such developments BID15 BID6 b; BID4 BID28 .", "Driven by recent work in learning non-Euclidean embeddings for symbolic data, we propose to generalize popular adaptive optimization tools (e.g. ADAM, AMSGRAD, ADAGRAD) to Cartesian products of Riemannian manifolds in a principled and intrinsic manner.", "We derive convergence rates that are similar to the Euclidean corresponding models.", "Experimentally we show that our methods outperform popular non-adaptive methods such as RSGD on the realistic task of hyperbolic word taxonomy embedding.", "DISPLAYFORM0 i * .", "Combining the following formula 8 : DISPLAYFORM1 with the following inequality (given by lemma 6): DISPLAYFORM2 yields DISPLAYFORM3 where the use the notation ·, · x i for ρ DISPLAYFORM4 Now applying Cauchy-Schwarz' and Young's inequalities to the last term yields DISPLAYFORM5 From the geodesic convexity of f t for 1 ≤ t ≤ T , we have DISPLAYFORM6 Let's look at the first term.", "Using β 1t ≤ β 1 and with a change of indices, we have DISPLAYFORM7 where the last equality comes from a standard telescopic summation.", "We now need the following lemma.Lemma 3.", "DISPLAYFORM8 Proof.", "Let's start by separating the last term, and removing the hat on v. Using that β 1k ≤ β 1 for all k ∈ [T ], (1 − β 1j )β DISPLAYFORM9 Finally, (1 − β 1j ) ≤ 1 and" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.32258063554763794, 0.13333332538604736, 0.05882352590560913, 0.10256409645080566, 0, 0.06451612710952759, 0.03389830142259598, 0.05128204822540283, 0.060606058686971664, 0, 0.09302325546741486, 0.08888888359069824, 0.060606058686971664, 0, 0.1428571343421936, 0.060606058686971664, 0, 0.06451612710952759, 0.08695651590824127, 0.13793103396892548, 0, 0.1428571343421936, 0.09999999403953552, 0, 0, 0.032786883413791656, 0, 0, 0 ]
r1eiqi09K7
true
[ "Adapting Adam, Amsgrad, Adagrad to Riemannian manifolds. " ]
[ "We study the problem of defending deep neural network approaches for image classification from physically realizable attacks.", "First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks.", "Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples.", "Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.", "State-of-the-art effectiveness of deep neural networks has made it the technique of choice in a variety of fields, including computer vision (He et al., 2016) , natural language processing (Sutskever et al., 2014) , and speech recognition (Hinton et al., 2012) .", "However, there have been a myriad of demonstrations showing that deep neural networks can be easily fooled by carefully perturbing pixels in an image through what have become known as adversarial example attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini & Wagner, 2017b; Vorobeychik & Kantarcioglu, 2018) .", "In response, a large literature has emerged on defending deep neural networks against adversarial examples, typically either proposing techniques for learning more robust neural network models (Wong & Kolter, 2018; Wong et al., 2018; Raghunathan et al., 2018b; Cohen et al., 2019; Madry et al., 2018) , or by detecting adversarial inputs (Metzen et al., 2017; Xu et al., 2018) .", "Particularly concerning, however, have been a number of demonstrations that implement adversarial perturbations directly in physical objects that are subsequently captured by a camera, and then fed through the deep neural network classifier (Boloor et al., 2019; Eykholt et al., 2018; Athalye et al., 2018b; Brown et al., 2018) .", "Among the most significant of such physical attacks on deep neural networks are three that we specifically consider here: 1) the attack which fools face recognition by using adversarially designed eyeglass frames (Sharif et al., 2016) , 2) the attack which fools stop sign classification by adding adversarially crafted stickers (Eykholt et al., 2018) , and 3) the universal adversarial patch attack, which causes targeted misclassification of any object with the adversarially designed sticker (patch) (Brown et al., 2018) .", "Oddly, while considerable attention has been devoted to defending against adversarial perturbation attacks in the digital space, there are no effective methods specifically to defend against such physical attacks.", "Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks: the eyeglass frame attack on face recognition (Sharif et al., 2016) and the sticker attack on stop signs (Eykholt et al., 2018) .", "Specifically, we study the performance on adversarial training and randomized smoothing against these attacks, and show that both have limited effectiveness in this context (quite ineffective in some settings, and somewhat more effective, but still not highly robust, in others), despite showing moderate 
effectiveness against l ∞ and l 2 attacks, respectively.", "Our second contribution is a novel abstract attack model which more directly captures the nature of common physically realizable attacks than the conventional l p -based models.", "Specifically, we consider a simple class of rectangular occlusion attacks in which the attacker places a rectangular sticker onto an image, with both the location and the content of the sticker adversarially chosen.", "We develop several algorithms for computing such adversarial occlusions, and use adversarial training to obtain neural network models that are robust to these.", "We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage l p -based attack models.", "Related Work While many approaches for defending deep learning in vision applications have been proposed, robust learning methods have been particularly promising, since alternatives are often defeated soon after being proposed (Madry et al., 2018; Raghunathan et al., 2018a; Wong & Kolter, 2018; Vorobeychik & Kantarcioglu, 2018) .", "The standard solution approach for this problem is an adaptation of Stochastic Gradient Descent (SGD) where gradients are either with respect to the loss at the optimal adversarial perturbation for each i (or approximation thereof, such as using heuristic local search (Goodfellow et al., 2015; Madry et al., 2018) or a convex over-approximation (Raghunathan et al., 2018b; Wang et al., 2018) ), or with respect to the dual of the convex relaxation of the attacker maximization problem (Raghunathan et al., 2018a; Wong & Kolter, 2018; Wong et al., 2018) .", "Despite these advances, adversarial training a la Madry et al. 
(2018) remains the most practically effective method for hardening neural networks against adversarial examples with l ∞ -norm perturbation constraints.", "Recently, randomized smoothing emerged as another class of techniques for obtaining robustness (Lecuyer et al., 2019; Cohen et al., 2019) , with the strongest results in the context of l 2 -norm attacks.", "In addition to training neural networks that are robust by construction, a number of methods study the problem of detecting adversarial examples (Metzen et al., 2017; Xu et al., 2018) , with mixed results (Carlini & Wagner, 2017a) .", "Of particular interest is recent work on detecting physical adversarial examples (Chou et al., 2018) .", "However, detection is inherently weaker than robustness, which is our goal, as even perfect detection does not resolve the question of how to make decisions on adversarial examples.", "Finally, our work is in the spirit of other recent efforts that characterize robustness of neural networks to physically realistic perturbations, such as translations, rotations, blurring, and contrast (Engstrom et al., 2019; Hendrycks & Dietterich, 2019) .", "There are two possible reasons why conventional robust ML perform poorly against physical attacks:", "1) adversarial models involving l p -bounded perturbations are too hard to enable effective robust learning, and", "2) the conventional attack model is too much of a mismatch for realistic physical attacks.", "In Appendix B, we present evidence supporting the latter.", "Specifically, we find that conventional robust ML models exhibit much higher robustness when faced with the l p -bounded attacks they are trained to be robust to.", "As we have shown, conventional methods for making deep learning approaches for image classification robust to physically realizable attacks tend to be relatively ineffective.", "In contrast, a new threat model we proposed, rectangular occlusion attacks (ROA), coupled with adversarial training, achieves high robustness against several prominent examples of physical attacks.", "While we explored a number of variations of ROA attacks as a means to achieve robustness against physical attacks, numerous questions remain.", "For example, can we develop effective methods to certify robustness against ROA, and are the resulting approaches as effective in practice as our method based on a combination of heuristically computed attacks and adversarial training?", "Are there other types of occlusions that are more effective?", "Answers to these and related questions may prove a promising path towards practical robustness of deep learning when deployed for downstream applications of computer vision such as autonomous driving and face recognition.", "(Parkhi et al., 2015 ) is a benchmark for face recognition, containing 2622 subjusts with 2.6 million images in total.", "We chose ten subjects: A. J. Buckley, A. R. 
Rahman, Aamir Khan, Aaron Staton, Aaron Tveit, Aaron Yoo, Abbie Cornish, Abel Ferrara, Abigail Breslin, and Abigail Spencer, and subselected face images pertaining only to these individuals.", "Since approximately half of the images cannot be downloaded, our final dataset contains 300-500 images for each subject.", "We used the standard corp-and-resize method to process the data to be 224 × 224 pixels, and split the dataset into training, validation, and test according to a 7:2:1 ratio for each subject.", "In total, the data set has 3178 images in the training set, 922 images in the validation set, and 470 images in the test set.", "We use the VGGFace convolutional neural network (Parkhi et al., 2015) model, a variant of the VGG16 model containing 5 convolutional layer blocks and 3 fully connected layers.", "We make use of standard transfer learning as we only classify 10 subjects, keeping the convolutional layers as same as VGGFace structure, 3 but changing the fully connected layer to be 1024 → 1024 →10 instead of 4096 → 4096 →2622.", "Specifically, in our Pytorch implementation, we convert the images from RGB to BGR channel orders and subtract the mean value [129.1863, 104.7624, 93 .5940] in order to use the pretrained weights from VGG-Face on convolutional layers.", "We set the batch size to be 64 and use Pytorch built-in Adam Optimizer with an initial learning rate of 10 −4 and default parameters in Pytorch.", "4 We drop the learning rate by 0.1 every 10 epochs.", "Additionally, we used validation set accuracy to keep track of model performance and choose a model in case of overfitting.", "After 30 epochs of training, the model successfully obtains 98.94 % on test data." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0, 0, 0, 0, 0, 0, 0.03703703358769417, 0, 0.029411762952804565, 0, 0.04444444179534912, 0.039215683937072754, 0, 0, 0, 0.04999999701976776, 0, 0, 0, 0, 0, 0.0833333283662796, 0.05882352590560913, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04999999701976776, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04878048598766327, 0, 0, 0, 0.08695651590824127 ]
H1xscnEKDr
true
[ "Defending Against Physically Realizable Attacks on Image Classification" ]
[ "Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.", "However, catastrophic forgetting poses a grand challenge for neural networks performing such learning process.", "Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary (concept drift), imbalanced, or not always fully available, i.e., rare edge cases.", "We propose a Differentiable Hebbian Consolidation model which is composed of a Differentiable Hebbian Plasticity (DHP) Softmax layer that adds a rapid learning plastic component (compressed episodic memory) to the fixed (slow changing) parameters of the softmax output layer; enabling learned representations to be retained for a longer timescale.", "We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task.", "We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST --- a dataset that combines the challenges of class imbalance and concept drift.", "Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.", "A key aspect of human intelligence is the ability to continually adapt and learn in dynamic environments, a characteristic which is challenging to embed into artificial intelligence.", "Recent advances in machine learning (ML) have shown tremendous improvements in various problems, by learning to solve one complex task very well, through extensive training on large datasets with millions of training examples or more.", "However, most of the ML models that are used during deployment in the real-world are exposed to non-stationarity where the distributions of acquired data changes over time.", "Therefore, after learning is complete, and these models are further trained with new data, responding to distributional changes, performance degrades with respect to the original data.", "This phenomenon known as catastrophic forgetting or catastrophic interference (McCloskey & Cohen, 1989; French, 1999 ) presents a crucial problem for deep neural networks (DNNs) that are tasked with continual learning (Ring, 1994) , also called lifelong learning (Thrun & Mitchell, 1995; Thrun, 1998) .", "In continual learning, the goal is to adapt and learn consecutive tasks without forgetting how to perform well on previously learned tasks, enabling models that are scalable and efficient over long timescales.", "In most supervised learning methods, DNN architectures require independent and identically distributed (iid) samples from a stationary training distribution.", "However, for ML systems in realworld applications that require continual learning, the iid assumption is easily violated when: (1) There is concept drift in the training data distribution.", "(2) There are imbalanced class distributions and concept drift occuring simultaneously.", "(3) Data representing all scenarios in which the learner is expected to perform are not initially available.", "In such situations, learning systems face the \"stability-plasticity dilemma\" which is a well-known problem for artificial and biological neural networks (Carpenter & Grossberg, 1987; Abraham & Robins, 2005) .", "This presents a continual learning challenge for an ML system where the model needs to provide a 
balance between its plasticity (to integrate new knowledge) and stability (to preserve existing knowledge).", "In biological neural networks, synaptic plasticity has been argued to play an important role in learning and memory (Howland & Wang, 2008; Takeuchi et al., 2013; Bailey et al., 2015) and two major theories have been proposed to explain a human's ability to perform continual learning.", "The first theory is inspired by synaptic consolidation in the mammalian neocortex (Benna & Fusi, 2016) where a subset of synapses are rendered less plastic and therefore preserved for a longer timescale.", "The general idea for this approach is to consolidate and preserve synaptic parameters that are considered important for the previously learned tasks.", "This is normally achieved through task-specific updates of synaptic weights in a neural network.", "The second is the complementary learning system (CLS) theory (McClelland et al., 1995; Kumaran et al., 2016) , which suggests that humans extract highlevel structural information and store it in different brain areas while retaining episodic memories.", "Recent work on differentiable plasticity has shown that neural networks with \"fast weights\" that leverage Hebbian learning rules (Hebb, 1949) can be trained end-to-end through backpropagation and stochastic gradient descent (SGD) to optimize the standard \"slow weights\", as well as also the amount of plasticity in each synaptic connection (Miconi, 2016; Miconi et al., 2018) .", "These works use slow weights to refer to the weights normally used to train vanilla neural networks, which are updated slowly and are often associated with long-term memory.", "The fast weights represent the weights that are superimposed on the slow weights and change quickly from one time step to the next based on input representations.", "These fast weights behave as a form of short-term memory that enable \"reactivation\" of long-term memory traces in the slow weights.", "Miconi et al. 
(2018) showed that simple plastic networks with learned plasticity outperform networks with uniform plasticity on various problems.", "Moreover, there have been several approaches proposed recently for overcoming the catastrophic forgetting problem in fixed-capacity models by dynamically adjusting the plasticity of each synapse based on its importance for retaining past memories (Parisi et al., 2019) .", "Here, we extend the work on differentiable plasticity to the task-incremental continual learning setting (van de Ven & Tolias, 2019) , where tasks arrive in a batch-like fashion, and have clear boundaries.", "We develop a Differentiable Hebbian Consolidation 1 model that is capable of adapting quickly to changing environments as well as consolidating previous knowledge by selectively adjusting the plasticity of synapses.", "We modify the traditional softmax layer and propose to augment the slow weights in the final fully-connected (FC) layer (softmax output layer) with a set of plastic weights implemented using Differentiable Hebbian Plasticity (DHP).", "Furthermore, we demonstrate the flexibility of our model by combining it with recent task-specific synaptic consolidation based approaches to overcoming catastrophic forgetting such as elastic weight consolidation (Kirkpatrick et al., 2017; Schwarz et al., 2018) , synaptic intelligence (Zenke et al., 2017b) and memory aware synapses (Aljundi et al., 2018) .", "Our model unifies core concepts from Hebbian plasticity, synaptic consolidation and CLS theory to enable rapid adaptation to new unseen data, while consolidating synapses and leveraging compressed episodic memories in the softmax layer to remember previous knowledge and mitigate catastrophic forgetting.", "We test our proposed method on established benchmark problems including the Permuted MNIST (Goodfellow et al., 2013) , Split MNIST (Zenke et al., 2017b) and Vision Datasets Mixture (Ritter et al., 2018) benchmarks.", "We also introduce the Imbalanced Permuted MNIST problem and show that plastic networks with task-specific synaptic consolidation methods outperform networks with uniform plasticity." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.13333332538604736, 0.27272728085517883, 0.12903225421905518, 0.22535210847854614, 0.28070175647735596, 0.13333332538604736, 0.09090908616781235, 0.25925925374031067, 0.16129031777381897, 0.15094339847564697, 0.18518517911434174, 0.2535211145877838, 0.1666666567325592, 0.12244897335767746, 0.1090909019112587, 0.04878048226237297, 0.12765957415103912, 0.21052631735801697, 0.20689654350280762, 0.2857142686843872, 0.26229506731033325, 0.15686273574829102, 0.3181818127632141, 0.1538461446762085, 0.31707316637039185, 0.25925925374031067, 0.1538461446762085, 0.3333333432674408, 0.12765957415103912, 0.1515151411294937, 0.2295081913471222, 0.20689654350280762, 0.3333333134651184, 0.3380281627178192, 0.3283582031726837, 0.06896550953388214, 0.31372547149658203 ]
BJlA6eBtvH
true
[ "Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks and with the combination of task-specific synaptic consolidation can improve the ability to alleviate catastrophic forgetting in continual learning." ]
[ "In order to choose a neural network architecture that will be effective for a particular modeling problem, one must understand the limitations imposed by each of the potential options.", "These limitations are typically described in terms of information theoretic bounds, or by comparing the relative complexity needed to approximate example functions between different architectures.", "In this paper, we examine the topological constraints that the architecture of a neural network imposes on the level sets of all the functions that it is able to approximate.", "This approach is novel for both the nature of the limitations and the fact that they are independent of network depth for a broad family of activation functions.", "Neural networks have become the model of choice in a variety of machine learning applications, due to their flexibility and generality.", "However, selecting network architectures and other hyperparameters is typically a matter of trial and error.", "To make the choice of neural network architecture more straightforward, we need to understand the limits of each architecture, both in terms of what kinds of functions any given network architecture can approximate and how those limitations impact its ability to learn functions within those limits.A number of papers (3; 6; 11; 13) have shown that neural networks with a single hidden layer are a universal approximator, i.e. that they can approximate any continuous function on a compact domain to arbitrary accuracy if the hidden layer is allowed to have an arbitrarily high dimension.", "In practice, however, the neural networks that have proved most effective tend to have a large number of relatively low-dimensional hidden layers.", "This raises the question of whether neural networks with an arbitrary number of hidden layers of bounded dimension are also a universal approximator.In this paper we demonstrate a fairly general limitation on functions that can be approximated with the L ∞ norm on compact subsets of a Euclidean input space by layered, fully-connected feedforward neural networks of arbitrary depth and activation functions from a broad family including sigmoids and ReLus, but with layer widths bounded by the dimension of the input space.", "By a layered network, we mean that hidden nodes are grouped into successive layers and each node is only connected to nodes in the previous layer and the next layer.", "The constraints on the functions are defined in terms of topological properties of the level sets in the input space.This analysis is not meant to suggest that deep networks are worse than shallow networks, but rather to better understand how and why they will perform differently on different data sets.", "In fact, these limitations may be part of the reason deep nets have proven more effective on datasets whose structures are compatible with these limitations.By a level set, we mean the set of all points in the input space that the model maps to a given value in the output space.", "For classification models, a level set is just a decision boundary for a particular cutoff.", "For regression problems, level sets don't have a common interpretation.The main result of the paper, Theorem 1, states that the deep, skinny neural network architectures described above cannot approximate any function with a level set that is bounded in the input space.", "This can be rephrased as saying that for every function that can be approximated, every level set must be unbounded, extending off to infinity.While a number of recent 
papers have made impressive progress in understanding the limitations of different neural network architectures, this result is notable because it is independent of the number of layers in the network, and because the limitations are defined in terms of a very simple topological property.", "Topological tools have recently been employed to study the properties of data sets within the field known as Topological Data Analysis (9), but this paper exploits topological ideas to examine the topology of the models themselves.", "By demonstrating topological constraints on a widely used family of models, we suggest that there is further potential to apply topological ideas to understand the strengths and weaknesses of algorithms and methodologies across machine learning.After discussing the context and related work in Section 2, we introduce the basic definitions and notation in Section 3, then state the main Theorem and outline the proof in Section 4.", "The detailed proof is presented in Sections 5 and 6.", "We present experimental results that demonstrate the constraints in Section 7, then in Section 8 we present conclusions from this work.", "In this paper, we describe topological limitations on the types of functions that can be approximated by deep, skinny neural networks, independent of the number of hidden layers.", "We prove the result using standard set theoretic topology, then present examples that visually demonstrate the result." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.0952380895614624, 0.1428571343421936, 0.19999998807907104, 0.05405404791235924, 0.06451612710952759, 0.1573033630847931, 0.15789473056793213, 0.15789473056793213, 0.09302324801683426, 0.23333333432674408, 0.10169491171836853, 0, 0.1818181723356247, 0.11267605423927307, 0.04255318641662598, 0.029411761090159416, 0, 0.05714285373687744, 0.1428571343421936, 0.0624999962747097 ]
ryGgSsAcFQ
true
[ "This paper proves that skinny neural networks cannot approximate certain functions, no matter how deep they are." ]
[ "Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs.", "Inferring loop invariants is one of the main challenges behind automated verification of real-world programs which often contain many loops.", "In this paper, we present Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces.", "Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces.", "We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently.", "We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset.", "CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset.", "Moreover, CLN2INV takes only 1.1 second on average for each problem, which is 40 times faster than existing approaches.", "We further demonstrate that CLN2INV can even learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset.", "Program verification offers a principled approach for systematically eliminating different classes of bugs and proving the correctness of programs.", "However, as programs have become increasingly complex, real-world program verification often requires prohibitively expensive manual effort (Wilcox et al., 2015; Gu et al., 2016; Chajed et al., 2019) .", "Recent efforts have focused on automating the program verification process, but automated verification of general programs with unbounded loops remains an open problem (Nelson et al., 2017; .", "Verifying programs with loops requires determining loop invariants, which captures the effect of the loop on the program state irrespective of the actual number of loop iterations.", "Automatically inferring correct loop invariants is a challenging problem that is undecidable in general and difficult to solve in practice (Blass & Gurevich, 2001; Furia et al., 2014) .", "Existing approaches use stochastic search (Sharma & Aiken, 2016) , heurstics-based search (Galeotti et al., 2015) , PAC learning based on counter examples (Padhi & Millstein, 2017) , or reinforcement learning (Si et al., 2018) .", "However, these approaches often struggle to learn complex, real-world loop invariants.", "In this paper, we introduce a new approach to learning loop invariants by modeling the loop behavior from program execution traces using a new type of neural architecture.", "We note that inferring loop invariants can be posed as learning formulas in Satisfiability Modulo Theories (SMT) (Biere et al., 2009 ) over program variables collected from program execution traces (Nguyen et al., 2017) .", "In principle, Neural networks seem well suited to this task because they can act as universal function approximators and have been successfully applied in various domains that require modeling of arbitrary functions (Hornik et al., 1989; Goodfellow et al., 2016) .", "However, loop invariants must be represented as explicit SMT formulas to be usable for program verification.", "Unfortunately, existing methods for extracting logical rules from general neural architectures lack sufficient precision (Augasta & Kathirvalavakumar, 2012) , while inductive logic 
learning lacks sufficient expressiveness for use in verification (Evans & Grefenstette, 2018) .", "We address this issue by developing a novel neural architecture, Continuous Logic Network (CLN), which is able to efficiently learn explicit and precise representations of SMT formulas by using continuous truth values.", "Unlike existing neural architectures, CLNs can represent a learned SMT formula explicitly in its structure and thus allow us to precisely extract the exact formula from a trained model.", "In order to train CLNs, we introduce a new semantic mapping for SMT formulas to continuous truth values.", "Our semantic mapping builds on BL, or basic fuzzy logic (Hájek, 2013) , to support general SMT formulas in a continuous logic setting.", "We further prove that our semantic model is sound (i.e., truth assignments for the formulas are consistent with their discrete counterparts) and complete (i.e., all formulas can be represented) with regard to the discrete SMT formula space.", "These properties allow CLNs to represent any quantifier-free SMT formula operating on mixed integer-real arithmetic as an end-to-end differentiable series of operations.", "We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms state-of-the-art tools on the Code2Inv dataset by solving all 124 theoretically solvable problems in the dataset.", "This is 20 problems more than LoopInvGen, the winner of the SyGus 2018 competition loop invariant track (Padhi & Millstein, 2017) .", "Moreover, CLN2INV finds invariants for each program in 1.1 second on average, more than 40 times faster than LoopInvGen.", "We also demonstrate that CLN2INV is able to learn complex, real-world loop invariants with combinations of conjunctions and disjunctions of multivariable constraints.", "Our main contributions are:", "• We introduce a new semantic mapping for assigning continuous truth values to SMT formulas that is theoretically grounded and enables learning formulas through backpropagation.", "We further prove that our semantic model is sound and complete.", "• We develop a novel neural architecture, Continuous Logic Networks (CLNs), that to the best of our knowledge is the first to efficiently learn precise and explicit SMT formulas by construction.", "• We use CLNs to implement a new loop invariant inference system, CLN2INV, that is the first to solve all 124 theoretically solvable problems in the Code2Inv dataset, 20 more than the existing methods.", "CLN2INV is able to find invariants for each problem in 1.1 second on average, 40× faster than existing systems.", "• We further show CLN2INV is able to learn 12 more complex loop invariants than the ones present in the Code2Inv dataset with combinations of multivariable constraints.", "Related Work.", "Traditionally, loop invariant learning relies on stochastic or heuristics-guided search (Sharma & Aiken, 2016; Galeotti et al., 2015) .", "Other approaches like NumInv analyze traces and discover conjunctions of equalities by solving a system of linear equations (Sharma et al., 2013; Nguyen et al., 2017) .", "LoopInvGen uses PAC learning of CNF using counter-examples (Padhi et al., 2016; Padhi & Millstein, 2017) .", "By contrast, Code2Inv learns to guess loop invariants using reinforcement learning with recurrent and graph neural networks (Si et al., 2018) .", "However, these approaches struggle to learn complex invariants.", "Unlike these works, CLN2INV efficiently learns complex invariants directly from execution traces.", 
"There is a extensive work on PAC learning of boolean formulas, but learning precise formulas require a prohibitively large number of samples (Kearns et al., 1994) .", "Several recent works use differentiable logic to learn boolean logic formulas from noisy data (Kimmig et al., 2012; Evans & Grefenstette, 2018; Payani & Fekri, 2019) or improving adversarial robustness by applying logical rules to training (Fischer et al., 2019) .", "By contrast, our work learns precise SMT formulas directly by construction, allowing us to learn richer predicates with compact representation in a noiseless setting.", "A variety of numerical relaxations have been applied to SAT and SMT solving.", "Application-specific approximations using methods such as interval overapproximation and slack variables have been developed for different classes of SMT (Eggers et al., 2008; Nuzzo et al., 2010) .", "More recent work has applied recurrent and graph neural networks to Circuit SAT problems and unsat core detection (Amizadeh et al., 2019; Selsam et al., 2019; Selsam & Bjørner, 2019) .", "FastSMT uses embeddings from natural language processing like skip-gram and bag-of-words to represent formulas for search strategy optimization (Balunovic et al., 2018) .", "Unlike these approaches, we relax the SMT semantics directly to generate a differentiable representation of SMT.", "We develop a novel neural architecture that explicitly and precisely learns SMT formulas by construction.", "We achieve this by introducing a new sound and complete semantic mapping for SMT that enables learning formulas through backpropagation.", "We use CLNs to implement a loop invariant inference system, CLN2INV, that is the first to solve all theoretically solvable problems in the Code2Inv benchmark and takes only 1.1 second on average.", "We believe that the CLN architecture will also be beneficial for other domains that require learning SMT formulas.", "A CONTINUOUS PREDICATES Figure 5 shows examples of shifted sigmoids for S(>), S(≥), and S(=).", "Combing these results, we have", "For any t-norm, we have 0 ⊗ 1 = 0, 1 ⊗ 1 = 1, and 1 ⊗ 0 = 0.", "Put it altogether, we have", "(f (t, u; B, ) ⊗ g(t, u; B, )) = 1 t = u 0 t = u which concludes the proof." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.1538461446762085, 0.604651153087616, 0.2666666507720947, 0.2790697515010834, 0.23255813121795654, 0.05714285373687744, 0.051282044500112534, 0.24390242993831635, 0.21052631735801697, 0, 0.08510638028383255, 0.09999999403953552, 0.21276594698429108, 0.0416666604578495, 0.12903225421905518, 0.35555556416511536, 0.19230768084526062, 0.03389830142259598, 0.2857142686843872, 0.15686273574829102, 0.4313725531101227, 0.21276594698429108, 0.2702702581882477, 0.1904761791229248, 0.2222222238779068, 0.0476190410554409, 0.19999998807907104, 0.09999999403953552, 0.10526315122842789, 0.19512194395065308, 0, 0.3636363446712494, 0.12903225421905518, 0.40816324949264526, 0.15686273574829102, 0.10256409645080566, 0.17391303181648254, 0.10256409645080566, 0.09090908616781235, 0.05405404791235924, 0.2380952388048172, 0.0714285671710968, 0.0624999962747097, 0.13636362552642822, 0.0363636314868927, 0.13636362552642822, 0.12121211737394333, 0.1304347813129425, 0.08695651590824127, 0.1395348757505417, 0.17142856121063232, 0.4571428596973419, 0.3499999940395355, 0.19999998807907104, 0.37837836146354675, 0.11428570747375488, 0, 0.0624999962747097, 0, 0.05405404791235924 ]
HJlfuTEtvB
true
[ "We introduce the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants and general SMT formulas." ]
[ "Single cell RNA sequencing (scRNAseq) technology enables quantifying gene expression profiles by individual cells within cancer.", "Dimension reduction methods have been commonly used for cell clustering analysis and visualization of the data.", "Current dimension reduction methods tend overly eliminate the expression variations correspond to less dominating characteristics, such we fail to find the homogenious properties of cancer development.", "In this paper, we proposed a new and clustering analysis method for scRNAseq data, namely BBSC, via implementing a binarization of the gene expression profile into on/off frequency changes with a Boolean matrix factorization.", "The low rank representation of expression matrix recovered by BBSC increase the resolution in identifying distinct cell types or functions.", "Application of BBSC on two cancer scRNAseq data successfully discovered both homogeneous and heterogeneous cancer cell clusters.", "Further finding showed potential in preventing cancer progression.", "Cancer the biggest deadly threat to human has been a huge puzzle since its determination in 1775.", "From once considered as contagious to nowadays cancer immunotherapy, the modern medication continues to evolve in tackling this problem (Dougan et al., 2019) .", "And yet, not enough to make a huge difference, 1,762,450 people have been diagnosed with cancer and 606,880 has died in 2018 (Siegel et al., 2019) .", "The development of single cell RNA sequencing (scRNA-seq), which measures each single cell in cancer tissue with over 20,000 dimension of genes (features), picturized the hologram of cancer and its micro-environment with high resolution (Picelli et al., 2014; Puram et al., 2017; Tirosh et al., 2016) .", "As illustrated in Figure 1A , classic analysis pipeline takes a linear (PCA) or non-linear (t-SNE) dimension reduction of the high dimensional input data, by which loadings of the top bases are further used for cell clustering and visualization (Tirosh et al., 2016) .", "Figure 1: Classic analysis pipeline for scRNA-seq data and Melanoma example Cancer cell heterogeneity hampers theraputic development.", "We use the melanoma dataset as an example.", "Cells in a scRNA-seq data are always with multiple crossed conditions, such as types of cancer, origin of patients and different cell types.", "By analyzing melanoma scRNA-seq data with classic pipeline, we differentiated the cell type of each cell in its cancer microenvironment (CME) (figure 1B).", "All cell types other than cancer cell are constituted by multiple patients ( figure 1C ), validated the accuracy of classic pipeline in cell type identification.", "While on cancer cell, each patient forms a distinct cluster (highlighted in shadow), suggesting confounding patient-wise heterogeneity.", "Similar phenomenon also exists in breast cancer and head and neck cancer.", "On the other hand, being an investment-heavy industry like medical industry, the uniqueness of each cancer patient contradicts its general principle as to", "In addition, f follows a beta distribution accounting for the collective effect of the probability to shift the expression from off to on (k on ) and from on to off (k of f ).", "y denotes the true expression of gene i inside cell j and x is the observation of y with Gaussian error.", "Recent study revealed that, regulated by enhancers, burst frequency f is the major facilitator of cell type specific gene expression landscape (Larsson et al., 2019) .", "Though f and k size cannot be precisely fitted 
from our observed data, since y follows the Poisson distribution of the pure product of k size and f , we could still capture the most significant frequency changes across different cells.", "That is, we could infer whether f is above or equal to zero, corresponding to expression/no-expression of the gene, from our observed data.", "Counting this property, we thus propose the following approximate gene expression bi-state models.", "where F denotes a latent binary matrix of f , which is considered as a low rank representation of k different cell types, generated by the Boolean product of two binary matrix A and B plus a Boolean flipping error E. Y denotes the true quantitative expression level generated from F , and X is considered as a measure of Y with i.i.d. Gaussian error .", "Here our approach takes the approximating Y by Hadamard product between X and n×k ⊗B k×m , i.e.", "where n×k andB k×m are the estimation of A n×k and B k×m .", "Bi-state and Boolean matrix factorization for scRNA-seq data (BBSC).", "In sight of this, we developed a novel scRNA-seq pattern mining and analysis pipeline namely BBSC (Figure 2 ), by implementing a data binarization process for the inference of ON/OFF bi-state expression patterns.", "In addition, we proposed a fast binary matrix factorization (BMF) method, namely PFAST, adapting to the large scale of scRNA-seq data.", "BBSC can be easily implemented with classic dimension reduction based analysis procedure.", "Application of BBSC on scRNA-seq of the head and neck cancer and melanoma data successfully revealed the cancer homogeneity hence increased the sensitivity in identifying sub types of cells.", "In addition, cancer cell clusters expressing the epithelial mesenchymal transition (EMT) markers were specifically identified by BBSC in head and neck cancer study, which consist cancer cells from different patient samples, suggesting heterogeneous cancer cells may adopt a similar strategy in cancer metastasis process.", "We summarize our contributions as follows:", "• We constructed a scRNA-seq analysis pipeline, BBSC, for retrieving cancer homogeneity properties.", "BBSC is by far the first analysis pipeline accounting the fundamental interplay between cell type and gene expression in the analysis of scRNA-seq data.", "• As a major component in BBSC pipeline, we proposed a fast and efficient BMF algorithm, PFAST, in adapting to the large scale of scRNA-seq data.", "• In the analysis of head and neck cancer data, BBSC identified that cancer cell may adapt similar strategies in metastasis.", "This finding could be applied to prevent cancer progression.", "Enabled by the development of single cell technology, we now can observe the complicated biological process like cancer with unprecedented resolution.", "However, the classic analysis pipeline fails to deliver detailed information:", "1) it does not reveal common characteristic of cancer cell in different cancer patients.", "2) Even it separates functional cells; it fails to reveal intra-cluster heterogeneity.", "To solve above problems, we have developed BBSC analysis pipeline.", "Rooted from casting the frequency change in gene expression, we have applied BMF in the feature selection process, which avoids adding new expensive and potentially noisy information.", "We have applied tailored binarizing process for each dataset.", "Moreover, to deal with big scale tall matrix like scRNAseq data, we have developed a fast and efficient algorithm called PFAST.", "Letting alone its fast speed in handling 
large-scale data, it shows high accuracy compared with state-of-art BMF algorithms.", "We have applied BBSC on two high quality cancer studies, head and neck cancer and melanoma.", "In both datasets, BBSC shutters the big clusters into several sub clusters, and promotes a gateway to analysis intra-cluster heterogeneity.", "Moreover, BBSC manages to get common cancer sub cell clusters in both datasets, and decreases the patient-wise heterogeneity that hindered cancer therapeutic development.", "We next have justified the biological meanings of BBSC derived sub clusters by looking into the sub cancer clusters in head and neck cancer.", "By analyzing their detailed expression profile, We find out that the common clusters are in the EMT transition process indicating these cancer cells play an important part in cancer metastasis.", "While patient specific clusters are in the early EMT process indicating that these cells are still in the original cancer micro environment.", "These findings have first justified the biological importance of BBSC derived sub clusters.", "Secondly, it brings much insightful ideas in the clinical application.", "We now can hypothesize that when cancer cells seek metastasis, they will transform into similar states that are common across different patients.", "The characteristic of the common clusters may serve as target in preventing cancer metastasis.", "Furthermore, we validate that the heterogeneity of cancer comes from the original cancer tissue.", "Also BBSC shows promising results in deciphering this kind of heterogeneity.", "Especially in head and neck cancer study, BBSC distinctly divides cancer cells from the same patient into two sub clusters.", "Due to our limited expertise in cancer biology, we did not look closely in this property.", "However, we believe this would bring insightful ideas in the cause of cancer origin heterogeneity.", "Overall BBSC is an efficient and valuable analysis platform for scRNAseq or other single cell data.", "It is capable to bring insightful knowledge for our detailed understanding of complicated biological process." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0833333283662796, 0, 0.0624999962747097, 0, 0.0714285671710968, 0.0833333283662796, 0.625, 0.07999999821186066, 0.12903225421905518, 0.11428570747375488, 0.08888888359069824, 0.04081632196903229, 0, 0, 0.06896550953388214, 0.13333332538604736, 0.1249999925494194, 0.1599999964237213, 0.2222222238779068, 0.06666666269302368, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.12903225421905518, 0.08695651590824127, 0, 0.0952380895614624, 0.06896550953388214, 0.0624999962747097, 0.1428571343421936, 0.3529411852359772, 0.0714285671710968, 0, 0.1904761791229248, 0, 0, 0.060606058686971664, 0, 0, 0.07692307233810425, 0.09090908616781235, 0, 0.13333332538604736, 0.1428571343421936, 0.11428570747375488, 0.14814814925193787, 0, 0.1111111044883728, 0.06896550953388214, 0.27272728085517883, 0.09999999403953552, 0.10526315122842789, 0.14814814925193787, 0.17391303181648254, 0.17391303181648254, 0, 0 ]
rygGnertwH
true
[ "Our finding shed lights in preventing cancer progression" ]
[ "While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown.", "In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations.", "Discrete flows have numerous applications.", "We display proofs of concept under 2 flow architectures: discrete autoregressive flows enable bidirectionality, allowing for example tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows (i.e., with layer structure from RealNVP) enable parallel generation such as exact nonautoregressive text modeling.", "There have been many recent advances in normalizing flows, a technique for constructing high-dimensional continuous distributions from invertible transformations of simple distributions BID22 BID25 BID23 .", "Applications for high-dimensional continuous distributions are widespread: these include latent variable models with expressive posterior approximations BID22 BID20 BID12 , parallel image generation BID6 BID11 , parallel speech synthesis BID19 , and general-purpose density estimation BID18 .Normalizing", "flows are based on the change-of-variables formula, which derives a density given an invertible function applied to continuous events. There have", "not been analogous advances for discrete distributions, where flows are typically thought to not be applicable. Instead, most", "research for discrete data has focused on building either latent-variable models with approximate inference BID2 , or increasingly sophisticated autoregressive models that assume a fixed ordering of the data BID0 BID26 . In this paper", ", we present an alternative for flexible modeling of discrete sequences by extending continuous normalizing flows to the discrete setting. We demonstrate", "proofs of concept of discrete flows with two architectures:1. Discrete autoregressive", "flows enable multiple levels of autoregressivity. For example, one can design", "a bidirectional language model of text where each token depends on both left-to-right and right-to-left contexts while maintaining an exact likelihood and sampling.2. Discrete bipartite flows (i.e.", ", with flow structure similar to RealNVP BID6 ) enable flexible models with parallel generation. For example, one can design nonautoregressive", "text models which maintain an exact likelihood for training and evaluation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.2142857164144516, 0.1818181723356247, 0.1428571343421936, 0.2142857164144516, 0, 0.04651162400841713, 0.13333332538604736, 0.23076923191547394, 0.1463414579629898, 0.2666666507720947, 0.31578946113586426, 0.09999999403953552, 0.10810810327529907, 0.13793103396892548, 0.09999999403953552 ]
rJlo4UIt_E
true
[ "We extend autoregressive flows and RealNVP to discrete data." ]
[ "We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs.", "The proposed network augments a DNN with unsupervised learning of low-level features using spiking neuron network (SNN) with Spike-Time-Dependent-Plasticity (STDP).", "The complete network learns to ignore local perturbation while performing global feature detection and classification.", "The experimental results on CIFAR-10 and ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images.", "There is a growing interest in deploying DNNs in autonomous systems interacting with physical world such as autonomous vehicles and robotics.", "It is important that an autonomous systems make reliable classifications even with noisy data.", "However, in a deep convolutional neural networks (CNN) trained using stochastic gradient descent (SGD), pixel level perturbation can cause kernels to generate incorrect feature maps.", "Such errors can propagate through network and degrade the classification accuracy (Nazaré et al. (2017) ; Luo & Yang (2014) ).", "Approaches for improving robustness of a DNN to pixel perturbation can be broadly divided into two complementary categories.", "First, many research efforts have developed image de-noising (or filtering) networks that can pre-process an image before classification, but at the expense of additional latency in the processing pipeline (Ronneberger et al. (2015) ; Na et al. (2019) ; Xie et al. (2012) ; Zhussip & Chun (2018) ; Soltanayev & Chun (2018) ; Zhang et al. (2017) ).", "De-noising is an effective approach to improve accuracy under noise but can degrade accuracy for clean images (Na et al. (2019) ).", "Moreover, de-noising networks trained on a certain noise type do not perform well if the a different noise structure is experienced during inference (Zhussip & Chun (2018) ).", "Advanced de-noising networks are capable of generalizing to multiple levels of a type of noise and effective for different noise types (Zhussip & Chun (2018) ; Soltanayev & Chun (2018) ; Zhang et al. (2017) ).", "But high complexity of these network makes them less suitable for real-time applications and lightweight platforms with limited computational and memory resources.", "An orthogonal approach is to develop a classification network that is inherently robust to input perturbations.", "Example approaches include training with noisy data, introducing noise to network parameters during training, and using pixel level regularization (Milyaev & Laptev (2017) ; Nazaré et al. (2017) ; Luo & Yang (2014) ; Na et al. (2018) ; Long et al. 
(2019) ).", "These approaches do not change the processing pipeline or increase computational and memory demand during inference.", "However, training-based approaches to design robust DNNs also degrade classification accuracy for clean images, and more importantly, are effective only when noise structure (and magnitude) during training and inference closely match.", "Therefore, a new class of DNN architecture is necessary for autonomous system that is inherently resilient to input perturbations of different type and magnitude without requiring training on noisy data, as well as computationally efficient.", "Towards this end, this paper proposes a new class of DNN architecture that integrates features extracted via unsupervised neuro-inspired learning and supervised training.", "The neuro-inspired learning, in particular, spiking neural network (SNN) with spike-timing-dependent plasticity (STDP) is an alternative and unsupervised approach to learning features in input data (Hebb et al. (1950) ; (2019)).", "However, the classification accuracy of a STDP-learned SNN for complex datasets is much lower than a that of a DNN.", "The fundamental premise of this paper is that, augmenting the feature space of a supervised (trained) DNN with features extracted by an SNN via STDP-based learning increases robustness of the DNN to input perturbations.", "We argue that stochastic gradient descent (SGD) based back-propagation in a DNN enables global learning between low-level pixel-to-pixel interactions and high-level detection and classification.", "On the other hand, STDP performs unsupervised local learning and extracts low-level features under spatial correlation.", "By integrating features from global (supervised training) and local (STDP) learning, the hybrid network \"learns to ignore\" locally uncorrelated perturbations (noise) in pixels while extracting the correct feature representation from the overall image.", "Consequently, hybridization of SGD and STDP enables robust image classification under noisy input while preserving the accuracy of the baseline DNN for clean images.", "We present a hybrid network architecture, referred to as Spike Assisted Feature Extraction based Deep Neural Network (SAFE-DNN), to establish the preceding premise.", "We develop an integrated learning/training methodology to couple the features extracted via neuro-inspired learning and supervised training.", "In particular, this paper makes the following contributions:", "• We present a SAFE-DNN architecture ( Figure 1 ) that couples STDP-based robust learning of local features with SGD based supervised training.", "This is achieved by integrating a spiking convolutional module within a DNN pipeline.", "• We present a novel frequency-dependent stochastic STDP learning rule for the spiking convolutional demonstrating local competitive learning of low level features.", "The proposed learning method makes the feature extracted by the spiking convolutional module robust to local perturbations in the input image.", "• We develop a methodology to transform the STDP-based spiking convolution to an equivalent CNN.", "This is achieved by using a novel special neuron activation unit (SAU), a non-spiking activation function, that facilitates integration of the SNN extracted features within the DNN thereby creating a single fully-trainable deep network.", "The supervised (SGD-based) training is performed in that deep network after freezing the STDP-learnt weights in the spiking CNN module.", "We present implementations of SAFE-DNN 
based on different deep networks including MobileNet, ResNet and DenseNet (Sandler et al. (2018) , He et al. (2015) , Huang et al. (2016) ) to show the versatility of our network architecture.", "Experiment is conducted for CIFRA10 and ImageNet subset considering different types of noise, including Gaussian, Wald, Poisson, Salt&Paper, and adversarial noise demonstrating robust classification under input noise.", "Unlike training-based approaches, SAFE-DNN shows improved accuracy for a wide range of noise structure and magnitude without requiring any prior knowledge of the perturbation during training and inference and does not degrade the accuracy for clean images (even shows marginal improvement in many cases).", "SAFE-DNN complements, and can be integrated with, de-noising networks for input pre-processing.", "However, unlike de-noising networks, the SAFE-DNN has negligible computation and memory overhead, and does not introduce new stages in the processing pipeline.", "Hence, SAFE-DNN is an attractive architecture for resource-constrained autonomous platforms with real-time processing.", "We note that, SAFE-DNN differs from deep SNNs that convert a pre-trained DNN to SNN (Sengupta et al. (2019) , Hu et al. (2018) ).", "Such networks function as a spiking network during inference to reduce energy; however, the learning is still based on supervision and back-propagation.", "In contrast, SAFE-DNN hybridizes STDP and SGD during learning but creates a single hybrid network operating as a DNN during inference.", "In this paper we present SAFE-DNN as a deep learning architecture that integrates spiking convolutional network with STDP based learning into a conventional DNN for robust low level feature extraction.", "The experimental results show that SAFE-DNN improves robustness to different input perturbations without any prior knowledge of the noise during training/inference.", "SAFE-DNN is compatible with various DNN designs and incurs negligible computation/memory overhead.", "Hence, it is an attractive candidate for real-time autonomous systems operating in noisy environment." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0.0833333283662796, 0, 0.07407406717538834, 0, 0, 0.06451612710952759, 0, 0, 0, 0.07407406717538834, 0.0624999962747097, 0.05714285373687744, 0, 0.09999999403953552, 0.04878048598766327, 0, 0.1111111119389534, 0.052631575614213943, 0.1428571343421936, 0.0555555522441864, 0, 0.0555555522441864, 0.06896551698446274, 0.09090908616781235, 0, 0.0714285671710968, 0, 0.08695651590824127, 0, 0.20689654350280762, 0, 0.07407406717538834, 0.1599999964237213, 0, 0.0555555522441864, 0.0833333283662796, 0.10526315867900848, 0.12903225421905518, 0.04651162400841713, 0, 0, 0.10526315122842789, 0.06896551698446274, 0.0714285671710968, 0.07999999821186066, 0.23529411852359772, 0.07407406717538834, 0, 0 ]
BJg1fgBYwH
true
[ "A noise robust deep learning architecture." ]
[ "Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks.", "The success of neural embeddings has prompted significant amounts of research into applications in domains other than language.", "One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling.", "For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned.\n", "However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space.", "We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space.", "We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets.", "Embeddings are used to represent complex high-dimensional data in lower-dimensional continuous spaces BID28 BID3 .", "Embedded representations provide three principal benefits over sparse schemes: They encapsulate similarity, are compact, and perform better as inputs to machine learning models BID29 .", "These benefits are particularly important for graph-structured data where the native representation is the adjacency matrix, which is typically a sparse matrix of connection weights.Neural embedding models are a flavour of embedding where the embedded representation corresponds to a subset of the connection weights in a neural network (see FIG2 ), which are learned through backpropagation.", "Neural embedding models have been shown to improve performance on many tasks across multiple domains, including word analogies (Mikolov et al., 2013a; BID20 , machine translation BID31 ), document comparison (Kusner et al., 2015 , missing edge prediction BID12 , vertex attribution BID26 , product recommendations BID10 BID1 , customer value prediction BID14 BID6 and item categorisation BID2 .", "In all cases, the embeddings are learned without labels (unsupervised) from a sequence of tokens.", "Previous work on neural embedding models has either either explicitly or implicitly (by using the Euclidean dot product) assumed that the embedding space is Euclidean.", "However, recent work in the field of complex networks has found that many interesting networks, particularly those with a scale-free structure such as the Internet BID30 BID5 or academic citations BID8 BID7 can be well described with a geometry which is non-Euclidean, such as hyperbolic geometry.", "Even more recently the problem of mapping graphs and datasets to a low-dimensional hyperbolic space has been addressed in BID24 and BID4 .", "Here we use a neural embedding approach based on the Skipgram architecture to find hyperbolic embeddings.There are two reasons why embedding complex networks in hyperbolic geometry can be expected to perform better than Euclidean geometry.", "The first is that complex networks exhibit a hierarchical structure.", "Hyperbolic geometry provides a continuous analogue of tree-like graphs, and even infinite trees have nearly isometric embeddings in hyperbolic space BID11 .", "The second property is that complex networks have power-law degree distributions, 
resulting in high-degree hub vertices.", "All tiles are of constant area in hyperbolic space, but shrink to zero area at the boundary of the disk in Euclidean space.", "c Hub and spokes graph.", "It is impossible to embed this graph in two-dimensional Euclidean space and preserve the properties that (1) all spokes are the same distance from the hub, (2) all spokes are the same distance from each other, and (3) the distance between spokes along the circumference is more than twice the distance to the hub.", "In hyperbolic space such embeddings exist.", "FIG1 shows a simple hub-and-spoke graph where each spoke is a distance R from the hub and 2R from each other.", "For an embedding in two-dimensional Euclidean space it is impossible to reproduce this geometry for more than two spokes.", "However, in hyperbolic space, large numbers of spokes that satisfy these geometrical constraints can be embedded because the circumference of a circle expands exponentially rather than polynomially with the radius.The starting point for our model is the celebrated Skipgram architecture (Mikolov et al., 2013a; b) shown in FIG2 .", "Skipgram is a shallow neural network with three layers: (1) An input projection layer that maps from a one-hot-encoded token to a distributed representation, (2) a hidden layer, and (3) an output softmax layer.", "Skipgram is trained on a sequence of words that is decomposed into (input word, context word)-pairs.", "The model uses two separate vector representations, one for the input words and another for the context words, with the input representation comprising the learned embedding.", "The (input word, context word)-pairs are generated by running a fixed length sliding window over a word sequence.", "Words are initially randomly allocated to vectors within the two vector spaces.", "Then, for each training word pair, the vector representations of the observed input and context words are pushed towards each other and away from all other words (see FIG2 ).", "The model can be extended to network structured data using random walks to create sequences of vertices.", "Vertices are then treated exactly analogously to words in the NLP formulation.", "This was originally proposed as DeepWalk BID26 .", "Extensions varying the nature of the random walks have been explored in LINE BID32 and Node2vec BID12 .Contribution", "In this paper, we introduce the new concept of neural embeddings in hyperbolic space. We formulate", "backpropagation in hyperbolic space and show that using the natural geometry of complex networks improves performance in vertex classification tasks across multiple networks. At the same", "time, BID24 independently proposed a hyperbolic embedding algorithm that has similarities to ours. The key differences", "are that BID24 try to fit the hyperbolic distance between nodes using cartesian coordinates in the Poincaré disk, whereas we use a modified cosine distance in a spherical hyperbolic coordinate system. Our approach does not", "require a numerical constraint to prevent points from 'falling off' the edge of the disk and becoming infinitely distant from the others." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.2857142686843872, 0.10810810327529907, 0.23076923191547394, 0.17142856121063232, 0.5161290168762207, 0.2666666507720947, 0.07999999821186066, 0, 0.11764705181121826, 0, 0.1538461446762085, 0.1875, 0.11764705181121826, 0.3125, 0.23255813121795654, 0, 0.3125, 0.07407406717538834, 0.3333333432674408, 0, 0.1304347813129425, 0.3529411852359772, 0, 0.19999998807907104, 0.10526315122842789, 0.04878048226237297, 0.07692307233810425, 0, 0, 0, 0.0555555522441864, 0.07407406717538834, 0.08695651590824127, 0, 0.1428571343421936, 0.5185185074806213, 0.23529411852359772, 0.07407406717538834, 0.09756097197532654, 0.06666666269302368 ]
S1xDcSR6W
true
[ "We learn neural embeddings of graphs in hyperbolic instead of Euclidean space" ]
[ "The International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS) plays a pivotal role in fostering the development of new Knowledge Engineering (KE) tools, and in emphasising the importance of principled approaches for all the different KE aspects that are needed for the successful long-term use of planning in real-world applications. \n", "In this paper, as an exercise in synthesis and for the sake of stimulating thoughts and discussion, we review the format of previous ICKEPS, to suggest alternative formats for future competitions, ideally to motivate someone to step up and organise the next ones.", "The International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS) has been running since 2005 as an almost biennial event promoting the development and importance of the use of knowledge engineering (KE) methods and techniques within this area.", "The aim of the competition series is to foster developments in the knowledge-based and domain modelling aspects of Automated Planning, to accelerate knowledge engineering research, to encourage the creation and sharing of prototype tools and software platforms that promise more rapid, accessible, and effective ways to construct reliable and efficient Automated Planning systems.The latest competition took place in 2016 1 BID3 , which aimed at on-site domain modelling, and highlighted a number of major issues.", "Most teams did not use any of the existing KE tools, and thus relied only on their expertise.", "Second, existing tools do not effectively support cooperation, which is needed to cope with the growing complexity of planning applications.", "Finally, and more worryingly, the number of participants of ICKEPS is still not very large, especially when compared with the latest edition of the International Planning Competition: this suggests that the planning community underestimates the importance of knowledge engineering, despite of its enormous impact on applicability of domain-independent planning in real-world scenarios.", "Accidental complexity issues BID2 , for instance, can prevent the exploitation of automated planning approaches in complex scenarios, and even an unfortunate ordering of elements in the domain model can adversely affect the performance of planning engines BID7 .Given", "the pivotal role played by ICKEPS in promoting the importance of principled KE approaches and tools, we believe it is important to evolve and adapt its format in order to attract and engage a larger number of participants. In this", "paper, we review the format of past competitions, in order to highlight weaknesses and strengths both from organisers' and participants' perspective. Building", "on top of this analysis, we suggest some alternative formats that may help future ICKEPS organisers in performing their tasks.It should be noted, though, that the aim of this paper is twofold: to review formats and suggest improvements to ICKEPS, and -more importantly-to make a call for action for organising future competitions focused on KE aspects of planning and scheduling.", "Concluding this paper, we believe that there is a strong need to organise the ICKEPS competitions in order to increase awareness of KE techniques, tool and issues in the ICAPS and general AI communities.", "The success of future ICK-EPS competitions (e.g. 
considerable increase of the number of participants) can, in consequence, influence the domainindependent AI planning field by making it accessible for use (by planning non-experts) in various application domains.", "To give some motivation and inspiration for the future ICKEPS competitions, we, in this paper, provided a review of the format of the past ICKEPS competitions, and suggested two possibly new formats that, we believe, can at-tract more participants and possibly avoid an excessive burden of organisers.We believe that the paper initiates a fruitful discussion about the format of future ICKEPS competitions as well as motivate potential organisers to step up and organise the next competition(s)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.04444444179534912, 0.10256410390138626, 0.05128204822540283, 0, 0, 0, 0.043478257954120636, 0.0555555522441864, 0.052631575614213943, 0, 0.11320754885673523, 0.05882352590560913, 0.1111111119389534, 0.09836065769195557 ]
BkeXxZIcPE
true
[ "Ideas for future ICKEPS" ]
[ "We show that generating English Wikipedia articles can be approached as a multi-\n", "document summarization of source documents.", "We use extractive summarization\n", "to coarsely identify salient information and a neural abstractive model to generate\n", "the article.", "For the abstractive model, we introduce a decoder-only architecture\n", "that can scalably attend to very long sequences, much longer than typical encoder-\n", "decoder architectures used in sequence transduction.", "We show that this model can\n", "generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia\n", "articles.", "When given reference documents, we show it can extract relevant factual\n", "information as reflected in perplexity, ROUGE scores and human evaluations.", "The sequence-to-sequence framework has demonstrated success in natural-language sequence transduction tasks such as machine translation.", "More recently, neural techniques have been applied to do single-document, abstractive (paraphrasing) text summarization of news articles BID15 , BID9 ).", "In this prior work, the input to supervised models ranged from the first sentence to the entire text of an article, and they are trained end-to-end to predict reference summaries.", "Doing this end-to-end requires a significant number of parallel article-summary pairs since language understanding is a pre-requisite to generate fluent summaries.In contrast, we consider the task of multi-document summarization, where the input is a collection of related documents from which a summary is distilled.", "Prior work has focused on extractive summarization, which select sentences or phrases from the input to form the summaries, rather than generating new text.", "There has been limited application of abstractive neural methods and one possible reason is the paucity of large, labeled datasets.In this work, we consider English Wikipedia as a supervised machine learning task for multidocument summarization where the input is comprised of a Wikipedia topic (title of article) and a collection of non-Wikipedia reference documents, and the target is the Wikipedia article text.", "We describe the first attempt to abstractively generate the first section, or lead, of Wikipedia articles conditioned on reference text.", "In addition to running strong baseline models on the task, we modify the Transformer architecture BID18 to only consist of a decoder, which performs better in the case of longer input sequences compared to recurrent neural network (RNN) and Transformer encoder-decoder models.", "Finally we show our modeling improvements allow us to generate entire Wikipedia articles.", "In FIG3 , we show the predictions from three different models (using tf-idf extraction, and the combined corpus) along with the Wikipedia ground truth.", "As the perplexity decreases we see improvements in the model outputs, in terms of fluency, factual accuracy, and narrative complexity.", "In particular, the T-DMCA model offers a respectable alternative to the Wikipedia version and is more succinct, while mentioning key facts, such as where the law firm was located, when and how it was formed, and the rise and fall of the firm.In manual inspection of model outputs, we noticed an unexpected side-effect: models learn to translate names from English into multiple languages, e.g. 
Rohit Viswanath into Hindi (see FIG4 ).", "Although we did not do a systematic evaluation of the translations, we found they are often correct, and often they are not found in the Wikipedia article itself.", "We also verified that in general the translation is not merely copied from the source, such as example cases where the target language is the incorrect one (e.g. translation of an English name into Ukrainian).", "We have shown that generating Wikipedia can be approached as a multi-document summarization problem with a large, parallel dataset, and demonstrated a two-stage extractive-abstractive framework for carrying it out.", "The coarse extraction method used in the first stage appears to have a significant effect on final performance, suggesting further research on improving it would be fruitful.", "We introduce a new, decoder-only sequence transduction model for the abstractive stage, capable of handling very long input-output examples.", "This model significantly outperforms traditional encoderdecoder architectures on long sequences, allowing us to condition on many reference documents and to generate coherent and informative Wikipedia articles." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.260869562625885, 0.2666666507720947, 0.1428571343421936, 0.0952380895614624, 0, 0, 0, 0.1249999925494194, 0.21052631735801697, 0, 0, 0, 0.12903225421905518, 0.0555555522441864, 0.04255318641662598, 0.12121211737394333, 0.06896551698446274, 0.5714285373687744, 0.04444444179534912, 0.260869562625885, 0.0624999962747097, 0, 0.028985504060983658, 0.06451612710952759, 0.04878048226237297, 0.10810810327529907, 0.0555555522441864, 0.06896550953388214, 0.24242423474788666 ]
Hyg0vbWC-
true
[ "We generate Wikipedia articles abstractively conditioned on source document text." ]
[ "Abstract Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability.", "Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way.", "This makes it possible to control the way models are trained in much greater detail.", "We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks.", "One of the most common methods of training neural networks is stochastic gradient descent (SGD) (Bottou et al. (2016) ).", "SGD has strong theoretical guarantees, including convergence in locally non-convex optimization problems (Lee et al. (2016) ).", "It also shows improved generalization and stability when compared to other optimization algorithms (Smith & Le (2018) ).", "There have been various efforts in improving the speed and generalization of SGD.", "One popular modification is to use an adaptive gradient (Duchi et al. (2011) ), which scales the gradient step size to be larger in directions with consistently small gradients.", "Adam, an implementation that combines SGD with momentum and an adaptive step size inversely proportional to the RMS gradient, has been particularly successful at speeding up training and solving particular problems (Kingma & Ba (2014) ).", "However, at other problems it pays a penalty in worse generalization (Wilson et al. (2017) ; Keskar & Socher (2017) ), and it requires additional modifications to achieve a convergence guarantee (Reddi et al. (2018) ; Li & Orabona (2018) ).", "Here we develop an intuition for adaptive gradient methods that allows us to unify Adam with SGD in a natural way.", "The new optimizer, SoftAdam, descends in a direction that mixes the SGD with Adam update steps.", "As such, it should be able to achieve equal or better optimization results across a variety of problems.", "In this paper, we have motivated and demonstrated a new optimization algorithm that naturally unifies SGD and Adam.", "We have focused our empirical results on the default hyper-parameter setting, η = 1, and predetermined learning schedules.", "With these parameters, the algorithm was shown to produce optimization that is better than or equal to SGD and Adam on image classification tasks.", "It also performed significantly better than SGD on language modeling tasks.", "Together with finding the optimal values for η, we expect a better understanding of the learning schedule to bring light to the way in which the adaptive gradient methods improve convergence.", "SoftAdam now also makes it possible to create a learning schedule on η, which may be another fruitful avenue of research, expanding on the work of Ward et al. (2018) .", "Better understanding of how adaptive gradients improve the convergence of practical machine learning models during training will enable larger models to be trained to more accurately in less time.", "This paper provides a useful intuition for how that occurs and provides a new algorithm that can be used to improve performance across a diverse set of problems.", "# S t a t e i n i t i a l i z a t i o n i f l e n ( s t a t e ) == 0 : s t a t e [ \" s t e p \" ] = 0 # E x p o n e n t i a l moving a v e r a g e o f g r a d i e n t v a l u e s s t a t e [ \" e x p a v g \" ] = t o r c h .", "z e r o s l i k e ( p . 
d a t a ) # E x p o n e n t i a l moving a v e r a g e o f # s q u a r e d g r a d i e n t v a l u e s s t a t e [ \" e x p a v g s q \" ] = t o r c h .", "z e r o s l i k e ( p . d a t a ) e x p a v g , e x p a v g s q = ( s t a t e [ \" e x p a v g \" ] , s t a t e [ \" e x p a v g s q \" ] , ) b e t a 1 , b e t a 2 = g r o u p [ \" b e t a s \" ] s t a t e [ \" s t e p \" ] += 1 b e t a 2 h a t = min ( b e t a 2 , 1 .", "0 − 1 .", "0 / ( s t a t e [ \" s t e p \" ] ) ) r b e t a = ( 1 − b e t a 2 ) / ( 1 − b e t a 2 h a t ) e t a h a t 2 = ( g r o u p [ \" e t a \" ] * g r o u p [ \" e t a \" ] * r b e t a ) # Decay t h e f i r s t and s e c o n d moment w i t h t h e # r u n n i n g a v e r a g e c o e f f i c i e n t e x p a v g .", "mul ( b e t a 1 ) .", "a d d r e t u r n l o s s" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10526315122842789, 0.12903225421905518, 0, 0.3243243098258972, 0.06451612710952759, 0.06896550953388214, 0.06666666269302368, 0.23999999463558197, 0, 0.08695651590824127, 0.04444444179534912, 0.1818181723356247, 0.1428571343421936, 0.06666666269302368, 0.27586206793785095, 0.13333332538604736, 0.22857142984867096, 0.08695651590824127, 0.10256409645080566, 0.04999999701976776, 0.052631575614213943, 0.277777761220932, 0, 0, 0, 0, 0.04255318641662598, 0, 0 ]
Skgfr1rYDH
true
[ "An algorithm for unifying SGD and Adam and empirical study of its performance" ]
[ "The use of imitation learning to learn a single policy for a complex task that has multiple modes or hierarchical structure can be challenging.", "In fact, previous work has shown that when the modes are known, learning separate policies for each mode or sub-task can greatly improve the performance of imitation learning.", "In this work, we discover the interaction between sub-tasks from their resulting state-action trajectory sequences using a directed graphical model.", "We propose a new algorithm based on the generative adversarial imitation learning framework which automatically learns sub-task policies from unsegmented demonstrations.", "Our approach maximizes the directed information flow in the graphical model between sub-task latent variables and their generated trajectories.", "We also show how our approach connects with the existing Options framework, which is commonly used to learn hierarchical policies.", "Complex human activities can often be broken down into various simpler sub-activities or sub-tasks that can serve as the basic building blocks for completing a variety of complicated tasks.", "For instance, when driving a car, a driver may perform several simpler sub-tasks such as driving straight in a lane, changing lanes, executing a turn and braking, in different orders and for varying times depending on the source, destination, traffic conditions etc.", "Using imitation learning to learn a single monolithic policy to represent a structured activity can be challenging as it does not make explicit the sub-structure between the parts within the activity.", "In this work, we develop an imitation learning framework that can learn a policy for each of these sub-tasks given unsegmented activity demonstrations and also learn a macro-policy which dictates switching from one sub-task policy to another.", "Learning sub-task specific policies has the benefit of shared learning.", "Each such sub-task policy also needs to specialize over a restricted state space, thus making the learning problem easier.Previous works in imitation learning BID16 BID7 focus on learning each sub-task specific policy using segmented expert demonstrations by modeling the variability in each sub-task policy using a latent variable.", "This latent variable is inferred by enforcing high mutual information between the latent variable and expert demonstrations.", "This information theoretic perspective is equivalent to the graphical model shown in FIG0 (Left), where the node c represents the latent variable.", "However, since learning sub-task policies requires isolated demonstrations for each sub-task, this setup is difficult to scale to many real world scenarios where providing such segmented trajectories is cumbersome.", "Further, this setup does not learn a macro-policy to combine the learned sub-task policies in meaningful ways to achieve different tasks.In our work, we aim to learn each sub-task policy directly from unsegmented activity demonstrations.", "For example, given a task consisting of three sub-tasks -A, B and C, we wish to learn a policy to complete sub-task A, learn when to transition from A to B, finish sub-task B and so on.", "To achieve this we use a causal graphical model, which can be represented as a Dynamic Bayesian Network as GAIL Li et al. 
(2017) .", "Right: Causal model in this work.", "The latent code causes the policy to produce a trajectory.", "The current trajectory, and latent code produce the next latent code shown in FIG0 (Right).", "The nodes c t denote latent variables which indicate the currently active sub-task and the nodes τ t denote the state-action pair at time t.", "We consider as given, a set of expert demonstrations, each of which is represented by τ = {τ 1 , · · · , τ T } and has a corresponding sequence of latent factors c = {c 1 , · · · , c T −1 }.", "The sub-activity at time t dictates what state-action pair was generated at time t.", "The previous sub-task and the current state together cause the selection of the next sub-task.As we will discuss in Section 3, extending the use of mutual information to learn sub-task policies from unsegmented demonstrations is problematic, as it requires learning the macro-policy as a conditional probability distribution which depends on the unobserved future.", "This unobserved future is unknown during earlier points of interaction ( FIG0 ).", "To alleviate this, in our work we aim to force the policy to generate trajectories that maximize the directed information or causal information BID17 flow from trajectories to latent factors of variation within the trajectories instead of mutual information.", "Using directed information requires us to learn a causally conditioned probability distribution BID12 which depends only on the observed past while allowing the unobserved future to be sequentially revealed.", "Further, since there exists feedback in our causal graphical model i.e., information flows from the latent variables to trajectories and vice versa, directed information also provides a better upper bound on this information flow between the latent variables and expert trajectories than does the conventional mutual information BID17 BID12 .We", "also draw connections with existing work on learning sub-task policies using imitation learning with the options framework BID27 BID3 . We", "show that our work, while derived using the information theoretic perspective of maximizing directed information, bears a close resemblance to applying the options framework in a generative adversarial imitation setting. 
Thus", ", our approach combines the benefits of learning hierarchical policies using the options framework with the robustness of generative adversarial imitation learning, helping overcome problems such as compounding errors that plague behaviour cloning.In summary, the main contributions of our work include:• We extend existing generative adversarial imitation learning frameworks to allow for learning of sub-task specific policies by maximizing directed information in a causal graph of subactivity latent variables and observed trajectory variables.• We", "draw connections between previous works on imitation learning with sub-task policies using options and show that our proposed approach can also be seen as option learning in a generative adversarial setting.• We", "show through experiments on both discrete and continuous state-action spaces, the ability of our approach to segment expert demonstrations into meaningful sub-tasks and combine sub-task specific policies to perform the desired task.2 RELATED", "WORK", "Learning separate sub-task policies can help improve the performance of imitation learning when the demonstrated task is complex and has a hierarchical structure.", "In this work, we present an algorithm that infers these latent sub-task policies directly from given unstructured and unlabelled expert demonstrations.", "We model the problem of imitation learning as a directed graph with sub-task latent variables and observed trajectory variables.", "We use the notion of directed information in a generative adversarial imitation learning framework to learn sub-task and macro policies.", "We further show theoretical connections with the options literature as used in hierarchical reinforcement and imitation learning.", "We evaluate our method on both discrete and continuous environments.", "Our experiments show that our method is able to segment the expert demonstrations into different sub-tasks, learn sub-task specific policies and also learn a macro-policy that can combines these sub-task.", "TAB3 : Experiment settings for all the different environments for both DirectedInfo-GAIL and VAE-pretraining step respectively.", "Thus, by maximizing directed information instead of mutual information, we can learn a posterior distribution over the next latent factor c given the latent factors discovered up to now and the trajectory followed up to now, thereby removing the dependence on the future trajectory.", "In practice, we do not consider the H(c) term.", "This gives us the objective, DISPLAYFORM0 In practice, we fix q from the VAE pre-training and only minimize over the policy π in equation 4.", "BID24 to train our policy network with = 0.2.", "For the VAE pre-training step we set the VAE learning rate also to 3e −4 .", "For the Gumbel-Softmax distribution we set an initial temperature τ = 5.0.", "The temperature is annealed using using an exponential decay with the following schedule τ = max(0.1, exp −kt ), where k = 3e − 3 and t is the current epoch." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.13793103396892548, 0.06666666269302368, 0, 0, 0, 0, 0, 0.04651162400841713, 0.10526315122842789, 0.042553190141916275, 0, 0, 0, 0.04878048598766327, 0.052631575614213943, 0, 0, 0, 0, 0, 0, 0, 0.03703703358769417, 0, 0.05128204822540283, 0, 0.038461536169052124, 0.07407406717538834, 0.052631575614213943, 0.028985504060983658, 0.04878048598766327, 0, 0.06451612710952759, 0.06666666269302368, 0, 0, 0, 0, 0, 0, 0, 0, 0.0624999962747097, 0, 0, 0, 0.054054051637649536 ]
BJeWUs05KQ
true
[ "Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information" ]
[ "The Convolutional Neural Network (CNN) has been successfully applied in many fields during recent decades; however it lacks the ability to utilize prior domain knowledge when dealing with many realistic problems.", "We present a framework called Geometric Operator Convolutional Neural Network (GO-CNN) that uses domain knowledge, wherein the kernel of the first convolutional layer is replaced with a kernel generated by a geometric operator function.", "This framework integrates many conventional geometric operators, which allows it to adapt to a diverse range of problems.", "Under certain conditions, we theoretically analyze the convergence and the bound of the generalization errors between GO-CNNs and common CNNs.", "Although the geometric operator convolution kernels have fewer trainable parameters than common convolution kernels, the experimental results indicate that GO-CNN performs more accurately than common CNN on CIFAR-10/100.", "Furthermore, GO-CNN reduces dependence on the amount of training examples and enhances adversarial stability.", "Convolutional Neural Networks have been successfully applied in many fields during recent decades, but the theoretical understanding of the deep neural network is still in the preliminary stages.", "Although CNNs have strong expressive abilities, they have two clear deficiencies.", "First, as complex functional mappings, CNNs, like black boxes, cannot take full advantage of domain knowledge and prior information.", "Second, when little data is available for a certain task, CNNs' generalization ability weakens.", "This is due to overfitting, which may occur due to the large number of parameters and the large model size.", "Stemming from these two defects, a great deal of research has been done to modify CNNs BID7 Wang et al., 2018; Sarwar et al., 2017) .Before", "CNNs were applied, traditional geometric operators had developed quite well. Each geometric", "operator represents the precipitation of domain knowledge and prior information. For example, the", "Sobel operator (Works) is a discrete difference operator, which can extract image edge information for edge detection. The Schmid operator", "(Schmid, 2001 ) is an isotropic circular operator, which extracts texture information from images for face recognition. The Histogram of Oriented", "Gradients (HOG) BID8 ) is a statistic operator of gradient direction, which extracts edge direction distributions from images for pedestrian detection and other uses.Many computer vision tasks require domain knowledge and prior information. For example, in BID2 , the", "texture information from the image is used for an auxiliary diagnosis of a fracture. Geometric operators can make", "use of domain knowledge and prior information, but cannot automatically change parameter values by learning from data. Convolutional Neural Networks", "have strong data expression abilities and learning abilities, but they struggle to make use of domain knowledge. For better data learning, we", "have combined the two. It is natural to directly use", "geometric operators for pre-processing, and then classify the data through a Convolutional Neural Network (Yao et al., 2016) . However, this method uses human", "experience to select geometric operator parameter values, and then carries out the Convolutional Neural Network learning separately. This method is a kind of two-stage", "technique, and without reducing parameter redundancy in a Convolutional Neural Network, it is difficult to achieve global optimization. 
The method proposed in this paper", "directly constructs geometric operator convolution and then integrates geometric operator convolution into a Convolutional Neural Network to form a new framework -the Geometric Operator Convolutional Neural Network. This method achieves global optimizations", "and utilizes the properties of geometric operators.In summary, the contributions of this work are as follows:• This framework can integrates many conventional geometric operators, which reveals its broad customization capabilities when handling diverse problems.• In theory, the same approximation accuracy", "and generalization error bounds are achieved when geometric operators meet certain conditions.• The Geometric Operator Convolutional Neural", "Network not only reduces the redundancy of the parameters, but also reduces the dependence on the amount of the training samples.• The Geometric Operator Convolutional Neural", "Network enhances adversarial stability.", "In this paper, we present a novel framework named the Geometric Operator Convolution Neural Network, where the kernel in the first convolutional layer is replaced with kernels generated by geometric operator functions.", "This new network boasts several contributions.", "Firstly, the GO-CNN is customizable for diverse situations.", "Secondly, there is a theoretical guarantee in the learning framework of the GO-CNN.", "Thirdly, the GO-CNN reduces the dependence on training samples.", "Lastly, the GO-CNN enhances adversarial stability.", "In the future, we can explore a more appropriate geometric operator convolution block." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.1395348757505417, 0, 0, 0, 0, 0.21052631735801697, 0, 0, 0, 0, 0, 0, 0, 0.06451612710952759, 0, 0, 0.06451612710952759, 0.12121211737394333, 0, 0.08695651590824127, 0.10810810327529907, 0.10810810327529907, 0.1111111044883728, 0.15789473056793213, 0.03999999538064003, 0.19354838132858276, 0.11764705181121826, 0, 0.09302324801683426, 0.21052631735801697, 0, 0, 0, 0, 0 ]
BkVvwj0qFm
true
[ "Traditional image processing algorithms are combined with Convolutional Neural Networks,a new neural network." ]
[ "Determinantal point processes (DPPs) is an effective tool to deliver diversity on multiple machine learning and computer vision tasks.", "Under deep learning framework, DPP is typically optimized via approximation, which is not straightforward and has some conflict with diversity requirement.", "We note, however, there has been no deep learning paradigms to optimize DPP directly since it involves matrix inversion which may result in highly computational instability.", "This fact greatly hinders the wide use of DPP on some specific objectives where DPP serves as a term to measure the feature diversity.", "In this paper, we devise a simple but effective algorithm to address this issue to optimize DPP term directly expressed with L-ensemble in spectral domain over gram matrix, which is more flexible than learning on parametric kernels.", "By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of DPP term in case when the DPP gram matrix is not invertible (no gradients exist in this case).", "In this sense, our algorithm can be easily incorporated with multiple deep learning tasks.", "Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems.", "Diversity is desired in multiple machine learning and computer vision tasks (e.g., image hashing (Chen et al., 2017; Carreira-Perpinán & Raziperchikolaei, 2016) , descriptor learning , metric learning (Mishchuk et al., 2017) and video summarization (Sharghi et al., 2018; Liu et al., 2017) ), in which sub-sampled points or learned features need to spread out through a specific bounded space.", "Originated from quantum physics, determinantal point processes (DPP) have shown its power in delivering such properties Kulesza & Taskar, 2011b) .", "Compared with other diversity-oriented techniques (e.g., entropy (Zadeh et al., 2017) and orthogonality ), DPP shows its superiority as it incorporates only one single metric and delivers genuine diversity on any bounded space Affandi et al., 2013; Gillenwater et al., 2012) .", "Therefore, DPP has been utilized in a large body of diversity-oriented tasks.", "In general, sample points from a DPP tend to distribute diversely within a bounded space A .", "Given a positive semi-definite kernel function κ : A × A → R, the probability of a discrete point set X ⊂ A under a DPP with kernel function κ can be characterized as:", "where L is a |X | × |X | matrix with entry L ij = κ(x i , x j ) and x i , x j ∈ X .", "L is called L-ensemble.", "Note that A is a continuous space, whereas X is finite.", "In the Hilbert space associated with κ, larger determinant implies larger spanned volume, thus the mapped points tend not to be similar or linearly dependent.", "DPP can be viewed from two perspectives: sampling and learning.", "A comprehensive introduction to mathematical fundamentals of DPP for sampling from a discrete space can be found in .", "Based on this, a line of works has been proposed (Kulesza & Taskar, 2011a; Kang, 2013; Hennig & Garnett, 2016) .", "In this paper, we concentrate on learning DPPs.", "In learning of DPP, the term det(L) is typically treated as a singleton diversity measurement and is extended to learning paradigms on continuous space (Chao et al., 2015; Kulesza & Taskar, 2010; Affandi et al., 2014) .", "There are generally two lines of strategies to learn DPPs:", "Approximation.", "This type of methods is to convert DPP into a simpler format which can ease and stabilize the 
computation.", "low-rank approximation proves powerful in easing the computational burden (Gartrell et al., 2017) , in which the gram matrix is factorized as L = BB where B ∈ n×m with m n.", "This decomposition can also reduce the complexity which is originally a cubic time of |L|.", "Kulesza & Taskar (2011b) explicitly expressed the kernel with κ(x,", "y) = σ 1 σ 2 δ(x", ") δ(y", "), where σ measures the intrinsic quality of the feature and δ(·) is function mapping input x to a feature space. In", "this sense, the pairwise similarity is calculated in Euclidean feature space with cosine distance. Elfeki", "et al. (2019) suggest approximating a given distribution by approximating the eigenvalues of the corresponding DPP. As such", ", the computation can be eased and become stable. Following", "this, DPP is also applied on some visual tasks, such as video summarization (Sharghi et al., 2018) , ranking (Liu et al., 2017) and image classification (Xie et al., 2017) . It can be", "noted that the approximation is not straightforward for DPP, thus cannot fully deliver the diversity property (e.g. resulting in rank-deficiency).", "Direct optimization.", "While the aforementioned methods optimize DPP with specific approximation, a series of efforts also seek to optimize the DPP term directly (Gillenwater et al., 2014; Mariet & Sra, 2015; Bardenet & Titsias, 2015) .", "In this setting, the whole gram matrix L corresponding to the pairwise similarity among features is updated directly, which allows accommodating more flexible feature mapping functions rather than an approximation.", "Gillenwater et al. (2014) proposed an Expectation-Maximization algorithm to update marginal kernel DPP K = L(L + I) −1 , together with a baseline K-Ascent derived from projected gradient ascent (Levitin & Polyak, 1966) .", "Mariet & Sra (2015) extended DPP from a fixed-point perspective and Bardenet & Titsias (2015) proposed to optimize DPP upon a lower bound in variational inference fashion.", "A key problem of such line of works is that the computation is not differentiable, making it difficult to be used in deep learning frameworks.", "To the best of our knowledge, there is no previous method incorporating DPP as a feature-level diversity metric in deep learning.", "A key difficulty in doing so is that the calculation of the gradient of det(L) involves matrix inversion, which can be unstable and inaccurate in GPUs.", "Though KAscent seems to be a naive rule, it still needs explicit matrix inversion in the first step before the projection procedure.", "This fact greatly hinders the tight integration of DPP with deep networks.", "Some alternative methods seek to reach diversity under more constrained settings.", "For example, resorted to a global pairwise orthogonality constraint in hyper-sphere and Zadeh et al. 
(2017) employed statistical moments to measure the diversity.", "However, compared with DPP, such measurements are unable to fully characterize diversity in an arbitrary bounded space.", "In this paper, rather than providing more efficient DPP solvers, we concentrate on delivering a feasible feature-level DPP integration under the deep learning framework.", "To this end, we revisit the spectral decomposition of DPP and propose a sub-gradient generation method which can be tightly integrated with deep learning.", "Our method differs from either approximation or direct optimization by introducing a \"differentiable direct optimization\" procedure, thus can produce genuinely diverse features in continuous bounded space.", "Our method is stable and scalable to the relatively large dataset with a specific mini-batch sampling strategy, which is verified by several experiments on various tasks.", "Notations: Bold lower case x and bold upper case K represent vector and matrix, respectively.", "det(·) and Tr(·) calculate the determinant and trace of a matrix, respectively.", "A ⊗ B is the element-wise product of matrices A and B. |X | and |x| measure the cardinality of a finite set X and the L 2 length of a vector x, respectively.", "x, y calculates the inner product of the two vectors.", "x = diag(X) transforms a diagonal matrix X into its vector form x, and vice versa.", "We refer \"positive semi-definite\" and \"positive definite\" to PSD and PD, respectively.", "Denote the real numbers.", "In this paper, we investigated the problem of learning diverse features via a determinantal point process under deep learning framework.", "To overcome the instability in computing the gradient which involves the matrix inverse, we developed an efficient and reliable procedure called proper spectral sub-gradient generation.", "The generated proper sub-gradient can replace the true gradient and performs well in applications.", "We also considered how to constrain the features into a bounded space, since in such a way one can ensure the behavior of the network more predictable.", "To this end, we further incorporated Wasserstein GAN into our framework.", "Together, DPP+WGAN showed significant performance on both some common criteria and feature space utility.", "A APPENDIX" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.15789473056793213, 0.1818181723356247, 0.14999999105930328, 0.15094339847564697, 0.03999999538064003, 0.1249999925494194, 0.0624999962747097, 0.11940298229455948, 0.10526315122842789, 0, 0.06666666269302368, 0.12121211737394333, 0.08888888359069824, 0.051282044500112534, 0, 0.0714285671710968, 0.04878048226237297, 0.0714285671710968, 0.1111111044883728, 0.10810810327529907, 0.07692307233810425, 0.11764705181121826, 0.0714285671710968, 0.10810810327529907, 0, 0.060606054961681366, 0, 0, 0.10526315122842789, 0, 0.05882352590560913, 0, 0, 0, 0.1249999925494194, 0.04255318641662598, 0.11538460850715637, 0.1463414579629898, 0.1463414579629898, 0.20512819290161133, 0, 0.10256409645080566, 0.06666666269302368, 0.06896550953388214, 0.09999999403953552, 0.05714285373687744, 0.19512194395065308, 0.2857142686843872, 0.09302324801683426, 0.1860465109348297, 0, 0.06896550953388214, 0.04651162400841713, 0, 0.05882352590560913, 0.1428571343421936, 0, 0.4324324131011963, 0.1463414579629898, 0.1249999925494194, 0.1428571343421936, 0.06896550953388214, 0 ]
rkeIq2VYPr
true
[ "We proposed a specific back-propagation method via proper spectral sub-gradient to integrate determinantal point process to deep learning framework." ]
[ "The quality of a machine translation system depends largely on the availability of sizable parallel corpora.", "For the recently popular Neural Machine Translation (NMT) framework, data sparsity problem can become even more severe.", "With large amount of tunable parameters, the NMT model may overfit to the existing language pairs while failing to understand the general diversity in language.", "In this paper, we advocate to broadcast every sentence pair as two groups of similar sentences to incorporate more diversity in language expressions, which we name as parallel cluster.", "Then we define a more general cluster-to-cluster correspondence score and train our model to maximize this score.", "Since direct maximization is difficult, we derive its lower-bound as our surrogate objective, which is found to generalize point-point Maximum Likelihood Estimation (MLE) and point-to-cluster Reward Augmented Maximum Likelihood (RAML) algorithms as special cases.", "Based on this novel objective function, we delineate four potential systems to realize our cluster-to-cluster framework and test their performances in three recognized translation tasks, each task with forward and reverse translation directions.", "In each of the six experiments, our proposed four parallel systems have consistently proved to outperform the MLE baseline, RL (Reinforcement Learning) and RAML systems significantly.", "Finally, we have performed case study to empirically analyze the strength of the cluster-to-cluster NMT framework.", "Recently, an encode-decoder neural architecture has surged and gained its popularity in machine translation.", "In this framework, the encoder builds up a representation of the source sentence and the decoder uses its previous RNN hidden state and attention mechanism to generate target translation.", "In order to better memorize the input information, an attention mechanism has been exploited to further boost its performance.", "In order to train the attentive encoder-decoder architecture, Maximum Likelihood Estimation (MLE) algorithm has been widely used, which aims at maximizing the point-to-point (one sentence to one sentence) log-likelihood of data pairs in a given dataset.", "However, this algorithm has severely suffered from data sparsity problem, or in other word, maximizing only likelihood the existing language pairs might make the model blind to all the non-existing similar sentence pairs.", "Thus, the large neural model might overfit to certain prototypes existing in the training set while failing to generalize more unseen but similar scenarios in test time.hurting its semantic meaning.", "2) Model-Centroid Augmentation (RL), and BID13 leverage model-generated candidates as pseudo training samples, which are weighted with rewards to enhance the model learning.", "By exploring self-generated candidates, the model is able to understand the diversity in the output space.", "In pseudo-learning algorithms, both RAML and RL can be interpreted as broadcasting a target ground truth as a cluster of analogues while leaving the source input untouched, which though helps the model understand target diversity, fails to capture the input diversity.", "In order to explore both sides' diversity, we advocate a novel and general cluster-to-cluster framework of pseudo learning, which first broadcasts both source and target sentence as clusters and then train the model to comprehend their correspondence, as described in FIG0 .In", "this paper, we first introduce the concept of parallel cluster, then 
design the cluster-to-cluster correspondence score as our optimization objective, based on which, we derive its lower bound KL-divergence as our surrogate objective for model training. In", "order to realize our proposed framework, we design four parallel systems and apply them to three recognized machine translation tasks with both forward and reverse translation directions, these four systems have all demonstrated their advantages over the existing competing algorithms in six translation tasks. In", "the appendices, we draw samples from the parallel clusters and further analyze their properties to verify our motivation.The contributions of our paper can be summarized as follows: 1)", "We are the first to propose the concept of cluster-to-cluster framework, which provides a novel perspective to current sequence-tosequence learning problems. 2)", "We delineate the framework and arrive in a novel KL-divergence loss function and generalizes several existing algorithms as special cases, which provides a highlevel understanding about the previous algorithms.2 RELATED", "LITERATURE", "In this paper, we propose a cluster-to-cluster learning framework and incorporate this concept into neural machine translation.", "Our designed systems have proved to be efficient in helping current NMT model to generalize in both source and target sides.", "In the cluster-to-cluster framework, the cooperation of four agents can augment valuable samples and alleviate data sparsity, and achieve significant improvement compared with strong baseline systems.", "We believe the concept of clusterto-cluster learning can be applicable to a wide range of natural language or computer vision tasks, which will be explored in the future.", "Appendices A SYSTEM-DESIGN Sequence to sequence problem (machine translation) can be considered to produce an output sequence Y = (y 1 , y 2 , . . . , y T ), y t ∈ A given an input X. 
Given input-target pairs (X, Y * ), the generated sequence Y on test is evaluated with task-specific score R(Y, Y * ).", "Recurrent neural networks have been widely used in sequence to sequence prediction tasks.", "As proposed in and , the basic idea is to first encode the input sequence as a variablelength feature vectors, then apply attention mechanism to compute weighted average over the input vectors and summarize a context vector, with which, previous hidden states and previous label are fed into the decoder RNN to predict the next state and its label.", "In our approach, attention-based encoder-decoder is leveraged for both the translation and cluster models, shown as: DISPLAYFORM0 A.1", "RL NMT In order to train our RL system as well as adaptive cluster, we need to define a task-level reward as driving signal.", "Instead of directly applying BLEU or other evaluation metric, we advocate to use a surrogate n-gram match interpolation, as shown as: DISPLAYFORM1 where N n denotes the number of n-gram match between Y and Y * .", "In order to alleviate sequencereward sparseness, we further split it as a series of local reward to drive model's policy search at every time step.", "Formally, we write the step-wise reward r(y t |y 1:t−1 , Y * ) as following.", "(22) where N (Y,Ỹ ) represents the occurrence of n-gramỸ in sequence Y , specifically, if a certain nsequence y t−n+1:t appears in reference and it's not repeating more than needed, then we assign a corresponding matching score to y t , the policy gradient is described as: DISPLAYFORM2 DISPLAYFORM3 A.2", "RAML NMT In order to sample from the intractable payoff distribution for system-A/B as well as our implemented RAML system, we adopt stratified sampling technique described in .", "Given a sentence Y * , we first sample an edit distance m, and then randomly select m positions to replace the original labels.", "For each sentence, we randomly sample four candidates to perform RAML training.", "DISPLAYFORM4 B MATHEMATICAL ANALYSIS We optimize the model parameters of our cluster-to-cluster models by minimizing the lower-bound KL-divergence instead of maximizing the original correspondence score, to characterize the difference between the two objective function, we analyze the relationships between these two functions below: DISPLAYFORM5 which can be further written as: DISPLAYFORM6 therefore, we can derive: DISPLAYFORM7 Since both cluster and translation confidence score c(Y |Y * , X * ) and w(Y |X, X * ) require computing the marginalized probability p(Y |X * ) known to be intractable for variable-length sequences, here we adopt different mechanisms to approximate them.", "In system-A and C, we simplify DISPLAYFORM8 pη(Y |X * ) .", "In system-B and D, since Y is broadcast through the translation system, the marginalized probabilityp(Y |X * ) is close to one, we discard this factor and approximate c(Y |Y DISPLAYFORM9" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.10810810327529907, 0.24390242993831635, 0.1304347813129425, 0.1666666567325592, 0.07999999821186066, 0.15686273574829102, 0.09090908616781235, 0.22857142984867096, 0.05882352590560913, 0.21739129722118378, 0.10526315122842789, 0.1111111044883728, 0.07999999821186066, 0.0416666604578495, 0.1395348757505417, 0.1764705777168274, 0.3636363446712494, 0.3571428656578064, 0.11320754140615463, 0.10344827175140381, 0.12765957415103912, 0.29999998211860657, 0.2978723347187042, 0.2222222238779068, 0.25641024112701416, 0.1818181723356247, 0.2666666507720947, 0.060606054961681366, 0, 0.09090908616781235, 0.20512819290161133, 0.09999999403953552, 0.11538460850715637, 0.045454539358615875, 0.0555555522441864, 0.08955223113298416, 0.13333332538604736, 0.13636362552642822, 0, 0.16326530277729034, 0.06451612710952759, 0.0833333283662796 ]
BykJlIAbM
true
[ "We invent a novel cluster-to-cluster framework for NMT training, which can better understand the both source and target language diversity." ]
[ "The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems.", "In this work, we study the learning to explain the problem in the scope of inductive logic programming (ILP).", "We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data.", "In experiments, compared with the state-of-the-art models, we find NLIL is able to search for rules that are x10 times longer while remaining x3 times faster.", "We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.", "In this work, we propose Neural Logic Inductive Learning, a differentiable ILP framework that learns explanatory rules from data.", "We demonstrate that NLIL can scale to very large datasets while being able to search over complex and expressive rules.", "More importantly, we show that a scalable ILP method is effective in explaining decisions of supervised models, which provides an alternative perspective for inspecting the decision process of machine learning systems." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0, 0.19354838132858276, 0.6486486196517944, 0.1538461446762085, 0.1249999925494194, 0.3636363446712494, 0.1818181723356247, 0.13636362552642822 ]
SJlh8CEYDB
true
[ "An efficient differentiable ILP model that learns first-order logic rules that can explain the data." ]
[ "Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems.", "Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways.", "When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations.", "Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems.\n\n", "We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints.", "We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations.", "We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation.", "Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world.", "Our results show that adversarial examples are a practical concern for real-world systems.\n", "The existence of adversarial examples for neural networks has until now been largely a theoretical concern.", "While minute, carefully-crafted perturbations can cause targeted misclassification in a neural network, adversarial examples produced using standard techniques lose adversariality when directly translated to the physical world as they are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise.", "This suggests that practical systems may not be at risk because adversarial examples generated using standard techniques are not robust in the physical world.", "We show that neural network-based classifiers are vulnerable to physical-world adversarial examples.", "We introduce a new algorithm for reliably producing physical 3D objects that are adversarial over a distribution of viewpoints.", "FIG0 shows an example of an adversarial object constructed using our approach, where a 3D-printed turtle is consistently classified as rifle by an ImageNet classifier.", "In this paper, we demonstrate the efficacy and generality of our method, demonstrating conclusively that adversarial examples are a concern in real-world systems.", "The results and quantative analysis in this section demonstrate the efficacy of EOT and confirm the existence of physical adversarial examples.", "Here, we perform a qualitative analysis of the results:Modeling Perception.", "The EOT algorithm as presented in Section 2 presents a general method to construct adversarial examples over a chosen perceptual distribution, but notably gives no guarantees for observations of the image outside of the chosen distribution.", "In constructing physical-world adversarial objects, we use a crude, high-variance approximation of the rendering and capture process, and this succeeds in ensuring robustness to a diverse set of environments; see, for example, FIG5 , which shows the same adversarial turtle in vastly different environments.", "In specialized 1 Although the viewpoints were not selected in any way and were simply the result of walking around the objects, moving them up/down, etc., we hesitate to call them \"random\" since 
they were not in fact generated numerically or sampled from a concrete distribution, in contrast with the rendered 3D examples.", "domains, however, a domain expert may opt to model the perceptual distribution precisely in order to better constrain the search space.", "Our work shows that adversarial examples pose a practical threat to systems using neural networkbased image classifiers.", "By introducing EOT, a general-purpose algorithm for creating robust adversarial examples under any chosen distribution, and modeling 3D rendering and printing within the framework of EOT, we succeed in fabricating three-dimensional adversarial objects.", "With access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are strongly classified as a chosen target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.14999999105930328, 0.31372547149658203, 0.17777776718139648, 0.4444444477558136, 0.29999998211860657, 0.2926829159259796, 0.2790697515010834, 0.2222222238779068, 0.21052631735801697, 0.2769230604171753, 0.31111109256744385, 0.23529411852359772, 0.44999998807907104, 0.08888888359069824, 0.2666666507720947, 0.29999998211860657, 0.1249999925494194, 0.29629629850387573, 0.2666666507720947, 0.20895521342754364, 0.19512194395065308, 0.20512819290161133, 0.38461539149284363, 0.23728813230991364 ]
BJDH5M-AW
true
[ "We introduce a new method for synthesizing adversarial examples robust in the physical world and use it to fabricate the first 3D adversarial objects." ]
[ "Representations of sets are challenging to learn because operations on sets should be permutation-invariant.", "To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end.", "The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models.", "We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering.\n", "Consider a task where each input sample is a set of feature vectors with each feature vector describing an object in an image (for example: person, table, cat).", "Because there is no a priori ordering of these objects, it is important that the model is invariant to the order that the elements appear in the set.", "However, this puts restrictions on what can be learned efficiently.", "The typical approach is to compose elementwise operations with permutation-invariant reduction operations, such as summing (Zaheer et al., 2017) or taking the maximum (Qi et al., 2017) over the whole set.", "Since the reduction operator compresses a set of any size down to a single descriptor, this can be a significant bottleneck in what information about the set can be represented efficiently (Qi et al., 2017; Le & Duan, 2018; Murphy et al., 2019) .We", "take an alternative approach based on an idea explored in Vinyals et al. (2015a) , where they find that some permutations of sets allow for easier learning on a task than others. They", "do this by ordering the set elements in some predetermined way and feeding the resulting sequence into a recurrent neural network. For", "instance, it makes sense that if the task is to output the top-n numbers from a set of numbers, it is useful if the input is already sorted in descending order before being fed into an RNN. This", "approach leverages the representational capabilities of traditional sequential models such as LSTMs, but requires some prior knowledge of what order might be useful.Our idea is to learn such a permutation purely from data without requiring a priori knowledge (section 2). The", "key aspect is to turn a set into a sequence in a way that is both permutation-invariant, as well as differentiable so that it is learnable. Our", "main contribution is a Permutation-Optimisation (PO) module that satisfies these requirements: it optimises a permutation in the forward pass of a neural network using pairwise comparisons. By", "feeding the resulting sequence into a traditional model such as an LSTM, we can learn a flexible, permutation-invariant representation of the set while avoiding the bottleneck that a simple reduction operator would introduce. Techniques", "used in our model may also be applicable to other set problems where permutation-invariance is desired, building on the literature of approaches to dealing with permutation-invariance (section 3).In four different", "experiments, we show improvements over existing methods (section 4). The former two tasks", "measure the ability to learn a particular permutation as target: number sorting and image mosaics. We achieve state-of-the-art", "performance with our model, which shows that our method is suitable for representing permutations in general. 
The latter two tasks test whether", "a model can learn to solve a task that requires it to come up with a suitable permutation implicitly: classification from image mosaics and visual question answering. We provide no supervision of what", "the permutation should be; the model has to learn by itself what permutation is most useful for the task at hand. In the ordering cost C, elements", "of X are compared to each other (blue represents a negative value, red represents a positive value). Gradients are applied to unnormalised", "permutations P (t) , which are normalised to proper permutations P (t) .Here, our model also beats the existing", "models and we improve the performance of a state-of-the-art model in VQA with it. This shows that our PO module is able to", "learn good permutation-invariant representations of sets using our approach.", "In this paper, we discussed our Permutation-Optimisation module to learn permutations of sets using an optimisation-based approach.", "In various experiments, we verified the merit of our approach for learning permutations and, from them, set representations.", "We think that the optimisation-based approach to processing sets is currently underappreciated and hope that the techniques and results in this paper will inspire new algorithms for processing sets in a permutation-invariant manner.", "Of course, there is plenty of work to be done.", "For example, we have only explored one possible function for the total cost; different functions capturing different properties may be used.", "The main drawback of our approach is the cubic time complexity in the set size compared to the quadratic complexity of Mena et al. FORMULA0 , which limits our model to tasks where the number of elements is relatively small.", "While this is acceptable on the real-world dataset that we used -VQA with up to 100 object proposals per image -with only a 30% increase in computation time, our method does not scale to the much larger set sizes encountered in domains such as point cloud classification.", "Improvements in the optimisation algorithm may improve this situation, perhaps through a divide-and-conquer approach.We believe that going beyond tensors as basic data structures is important for enabling higher-level reasoning.", "As a fundamental mathematical object, sets are a natural step forward from tensors for modelling unordered collections.", "The property of permutation invariance lends itself to greater abstraction by allowing data that has no obvious ordering to be processed, and we took a step towards this by learning an ordering that existing neural networks are able to take advantage of." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07407406717538834, 0.3448275923728943, 0.3529411852359772, 0.12244897335767746, 0.15789473056793213, 0.1666666567325592, 0, 0.1428571343421936, 0.11764705181121826, 0.04444444179534912, 0.11428570747375488, 0.17391303181648254, 0.07692307233810425, 0.17142856121063232, 0.05128204822540283, 0.13636362552642822, 0.1395348757505417, 0, 0.1249999925494194, 0.05714285373687744, 0.13636362552642822, 0.05405404791235924, 0.1249999925494194, 0.06666666269302368, 0.1621621549129486, 0, 0.06451612710952759, 0.0624999962747097, 0.09756097197532654, 0.0833333283662796, 0, 0.08888888359069824, 0.13793103396892548, 0.045454539358615875, 0.06666666269302368, 0.07999999821186066 ]
HJMCcjAcYX
true
[ "Learn how to permute a set, then encode permuted set with RNN to obtain a set representation." ]
[ "The physical design of a robot and the policy that controls its motion are inherently coupled.", "However, existing approaches largely ignore this coupling, instead choosing to alternate between separate design and control phases, which requires expert intuition throughout and risks convergence to suboptimal designs.", "In this work, we propose a method that jointly optimizes over the physical design of a robot and the corresponding control policy in a model-free fashion, without any need for expert supervision.", "Given an arbitrary robot morphology, our method maintains a distribution over the design parameters and uses reinforcement learning to train a neural network controller.", "Throughout training, we refine the robot distribution to maximize the expected reward.", "This results in an assignment to the robot parameters and neural network policy that are jointly optimal.", "We evaluate our approach in the context of legged locomotion, and demonstrate that it discovers novel robot designs and walking gaits for several different morphologies, achieving performance comparable to or better than that of hand-crafted designs.", "An agent's ability to navigate through and interact with its environment depends not just on its skill at planning and controlling its motion, but also on its physical design.", "Different physical designs are inherently better suited to different tasks and environments.", "By making appropriate choices during fabrication, mechanical elements can be designed to improve robustness to non-idealities such as errors in perception, delays in actuation, etc., and indeed, make control problem an easier one to solve.", "At the same time, robots that take different forms may find completely different control strategies to be optimal to complete the same task.", "Therefore, the physical and computational design of an agent are inherently coupled, and must ideally be jointly optimized if the robot is to successfully complete a task in a particular environment.Consider the development of a legged robot for locomotion.", "Variations in physical design will require changes to the joint torques in order to preserve a particular locomotion behavior (e.g., a heavier torso requires greater torque at the ankle), and will likely result in completely different walking gaits, even when the morphology is preserved.", "In fact, some changes to design may render locomotion impossible for the target operating environment (e.g., a robot with long feet may be unable to locomote up an incline).", "Meanwhile, careful choice of bipedal design enables passive walking BID20 BID9 BID4 .", "It is therefore beneficial to not simply consider the robot's design or gait to be fixed, but to optimize both jointly for the target environment and task.", "Similar co-design can be beneficial in other settings-for example for the control policy and physical characteristics of digits in robotic grippers for grasping.While a robot's physical design and the corresponding control policy are inherently coupled, most existing methods ignore this coupling, instead choosing to alternate between separate design and control phases.", "Existing approaches that jointly reason over design and control BID7 BID12 BID46 assume knowledge of an accurate model of the robot dynamics and require expert supervision (e.g., to provide a suitable initial design and guide the optimization process).", "However, these restrictive assumptions limits their applicability to a handful of specific settings, and often yield solutions 
heavily influenced by expert intuition.In this work, we seek a general approach-one that can optimize a robot's physical characteristics jointly with controllers of a desired complexity (Fig. 1) , that can be applied to general tasks in some DISPLAYFORM0 Figure 1: Our algorithm learns a robot's physical design jointly with the control policy.", "Here we show the learned designs evolving over time for the Hopper (top left), the Walker2d (top right) and the Ant (bottom), each with the default Roboschool design for comparison.", "Scale is fixed for each robot.", "Note that these designs correspond to modes of the distribution over robot designs that our algorithm maintains during training.given environment, and that can explore the joint search space of physical design and computational control in a purely data-driven way, without a model of the robot dynamics and independent of the biases of expert intuition.", "We develop this approach in the context of determining the physical parameters of an articulated agent-the lengths and thicknesses of each limbs in a given morphologythrough joint training with a neural network for control, with the objective of achieving locomotion.", "Our method maintains a distribution over these physical parameters, and simultaneously trains the parameters of this distribution with those of a neural network controller, using deep reinforcement learning.", "In this way, we pursue a design distribution and control policy that are jointly optimal for the given task and environment.", "Experimental results show that starting from random initializations, our approach is able to find novel designs and walking gaits that match or exceed the performance of manually designed agents.", "To the best of our knowledge, our method is the first to successfully carry out such a joint optimization of design and control in a completely model-free manner.", "We proposed what is, to the best of our knowledge, the first model-free algorithm that jointly optimizes over the physical design of a robot and the corresponding control policy, without any need for expert supervision.", "Given an arbitrary morphology, our robot maintains a distribution over the robot design parameters and learns these parameters together with a neural network controller using policy gradient-based reinforcement learning.", "This results in an assignment to the policy over robot parameters and the control policy that are jointly optimal.", "We evaluated our approach on a series of different legged robot morphologies, demonstrating that it results in novel robot designs and walking gaits, achieving performance that either matches or exceeds that of manually defined designs.Our findings suggest several avenues for future work.", "The most direct is extending the current approach to find optimized designs for uneven terrain, the presence of obstacles, changes in slope, variations in friction, etc.", "We are also interested in extending our framework to relax the assumption that the morphology is pre-defined.", "Finally, we are investigating applications to different types of agents and design spaces beyond legged robots (e.g., end-effectors), and exploring appropriate stochastic parameterization for such designs." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4375, 0.1428571343421936, 0.4000000059604645, 0.3589743673801422, 0.2222222238779068, 0.3030303120613098, 0.1666666567325592, 0.19999998807907104, 0.1428571343421936, 0.08163265138864517, 0.17142856121063232, 0.3265306055545807, 0.1818181723356247, 0.2666666507720947, 0.1428571343421936, 0.19999998807907104, 0.27586206793785095, 0.31372547149658203, 0.2702702581882477, 0.14999999105930328, 0.09090908616781235, 0.28070175647735596, 0.2083333283662796, 0.39024388790130615, 0.3333333432674408, 0.13636362552642822, 0.29999998211860657, 0.3829787075519562, 0.380952388048172, 0.3636363446712494, 0.1111111044883728, 0.14999999105930328, 0.1249999925494194, 0.1395348757505417 ]
SyfiiMZA-
true
[ "Use deep reinforcement learning to design the physical attributes of a robot jointly with a control policy." ]
[ "Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data.", "Applications of neural networks often consider learning in the context of a single task.", "However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks.", "Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks.", "However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits.", "Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide.", "In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training.", "To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix.", "By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks.", "On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both.", "While deep learning has enabled remarkable levels of generalization through the use of function approximators, this comes at the cost of large amounts of data, which remains a critical challenge in deploying deep learning to a number of domains.", "When combined with deep networks, multitask learning offers the promise of building more powerful representations using less data per task, leading to greater performance and data efficiency.", "However, multi-task deep learning has also posed considerable challenges.", "Numerous works have observed that joint training on multiple tasks can actually decrease task performance due to the negative influence of other tasks (Parisotto et al., 2015; Rusu et al., 2016a) .", "Indeed, training networks entirely independently on each task has remained a strong approach, to the point that multiple multi-task methods have first trained models independently before using them to train a multi-tasking model (Parisotto et al., 2015; Rusu et al., 2016a; Ghosh et al., 2017; Teh et al., 2017; .", "Moreover, our experiments in Section 6 indicate that three recently proposed methods for multi-task learning are all surpassed by training models independently per task.", "However, training independent models will only work well when provided enough data per task, and precludes potential positive data-efficiency gains from multi-task learning, only providing protection against negative transfer.", "Further, while a number of works have successfully shared parameters, finding an architecture with the appropriate level of parameter sharing for a given problem domain can require a considerable amount of manual engineering.", "In this work, we aim to develop a multi-task learning method that can perform well both when tasks share 
very little and when they share a large amount of structure.", "To address this problem, we consider how a single neural network model can represent two extremes: independent models, when optimization challenges prevail, or a single model with shared weights, when sharing is beneficial.", "Further, we would like such a model to be able to represent intermediate levels of model sharing, when appliable.", "One option for performing independent training within a single model is to put separate networks with independent weights into a single model, using the task ID to select which network prediction to output.", "However, this prevents any sharing.", "An alternative approach is to condition the model on the task ID, through various conditioning approaches, including additive and multiplicative approaches such as FiLM (Perez et al., 2018) .", "In fact, point-wise multiplicative conditioning, as proposed in FiLM, can indeed represent separate networks by selecting which parts of the network to be used for different tasks, as can a number of other approaches in multi-task learning (Rosenbaum et al., 2017; 2019; Fernando et al., 2017 ).", "Yet, these approaches still require an optimization over shared parameters in order to select which parameters are used for each task.", "These shared parameters can introduce significant optimization challenges.", "We instead consider how to allow a model to perform optimization on only shared parameters, only disjoint parameters, or any combination thereof.", "We can achieve this by simply interleaving learned per-task matrices at each layer of a jointly-trained neural network.", "When optimization over shared parameters is ineffective, the model can still represent a full neural network per task using only the per-task matrices, resulting in independent training; while using identical per-task matrices results in standard joint training.", "Intermediately, a mix of shared and per-task parameters may be used.", "In effect, by incorporating these matrices into the network, the optimizer itself can automatically and dynamically modulate the degree to which a representation is shared between tasks, depending on the problem domain and the optimization progress, and can do so without having to optimize shared parameters.", "The primary contribution of this paper is a simple yet effective approach for multi-task learning that can represent and smoothly interpolate between independent training and joint training, via matrix interleaving (Mint).", "We describe how we can implement Mint in deep multi-task models and show its effectiveness in improving data efficiency and generalization in multi-task settings while providing intuition about the reasons why this architecture performs so well.", "Further, we show that the model can be extended to goal-conditioned reinforcement learning in a straightforward manner by allowing the model to generate the interleaved matrices conditioned on task information such as the goal.", "We evaluate Mint on sets of tasks with both high and low levels of shared structure and find that it performs well in both settings, performing comparably to or outperforming both joint training and independent training, effectively combining the best elements of both.", "Further, in comparison to previous methods that use multiplicative interactions for continual learning (Cheung et al., 2019) and for general conditioning (Perez et al., 2018) , Mint is better able to separate tasks by avoiding the need to optimize over shared parameters and can 
empirically produce substantially better performance on a range of challenging multi-task problems.", "Finally, Mint also outperforms state-of-the-art approaches for multi-task learning while being significantly simpler to implement.", "Simultaneous optimization of multiple, potentially unrelated tasks can prove challenging for deep neural networks.", "Recent multi-task learning architectures attempt to mitigate this issue by providing alternative pathways for information to flow through a neural network for each task.", "In this paper, we introduce a new multi-task learning module, Mint, which provides theoretical guarantees of universal approximation even for multi-task settings with no shared structure.", "We conjecture that this property, not shared by similar multi-task architectures, enables Mint to outperform other multi-task approaches on a variety of reinforcement learning benchmarks.", "We also observe that Mint is able to match or improve upon the performance of independent training.", "While Mint exhibits strong performance gains over previous methods, one potential limitation is that the task matrices may introduce a significant number of parameters, particularly as the number of tasks increases.", "As discussed, this can be alleviated for problem domains with many tasks, by learning a single neural network that produces the matrices and biases conditioned on the task descriptor.", "Further, in our experiments, we find that Mint-based networks can outperform prior methods while using comparable or fewer parameters.", "In summary, Mint is a simple, yet effective approach for deep multi-task learning.", "Its implementation requires minimal modifications over standard deep networks.", "As a result, we expect it to be straightforward for future work to build upon or use Mint for more effective multi-task learning in deep networks.", "A PROOF OF THEOREM 1 Lemma 1.", "For a given α i , applying Mint to y (l−1) can express an arbitrary affine transformation at layer l for each task." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.20512820780277252, 0.19607841968536377, 0.22727271914482117, 0.3103448152542114, 0.15094339847564697, 0.7346938848495483, 0.17543859779834747, 0.21276594698429108, 0.4000000059604645, 0.17543859779834747, 0.19230768084526062, 0.11428570747375488, 0.1818181723356247, 0.21212120354175568, 0.1599999964237213, 0.14814814925193787, 0.145454540848732, 0.2641509473323822, 0.2545454502105713, 0.1860465109348297, 0.3333333134651184, 0, 0.18518517911434174, 0.20588235557079315, 0.1304347813129425, 0, 0.17777776718139648, 0.09090908616781235, 0.23728813230991364, 0.10810810327529907, 0.158730149269104, 0.4285714328289032, 0.13793103396892548, 0.2181818187236786, 0.35483869910240173, 0.18666666746139526, 0.1463414579629898, 0, 0.1666666567325592, 0.19607841968536377, 0.23999999463558197, 0.2790697515010834, 0.1111111044883728, 0.25925925374031067, 0.04444443807005882, 0.25641024112701416, 0, 0.19999998807907104, 0, 0.12244897335767746 ]
BJxnIxSKDr
true
[ "We propose an approach that endows a single model with the ability to represent both extremes: joint training and independent training, which leads to effective multi-task learning." ]
[ "Training agents to operate in one environment often yields overfitted models that are unable to generalize to the changes in that environment.", "However, due to the numerous variations that can occur in the real-world, the agent is often required to be robust in order to be useful.", "This has not been the case for agents trained with reinforcement learning (RL) algorithms.", "In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks.", "Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously. \n", "We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that would encourage the invariance of a policy to variations in the observations that ought not to affect the action taken.", "The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training.\n", "Learning control policies from high-dimensional sensory input has been gaining more traction lately due to the popularity of deep reinforcement learning (DRL) Mnih et al. (2015) ; ; Zhang et al. (2018b) ; Rakelly et al. (2019) , which enables learning the perception and control modules simultaneously.", "However, most of the work done in RL chooses to evaluate the learned policies in the same environment in which training occurred Cobbe et al. (2018) .", "Using the same environments to train and test agents does not give any insight into the generalization abilities of the learned policy.", "There could be a number of changes in the environment at test time that would degrade the agent's performance.", "Variations could appear in the visual aspects that determine the agent's observation, the physical structure that determines the agent's state and even some aspects that are related to the agent's goal (Figure 1 ).", "For example, different observations of the same room are encountered at different times of the day (different lighting conditions).", "New obstacles could be present.", "Levels of a game could be different, yet playing a few levels should often be enough to figure out how to play the rest.", "Such variations might result in a new environment where the control model that defined the training environment has changed.", "A robust policy should generalize from its experience and perform the same skills in the presence of these variations.", "DRL agents have been notorious for overfitting to their training environments Cobbe et al. (2018) .", "An agent could have drastically different performance on testing environments even if it manages to maximize the reward during training Zhang et al. (2018a) .", "Supervised learning algorithms have been shown to have some generalization guarantees when adding proper regularization Mohri et al. 
(2018) .", "However, these guarantees are weakened in reinforcement learning algorithms where the source of the data is not i.i.d..", "In order to make use of the progress of DRL algorithms in practice we need policies that are robust to possible changes in the sensory inputs, surrounding structure and even some aspects of the task.", "In this paper we study the notion of generalization that is appropriate for visual navigation control policies that are learned with DRL.", "We present: (1) a study of the generalization of visual control policies to certain changes in the underlying dynamical system; (2) an alternative training method that combines DRL with supervised learning, thus using DRL to learn a controller while leveraging the generalization properties of supervised learning.", "In our experiments we use the VizDoom platform Kempka et al. (2016) which is easily customizable and enables the generation of numerous variants of a given environment.", "We present a study of the generalization capabilities of visual navigation agents trained with deep reinforcement learning algorithms.", "We formalize what it means to generalize in the context of a POMDP.", "We find that the tendency of RL agent to overfit even when exposed to large training sets is quite visible.", "We show that using domain randomization with RL, without adding invariant features to the input such as the depth maps, is not enough to generalize.", "In the second part, we proposed Invariance Regularization (IR), a method that attempts to regularize the RL model with a supervised learning loss.", "It improves the generalization success and displays stable performance across different seeds.", "In this work, we focused our experimentation on generalization to changes in the input observation.", "However, it is also interesting to generalize the learned skills to different architectural designs of the environment, just as one one wishes to generalize to different levels of the game as proposed in the retro competition Nichol et al. (2018) .", "Another avenue of future work is to explore the appropriate transformation function T of the observations.One might consider an adaptive form of T learned with data augmentation Cubuk et al. (2018) or adversarial examples Goodfellow et al. (2015 The first part consists of training RL on the observations of the original training environment, while the second part can be seen as a supervised learning objective on the transformed observations, as shown in Algorithm 1.", "The first step trains RL on one environment and then use the actions that the trained policy would have taken in that environment to tune the model with supervised learning on the textured environments.", "In the reported experiments using the split version, the model is trained with one iteration of the algorithm.", "Therefore, the training process has two stages, train RL then train with a supervised learning setup, without iterating between both." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.13333332538604736, 0.14999999105930328, 0.09302324801683426, 0.09302324801683426, 0.3103448152542114, 0.0833333283662796, 0.21875, 0.0833333283662796, 0.17391303181648254, 0.1818181723356247, 0.11764705181121826, 0.0476190410554409, 0.06451612710952759, 0.1702127605676651, 0.09302324801683426, 0.13636362552642822, 0.04878048226237297, 0.1599999964237213, 0.1818181723356247, 0.13636362552642822, 0.145454540848732, 0.04255318641662598, 0.2222222238779068, 0.11764705181121826, 0.23255813121795654, 0.20512820780277252, 0.17777776718139648, 0.20408162474632263, 0.1702127605676651, 0.10526315122842789, 0.19512194395065308, 0.072727270424366, 0.119047611951828, 0.18518517911434174, 0.04878048226237297, 0.13333332538604736 ]
B1xtFpVtvB
true
[ "We propose a regularization term that, when added to the reinforcement learning objective, allows the policy to maximize the reward and simultaneously learn to be invariant to the irrelevant changes within the input.." ]
[ "Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.", "In this paper, we present a universal visual representation learned over the monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT.", "In detail, a group of images with similar topics to the source sentence will be retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then is encoded as image representations by a pre-trained ResNet.", "An attention layer with a gated weighting is to fuse the visual information and text information as input to the decoder for predicting target translations.", "In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to the multimodel NMT.", "Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.", "Visual information has been introduced for neural machine translation in some previous studies (NMT) Barrault et al., 2018; Ive et al., 2019) though the contribution of images is still an open question (Elliott, 2018; Caglayan et al., 2019) .", "Typically, each bilingual (or multilingual) parallel sentence pair is annotated manually by one image describing the content of this sentence pair.", "The bilingual parallel corpora with manual image annotations are used to train a multimodel NMT model by an end-to-end framework, and results are reported on a specific data set, Multi30K .", "One strong point of the multimodel NMT model is the ability to use visual information to improve the quality of the target translation.", "However, the effectiveness heavily relies on the availability of bilingual parallel sentence pairs with manual image annotations, which hinders the image applicability to the NMT.", "As a result, the visual information is only applied to the translation task over a small and specific multimodel data set Multi30K , but not to large-scale text-only NMT (Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) and low-resource text-only NMT (Fadaee et al., 2017; Lample et al., 2018; .", "In addition, because of the high cost of annotation, the content of one bilingual parallel sentence pair is only represented by a single image, which is weak in capturing the diversity of visual information.", "The current situation of introducing visual information results in a bottleneck in the multimodel NMT, and is not feasible for text-only NMT and low-resource NMT.", "In this paper, we present a universal visual representation (VR) method 1 relying only on image-monolingual annotations instead of the existing approach that depends on image-bilingual annotations, thus breaking the bottleneck of using visual information in NMT.", "In detail, we transform the existing sentence-image pairs into topic-image lookup table from a small-scale multimodel data set Multi30K.", "During the training and decoding process, a group of images with similar topic to the source sentence will be retrieved from the topic-image lookup table learned by the term frequency-inverse document frequency, and thus is encoded as image 
representations by a pretrained ResNet (He et al., 2016) .", "A simple and effective attention layer is then designed to fuse the image representations and the original source sentence representations as input to the decoder for predicting target translations.", "In particular, the proposed approach can be easily integrated into the text-only NMT model without annotating large-scale bilingual parallel corpora.", "The proposed method was evaluated on four widely-used translation datasets, including the WMT'16 Englishto-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K which are standard corpora for NMT and multi-modal machine translation (MMT) evaluation.", "Experiments and analysis show effectiveness.", "In summary, our contributions are primarily three-fold:", "1. We present a universal visual representation method that overcomes the shortcomings of the bilingual (or multilingual) parallel data with manual image annotations for MMT.", "2. The proposed method enables the text-only NMT to use the multimodality of visual information without annotating the existing large scale bilingual parallel data.", "3. Experiments on different scales of translation tasks verified the effectiveness and generality of the proposed approach.", "This work presents a universal visual representation method for neural machine translation relying on monolingual image annotations, which breaks the restraint of heavy dependency on bilingual sentence-image pairs in the current multimodal NMT setting.", "In particular, this method enables visual information to be applied to large-scale text-only NMT through a topic-image lookup.", "We hope this work sheds some light for future MMT research.", "In the future, we will try to adopt the proposed method to other tasks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.25, 0.3571428656578064, 0.29032257199287415, 0.2083333283662796, 0.22727271914482117, 0.07692307233810425, 0.23728813230991364, 0.04444443807005882, 0.18518517911434174, 0.1818181723356247, 0.21276594698429108, 0.15625, 0.1111111044883728, 0.2083333283662796, 0.23728813230991364, 0.04444443807005882, 0.23529411852359772, 0.1599999964237213, 0.08888888359069824, 0.1818181723356247, 0, 0, 0.2800000011920929, 0.1666666567325592, 0.09756097197532654, 0.4482758641242981, 0.1860465109348297, 0.10810810327529907, 0.10526315122842789 ]
Byl8hhNYPS
true
[ "This work proposed a universal visual representation for neural machine translation (NMT) using retrieved images with similar topics to source sentence, extending image applicability in NMT." ]
[ "This paper introduces a novel framework for learning algorithms to solve online combinatorial optimization problems.", "Towards this goal, we introduce a number of key ideas from traditional algorithms and complexity theory.", "First, we draw a new connection between primal-dual methods and reinforcement learning.", "Next, we introduce the concept of adversarial distributions (universal and high-entropy training sets), which are distributions that encourage the learner to find algorithms that work well in the worst case.", "We test our new ideas on a number of optimization problem such as the AdWords problem, the online knapsack problem, and the secretary problem.", "Our results indicate that the models have learned behaviours that are consistent with the traditional optimal algorithms for these problems.", "Machine learning has led to dramatic improvements in our capabilities to solve problems previously considered intractable.", "Besides the obvious empirical evidence of success, there has also been a strong parallel effort in the theory of ML which aims to explain why, when, and how ML techniques work.Our goal in this paper is to explore whether machine learning can be used to learn algorithms for classic combinatorial optimization problems.", "We will define this question more specifically by connecting to three concepts from traditional algorithms and complexity theory.", "In this work, we introduced several ideas from traditional algorithmic thinking to train neural networks to solve online optimization problems.", "In the problems that we consider, our results show that RL was able to find key characteristics of the optimal \"pen-and-paper\" algorithms.", "However, in some instances (such as in the knapsack and secretary problem), we saw that some state augmentation was needed in order for the learner to more adequately recover the optimal algorithms.", "In this work, we took a step towards that by having the RL environment encode that state in a form usable by the agent.", "In future work, we plan to remove the state augmentation from the RL environment and force the agent to learn the state augmentation as part of the training process.", "FIG3 compares the agent's learned algorithm with the optimal algorithm in the binary setting.", "FIG3 plots the threshold for the agent's learned algorithm in the value setting with changing distributions.", "Observe that both have learned a threshold at around 1/e." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5789473652839661, 0.41025641560554504, 0.2857142686843872, 0.20408162474632263, 0.23255813121795654, 0.24390242993831635, 0.15789473056793213, 0.22857142984867096, 0.19512194395065308, 0.380952388048172, 0.1860465109348297, 0.19999998807907104, 0.1395348757505417, 0.13333332538604736, 0, 0.05405404791235924, 0.12121211737394333 ]
rkluJ2R9KQ
true
[ "By combining ideas from traditional algorithms design and reinforcement learning, we introduce a novel framework for learning algorithms that solve online combinatorial optimization problems." ]
[ "Despite their popularity and successes, deep neural networks are poorly understood theoretically and treated as 'black box' systems.", "Using a functional view of these networks gives us a useful new lens with which to understand them.", "This allows us us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization.", "One key result is that generalization results from smoothness of the functional approximation, combined with a flat initial approximation.", "This smoothness increases with number of units, explaining why massively overparamaterized networks continue to generalize well.", "Deep neural networks, trained via gradient descent, have revolutionized the field of machine learning.", "Despite their widespread adoption, theoretical understanding of fundamental properties of deep learning -the true value of depth, the root cause of implicit regularization, and the seemingly 'unreasonable' generalization achieved by overparameterized networks -remains mysterious.", "Empirically, it is known that depth is critical to the success of deep learning.", "Theoretically, it has been proven that maximum expressivity grows exponentially with depth, with a smaller number of trainable parameters (Raghu et al., 2017; Poole et al., 2016) .", "This theoretical capacity may not be used, as recently shown explicitly by (Hanin & Rolnick, 2019) .", "Instead, the number of regions within a trained network is proportional to the total number of hidden units, regardless of depth.", "Clearly deep networks perform better, but what is the value of depth if not in increasing expressivity?", "Another major factor leading to the success and widespread adoption of deep learning has been its surprisingly high generalization performance (Zhang et al., 2016) .", "In contrast to other machine learning techniques, continuing to add parameters to a deep network (beyond zero training loss) tends to improve generalization performance.", "This is even for networks that are massively overparameterized, wherein according to traditional ML theory they should (over)fit all the training data (Neyshabur et al., 2015) .", "How does training deep networks with excess capacity lead to generalization?", "And how can it be that this generalization error decreases with overparameterization?", "We believe that taking a functional view allows us a new, useful lens with which to explore and understand these issues.", "In particular, we focus on shallow and deep fully connected univariate ReLU networks, whose parameters will always result in a Continuous Piecewise Linear (CPWL) approximation to the target function.", "We provide theoretical results for shallow networks, with experiments showing that these qualitative results hold in deeper nets.", "Our approach is related to previous work from (Savarese et al., 2019; Arora et al., 2019; Frankle & Carbin, 2018) in that we wish to characterize parameterization and generalization.", "We differ from these other works by using small widths, rather than massively overparamaterized or infinite, and by using a functional parameterization to measure properties such as smoothness.", "Other prior works such as (Serra et al., 2017; Arora et al., 2016; Montufar et al., 2014) attempt to provide theoretical upper or lower bounds to the number of induced pieces in ReLU networks, whereas we are more interested in the empirical number of pieces in example 
tasks.", "Interestingly, (Serra et al., 2017) also takes a functional view, but is not interested in training and generalization as we are.", "Previous work (Advani & Saxe, 2017) has hinted at the importance of small norm initialization, but the functional perspective allows us to prove generalization properties in shallow networks." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1249999925494194, 0.09999999403953552, 0.23529411852359772, 0.06451612710952759, 0.13793103396892548, 0.08888888359069824, 0.1428571343421936, 0.04999999701976776, 0.06451612710952759, 0.0624999962747097, 0, 0.09999999403953552, 0.1111111044883728, 0.0952380895614624, 0.07692307233810425, 0.14814814925193787, 0.17142856121063232, 0.045454539358615875, 0.0624999962747097, 0.19512194395065308, 0.1463414579629898, 0.037735845893621445, 0.10810810327529907, 0.1904761791229248 ]
BJl9PRVKDS
true
[ "A functional approach reveals that flat initialization, preserved by gradient descent, leads to generalization ability." ]
[ "It is well-known that deeper neural networks are harder to train than shallower ones.", "In this short paper, we use the (full) eigenvalue spectrum of the Hessian to explore how the loss landscape changes as the network gets deeper, and as residual connections are added to the architecture.", "Computing a series of quantitative measures on the Hessian spectrum, we show that the Hessian eigenvalue distribution in deeper networks has substantially heavier tails (equivalently, more outlier eigenvalues), which makes the network harder to optimize with first-order methods.", "We show that adding residual connections mitigates this effect substantially, suggesting a mechanism by which residual connections improve training.", "Practical experience in deep learning suggests that the increased capacity that comes with deeper models can significantly improve their predictive performance.", "It has also been observed that as the network becomes deeper, training becomes harder.", "In convolutional neural networks (CNNs), residual connections BID5 are used to alleviate this problem.", "Various explanations are provided for this phenomenon: BID6 suggests that residual connections reduce the flatness of the landscape, whereas BID3 questions this premise, noting that the extremal eigenvalues of the loss Hessian are much larger when residual connections are present: large Hessian eigenvalues indicate that the curvature of the loss is much sharper, and less flat.", "In a different line of work, BID0 observes that the gradients with respect to inputs in deeper networks decorrelate with depth, and suggest that residual connections reduce the 'shattering' of the gradients.In this paper, we explore the interaction between depth and the loss geometry.", "We first establish that gradient explosion or vanishing is not responsible for the slowing down of training, as is commonly believed.", "Searching for an alternative explanation, we study the Hessian eigenvalue density (using the tools introduced in BID3 to obtain estimates of the eigenvalue histogram or density).", "The classical theory of strongly convex optimization tells us that optimization is slow when the spectrum simultaneously contains very small and very large eigenvalues (i.e., optimization rate is dependent on κ = λ max /λ min ).", "Following this intuition, we focus on examining the relative spread of the Hessian eigenvalues.", "In particular, we quantify the extent of the large outliers by computing some scale-invariant classical statistics of the Hessian eigenvalues, namely the skewness and kurtosis.", "Finally, we observe that in comparable models with residual connections, these magnitude of these outliers is substantially mitigated.", "In BID3 , it is hypothesised that batch normalization suppresses large outlier eigenvalues, thereby speeding up training; in this paper, we present evidence that residual connections speed up training through essentially the same channel.Throughout, the dataset of interest is CIFAR-10; we describe the specific model architectures used in Appendix A.", "In this paper, we have presented qualitative and quantitative evidence that depth increases outlier eigenvalues in the Hessian, and that residual connections mitigate this.", "We believe that this touches upon some of the fundamental dynamics of optimizing neural networks, and that any theoretical explanation of residual connections needs to explain this." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0, 0.20000000298023224, 0.1702127605676651, 0.13793103396892548, 0.1249999925494194, 0.07999999821186066, 0.1538461446762085, 0.20000000298023224, 0.21276594698429108, 0.0624999962747097, 0.17142856121063232, 0.08510638028383255, 0.3199999928474426, 0.12121211737394333, 0.06896550953388214, 0.178571417927742, 0.5454545617103577, 0.17142856121063232 ]
SyxJ2y2qaE
true
[ "Network depth increases outlier eigenvalues in the Hessian. Residual connections mitigate this." ]
[ "In the context of optimization, a gradient of a neural network indicates the amount a specific weight should change with respect to the loss.", "Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training.", "This paper provides an experimental study on the importance of a neural network weights, and to which extent do they need to be updated.", "We wish to show that starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training, results in a very slight drop in the overall accuracy (and in sometimes better).", "We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures (VGG19,\n", "ResNet-110 and DenseNet-121).", "On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in 0.24% drop in accuracy, while freezing 50% of Resnet-110 parameters results in 0.9% drop in accuracy and finally freezing 70% of Densnet-121 parameters results in 0.57% drop in accuracy.", "Furthermore, to experiemnt with real-life applications, we train an image captioning model with attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model.", "Our source code can be found in the appendix.", "The immense success of deep neural networks we are witnessing since the deep learning revolution occurred is surprising.", "A large variety of vision and language applications ranging from image classification, object detection, image synthesis, image super-resolution, image captioning, language modeling....", "etc.", "has proved that neural networks possess a powerful capability of learning very complex data.", "However, training these networks to perform as expected is very time-consuming and requires powerful graphical processing units (GPUs) .", "A recently published open-source project by NVIDIA 1 claimed that training a generative adversarial network (GAN) took more than 6 days on 8 Tesla V100 GPUs.", "However, we argue that a lot of parameters involved during training are important for update only for the first few epochs (in our experiments, the first two epochs only), and can be frozen for the rest of the training epochs.", "The backpropagation algorithm is the base algorithm used to optimize deep neural networks.", "For each weight, a gradient is computed with respect to the loss which indicates the amount a weight should change.", "Large gradients correspond to a large change that will occur in the weight, while small ones (near to zero) indicate that the weight is nearly optimized and does not need much change.", "In particular, if a gradient for a particular weight is zero or close to zero, this means that it has either reached its optimal solution, or it is stuck at a saddle point.", "The former means that the weight has a good value and is less likely to change throughout the training and can be kept frozen.", "In this paper, we wish to show the redundancy of weights in a neural network that have no influence and can be kept frozen during training.", "In particular, we demonstrate that fully training a model with all its weights is required for the first two epochs only.", "To justify this, we propose an experimental technique named Partial Backpropagation, which freezes weights that have gradients very near to zero and are less likely to change, with the rest of the weights 
trained normally.", "This induces a very slight drop in accuracy (and no harm in accuracy for lesser freezing).", "An overview of our experimental technque is shown in Figure 1 .", "Note that in Figure 1(b) , the red weights are frozen and not removed or zeroed out.", "We can further visualize the histogram of gradients across the network layers to have a better understanding of their distributions.", "In Figure 2 , we visualize the distribution of gradients from several layers in a VGG19 convolutional network (Simonyan & Zisserman, 2015) .", "In particular, we visualize the gradients of layers 3, 7, 10 and 13 after training for 2 epochs.", "We can see a large number of gradients with values very near to zero, suggesting that a lot of weights in these layers have already been optimized and are less likely to change throughout the training.", "We provided an experimental study on the importance of a neural network weights, and to which extent do they need to be updated.", "Through our experiments, we emphasized the number of redundant parameters that carry no informative gradient, which if frozen from the third epoch onwards, slightly effect (and in sometimes do not) the overall accuracy of the model.", "To prove our concern, we ran experiments on the MNIST and CIFAR10 datasets using several CNN architectures (VGG19, ResNet-110 and DenseNet-121), as well as the Flick8k dataset using an image captioning architecture composed of LSTM networks with attention mechanism.", "Our experiments successfully prove the concern of this paper." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.22727271914482117, 0.260869562625885, 0.25, 0.39344263076782227, 0.052631575614213943, 0, 0.31578946113586426, 0.2539682388305664, 0.23529411852359772, 0.0952380895614624, 0.09302324801683426, 0.20512820780277252, 0.04651162400841713, 0.07843136787414551, 0.2545454502105713, 0.05405404791235924, 0.1860465109348297, 0.15094339847564697, 0.07547169178724289, 0.21276594698429108, 0.31372547149658203, 0.260869562625885, 0.24561403691768646, 0.307692289352417, 0.2222222238779068, 0.1904761791229248, 0.1860465109348297, 0.21276594698429108, 0.09302324801683426, 0.3103448152542114, 0.21276594698429108, 0.31578946113586426, 0.09999999403953552, 0.1764705777168274 ]
rkg6PhNKDr
true
[ "An experimental paper that proves the amount of redundant weights that can be freezed from the third epoch only, with only a very slight drop in accuracy." ]
[ "Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum.", "While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of data while introducing pre-computation overheads.", "In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning.", "LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples.", "It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn.", "In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution.", "We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10.", "We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks.", "We further extend LILAC to state-of-the-art performance across CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties.", "Deep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples.", "However, successfully training deep networks to solve problems under such conditions is mystifyingly hard (Erhan et al. (2009) ; Larochelle et al. (2007) ).", "The go-to solution in most cases is Stochastic Gradient Descent with mini-batches (simple batch learning) and its derivatives.", "While offering a standardized solution, simple batch learning often fails to find solutions that are simultaneously stable, highly generalizable and scalable to large systems (Das et al. (2016) ; Keskar et al. (2016) ; Goyal et al. (2017) ; You et al. (2017) ).", "This is a by-product of how mini-batches are constructed.", "For example, the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution; small batch sizes help achieve more generalizable solutions, but do not scale as well to vast computational resources as large mini-batches.", "It is hard to construct a solution that is a perfect compromise between all cases.", "Two lines of work, curriculum learning and label smoothing, offer alternative strategies to improve learning in deep networks.", "Curriculum learning, inspired by strategies used for humans (Skinner (1958) ; Avrahami et al. (1997) ), works by gradually increasing the conceptual difficulty of samples used to train deep networks ; Florensa et al. (2017) ; Graves et al. (2017) ).", "This has been shown to improve performance on corrupted (Jiang et al. (2017) ) and small datasets (Fan et al. (2018) ).", "More recently, deep networks have been used to categorize samples (Weinshall et al. 
(2018) ) and variations on the pace with which these samples were shown to deep networks were analyzed in-depth (Hacohen & Weinshall (2019) ).", "To the best of our knowledge, previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order.", "This introduces computational overheads e.g. pre-computing the relative difficulty of samples, and also reduces the effective amount of data from which a model can learn in early epochs.", "Further, curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance in image benchmarks.", "A complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima.", "In this regard, label smoothing offers an important solution that is invariant to the underlying architecture.", "Early works like Xie et al. (2016) replace ground-truth labels with noise while Reed et al. (2014) uses other models' outputs to prevent over-fitting.", "This idea was extended in Bagherinezhad et al. (2018) to an iterative method which uses logits obtained from previously trained versions of the same deep network.", "While Miyato et al. (2015) use local distributional smoothness, based on the robustness of a model's distribution around a data point, to regularize outcomes, Pereyra et al. (2017) penalized highly confident outputs directly.", "Closest in spirit to our work is the label smoothing method defined in Szegedy et al. (2016) , which offers an alternative target distribution for all training samples with no extra data augmentation.", "In general, label smoothing is applied to all examples regardless of how it affects the network's understanding of them.", "Further, in methods which use other models to provide logits/labels, often the parent network used to provide those labels is trained using an alternate objective function or needs to be fully re-trained on the current dataset, both of which introduce additional computation.", "In this work, we propose LILAC, Learning with Incremental Labels and Adaptive Compensation, which emphasizes a label-based curriculum and adaptive compensation, to improve upon previous methods and obtain highly accurate and stable solutions.", "LILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples.", "It works in two key phases,", "1) incremental label introduction and", "2) adaptive compensation.", "In the first phase, we incrementally introduce groups of labels in the training process.", "Data, corresponding to labels not yet introduced to the model, use a single fake label selected from within the dataset.", "Once a network has been trained for a fixed number of epochs with this setup, an additional set of ground-truth labels is introduced to the network and the training process continues.", "In recursively revealing labels, LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples.", "Once all ground-truth labels are revealed the adaptive compensation phase of training is initiated.", "This phase mirrors conventional batch learning, except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution.", "Thus, we avoid adjusting labels across the entire dataset, like previous 
methods, while elevating the stability and average performance of the model.", "Further, instead of being pre-computed by an alternative model, these softer distributions are generated on-the-fly from the outputs of the model being trained.", "We apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines.", "While incremental and continual learning work on evolving data distributions with the addition of memory constraints ((Rebuffi et al., 2017; Castro et al., 2018) and derivative works), knowledge distillation ( Rolnick et al., 2018) and similar works) or other requirements, this work is a departure into using negative mining and focused training to improve learning on a fully available dataset.", "In incremental/continual learning works, often the amount of data used to retrain the network is small compared to the original dataset while in LILAC we fully use the entire dataset, distinguished by Seen and Unseen labels.", "Thus, it avoids data deficient learning.", "Further, works like Bucher et al. (2016) ; Li et al. (2013) ; Wang & Gupta (2015) emphasize the importance of hard negative mining, both in size and diversity, in improving learning.", "Although the original formulation of negative mining was based on imbalanced data, recent object detection works have highlighted its importance in contrasting and improving learning in neural networks.", "To summarize, our main contributions in LILAC are as follows,", "• we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples, • our method adaptively compensates incorrectly labelled samples by softening their target distribution which improves performance and removes external computational overheads, • we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods.", "In the incremental phase, we initially replace the ground-truth labels of several class using a constant held-out label.", "Gradually, over the course of several fixed intervals of training we reveal the true label.", "Within a fixed interval of training, we keep constant two sets of data, \"Seen\", whose groundtruth labels are known and \"Unseen\", whose labels are replaced by a fake value.", "When training, Illustration of the evolution of data partitions in the incremental label introduction phase for a four label dataset.", "In the first incremental step, only one label is used for training while the remaining data use label 4.", "A short period of training is performed with this fixed setup, where data from U is uniformly sampled to match the number of samples from S, in every mini-batch.", "The final incremental step depicted is equivalent to batch learning since all the labels are available to the network.", "Once all the ground-truth labels are revealed we begin the adaptive compensation phase described in Sec. 2.2.", "mini-batches are uniformly sampled from the entire training set, but the instances from \"Unseen\" classes use the held-out label.", "By the end of the final interval, we reveal all ground-truth labels.", "We now describe the incremental phase in more detail.", "At the beginning of the incremental label introduction phase, we virtually partition data into two mutually exclusive sets, S : Seen and U : Unseen, as shown in Fig. 
1 .", "Data samples in S use their ground-truth labels as target values while those in U use a designated unseen label, which is held constant throughout the entire training process.", "LILAC assumes a random ordering of labels, Or(M ), where M denotes the total number of labels in the dataset.", "Within this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b.", "The remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user.", "Training in the incremental phase happens at fixed intervals of E epochs each.", "Within a fixed interval, the virtual data partition is held constant.", "Every mini-batch of data is sampled uniformly from the entire original dataset and within each mini-batch, labels are obtained based on their placement in S or U. Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U since all data points use the same designated label.", "Finally, the curated mini-batches of data are used to train the neural network.", "At the end of each fixed interval, we reveal another set of m groundtruth labels and move samples of those classes from U to S after which the entire data curation and training process is repeated for the next interval.", "In this work, we proposed LILAC which rethinks curriculum learning based on incrementally learning labels instead of samples.", "This approach helps kick-start the learning process from a substantially better starting point while making the learned embedding space amenable to adaptive negative logit compensation.", "Both these techniques combine well in LILAC to show the highest performance on CIFAR-10 for simple data augmentations while easily outperforming batch and curriculum learning and label smoothing on comparable network architectures.", "The next step in unlocking the full potential of this setup is to extend this setup to include a confidence measure on the predictions of network so that it can handle the effects of dropout or partial inputs.", "In further expanding LILAC's ability to handle partial inputs, we aim to explore its effect on standard incremental learning (memory constrained) while also extending it applicability to more complex neural network architectures.", "A LILAC: ALGORITHM Table 8 : The table captures the effect of varying the number of epochs used for the fixed training intervals in the incremental label introduction phase.", "Across CIFAR-10 there is an obvious peak after which the mean value decreases.", "However, in STL-10 there seems to be a consistent increase, with the assumption of minor noise.", "Finally, in CIFAR-100 there isn't a clear pattern.", "From the results in Table 8 , we observe that the choice of E is dependent on the dataset.", "There isn't an explicit pattern that can be used to select the value of E without trial runs.", "Further, the available run-time is an important constraint when select E from a range of values since both m and E affect it." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14999999105930328, 0.16326530277729034, 0.3255814015865326, 0.25806450843811035, 0.11999999731779099, 0.19512194395065308, 0.17777776718139648, 0.17391303181648254, 0.09090908616781235, 0.1904761791229248, 0.045454539358615875, 0.04999999329447746, 0.1090909019112587, 0, 0.03278687968850136, 0.05714285373687744, 0.20512819290161133, 0.14814814925193787, 0.1463414579629898, 0.15094339847564697, 0.1249999925494194, 0.08163265138864517, 0.17777776718139648, 0.15789473056793213, 0.10526315122842789, 0.09090908616781235, 0.0833333283662796, 0.038461532443761826, 0.18518517911434174, 0.09999999403953552, 0.10169491171836853, 0.1538461446762085, 0.1249999925494194, 0, 0.07407407462596893, 0, 0.11428570747375488, 0.09999999403953552, 0.16326530277729034, 0.12244897335767746, 0.0555555522441864, 0.09090908616781235, 0.1904761791229248, 0.0476190410554409, 0.21052631735801697, 0.0845070406794548, 0.18518517911434174, 0.0714285671710968, 0.07999999821186066, 0.08163265138864517, 0, 0.37037035822868347, 0.05128204822540283, 0, 0.1304347813129425, 0.05128204822540283, 0.05128204822540283, 0.1249999925494194, 0.1538461446762085, 0.052631575614213943, 0, 0.060606054961681366, 0, 0.03999999538064003, 0.12244897335767746, 0.04999999329447746, 0.1463414579629898, 0.13333332538604736, 0, 0, 0.10666666179895401, 0.05882352590560913, 0.21052631735801697, 0.307692289352417, 0.1304347813129425, 0.26923075318336487, 0.037735845893621445, 0.11538460850715637, 0.08510638028383255, 0.11428570747375488, 0.052631575614213943, 0, 0, 0.04999999329447746, 0.045454539358615875 ]
H1lTUCVYvH
true
[ "A novel approach to curriculum learning by incrementally learning labels and adaptively smoothing labels for mis-classified samples which boost average performance and decreases standard deviation." ]
[ "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare.", "Learning representations for words in the ``long tail'' of this distribution requires enormous amounts of data. \n", "Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation.", "We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task.", "We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling.\n", "Natural language yields a Zipfian distribution BID28 which tells us that a core set of words (at the head of the distribution) are frequent and ubiquitous, while a significantly larger number (in the long tail) are rare.", "Learning representations for rare words is a well-known challenge of natural language understanding, since the standard end-to-end supervised learning methods require many occurrences of each word to generalize well.The typical remedy to the rare word problem is to learn embeddings for some proportion of the head of the distribution, possibly shifted towards the domain-specific vocabulary of the dataset or task at hand, and to treat all other words as out-of-vocabulary (OOV), replacing them with an unknown word \"UNK\" token with a shared embedding.", "This essentially heuristic solution is inelegant, as words from technical domains, names of people, places, institutions, and so on will lack a specific representation unless sufficient data are available to justify their inclusion in the vocabulary.", "This forces model designers to rely on overly large vocabularies, as observed by BID17 BID22 , which are parametrically expensive, or to employ vocabulary selection strategies BID16 .", "In both cases, we face the issue that words in the tail of the Zipfian distribution will typically still be too rare to learn good representations for through standard embedding methods.", "Some models, such as in the work of BID13 , have sought to deal with the open vocabulary problem by obtaining representations of words from characters.", "This is successful at capturing the semantics of morphological derivations (e.g. \"running\" from \"run\") but puts significant pressure on the encoder to capture semantic distinctions amongst syntactically similar but semantically unrelated words (e.g. \"run\" vs. \"rung\").", "Additionally, nothing about the spelling of named entities, e.g. \"The Beatles\", tells you anything about their semantics (namely that they are a rock band).In", "this paper we propose a new method for computing embeddings \"on the fly\", which jointly addresses the large vocabulary problem and the paucity of data for learning representations in the long tail of the Zipfian distribution. This", "method, which we illustrate in FIG0 , can be summarized as follows: instead of directly learning separate representations for all words in a potentially unbounded vocabulary, we train a network to predict the representations of words based on auxiliary data. Such", "auxiliary data need only satisfy the general requirement that it describe some aspect of the semantics of the word for which a representation is needed. 
Examples", "of such data could be dictionary definitions, Wikipedia infoboxes, linguistic descriptions of named entities obtained from Wikipedia articles, or something as simple as the spelling of a word. We will", "refer to the content of auxiliary data as \"definitions\" throughout the paper, regardless of the source. Several", "sources of auxiliary data can be used simultaneously as input to a neural network that will compute a combined representation.These representations can then be used for out-of-vocabulary words, or combined with withinvocabulary word embeddings directly trained on the task of interest or pretrained from an external data source BID18 BID20 . Crucially", ", the auxiliary data encoders are trained jointly with the objective, ensuring the preservation of semantic alignment with representations of within-vocabulary words. In the present", "paper, we will focus on a subset of these approaches and auxiliary data sources, restricting ourselves to producing out-of-vocabulary words embeddings from dictionary data, spelling, or both.The obvious use case for our method would be datasets and tasks where there are many rare terms such as technical writing or bio/medical text BID6 . On such datasets", ", attempting to learn global vectors-for example GloVe embeddings BID20 -from external data, would only provide coverage for common words and would be unlikely to be exposed to sufficient (or any) examples of domain-specific technical terms to learn good enough representations. However, there", "are no (or significantly fewer) established neural network-based baselines on these tasks, which makes it harder to validate baseline results. Instead, we present", "results on a trio of well-established tasks, namely reading comprehension, recognizing textual entailment, and a variant on language modelling. For each task, we compare", "baseline models with embeddings trained directly only on the task objective to those same models with our on the fly embedding method. Additionally, we report results", "for the same models with pretrained GLoVe vectors as input which we do not update. We aim to show how the gap in results", "between the baseline and the data-rich GLoVe-based models can be partially but substantially closed merely through the introduction of relatively small amounts of auxiliary definitions. Quantitative results show that auxiliary", "data improves performance. 
Qualitative evaluation indicates our method", "allows models to draw and exploit connections defined in auxiliary data, along the lines of synonymy and semantic relatedness.", "We showed how different sources of auxiliary information, such as the spelling and a dictionary of definitions can be used to produce on the fly useful embeddings for rare words.", "While it was known before that adding the spelling information to the model is helpful, it is often hard or not possible to infer the meaning directly from the characters, as confirmed by our entailment recognition experiments.", "Our more general approach offers endless possibilities of adding other data sources and learning end-to-end to extract the relevant bits of information from them.", "Our experiments with a dictionary of definitions show the feasibility of the approach, as we report improvements over using just the spelling on question answering and semantic entailment classification tasks.", "Our qualitative investigations on the question answering data confirms our intuition on where the improvement comes from.", "It is also clear from them that adding more auxiliary data would help, and that it would probably be also useful to add definitions not just for words, but also for phrases (see \"Mark Twain\" from Section 4.1).", "We are planning to add more data sources (e.g. first sentences from Wikipedia articles) and better use the available ones (WordNet has definitions of phrasal verbs like \"come across\") in our future work.An important question that we did not touch in this paper is how to deal with rare words in the auxiliary information, such as dictionary definitions.", "Based on our qualitative investigations (see the example with \"arrow\" and \"weapon\" in Section 4.1), we believe that better handling rare words in the auxiliary information could substantially improve the proposed method.", "It would be natural to use on the fly embeddings similarly to the ones that we produce for words from the input, but the straight-forward approach of computing them on request would be very computation and memory hungry.", "One would furthermore have to resolve cyclical dependencies, which are unfortunately common in dictionary data (when e.g. 
\"entertainment\" is defined using \"diverting\" and \"diverting\" is defined using \"entertainment\").", "In our future work we want to investigate asynchronous training of on the fly embeddings and the main model.", "In this paper, we have shown that introducing relatively small amounts of auxiliary data and a method for computing embeddings on the fly using that data bridges the gap between data-poor setups, where embeddings need to be learned directly from the end task, and data-rich setups, where embeddings can be pretrained and sufficient external data exists to ensure in-domain lexical coverage.A large representative corpus to pretrain word embeddings is not always available and our method is applicable when one has access only to limited auxiliary data.", "Learning end-to-end from auxiliary sources can be extremely data efficient when these sources represent compressed relevant information about the word, as dictionary definitions do.", "A related desirable aspect of our approach is that it may partially return the control over what a language processing system does into the hands of engineers or even users: when dissatisfied with the output, they may edit or add auxiliary information to the system to make it perform as desired.", "Furthermore, domain adaptation with our method could be carried out simply by using other sources of auxiliary knowledge, for example definitions of domain-specific technical terms in order to understand medical texts.", "Overall, the aforementioned properties of our method make it a promising alternative to the existing approaches to handling rare words." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19354838132858276, 0.06451612710952759, 0.23255813121795654, 0.3589743673801422, 0.04878048226237297, 0.1304347813129425, 0.15189872682094574, 0.19607843458652496, 0.09756097197532654, 0.1818181723356247, 0.307692289352417, 0.12244897335767746, 0.09999999403953552, 0.17391303181648254, 0.11999999731779099, 0.052631575614213943, 0.1463414579629898, 0.06896550953388214, 0.1355932205915451, 0.11428570747375488, 0.17910447716712952, 0.07692307233810425, 0.052631575614213943, 0.05405404791235924, 0.21621620655059814, 0.15789473056793213, 0.04878048226237297, 0.08695651590824127, 0.060606054961681366, 0.2790697515010834, 0.1304347813129425, 0.10526315122842789, 0.1428571343421936, 0.06666666269302368, 0.1249999925494194, 0.22857142984867096, 0.17777776718139648, 0.17391303181648254, 0.04999999701976776, 0.060606054961681366, 0.12345678359270096, 0.10526315122842789, 0.10526315122842789, 0.2222222238779068, 0.3030303120613098 ]
B1CNpYg0-
true
[ "We propose a method to deal with rare words by computing their embedding from definitions." ]
[ "The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution always does not match with the training distribution in most real-world applications.", "In this work, we propose a deep generative classifier which is effective to detect out-of-distribution samples as well as classify in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks.", "Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions.", "Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution.", "Our empirical evaluation on multi-class images and tabular data demonstrate that the generative classifier achieves the best performances in distinguishing out-of-distribution samples, and also it can be generalized well for various types of deep neural networks.", "Out-of-distribution (OOD) detection, also known as novelty detection, refers to the task of identifying the samples that differ in some respect from the training samples.", "Recently, deep neural networks (DNNs) turned out to show unpredictable behaviors in case of mismatch between the training and testing data distributions; for example, they tend to make high confidence prediction for the samples that are drawn from OOD or belong to unseen classes (Szegedy et al., 2014; Moosavi-Dezfooli et al., 2017) .", "For this reason, accurately measuring the distributional uncertainty (Malinin & Gales, 2018) of DNNs becomes one of the important challenges in many real-world applications where we can hardly control the testing data distribution.", "Several recent studies have tried to simply detect OOD samples using the confidence score defined by softmax probability (Hendrycks & Gimpel, 2017; Liang et al., 2018) or Mahalanobis distance from class means (Lee et al., 2018) , and they showed promising results even without re-training the model.", "However, all of them employ the DNNs designed for a discriminative (or softmax) classifier, which has limited power to locate OOD samples distinguishable with in-distribution (ID) samples in their latent space.", "To be specific, the softmax classifier is optimized to learn the discriminative latent space where the training samples are aligned along their corresponding class weight vectors, maximizing the softmax probability for the target classes.", "As pointed out in (Hendrycks & Gimpel, 2017) , OOD samples are more likely to have small values of the softmax probability for all known classes, which means that their latent vectors get closer to the origin.", "As a result, there could be a large overlap between two sets of ID and OOD samples in the latent space (Figure 1 ), which eventually reduces the gap between their confidence scores and degrades the performance as well.", "In addition, most of existing confidence scores adopt additional calibration techniques Hinton et al., 2015) to enhance the reliability of the detection, but they include several hyperparameters whose optimal values vary depending on the testing data distribution.", "In this situation, they utilized a small portion of each test set (containing both ID and OOD samples) for validation, and reported the results evaluated on the rest by using the optimal 
hyperparameter values for each test case.", "Considering the motivation of OOD detection that prior knowledge of test distributions is not available before we encounter them, such process of tuning the hyperparameters for each test case is not practical when deploying the DNNs in practice.", "In this paper, we propose a novel objective to train DNNs with a generative (or distance) classifier which is capable of effectively identifying OOD test samples.", "The main difference of our deep generative classifier is to learn separable class-conditional distributions in the latent space, by explicitly modeling them as a DNN layer.", "The generative classifier places OOD samples further apart from the distributions of all given classes, without utilizing OOD samples for its validation.", "Thus, based on the Euclidean distance between a test sample and the centers of the obtained class-conditional distributions, we can calculate how likely and how confidently the sample belongs to each class.", "This can be interpreted as a multi-class extension of unsupervised anomaly detection (Ruff et al., 2018) , and Gaussian discriminant analysis provides the theoretical background for incorporating the generative classifier into the DNNs.", "Our extensive experiments on images and tabular data demonstrate that the proposed classifier distinguishes OOD samples more accurately than the state-of-the-art method, while maintaining the classification accuracy for ID samples.", "This paper introduces a deep learning objective to learn the multi-class generative classifier, by fusing the concept of Gaussian discriminant analysis with DNNs.", "Unlike the conventional softmax classifier, our generative (or distance) classifier learns the class-conditional distributions to be separated from each other and follow the Gaussian distribution at the same time, thus it is able to effectively distinguish OOD samples from ID samples.", "We empirically show that our confidence score beats other competing methods in detecting both OOD tabular data and OOD images, and also the distance classifier can be easily combined with various types of DNNs to further improve their performances." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23728813230991364, 0.8709677457809448, 0.23333333432674408, 0.1666666567325592, 0.3125, 0.19607841968536377, 0.1818181723356247, 0.06666666269302368, 0.13698630034923553, 0.23333333432674408, 0.16949151456356049, 0.1538461446762085, 0.21875, 0.0923076868057251, 0.12903225421905518, 0.09836065024137497, 0.290909081697464, 0.3571428656578064, 0.19999998807907104, 0.1428571343421936, 0.3606557250022888, 0.10526315122842789, 0.5, 0.2153846174478531, 0.11940298229455948 ]
HJePXkHtvS
true
[ "This paper proposes a deep generative classifier which is effective to detect out-of-distribution samples as well as classify in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks." ]
[ "One of the most prevalent symptoms among the elderly population, dementia, can be detected by classifiers trained on linguistic features extracted from narrative transcripts.", "However, these linguistic features are impacted in a similar but different fashion by the normal aging process.", "Aging is therefore a confounding factor, whose effects have been hard for machine learning classifiers to isolate. \n\n", "In this paper, we show that deep neural network (DNN) classifiers can infer ages from linguistic features, which is an entanglement that could lead to unfairness across age groups.", "We show this problem is caused by undesired activations of v-structures in causality diagrams, and it could be addressed with fair representation learning.", "We build neural network classifiers that learn low-dimensional representations reflecting the impacts of dementia yet discarding the effects of age.", "To evaluate these classifiers, we specify a model-agnostic score $\\Delta_{eo}^{(N)}$ measuring how classifier results are disentangled from age.", "Our best models outperform baseline neural network classifiers in disentanglement, while compromising accuracy by as little as 2.56\\% and 2.25\\% on DementiaBank and the Famous People dataset respectively.", "One in three seniors die of Alzheimer's and other types of dementia in the United States (Association, 2018) .", "Although its causes are not yet fully understood, dementia impacts people's cognitive abilities in a detectable manner.", "This includes different syntactic distributions in narrative descriptions BID28 , more pausing BID29 , higher levels of difficulty in recalling stories BID21 , and impaired memory generally BID20 .", "Fortunately, linguistic features can be used to train classifiers to detect various cognitive impairments.", "For example, BID8 detected primary progressive aphasia with up to 100% accuracy, and classified subtypes of primary progressive aphasia with up to 79% accuracy on a set of 40 participants using lexical-syntactic and acoustic features.", "BID7 classified dementia from control participants with 82% accuracy on narrative speech.However, dementia is not the only factor causing such detectable changes in linguistic features of speech.", "Aging also impairs cognitive abilities BID11 , but in subtly different ways from dementia.", "For example, aging inhibits fluid cognitive abilities (e.g., cognitive processing speed) much more than the consolidated abilities (e.g., those related to cumulative skills and memories) BID4 .", "In other words, the detected changes of linguistic features, including more pauses and decreased short-term memories, could attribute to just normal aging process instead of dementia.", "Unfortunately, due to the high correlation between dementia and aging, it can be difficult to disentangle symptoms are caused by dementia or aging BID24 .", "Age is therefore a confounding factor in detecting dementia.The effects of confounding factors are hard for traditional machine learning algorithms to isolate, and this is largely due to sampling biases in the data.", "For example, some algorithms predict higher risk of criminal recidivism for people with darker skin colors BID15 , others identify images of smiling Asians as blinking BID19 , and GloVe word embeddings can project European-American names significantly closer to the words like 'pleasant' than African-American names BID3 .", "It is preferable for classifiers to make decisions without biasing too heavily on demographic factors, and 
therefore to isolate the effects of confounding factors.", "However, as we will show in Experiments, traditional neural network classifiers bias on age to infer dementia; this can lead to otherwise avoidable false positives and false negatives that are especially important to avoid in the medical domain.", "Graphically, if both age A and dementia D cause changes in a feature X, the result is a v-structure BID17 A → X ← D which is activated upon observing X. In other words, the confounder A affects P (D|X) if we train the classifier in traditional ways, which is to collect data points {(X, D) (i) } and to learn an inference model P (D|X) approximating the affected P (D|X).Traditionally", ", there are several ways to eliminate the effects of confounding factors A.Controlling A gives a posterior distribution P (D|X, A)P (A). This is unfortunately", "unrealistic for small, imbalanced clinical datasets, in which sparsity may require stratification. However, the stratified", "distributions P (D|X, A) can be far from a meaningful representation of the real world (as we will show, e.g., in FIG3 ). Moreover, a discrepancy", "in the sizes of age groups can skew the age prior P (A), which would seriously inhibit the generalizability of a classifier.Controlling X Conducting a randomized control trial (RCT) on X removes all causal paths leading \"towards\" the variable X, which gives a de-confounded dataset P (D|do(X)) according to the notation in BID27 . However, RCTs on X are", "even less practical because simultaneously controlling multiple features produces exponential number of scenarios, and doing this to more than 400 features require far more data points than any available dataset.Pre-adjusting X according to a pre-trained model X = f (A) per feature could also approximately generate the dataset P (D|do(X)). However, such a model", "should consider participant differences, otherwise interpolating using a fixed age A would give exactly the same features for everybody. The participant differences", ", however, are best characterized via X, which are the values you want to predict.To overcome the various problems with these methods, we let our classifiers be aware of cognitive impairments while actively filtering out any information related to aging. This is a fair representation", "learning framework that protects age as a \"sensitive attribute\".Fair representation learning frameworks", "can be used to train classifiers to equally consider the subjects with different sensitive attributes. A sensitive attribute (or \"protected attribute", "\") can be race, age, or other variables whose impact should be ignored. In the framework proposed by BID32 , classifiers", "were penalized for the differences in classification probabilities among different demographic groups. After training, the classifiers produced better", "demographic similarities while compromising only a little overall accuracy. To push the fair representation learning idea further", ", adversarial training can be incorporated. BID9 introduced generative adversarial networks, in", "which a generator and a discriminator are iteratively optimized against each other. Incorporating adversarial training, BID22 proposed", "a framework to learn a latent representation of data in order to limit its adversary's ability to classify based on the sensitive attributes.However, these approaches to fair representation learning only handle binary attributes. E.g., BID22 binarized age. 
To apply to cognitive impairments", "detection, we want to represent", "age on a continuous scale (with some granularity if necessary). We formulate a fairness metric for evaluating the ability of a classifier", "to isolate a continuous-valued attribute. We also propose four models that compress high-dimensional feature vectors", "into low-dimensional representations which encrypt age from an adversary. We show empirically that our models achieve better fairness metrics than baseline", "deep neural network classifiers, while compromising accuracies by as little as 2.56% and 2.25% on our two empirical datasets, respectively.", "We evaluate the performances of our four proposed neural networks against the DNN baseline.", "As an additional ablation study, two variants of age-indep-entropy are also evaluated.", "TAB1 : Evaluation results of our representation learning models.", "The \"age-indep\" prefix are replaced with \"*\" in model names.", "age-indep-simple and age-indep-autoencoder have better disentanglement scores, while the rest two models could have better accuracy.Accuracy The fair representation learning models compromise accuracy, in comparison to DNN baselines.", "This confirms that part of the classification power of DNNs come from biasing with regards to age.", "On DementiaBank, the age-indep-autoencoder reduces accuracy the least (only 2.56% in comparison to the DNN baseline).", "On the Famous People data, age-indep-consensus and age-indep-entropy models compromise accuracies by only 2.25% and 2.75% respectively, which are not statistically different from the DNN baseline 7 .Disentanglement", "In comparison to DNN baselines, our fair representation learning models improve disentanglement/fairness 8 , the improvements are mostly significant when measured by the two-group scores ∆eo . Also, the five-group", "scores ∆eo are less stable for both datasets, and the scores in the Famous People have higher variances than in DementiaBank. Following is an explanation", ". DementiaBank has ∼400 data", "samples. In 5-fold cross validation", ", each of the five age groups has only ∼16 samples during evaluation. Famous People data contains", "∼250 samples, which increases the variance. 
When the number of groups,", "N of ∆ (N ) eo , is kept small (e.g., ∼100 samples per label per group, as in DementiaBank N = 2), the fairness metrics are stable.", "Here, we identify the problem of entangling age in the detection of cognitive impairments.", "After explaining this problem with causality diagrams, we formulate it into a fair representation learning task, and propose a fairness score to measure the extent of disentanglement.", "We put forward four fair representation learning models that learn low-dimensional representations of data samples containing as little age information as possible.", "Our best model improves upon the DNN baseline in our fairness metrics, while compromising as little accuracy as 2.56% (on DementiaBank) and 2.25% (on the Famous People dataset).7 p = 0.20, 0.16 on 38-DoF one-tailed t-tests, respectively.", "8 On DementiaBank, p = 0.01 and 0.03 for age-indep-simple and age-indep-entropy on ∆ (2) eo respectively; these are significant.", "p = 0.08 and 0.09 on age-indep-autoencoder and age-indep-consensus-net on ∆ (2) eo respectively; these are marginally significant.", "However, these differences are not as significant on ∆ Proof of Theorem For each of the age groups: |p a −p| + |n a −n| ≤ max{|p a − 0| + |n a − 0|, |p a − 0.5| + |n a − 0.5|} ≤ max{0.5, 1} = 1 Summing up the N a age groups results in our upper bound N a for non-trivial classifiers." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0, 0.05714285373687744, 0.08888888359069824, 0.25, 0.11428570747375488, 0.05714285373687744, 0.09090908616781235, 0.060606054961681366, 0.05882352590560913, 0.0476190410554409, 0.06666666269302368, 0.09090908616781235, 0.04651162400841713, 0.06451612710952759, 0.0952380895614624, 0.0476190410554409, 0.05128204822540283, 0.08510638028383255, 0.06557376682758331, 0.04999999329447746, 0.11764705181121826, 0.05714285373687744, 0, 0, 0.045454539358615875, 0.032258059829473495, 0.0312499962747097, 0.052631575614213943, 0.13333332538604736, 0.27586206793785095, 0.05714285373687744, 0, 0, 0.1764705777168274, 0, 0.060606054961681366, 0.1818181723356247, 0, 0.05405404791235924, 0.1875, 0.21052631735801697, 0.05405404791235924, 0, 0, 0.23076923191547394, 0.07407406717538834, 0.23255813121795654, 0.1818181723356247, 0, 0.09090908616781235, 0.1818181723356247, 0.05128204822540283, 0, 0, 0.05882352590560913, 0, 0.045454539358615875, 0.20689654350280762, 0.2790697515010834, 0.31578946113586426, 0.03703703358769417, 0.05405404791235924, 0.05882352590560913, 0.0615384578704834 ]
BylTHoR5Km
true
[ "Show that age confounds cognitive impairment detection + solve with fair representation learning + propose metrics and models." ]
[ "Much of the focus in the design of deep neural networks had been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios. ", "As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that accounts for more than just model accuracy as the sole indicator of network performance. ", "In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical on-device edge usage by introducing NetScore, a new metric designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. ", "In what is one of the largest comparative analysis between deep neural networks in literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 60 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. ", "The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field. " ]
[ 0, 0, 1, 0, 0 ]
[ 0.25925925374031067, 0.25, 0.6567164063453674, 0.22535210847854614, 0.15686273574829102 ]
Hyzq4ZKa97
false
[ "We introduce NetScore, new metric designed to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network." ]
[ "Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, especially white-box targeted attacks.", "This paper studies the problem of how aggressive white-box targeted attacks can be to go beyond widely used Top-1 attacks.", "We propose to learn ordered Top-k attacks (k>=1), which enforce the Top-k predicted labels of an adversarial example to be the k (randomly) selected and ordered labels (the ground-truth label is exclusive).", "Two methods are presented.", "First, we extend the vanilla Carlini-Wagner (C&W) method and use it as a strong baseline.", "Second, we present an adversarial distillation framework consisting of two components:", "(i) Computing an adversarial probability distribution for any given ordered Top-$k$ targeted labels.", "(ii) Learning adversarial examples by minimizing the Kullback-Leibler (KL) divergence between the adversarial distribution and the predicted distribution, together with the perturbation energy penalty.", "In computing adversarial distributions, we explore how to leverage label semantic similarities, leading to knowledge-oriented attacks.", "In experiments, we test Top-k (k=1,2,5,10) attacks in the ImageNet-1000 val dataset using two popular DNNs trained with the clean ImageNet-1000 train dataset, ResNet-50 and DenseNet-121.", "Overall, the adversarial distillation approach obtains the best results, especially by large margin when computation budget is limited..", "It reduces the perturbation energy consistently with the same attack success rate on all the four k's, and improve the attack success rate by large margin against the modified C&W method for k=10. ", "Despite the recent dramatic progress, deep neural networks (DNNs) (LeCun et al., 1998; Krizhevsky et al., 2012; He et al., 2016; Szegedy et al., 2016) trained for visual recognition tasks (e.g., image classification) can be easily fooled by so-called adversarial attacks which utilize visually imperceptible, carefully-crafted perturbations to cause networks to misclassify inputs in arbitrarily chosen ways in the close set of labels used in training (Nguyen et al., 2015; Szegedy et al., 2014; Athalye & Sutskever, 2017; Carlini & Wagner, 2016) , even with one-pixel attacks (Su et al., 2017) .", "The existence of adversarial attacks hinders the deployment of DNNs-based visual recognition systems in a wide range of applications such as autonomous driving and smart medical diagnosis in the long-run.", "In this paper, we are interested in learning visually-imperceptible targeted attacks under the whitebox setting in image classification tasks.", "In the literature, most methods address targeted attacks in the Top-1 manner, in which an adversarial attack is said to be successful if a randomly selected label (not the ground-truth label) is predicted as the Top-1 label with the added perturbation satisfying to be visually-imperceptible.", "One question arises,", "• The \"robustness\" of an attack method itself : How far is the attack method able to push the underlying ground-truth label in the prediction of the learned adversarial examples?", "Table 1 shows the evaluation results of the \"robustness\" of different attack methods.", "The widely used C&W method (Carlini & Wagner, 2016) does not push the GT labels very far, especially when smaller perturbation energy is aimed using larger search range (e.g., the average rank of the GT label is 2.6 for C&W 9×1000 ).", "Consider Top-5, if the ground-truth labels of adversarial examples still largely appear in the Top-5 of the prediction, we may be 
over-confident about the 100% ASR, (He et al., 2016) .", "Please see Sec. 4 for detail of experimental settings.", "Method ASR Proportion of GT Labels in Top-k (smaller is better) Average Rank of GT Labels (larger is better)", "Top-3 Top-5 Top-10 Top-50 Top-100 C&W9×30 (Carlini & Wagner, 2016) 99.9 36.9 50.5 66.3 90.0 95.1 20.4 C&W9×1000 (Carlini & Wagner, 2016) 100 71.9 87.0 96.1 99.9 100 2.6 FGSM (Goodfellow et al., 2015) 80.7 25.5 37.8 52.8 81.2 89.2 44.2 PGD10 (Madry et al., 2018) 100 3.3 6.7 12 34.7 43.9 306.5 MIFGSM10 (Dong et al., 2018) 99.9 0.7 1.9 6.0 22.5 32.3 404.4", "especially when some downstream modules may rely on Top-5 predictions in their decision making.", "But, the three untargeted attack approaches are much better in terms of pushing the GT labels since they are usually move against the GT label explicitly in the optimization, but their perturbation energies are usually much larger.", "As we shall show, more \"robust\" attack methods can be developed by harnessing the advantages of the two types of attack methods.", "In addition, the targeted Top-1 attack setting could limit the flexibility of attacks, and may lead to less rich perturbations.", "To facilitate explicit control of targeted attacks and enable more \"robust\" attack methods, one natural solution, which is the focus of this paper, is to develop ordered Top-k targeted attacks which enforce the Top-k predicted labels of an adversarial example to be the k (randomly) selected and ordered labels (k ≥ 1, the GT label is exclusive).", "In this paper, we present two methods of learning ordered Top-k attacks.", "The basic idea is to design proper adversarial objective functions that result in imperceptible perturbations for any test image through iterative gradient-based back-propagation.", "First, we extend the vanilla Carlini-Wagner (C&W) method (Carlini & Wagner, 2016) and use it as a strong baseline.", "Second, we present an adversarial distillation (AD) framework consisting of two components: (i) Computing an adversarial probability distribution for any given ordered Top-k targeted labels.", "(ii) Learning adversarial examples by minimizing the Kullback-Leibler (KL) divergence between the adversarial distribution and the predicted distribution, together with the perturbation energy penalty.", "The proposed AD framework can be viewed as applying the network distillation frameworks (Hinton et al., 2015; Bucila et al., 2006; Papernot et al., 2016) for \"the bad\" induced by target adversarial distributions.", "To compute a proper adversarial distribution for any given ordered Top-k targeted labels, the AD framework is motivated by two aspects:", "(i) The difference between the objective functions used by the C&W method and the three untargeted attack methods (Table 1) respectively.", "The former maximizes the margin of the logits between the target and the runner-up (either GT or ResNet-50.", "AD is better than the modified C&W method (CW * ).", "The thickness represents the 2 energy (thinner is better).", "Please see Sec. 
4 for detail of experimental settings.", "not), while the latter maximizes the cross-entropy between the prediction probabilities (softmax of logits) and the one-hot distribution of the ground-truth.", "(ii) The label smoothing methods Pereyra et al., 2017) , which are often used to improve the performance of DNNs by addressing the over-confidence issue in the one-hot vector encoding of labels.", "More specifically, we explore how to leverage label semantic similarities in computing \"smoothed\" adversarial distributions, leading to knowledge-oriented attacks.", "We measure label semantic similarities using the cosine distance between some off-the-shelf word2vec embedding of labels such as the pretrained Glove embedding (Pennington et al., 2014) .", "Along this direction, another question of interest is further investigated: Are all Top-k targets equally challenging for an attack approach?", "In experiments, we test Top-k (k = 1, 2, 5, 10) in the ImageNet-1000 (Russakovsky et al., 2015) val dataset using two popular DNNs trained with clean ImageNet-1000 train dataset, ResNet-50 (He et al., 2016) and DenseNet-121 (Huang et al., 2017) respectively.", "Overall, the adversarial distillation approach obtains the best results.", "It reduces the perturbation energy consistently with the same attack success rate on all the four k's, and improve the attack success rate by large margin against the modified C&W method for k = 10 (see Fig. 1 ).", "We observe that Top-k targets that are distant from the GT label in terms of either label semantic distance or prediction scores of clean images are actually more difficulty to attack.", "In summary, not only can ordered Top-k attacks improve the \"robustness\" of attacks, but also they provide insights on how aggressive adversarial attacks can be (under affordable optimization budgets).", "Our Contributions.", "This paper makes three main contributions to the field of learning adversarial attacks:", "(i) The problem in study is novel.", "Learning ordered Top-k adversarial attacks is an important problem that reflects the robustness of attacks themselves, but has not been addressed in the literature.", "(ii) The proposed adversarial distillation framework is effective, especially when k is large (such as k = 5, 10).", "(iii) The proposed knowledge-oriented adversarial distillation is novel.", "It worth exploring the existing distillation framework for a novel problem (ordered Top-k adversarial attacks) with some novel modifications (knowledge-oriented target distributions as \"teachers\").", "This paper proposes to extend the traditional Top-1 targeted attack setting to the ordered Top-k setting (k ≥ 1) under the white-box attack protocol.", "The ordered Top-k targeted attacks can improve the robustness of attacks themselves.", "To our knowledge, it is the first work studying this ordered Top-k attacks.", "To learn the ordered Top-k attacks, we present a conceptually simple yet effective adversarial distillation framework motivated by network distillation.", "We also develop a modified C&W method as the strong baseline for the ordered Top-k targeted attacks.", "In experiments, the proposed method is tested in ImageNet-1000 using two popular DNNs, ResNet-50 and DenseNet-121, with consistently better results obtained.", "We investigate the effectiveness of label semantic knowledge in designing the adversarial distribution for distilling the ordered Top-k targeted attacks.", "Discussions.", "We have shown that the proposed AD method is generally applicable to 
learn ordered Top-k attacks.", "But, we note that the two components of the AD framework are in their simplest forms in this paper, and need to be more thoroughly studied: designing more informative adversarial distributions to guide the optimization to learn adversarial examples better and faster, and investigating loss functions other than KL divergence such as the Jensen-Shannon (JS) divergence or the Earth-Mover distance.", "On the other hand, we observed that the proposed AD method is more effective when computation budget is limited (e.g., using the 9 × 30 search scheme).", "This leads to the theoretically and computationally interesting question whether different attack methods all will work comparably well if the computation budget is not limited.", "Of course, in practice, we prefer more powerful ones when only limited computation budget is allowed.", "Furthermore, we observed that both the modified C&W method and the AD method largely do not work in learning Top-k (k ≥ 20) attacks with the two search schema (9 × 30 and 9 × 1000).", "We are working on addressing the aforementioned issues to test the Top-k (k ≥ 20) cases, thus providing a thorough empirical answer to the question: how aggressive can adversarial attacks be?" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23529411852359772, 0.08695651590824127, 0.25806450843811035, 0, 0, 0.13333332538604736, 0.23529411852359772, 0.0833333283662796, 0.21052631735801697, 0.1428571343421936, 0.0952380895614624, 0, 0.053333330899477005, 0.13333332538604736, 0.09090908616781235, 0.10256410390138626, 0, 0.0714285671710968, 0, 0, 0.06451612710952759, 0, 0.1111111044883728, 0, 0, 0, 0, 0, 0.17391304671764374, 0.375, 0.07407407462596893, 0, 0.2222222238779068, 0.0833333283662796, 0.05882352590560913, 0.23999999463558197, 0, 0, 0, 0, 0, 0, 0, 0.1818181723356247, 0, 0.0833333283662796, 0.0476190447807312, 0.1666666567325592, 0, 0.06451612710952759, 0.25806450843811035, 0.11764705181121826, 0, 0.3076923191547394, 0.0952380895614624, 0.1666666567325592, 0.14814814925193787, 0.17391304671764374, 0.4000000059604645, 0.3529411852359772, 0.260869562625885, 0.29999998211860657, 0, 0.3636363744735718, 0.29999998211860657, 0.038461536169052124, 0, 0, 0, 0.11428571492433548, 0.1875 ]
ryeSKAVtPB
true
[ "ordered Top-k adversarial attacks" ]
[ "Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success.", "However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend.", "In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank.", "We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.", "Our model's training time is on par or faster and its number of parameters on par or lower than previous models.", "It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network.", "We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models.", "Our implementation is available online.", "Graphs are ubiquitous in the real world and its description through scientific models.", "They are used to study the spread of information, to optimize delivery, to recommend new books, to suggest friends, or to find a party's potential voters.", "Deep learning approaches have achieved great success on many important graph problems such as link prediction BID15 , graph classification BID12 BID31 BID13 and semi-supervised node classification BID43 BID21 .There", "are many approaches for leveraging deep learning algorithms on graphs. Node", "embedding methods use random walks or matrix factorization to directly train individual node embeddings, often without using node features and usually in an unsupervised manner, i.e. without leveraging node classes BID33 BID40 BID30 BID15 BID35 . Many", "other approaches use both graph structure and node features in a supervised setting. Examples", "for these include spectral graph convolutional neural networks BID6 BID11 , message passing (or neighbor aggregation) algorithms BID19 BID21 BID16 BID34 BID28 BID13 , and neighbor aggregation via recurrent neural networks BID36 BID24 BID10 . Among these", "categories, the class of message passing algorithms has garnered particular attention recently due to its flexibility and good performance.Several works have been aimed at improving the basic neighborhood aggregation scheme by using attention mechanisms BID19 BID16 BID41 , random walks BID0 BID44 , edge features BID19 BID13 BID37 and making it more scalable on large graphs BID44 . However, all", "of these methods only use the information of a very limited neighborhood for each node. A larger neighborhood", "would be desirable to provide the model with more information, especially for nodes in the periphery or in a sparsely labelled setting.Increasing the size of the neighborhood used by these algorithms, i.e. their range, is not trivial since neighborhood aggregation in this scheme is essentially a type of Laplacian smoothing and too many layers lead to oversmoothing . BID42 highlighted the", "same problem by establishing a relationship between the message passing algorithm termed Graph Convolutional Network (GCN) by BID21 and a random walk. Using this relationship", "we see that GCN converges to this random walk's limit distribution as the number of layers increases. The limit distribution", "is a property of the graph as a whole and does not take the random walk's starting (root) node into account. 
As such it is unsuited", "to describe the root node's neighborhood. Hence, GCN's performance", "necessarily deteriorates for a high number of layers (or aggregation/propagation steps).To solve this issue, in this", "paper, we first highlight the inherent connection between the limit distribution and PageRank BID32 . We then propose an algorithm", "that utilizes a propagation scheme derived from personalized PageRank instead. This algorithm adds a chance", "of teleporting back to the root node, which ensures that the PageRank score encodes the local neighborhood for every root node BID32 . The teleport probability allows", "us to balance the needs of preserving locality (i.e. staying close to the root node to avoid oversmoothing) and leveraging the information from a large neighborhood. We show that this propagation scheme", "permits the use of far more (in fact, infinitely many) propagation steps without leading to oversmoothing.Moreover, while propagation and classification are inherently intertwined in message passing, our proposed algorithm separates the neural network from the propagation scheme. This allows us to achieve a much higher", "range without changing the neural network, whereas in the message passing scheme every additional propagation step would require an additional layer. It also permits the independent development", "of the propagation algorithm and the neural network generating predictions from node features. That is, we can combine any state-of-the-art", "prediction method with our propagation scheme. We even found that adding our propagation scheme", "during inference significantly improves the accuracy of networks that were trained without using any graph information.Our model achieves state-of-the-art results while requiring fewer parameters and less training time compared to most competing models, with a computational complexity that is linear in the number of edges. We show these results in the most thorough study", "(including significance testing) of message passing models using graphs with text-based features that has been done so far.", "In this paper we have introduced personalized propagation of neural predictions (PPNP) and its fast approximation, APPNP.", "We derived this model by considering the relationship between GCN and PageRank and extending it to personalized PageRank.", "This simple model decouples prediction and propagation and solves the limited range problem inherent in many message passing models without introducing any additional parameters.", "It uses the information from a large, adjustable (via the teleport probability α) neighborhood for classifying each node.", "The model is computationally efficient and outperforms several state-of-the-art methods for semi-supervised classification on multiple graphs in the most thorough study which has been done for GCN-like models so far.For future work it would be interesting to combine PPNP with more complex neural networks used e.g. in computer vision or natural language processing.", "Furthermore, faster or incremental approximations of personalized PageRank BID2 BID3 BID25 and more sophisticated propagation schemes would also benefit the method.A EXISTENCE OF Π PPRThe matrix DISPLAYFORM0 exists iff the determinant det(I n − (1 − α)Â) = 0, which is the case iff det( − 1 1−α I n ) = 0, i.e. 
iff 1 1−α is not an eigenvalue ofÂ.", "This value is always larger than 1 since the teleport probability α ∈ (0, 1].", "Furthermore, the symmetrically normalized matrix has the same eigenvalues as the row-stochastic matrixà rw .", "This can be shown by multiplying the eigenvalue equationÂv = λv .", "The largest eigenvalue of a row-stochastic matrix is 1, as can be proven using the Gershgorin circle theorem.", "Hence, B CONVERGENCE OF APPNP APPNP uses the iterative equation DISPLAYFORM1 After the k-th propagation step, the resulting predictions are DISPLAYFORM2 H.If we take the limit k → ∞ the left term tends to 0 and the right term becomes a geometric series.", "The series converges since α ∈ (0, 1] and is symmetrically normalized and therefore det(Â) ≤ 1, resulting in The sampling procedure is illustrated in FIG3 .", "The data is first split into a visible and a test set.", "For the visible set 1500 nodes were sampled for the citation graphs and 5000 for MICROSOFT ACADEMIC.", "The test set contains all remaining nodes.", "We use three different label sets in each experiment: A training set of 20 nodes per class, an early stopping set of 500 nodes and either a validation or test set.", "The validation set contains the remaining nodes of the visible set.", "We use 20 random seeds for determining the splits.", "These seeds are drawn once and fixed across runs to facilitate comparisons.", "We use one set of seeds for the validation splits and a different set for the test splits.", "Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment.", "DISPLAYFORM3 The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10 000 epochs.", "The patience is reset whenever the accuracy increases or the loss decreases on the early stopping set.", "We choose the parameter set achieving the highest accuracy and break ties by selecting the lowest loss on this set.", "This criterion was inspired by GAT BID41 .We", "used TensorFlow (Martín BID26 for all experiments except bootstrapped feature propagation. All", "uncertainties and confidence intervals correspond to a confidence level of 95 % and were calculated by bootstrapping with 1000 samples.We use the Adam optimizer with a learning rate of l = 0.01 and cross-entropy loss for all models BID20 . Weights", "are initialized as described in BID14 . The feature", "matrix is L 1 normalized per row." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.13333332538604736, 0.2926829159259796, 0.31578946113586426, 0.1111111044883728, 0.11428570747375488, 0, 0, 0.06451612710952759, 0.04999999329447746, 0.1304347813129425, 0, 0.038461532443761826, 0.1249999925494194, 0.20408162474632263, 0.0833333283662796, 0.05882352590560913, 0.08695651590824127, 0.09999999403953552, 0.0555555522441864, 0.19512194395065308, 0, 0.05882352590560913, 0.1111111044883728, 0.1875, 0.09756097197532654, 0.12765957415103912, 0.1355932205915451, 0.0952380895614624, 0.2702702581882477, 0.13793103396892548, 0.1515151411294937, 0.0555555522441864, 0.4000000059604645, 0.23529411852359772, 0.1463414579629898, 0, 0.0845070406794548, 0.1428571343421936, 0, 0, 0.06896550953388214, 0.0555555522441864, 0.1090909019112587, 0.04878048226237297, 0.13793103396892548, 0.060606054961681366, 0, 0.08888888359069824, 0.07407406717538834, 0, 0.06666666269302368, 0.1249999925494194, 0.052631575614213943, 0.10256409645080566, 0, 0.11428570747375488, 0.07692307233810425, 0.06666666269302368, 0.1111111044883728, 0, 0 ]
H1gL-2A9Ym
true
[ "Personalized propagation of neural predictions (PPNP) improves graph neural networks by separating them into prediction and propagation via personalized PageRank." ]
[ "Most of recent work in cross-lingual word embeddings is severely Anglocentric.", "The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting.", "With this work, however, we challenge these practices.", "First, we show that the choice of hub language can significantly impact downstream lexicon induction performance.", "Second, we both expand the current evaluation dictionary collection to include all language pairs using triangulation, and also create new dictionaries for under-represented languages.", "Evaluating established methods over all these language pairs sheds light into their suitability and presents new challenges for the field.", "Finally, in our analysis we identify general guidelines for strong cross-lingual embeddings baselines, based on more than just Anglocentric experiments.", "Continuous distributional vectors for representing words (embeddings) (Turian et al., 2010) have become ubiquitous in modern, neural NLP.", "Cross-lingual representations (Mikolov et al., 2013) additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI).", "BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging (Zhang et al., 2016) , parsing (Ammar et al., 2016) , document classification (Klementiev et al., 2012) , and machine translation (Irvine and CallisonBurch, 2013; Artetxe et al., 2018b; Lample et al., 2018) .", "Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively) .", "First, monolingual word embeddings are learned over large swaths of text; such pre-trained word embeddings, in fact, are available for several languages and are widely used, like the fastText Wikipedia vectors (Grave et al., 2018) .", "Second, a mapping between the languages is learned, in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision (Zou et al., 2013) , under minimal supervision e.g. 
using only identical strings (Smith et al., 2017) , or even in a completely unsupervised fashion (Zhang et al., 2017; Conneau et al., 2018) .", "Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned to (hereinafter \"the hub\").", "We outline the details in Section 2.", "Despite all the recent progress in learning cross-lingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric.", "Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one.", "We argue and empirically show, however, that English is a poor hub language choice.", "In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language).", "However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages.", "This Anglocentricity is even more evident at the evaluation stage.", "The lexica most commonly used for evaluation are the MUSE lexica (Conneau et al., 2018) which cover 45 languages, but with translations only from and into English.", "Even still, alternative evaluation dictionaries are also very English-and European-centric: Dinu and Baroni (2014) report results on English-Italian, Artetxe et al. (2017) on English-German and English-Finnish, Zhang et al. (2017) on Spanish-English and Italian-English, and Artetxe et al. (2018a) between English and Italian, German, Finnish, Spanish, and Turkish.", "We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems (Aronoff and Fudeman, 2011) .", "These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs.", "In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages.", "With this work, we attempt to address these shortcomings, providing the following contributions:", "• We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance (e.g., by more than 10 percentage points for BWE over distant languages).", "We also show that often English is a suboptimal hub for MWE.", "• We identify some general guidelines for choosing a hub language which could lead to stronger baselines; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees).", "For distant languages, multilingual systems should in most cases be preferred over bilingual ones.", "• We provide resources for training and evaluation on non-Anglocentric language pairs.", "We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 2352 lexicons covering 49 languages, and we present results on a subset of them.", "We also create new evaluation lexica for under-resourced languages using Azerbaijani, Belarusian, and Galician as our test cases.", "We additionally 
provide recipes for creating such dictionaries for any language pair with available parallel data.", "With this work we challenge the standard practices in learning cross-lingual word embeddings.", "We empirically showed that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings.", "More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs.", "Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric cross-lingual word embeddings.", "A Does evaluation directionality matter?", "We also explored whether there are significant differences between the evaluated quality of aligned spaces, when computed on both directions (src-trg and trg-src).", "We find that the evaluation direction indeed matters a lot, when the languages of the evaluation pair are very distant, in terms of morphological complexity and data availability (which affects the quality of the original embeddings).", "A prominent example, from our European-languages experiment, are evaluation pairs involving Az or Be.", "When evaluating on the Az-XX and Be-XX dictionaries, the word translation P@1 is more than 20 percentage points higher than when evaluating on the opposite direction (XX-Az or XX-Be).", "For example, Es-Az has a mere P@1 of 9.9, while Az-Es achieves a P@1 of 44.9.", "This observation holds even between very related languages (cf. Ru-Be: 12.8, Be-Ru: 41.1 and Tr-Az: 8.4, Az-Tr: 32.0), which supports our hypothesis that this difference is also due to the quality of the pre-trained embeddings.", "It is important to note that such directionality differences are not observed when evaluating distant pairs with presumably high-quality pre-trained embeddings e.g. Tr-Sk or Tr-Es; the P@1 for both directions is very close." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.2083333283662796, 0, 0.2857142686843872, 0.1395348757505417, 0.10256409645080566, 0.10256409645080566, 0, 0.08510638028383255, 0.0363636314868927, 0, 0.11538460850715637, 0.14492753148078918, 0.17391303181648254, 0.07692307233810425, 0.1904761791229248, 0.14999999105930328, 0.1818181723356247, 0.2790697515010834, 0.0555555522441864, 0.06896550953388214, 0.17391303181648254, 0.07407406717538834, 0.05970148742198944, 0.10526315122842789, 0.13636362552642822, 0.0624999962747097, 0.22641508281230927, 0.06451612710952759, 0.19354838132858276, 0.060606054961681366, 0.12903225421905518, 0.21276594698429108, 0, 0.11764705181121826, 0.1249999925494194, 0.2666666507720947, 0.16326530277729034, 0.0555555522441864, 0, 0.2380952388048172, 0.1666666567325592, 0, 0.09302324801683426, 0.060606054961681366, 0.14035087823867798, 0.038461532443761826 ]
Hkla70NFPH
true
[ "The choice of the hub (target) language affects the quality of cross-lingual embeddings, which shouldn't be evaluated only on English-centric dictionaries." ]
[ "Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has lead to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs.", "For both classic GANs and f-GANs, there is an original variant of training and a \"non-saturating\" variant which uses an alternative form of generator update.", "The original variant is theoretically easier to study, but the alternative variant frequently performs better and is recommended for use in practice.", "The alternative generator update is often regarded as a simple modification to deal with optimization issues, and it appears to be a common misconception that the two variants minimize the same divergence.", "In this short note we derive the divergences approximately minimized by the original and alternative variants of GAN and f-GAN training.", "This highlights important differences between the two variants.", "For example, we show that the alternative variant of KL-GAN training actually minimizes the reverse KL divergence, and that the alternative variant of conventional GAN training minimizes a \"softened\" version of the reverse KL.", "We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.10256409645080566, 0.11764705181121826, 0.060606054961681366, 0.0952380895614624, 0.1249999925494194, 0, 0.2857142686843872, 0.1875 ]
HJxb8-o6s7
false
[ "Typical GAN training doesn't optimize Jensen-Shannon, but something like a reverse KL divergence." ]
[ "REINFORCE can be used to train models in structured prediction settings to directly optimize the test-time objective.", "However, the common case of sampling one prediction per datapoint (input) is data-inefficient.", "We show that by drawing multiple samples (predictions) per datapoint, we can learn with significantly less data, as we freely obtain a REINFORCE baseline to reduce variance.", "Additionally we derive a REINFORCE estimator with baseline, based on sampling without replacement.", "Combined with a recent technique to sample sequences without replacement using Stochastic Beam Search, this improves the training procedure for a sequence model that predicts the solution to the Travelling Salesman Problem.", " REINFORCE (Williams, 1992 ) is a well known policy optimization algorithm that learns directly from experience.", "Variants of it have been used to train models for a wide range of structured prediction tasks, such as Neural Machine Translation BID12 BID0 , Image Captioning (Vinyals et al., 2015b) and predicting solutions (tours) for the Travelling Salesman Problem (TSP) BID1 BID6 .", "As opposed to maximum likelihood (supervised) learning, the appeal of using REINFORCE for structured prediction is that it directly optimizes the test-time performance.When using REINFORCE, often for each datapoint (e.g. a sentence, image or TSP instance) only a single sample/prediction (e.g. a translation, caption or tour) is used to construct a gradient estimate.", "From a classic Reinforcement Learning (RL) point of view, this makes sense, as we may not be able to evaluate multiple sampled actions for a state (datapoint).", "However, from a data point of view, this is inefficient if we can actually evaluate multiple samples, such as in a structured prediction setting.", "Reinforcement Learning with multiple samples/predictions for a single datapoint has been used before (e.g. BID14 ; ), but we use the samples as counterfactual information by constructing a (local, for a single datapoint) REINFORCE baseline.", "A similar idea was applied for variational inference by BID10 .Many", "structured prediction tasks can be formulated in terms of sequence modelling, which is the focus of this paper. In most", "sequence modelling tasks, the objective is a deterministic function of the predicted sequence. As a result", ", duplicate sampled sequences are uninformative and therefore do not improve the quality of the gradient estimate. To solve this", "problem, we propose to use sampling without replacement to construct a better gradient estimate. This is inspired", "by recent work by BID7 , who introduce Stochastic Beam Search as a method to sample sequences without replacement, and use this to construct a (normalized) importance-weighted estimator for (sentence level) BLEU score. We extend this idea", "to estimate policy gradients using REINFORCE, and we show how to use the same set of samples (without replacement) to construct a baseline. This way we can leverage", "sampling without replacement to improve training of sequence models.In our experiment, we consider the TSP and show that using REINFORCE with multiple samples is beneficial compared to single sample REINFORCE, both computationally and in terms of data-efficiency. 
Additionally, for a sample", "size of 4 − 8 samples per datapoint, sampling without replacement results in slightly faster learning.", "In this paper, we have derived REINFORCE estimators based on drawing multiple samples, with and without replacement, and evaluated the effectiveness of the proposed estimators in a structured prediction setting: the prediction of tours for the TSP.", "The derived estimators yield results comparable to recent results using REINFORCE with a strong greedy rollout baseline, at greater data-efficiency and computational efficiency.These estimators are especially well suited for structured prediction settings, where the domain is too large to compute exact gradients, but we are able to take multiple samples for the same datapoint, and the objective is a deterministic function of the sampled prediction.", "We hope the proposed estimators have potential to be used to improve training efficiency in more structured prediction settings, for example in the context of Neural Machine Translation or Image Captioning, where depending on the entropy of the model, sampling without replacement may yield a beneficial improvement." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.0555555522441864, 0.8163264989852905, 0.2222222238779068, 0.11764705181121826, 0.1538461446762085, 0.0624999962747097, 0.0882352888584137, 0.16326530277729034, 0.260869562625885, 0.3272727131843567, 0.05882352590560913, 0.0476190410554409, 0.0555555522441864, 0, 0.10256409645080566, 0.1428571343421936, 0.25, 0.26229506731033325, 0.10256409645080566, 0.22641508281230927, 0.15789473056793213, 0.0624999962747097 ]
r1lgTGL5DE
true
[ "We show that by drawing multiple samples (predictions) per input (datapoint), we can learn with less data as we freely obtain a REINFORCE baseline." ]
[ "Reinforcement learning (RL) is a powerful technique to train an agent to perform a task. ", "However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. ", "Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. ", "Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. ", "We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state-space. ", "The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. ", "Our method thus automatically produces a curriculum of tasks for the agent to learn. ", "We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment (Videos and code available at: https://sites.google.com/view/goalgeneration4rl).", "Our method can also learn to achieve tasks with sparse rewards, which pose significant challenges for traditional RL methods.", "Reinforcement learning (RL) can be used to train an agent to perform a task by optimizing a reward function.", "Recently, a number of impressive results have been demonstrated by training agents using RL: such agents have been trained to defeat a champion Go player BID16 , to outperform humans in 49 Atari games (Guo et al., 2016; Mnih et al., 2015) , and to perform a variety of difficult robotics tasks (Lillicrap et al., 2015; BID18 .", "In each of the above cases, the agent is trained to optimize a single reward function in order to learn to perform a single task.", "However, there are many real-world environments in which a robot will need to be able to perform not a single task but a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations.", "We consider the problem of maximizing the average success rate of our agent over all possible goals, where success is defined as the probability of successfully reaching each goal by the current policy.In order to efficiently maximize this objective, the algorithm must intelligently choose which goals to focus on at every training stage: goals should be at the appropriate level of difficulty for the current policy.", "To do so, our algorithm allows an agent to generate its own reward functions, defined with respect to target subsets of the state space, called goals.", "We generate such goals using a Goal Generative Adversarial Network (Goal GAN), a variation of to the GANs introduced by Goodfellow et al. 
(2014) .", "A goal discriminator is trained to evaluate whether a goal is at the appropriate level of difficulty for the current policy, and a goal generator is trained to generate goals that meet this criteria.", "We show that such a framework allows an agent to quickly learn a policy that reaches all feasible goals in its environment, with no prior knowledge about the environment or the tasks being performed.", "Our method automatically creates a curriculum, in which, at each step, the generator generates goals that are only slightly more difficult than the goals that the agent already knows how to achieve.In summary, our main contribution is a method for automatic curriculum generation that considerably improves the sample efficiency of learning to reach all feasible goals in the environment.Learning to reach multiple goals is useful for multi-task settings such as navigation or manipulation, in which we want the agent to perform a wide range of tasks.", "Our method also naturally handles sparse reward functions, without needing to manually modify the reward function for every task, based on prior task knowledge.", "Instead, our method dynamically modifies the probability distribution from which goals are sampled to ensure that the generated goals are always at the appropriate difficulty level, until the agent learns to reach all goals within the feasible goal space.", "We propose a new paradigm in RL where the objective is to train a single policy to succeed on a variety of goals, under sparse rewards.", "To solve this problem we develop a method for automatic curriculum generation that dynamically adapts to the current performance of the agent.", "The curriculum is obtained without any prior knowledge of the environment or of the tasks being performed.", "We use generative adversarial training to automatically generate goals for our policy that are always at the appropriate level of difficulty (i.e. not too hard and not too easy).", "In the future we want to combine our goal-proposing strategy with recent multi-goal approaches like HER BID1 ) that could greatly benefit from better ways to select the next goal to train on.", "Another promising line of research is to build hierarchy on top of the multi-task policy that we obtain with our method by training a higher-level policy that outputs the goal for the lower level multi-task policy (like in Heess et al. (2016 ) or in Florensa et al. (2017a ).", "The hierarchy could also be introduced by replacing our current feed-forward neural network policy by an architecture that learns to build implicit plans (Mnih et al., 2016; BID18 , or by leveraging expert demonstrations to extract sub-goals BID23 , although none of these approaches tackles yet the multi-task learning problem formulated in this work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.1395348757505417, 0.07692307233810425, 0.17391303181648254, 0.12765957415103912, 0.08888888359069824, 0.1621621549129486, 0.14035087823867798, 0.04878048226237297, 0.1538461446762085, 0.030303025618195534, 0.0952380895614624, 0.0357142798602581, 0.13513512909412384, 0.1702127605676651, 0.13333332538604736, 0.1249999925494194, 0.22641508281230927, 0.18390804529190063, 0.13333332538604736, 0.07547169178724289, 0.17777776718139648, 0.3720930218696594, 0.10810810327529907, 0.1599999964237213, 0.1538461446762085, 0.19672130048274994, 0.138888880610466 ]
SyhRVm-Rb
true
[ "We efficiently solve multi-task problems with an automatic curriculum generation algorithm based on a generative model that tracks the learning agent's performance." ]
[ "A wide range of defenses have been proposed to harden neural networks against adversarial attacks.", "However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. ", "Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?\n", "This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. ", "We show that, for certain classes of problems, adversarial examples are inescapable. ", "Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.\n\n", "A number of adversarial attacks on neural networks have been recently proposed.", "To counter these attacks, a number of authors have proposed a range of defenses.", "However, these defenses are often quickly broken by new and revised attacks.", "Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?In", "this paper, we identify a broad class of problems for which adversarial examples cannot be avoided. We", "also derive fundamental limits on the susceptibility of a classifier to adversarial attacks that depend on properties of the data distribution as well as the dimensionality of the dataset.Adversarial examples occur when a small perturbation to an image changes its class label. There", "are different ways of measuring what it means for a perturbation to be \"small\"; as such, our analysis considers a range of different norms. While", "the ∞ -norm is commonly used, adversarial examples can be crafted in any p -norm (see FIG0 ). We will", "see that the choice of norm can have a dramatic effect on the strength of theoretical guarantees for the existence of adversarial examples. Our analysis", "also extends to the 0 -norm, which yields \"sparse\" adversarial examples that only perturb a small subset of image pixels FIG2 ). 
BID19 on Resnet50", ", along with the distance between the base image and the adversarial example, and the top class label.", "There are a number of ways to escape the guarantees of adversarial examples made by Theorems 1-4.", "One potential escape is for the class density functions to take on extremely large values (i.e., exponentially large U c ); the dependence of U c on n is addressed separately in Section 8.Unbounded density functions and low-dimensional data manifolds In practice, image datasets might lie on low-dimensional manifolds within the cube, and the support of these distributions could have measure zero, making the density function infinite (i.e., U c = ∞).", "The arguments above are still relevant (at least in theory) in this case; we can expand the data manifold by adding a uniform random noise to each image pixel of magnitude at most 1 .", "The expanded dataset has positive volume.", "Then, adversarial examples of this expanded dataset can be crafted with perturbations of size 2 .", "This method of expanding the manifold before crafting adversarial examples is often used in practice.", "BID39 proposed adding a small perturbation to step off the image manifold before crafting adversarial examples.", "This strategy is also used during adversarial training BID19 .Adding", "a \"don't know\" class The analysis above assumes the classifier assigns a label to every point in the cube. If a classifier", "has the ability to say \"I don't know,\" rather than assign a label to every input, then the region of the cube that is assigned class labels might be very small, and adversarial examples could be escaped even if the other assumptions of Theorem 4 are satisfied. In this case, it", "would still be easy for the adversary to degrade classifier performance by perturbing images into the \"don't know\" class.Feature squeezing If decreasing the dimensionality of data does not lead to substantially increased values for U c (we see in Section 8 that this is a reasonable assumption) or loss in accuracy (a stronger assumption), measuring data in lower dimensions could increase robustness. This can be done", "via an auto-encoder BID22 BID30 , JPEG encoding BID9 , or quantization BID43 .Computational hardness", "It may be computationally hard to craft adversarial examples because of local flatness of the classification function, obscurity of the classifier function, or other computational difficulties. Computational hardness", "could prevent adversarial attacks in practice, even if adversarial examples still exist." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15789473056793213, 0.2857142686843872, 0.27272728085517883, 0.6818181872367859, 0.3888888955116272, 0.307692289352417, 0.17142856121063232, 0.05714285373687744, 0.11428570747375488, 0.27272728085517883, 0.29999998211860657, 0.3103448152542114, 0.17777776718139648, 0.1904761791229248, 0.27272728085517883, 0.2916666567325592, 0.1621621549129486, 0.307692289352417, 0.1538461446762085, 0.1428571343421936, 0, 0.1621621549129486, 0.2631579041481018, 0.20512819290161133, 0.12121211737394333, 0.1463414579629898, 0.20588235557079315, 0.1463414579629898, 0, 0.25531914830207825, 0.11764705181121826 ]
r1lWUoA9FQ
true
[ "This paper identifies classes of problems for which adversarial examples are inescapable, and derives fundamental bounds on the susceptibility of any classifier to adversarial examples. " ]
[ "For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks.", "Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs.", "One way to reduce this large memory footprint is to reduce the precision of activations.", "However, past works have shown that reducing the precision of activations hurts model accuracy.", "We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy.", "We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network.", "As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory band- width and computational energy) and speed up the training and inference process with appropriate hardware support.", "We call our scheme WRPN -- wide reduced-precision networks.", "We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.", "A promising approach to lower the compute and memory requirements of convolutional deeplearning workloads is through the use of low numeric precision algorithms.", "Operating in lower precision mode reduces computation as well as data movement and storage requirements.", "Due to such efficiency benefits, there are many existing works which propose low-precision deep neural networks (DNNs) BID27 BID12 BID14 BID6 ; BID24 , even down to 2-bit ternary mode BID29 BID11 BID25 and 1-bit binary mode BID28 BID2 BID16 BID23 .", "However, majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks.", "Further, most prior works target reducing the precision of the model parameters (network weights).", "This primarily benefits the inference step only when batch sizes are small.We observe that activation maps (neuron outputs) occupy more memory compared to the model parameters for batch sizes typical during training.", "This observation holds even during inference when batch size is around eight or more.", "Based on this observation, we study schemes for training and inference using low-precision DNNs where we reduce the precision of activation maps as well as the model parameters without sacrificing network accuracy.To improve both execution efficiency and accuracy of low-precision networks, we reduce both the precision of activation maps and model parameters and increase the number of filter maps in a layer.", "We call networks using this scheme wide reduced-precision networks (WRPN) and find that this scheme compensates or surpasses the accuracy of the baseline full-precision network.", "Although the number of raw compute operations increases as we increase the number of filter maps in a layer, the compute bits required per operation is now a fraction of what is required when using full-precision operations (e.g. 
going from FP32 AlexNet to 4-bits precision and doubling the number of filters increases the number of compute operations by 4x, but each operation is 8x more efficient than FP32).", "WRPN offers better accuracies, while being computationally less expensive compared to previously reported reduced-precision networks.", "We report results on AlexNet BID10 , batch-normalized Inception BID8 , and ResNet-34 BID7 on ILSVRC-12 (Russakovsky et al., 2015) dataset.", "We find 4-bits to be sufficient for training deep and wide models while achieving similar or better accuracy than baseline network.", "With 4-bit activation and 2-bit weights, we find the accuracy to be at-par with baseline full-precision.", "Making the networks wider and operating with 1-bit precision, we close the accuracy gap between previously reported binary networks and show state-of-the art results for ResNet-34 (69.85% top-1 with 2x wide) and AlexNet (48.04% top-1 with 1.3x wide).", "To the best of our knowledge, our reported accuracies with binary networks and 4-bit precision are highest to date.Our reduced-precision quantization scheme is hardware friendly allowing for efficient hardware implementations.", "To this end, we evaluate efficiency benefits of low-precision operations (4-bits to 1-bits) on Titan X GPU, Arria-10 FPGA and ASIC.", "We see that FPGA and ASIC can deliver significant efficiency gain over FP32 operations (6.5x to 100x), while GPU cannot take advantage of very low-precision operations.", "While most prior works proposing reduced-precision networks work with low precision weights (e.g. work in BID2 BID29 BID28 BID25 ; BID11 ; BID23 ), we find that activation maps occupy a larger memory footprint when using mini-batches of inputs.", "Using mini-batches of inputs is typical in training of DNNs and cloud-based batched inference BID9 .", "FIG0 shows memory footprint of activation maps and filter maps as batch size changes for 4 different networks (AlexNet, Inception-Resnet-v2 BID22 , during the training and inference steps.", "We present the Wide Reduced-Precision Networks (WRPN) scheme for DNNs.", "In this scheme, the numeric precision of both weights and activations are significantly reduced without loss of network accuracy.", "This result is in contrast to many previous works that find reduced-precision activations to detrimentally impact accuracy; specifically, we find that 2-bit weights and 4-bit activations are sufficient to match baseline accuracy across many networks including AlexNet, ResNet-34 and batchnormalized Inception.", "We achieve this result with a new quantization scheme and by increasing the number of filter maps in each reduced-precision layer to compensate for the loss of information capacity induced by reducing the precision.", "We believe ours to be the first work to study the interplay between layer width and precision -with widening, the number of neurons in a layer increase; yet with reduced precision, we control overfitting and regularization.We motivate this work with our observation that full-precision activations contribute significantly more to the memory footprint than full-precision weight parameters when using mini-batch sizes common during training and cloud-based inference; furthermore, by reducing the precision of both activations and weights the compute complexity is greatly reduced (40% of baseline for 2-bit weights and 4-bit activations).The", "WRPN quantization scheme and computation on low precision activations and weights is hardware friendly making it viable for 
deeply-embedded system deployments as well as in cloud-based training and inference servers with compute fabrics for low-precision. We", "compare Titan X GPU, Arria-10 FPGA and ASIC implementations using WRPN and show our scheme increases performance and energy-efficiency for iso-accuracy across each. Overall", ", reducing the precision allows custom-designed compute units and lower buffering requirements to provide significant improvement in throughput." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.0952380895614624, 0.17142856121063232, 0.1666666567325592, 0.0555555522441864, 0.23529411852359772, 0.11538460850715637, 0, 0.0416666604578495, 0.1395348757505417, 0.1666666567325592, 0.06557376682758331, 0.05405404791235924, 0.11428570747375488, 0.038461532443761826, 0.0555555522441864, 0.1904761791229248, 0.1395348757505417, 0.17142856121063232, 0, 0.04878048226237297, 0.09302324801683426, 0.15789473056793213, 0.1090909019112587, 0.15686273574829102, 0.04651162400841713, 0.0833333283662796, 0.09999999403953552, 0.0555555522441864, 0.1666666567325592, 0.0624999962747097, 0.29999998211860657, 0.1071428507566452, 0.19230768084526062, 0.12903225421905518, 0.2222222238779068, 0.045454539358615875, 0.14999999105930328 ]
B1ZvaaeAZ
true
[ "Lowering precision (to 4-bits, 2-bits and even binary) and widening the filter banks gives as accurate network as those obtained with FP32 weights and activations." ]
[ "We investigate methods for semi-supervised learning (SSL) of a neural linear-chain conditional random field (CRF) for Named Entity Recognition (NER) by treating the tagger as the amortized variational posterior in a generative model of text given tags.", "We first illustrate how to incorporate a CRF in a VAE, enabling end-to-end training on semi-supervised data.", "We then investigate a series of increasingly complex deep generative models of tokens given tags enabled by end-to-end optimization, comparing the proposed models against supervised and strong CRF SSL baselines on the Ontonotes5 NER dataset.", "We find that our best proposed model consistently improves performance by $\\approx 1\\%$ F1 in low- and moderate-resource regimes and easily addresses degenerate model behavior in a more difficult, partially supervised setting.", "Named entity recognition (NER) is a critical subtask of many domain-specific natural language understanding tasks in NLP, such as information extraction, entity linking, semantic parsing, and question answering.", "State-of-the-art models treat NER as a tagging problem (Lample et al., 2016; Ma & Hovy, 2016; Strubell et al., 2017; Akbik et al., 2018) , and while they have become quite accurate on benchmark datasets in recent years (Lample et al., 2016; Ma & Hovy, 2016; Strubell et al., 2017; Akbik et al., 2018; Devlin et al., 2018) , utilizing them for new tasks is still expensive, requiring a large corpus of exhaustively annotated sentences (Snow et al., 2008) .", "This problem has been largely addressed by extensive pretraining of high-capacity sentence encoders on massive-scale language modeling tasks Devlin et al., 2018; Howard & Ruder, 2018; Radford et al., 2019; Liu et al., 2019b) , but it is natural to ask if we can squeeze more signal from our unlabeled data.", "Latent-variable generative models of sentences are a natural approach to this problem: by treating the tags for unlabeled data as latent variables, we can appeal to the principle of maximum marginal likelihood (Berger, 1985; Bishop, 2006) and learn a generative model on both labeled and unlabeled data.", "For models of practical interest, however, this presents multiple challenges: learning and prediction both require an intractable marginalization over the latent variables and the specification of the generative model can imply a posterior family that may not be as performant as the current state-of-the-art discriminative models.", "We address these challenges using a semi-supervised Variational Autoencoder (VAE) (Kingma et al., 2014) , treating a neural tagging CRF as the approximate posterior.", "We address the issue of optimization through discrete latent tag sequences by utilizing a differentiable relaxation of the Perturb-and-MAP algorithm (Papandreou & Yuille, 2011; Mensch & Blondel, 2018; Corro & Titov, 2018) , allowing for end-to-end optimization via backpropagation (Rumelhart et al., 1988) and SGD (Robbins & Monro, 1951) .", "Armed with this learning approach, we no longer need to restrict the generative model family (as in Ammar et al. (2014) ; Zhang et al. (2017) ), and explore the use of rich deep generative models of text given tag sequences for improving NER performance.", "We also demonstrate how to use the VAE framework to learn in a realistic annotation scenario where we only observe a biased subset of the named entity tags.", "Our contributions can be summarized as follows:", "1. 
We address the problem of semi-supervised learning (SSL) for NER by treating a neural CRF as the amortized approximate posterior in a discrete structured VAE.", "To the best of our knowledge, we are the first to utilize VAEs for NER.", "2. We explore several variants of increasingly complex deep generative models of text given tags with the goal of improving tagging performance.", "We find that a joint tag-encoding Transformer (Vaswani et al., 2017) architecture leads to an ≈ 1% improvement in F1 score over supervised and strong CRF SSL baselines.", "3. We demonstrate that the proposed approach elegantly corrects for degenerate model performance in a more difficult partially supervised regime where sentences are not exhaustively annotated and again find improved performance.", "4. Finally, we show the utility of our method in realistic low-and high-resource scenarios, varying the amount of unlabeled data.", "The resulting high-resource model is competitive with state-of-the-art results and, to the best of our knowledge, achieves the highest reported F1 score (88.4%) for models that do not use additional labeled data or gazetteers.", "We proposed a novel generative model for semi-supervised learning in NER.", "By treating a neural CRF as the amortized variational posterior in the generative model and taking relaxed differentiable samples, we were able to utilize a transformer architecture in the generative model to condition on more context and provide appreciable performance gains over supervised and strong baselines on both semi-supervised and partially-supervised datasets.", "We also found that inclusion of powerful pretrained autoregressive language modeling states had neglible or negative effects while using a pretrained bidirectional encoder offers significant performance gains.", "Future work includes the use of larger in-domain unlabeled corpora and the inclusion of latent-variable CRFs in more interesting joint semi-supervised models of annotations, such as relation extraction and entity linking.", "Gumbel, 1954) and τ ≥ 0 be the temperature:", "We know from Papandreou & Yuille (2011) that the MAP sequence from this perturbed distribution is a sample from the unperturbed distribution.", "Coupled with the property that the zero temperature limit of the Gibbs distribution is the MAP state (Wainwright et al., 2008) , it immediately follows that the zero temperature limit of the perturbedq is a sample from q:", "⇒ lim τ →0q", "where q φ (y|x; τ ) is the tempered but unperturbed q φ and \"one-hot\" is a function that converts elements of Y N to a one-hot vector representation.", "Thus we can use the temperature τ to anneal the perturbed joint distributionq φ (y|x; τ ) to a sample from the unperturbed distribution,ỹ ∼ q φ .", "When τ > 0,q φ (y|x; τ ) is differentiable and can be used for end-to-end optimization by allowing us to approximate the expectation with a relaxed single-sample Monte Carlo estimate:", "where we have modified log p θ (x|y) to accept the simplex representations of y 1:N fromq φ instead of discrete elements, which has the effect of log p θ (x|y) computing a weighted combination of its input vector representations for y ∈ Y similarly to an attention mechanism or the annotation function in Kim et al. (2017) (see Equation 7.) This can be thought of as a generalization of the Gumbel-softmax trick from Jang et al. (2016); Maddison et al. 
(2016) to structured joint distributions.", "The statements in (8-10) also imply something of practical interest: we can compute (1) the argmax (Viterbi decoding) and its differentiable relaxation; (2) a sample and its differentiable relaxation; (3) the partition function; and (4) the marginal tag distributions, all using the same sum-product algorithm implementation, controlled by the temperature and the presence of noise.", "We have detailed the algorithm in Appendix B." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.31372547149658203, 0.29411762952804565, 0.3199999928474426, 0.1702127605676651, 0.17777776718139648, 0.17142856121063232, 0.0312499962747097, 0.17543859779834747, 0.14035087823867798, 0.19512194395065308, 0.16393442451953888, 0.20689654350280762, 0.2790697515010834, 0, 0.4761904776096344, 0.1875, 0.15789473056793213, 0.21276594698429108, 0.2083333283662796, 0.1666666567325592, 0.07692307233810425, 0.48275861144065857, 0.16949151456356049, 0.13636362552642822, 0.17777776718139648, 0.07407406717538834, 0.1111111044883728, 0.08888888359069824, 0, 0.1395348757505417, 0.04999999329447746, 0.1249999925494194, 0.0952380895614624, 0.13114753365516663, 0.1538461446762085 ]
BkxnKkrtvS
true
[ "We embed a CRF in a VAE of tokens and NER tags for semi-supervised learning and show improvements in low-resource settings." ]
[ "To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights.", "One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping.", "Despite its empirical success, little is understood about why the straight-through gradient method works.\n", "Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov’s dual-averaging algorithm on a quantization constrained optimization problem, we propose a more principled alternative approach, called ProxQuant , that formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method.", "ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness.", "For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization.", "For binary quantization, our analysis shows both theoretically and experimentally that ProxQuant is more stable than the straight-through gradient method (i.e. BinaryConnect), challenging the indispensability of the straight-through gradient method and providing a powerful alternative." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.27586206793785095, 0.23076923191547394, 0.17241379618644714, 0.1249999925494194, 0.07407406717538834, 0.1463414579629898 ]
HyGxNg9SiQ
false
[ "A principled framework for model quantization using the proximal gradient method." ]