Fields:
source          sequence    source sentences, one string per sentence
source_labels   sequence    0/1 flag, one per source sentence
rouge_scores    sequence    ROUGE score, one per source sentence
paper_id        string      lengths 9-11
ic              unknown     shown as a boolean (true) in the examples below
target          sequence    target summary sentence(s)
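A minimal sketch of how one record could be represented in Python, based only on the field list above and the examples below; the `Record` class, its comments, and the alignment check are illustrative assumptions, not an official schema or API for this dataset.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    """One example, mirroring the fields listed above (names as in the schema)."""
    source: List[str]          # source sentences, one string per sentence
    source_labels: List[int]   # 0/1 flag per source sentence
    rouge_scores: List[float]  # ROUGE score per source sentence
    paper_id: str              # paper identifier, 9-11 characters
    ic: bool                   # type listed as unknown; shown as true in the examples
    target: List[str]          # target summary sentence(s)

    def __post_init__(self) -> None:
        # In the examples shown, the three per-sentence fields have equal length.
        assert len(self.source) == len(self.source_labels) == len(self.rouge_scores)
```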
[ "The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains.", "Devising a definitive defense against such attacks is proven to be challenging, and the methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism.", "In this paper, we consider the adversarial detection problem under the robust optimization framework.", "We partition the input space into subspaces and train adversarial robust subspace detectors using asymmetrical adversarial training (AAT).", "The integration of the classifier and detectors presents a detection mechanism that provides a performance guarantee to the adversary it considered.", "We demonstrate that AAT promotes the learning of class-conditional distributions, which further gives rise to generative detection/classification approaches that are both robust and more interpretable.", "We provide comprehensive evaluations of the above methods, and demonstrate their competitive performances and compelling properties on adversarial detection and robust classification problems.", "Deep neural networks have become the staple of modern machine learning pipelines, achieving stateof-the-art performance on extremely difficult tasks in various applications such as computer vision (He et al., 2016) , speech recognition (Amodei et al., 2016) , machine translation (Vaswani et al., 2017) , robotics (Levine et al., 2016) , and biomedical image analysis (Shen et al., 2017) .", "Despite their outstanding performance, these networks are shown to be vulnerable against various types of adversarial attacks, including evasion attacks (aka, inference or perturbation attacks) (Szegedy et al., 2013; Goodfellow et al., 2014b; Carlini & Wagner, 2017b; Su et al., 2019) and poisoning attacks (Liu et al., 2017; Shafahi et al., 2018) .", "These vulnerabilities in deep neural networks hinder their deployment in sensitive domains including, but not limited to, health care, finances, autonomous driving, and defense-related applications and have become a major security concern.", "Due to the mentioned vulnerabilities, there has been a recent surge toward designing defense mechanisms against adversarial attacks (Gu & Rigazio, 2014; Jin et al., 2015; Papernot et al., 2016b; Bastani et al., 2016; Madry et al., 2017; Sinha et al., 2018) , which has in turn motivated the design of stronger attacks that defeat the proposed defenses (Goodfellow et al., 2014b; Kurakin et al., 2016b; a; Carlini & Wagner, 2017b; Xiao et al., 2018; Athalye et al., 2018; Chen et al., 2018; He et al., 2018) .", "Besides, the proposed defenses have been shown to be limited and often not effective and easy to overcome (Athalye et al., 2018) .", "Alternatively, a large body of work has focused on detection of adversarial examples (Bhagoji et al., 2017; Feinman et al., 2017; Gong et al., 2017; Grosse et al., 2017; Metzen et al., 2017; Hendrycks & Gimpel, 2017; Li & Li, 2017; Xu et al., 2017; Pang et al., 2018; Roth et al., 2019; Bahat et al., 2019; Ma et al., 2018; Zheng & Hong, 2018; Tian et al., 2018) .", "While training robust classifiers focuses on maintaining performance in presence of adversarial examples, adversarial detection only cares for detecting these examples.", "The majority of the current detection mechanisms focus on non-adaptive threats, for which the attacks are not specifically tuned/tailored to bypass the detection mechanism, and the attacker is oblivious 
to the detection mechanism.", "In fact, Carlini & Wagner (2017a) and Athalye et al. (2018) showed that the detection methods presented in (Bhagoji et al., 2017; Feinman et al., 2017; Gong et al., 2017; Grosse et al., 2017; Metzen et al., 2017; Hendrycks & Gimpel, 2017; Li & Li, 2017; Ma et al., 2018) , are significantly less effective than their claimed performances under adaptive attacks.", "The current solutions are mostly heuristic approaches that cannot provide performance guarantees to the adversary they considered.", "In this paper, we are interested in detection mechanisms for adversarial examples that can withstand adaptive attacks.", "Unlike previous approaches that assume adversarial and natural samples coming from different distributions, thus rely on using a single classifier to distinguish between them, we instead partition the input space into subspaces based on the classification system's output and perform adversarial/natural sample classification in these subspaces.", "Importantly, the mentioned partitions allow us to drop the adversarial constrain and employ a novel asymmetrical adversarial training (AAT) objective to train robust binary classifiers in the subspaces.", "Figure 1 demonstrates our idea of space partitioning and robust detector training.", "Our qualitative results show that AAT supports detectors to learn class-conditional distributions, which further motivates generative detection/classification solutions that are both robust and interpretable.", "Our specific contributions are:", "• We develop adversarial example detection techniques that provide performance guarantees to norm constrained adversaries.", "Empirically, our best models improve previous state-ofthe-art mean L 2 distortion from 3.68 to 4.47 on the MNIST dataset, and from 1.1 to 1.5 on the CIFAR10 dataset.", "• We study powerful and versatile generative classification models derived from our detection framework and demonstrate their competitive performances over discriminative robust classifiers.", "While defense mechanisms based on ordinary adversarial training are vulnerable to unrecognizable inputs (e.g., rubbish examples), inputs that cause confident predictions of our models have human-understandable semantic meanings.", "• We demonstrate that AAT not only induces robustness as ordinary adversarial training methods do, but also promotes the learning of class-conditional distributions.", "Intuitively, the learning mechanism is similar to that of GANs, but the objective doesn't learn a fixed generator.", "On 1D and 2D benchmarking datasets we show this flexibility allows us to precisely control the data generation process such that the detector could be pushed to a good approximation of the underlying density function.", "(In case of GANs at the global optimum the discriminator converges to a degenerated uniform solution.)", "Our image generation results on CIFAR10 and ImageNet rival that of state-of-the-art GANs." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.051282044500112534, 0.21739129722118378, 0.19354838132858276, 0.22857142984867096, 0.1621621549129486, 0.1904761791229248, 0.307692289352417, 0.09677419066429138, 0.09677419066429138, 0.0833333283662796, 0.052631575614213943, 0.10256409645080566, 0.1090909019112587, 0.21052631735801697, 0.1818181723356247, 0.06666666269302368, 0.05714285373687744, 0.11428570747375488, 0.20338982343673706, 0.2380952388048172, 0.13333332538604736, 0.19512194395065308, 0, 0.24242423474788666, 0.13636362552642822, 0.25, 0.1702127605676651, 0.04878048226237297, 0.05714285373687744, 0.07999999821186066, 0.05882352590560913, 0.12903225421905518 ]
SJeQEp4YDH
true
[ "A new generative modeling technique based on asymmetrical adversarial training, and its applications to adversarial example detection and robust classification" ]
[ "Exploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. ", "In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. ", "Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant and hopper. ", "To make this approach feasible, we develop several pruning techniques, including learning to predict a program's success based on its syntactic properties. ", "We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well.", "Figure 1: Our RL agent is augmented with a curiosity module, obtained by meta-learning over a complex space of programs, which computes a pseudo-reward r at every time step.", "When an agent is learning to behave online, via reinforcement learning (RL), it is critical that it both explores its domain and exploits its rewards effectively.", "In very simple problems, it is possible to solve the problem optimally, using techniques of Bayesian decision theory (Ghavamzadeh et al., 2015) .", "However, these techniques do not scale at all well and are not effectively applicable to the problems addressable by modern deep RL, with large state and action spaces and sparse rewards.", "This difficulty has left researchers the task of designing good exploration strategies for RL systems in complex environments.", "One way to think of this problem is in terms of curiosity or intrisic motivation: constructing reward signals that augment or even replace the extrinsic reward from the domain, which induce the RL agent to explore their domain in a way that results in effective longer-term learning and behavior (Pathak et al., 2017; Burda et al., 2018; Oudeyer, 2018) .", "The primary difficulty with this approach is that researchers are hand-designing these strategies: it is difficult for humans to systematically consider the space of strategies or to tailor strategies for the distribution of environments an agent might be expected to face.", "We take inspiration from the curious behavior observed in young humans and other animals and hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in agent's life in order to expose it to experiences that enable it to learn to obtain high rewards over the course of its lifetime.", "We propose to formulate the problem of generating curious behavior as one of meta-learning: an outer loop, operating at \"evolutionary\" scale will search over a space of algorithms for generating curious behavior by dynamically adapting the agent's reward signal, and the inner loop will perform standard reinforcement learning using the adapted reward signal.", "This process is illustrated in figure 1; note that the aggregate agent, outlined in gray, has the standard interface of an RL agent.", "The inner RL 
algorithm is continually adapting to its input stream of states and rewards, attempting to learn a policy that optimizes the discounted sum of proxy rewards k≥0 γ k r t+k .", "The outer \"evolutionary\" search is attempting to find a program for the curiosity module, so to optimize the agent's lifetime return T t=0 r t , or another global objective like the mean performance on the last few trials.", "Although it is, in principle, possible to discover a complete, integrated algorithm for the entire curious learning agent in the gray box, that is a much more complex search problem that is currently computationally infeasible.", "We are relying on the assumption that the foundational methods for reinforcement learning, including those based on temporal differencing and policy gradient, are fundamentally sound and can serve as the behavior-learning basis for our agents.", "It is important to note, though, that the internal RL algorithm in our architecture must be able to tolerate a nonstationary reward signal, which may necessitate minor algorithmic changes or, at least, different hyperparameter values.", "In this meta-learning setting, our objective is to find a curiosity module that works well given a distribution of environments from which we can sample at meta-learning time.", "If the environment distribution is relatively low-variance (the tasks are all quite similar) then it might suffice to search over a relatively simple space of curiosity strategies (most trivially, the in an -greedy exploration strategy).", "Meta-RL has been widely explored recently, in some cases with a focus on reducing the amount of experience needed by initializing the RL algorithm well (Finn et al., 2017; Clavera et al., 2019) and, in others, for efficient exploration (Duan et al., 2016; Wang et al., 2017) .", "The environment distributions in these cases have still been relatively low-diversity, mostly limited to variations of the same task, such as exploring different mazes or navigating terrains of different slopes.", "We would like to discover curiosity mechanisms that can generalize across a much broader distribution of environments, even those with different state and action spaces: from image-based games, to joint-based robotic control tasks.", "To do that, we perform meta-learning in a rich, combinatorial, open-ended space of programs.", "This paper makes three novel contributions.", "We focus on a regime of meta-reinforcement-learning in which the possible environments the agent might face are dramatically disparate and in which the agent's lifetime is very long.", "This is a substantially different setting than has been addressed in previous work on meta-RL and it requires substantially different techniques for representation and search.", "We represent meta-learned curiosity strategies in a rich, combinatorial space of programs rather than in a fixed-dimensional numeric parameter space.", "The programs are represented in a domain-specific language (DSL) which includes sophisticated building blocks including neural networks complete with gradient-descent mechanisms, learned objective functions, ensembles, buffers, and other regressors.", "This language is rich enough to represent many previously reported hand-designed exploration algorithms.", "We believe that by performing meta-RL in such a rich space of mechanisms, we will be able to discover highly general, fundamental curiosity-based exploration methods.", "This generality means that a relatively computationally expensive meta-learning process can 
be amortized over the lifetimes of many agents in a wide variety of environments.", "We make the search over programs feasible with relatively modest amounts of computation.", "It is a daunting search problem to find a good solution in a combinatorial space of programs, where evaluating a single potential solution requires running an RL algorithm for up to millions of time steps.", "We address this problem in multiple ways.", "By including environments of substantially different difficulty and character, we can evaluate candidate programs first on relatively simple and short-horizon domains: if they don't perform well in those domains, they are pruned early, which saves a significant amount of computation time.", "In addition, we predict the performance of an algorithm from its structure and operations, thus trying the most promising algorithms early in our search.", "Finally, we also monitor the learning curve of agents and stop unpromising programs before they reach all T environment steps.", "We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well.", "In this work we show that programs are a powerful, succinct, representation for algorithms for generating curious exploration, and these programs can be meta-learned efficiently via active search.", "Results from this work are two-fold.", "First, by construction, algorithms resulting from this search will have broad generalization and will thus be a useful default for RL settings, where reliability is key.", "Second, the algorithm search code will be open-sourced to facilitate further research on exploration algorithms based on new ideas or building blocks, which can be added to the search.", "In addition, we note that the approach of meta-learning programs instead of network weights may have further applications beyond finding curiosity algorithms, such as meta-learning optimization algorithms or even meta-learning meta-learning algorithms." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.23076923191547394, 0.1492537260055542, 0.045454539358615875, 0.21276594698429108, 0.2083333283662796, 0.04651162400841713, 0.09090908616781235, 0.04081632196903229, 0.051282044500112534, 0.11594202369451523, 0.1090909019112587, 0.1538461446762085, 0.15625, 0.0952380895614624, 0.11538460850715637, 0.0714285671710968, 0.07843136787414551, 0.03999999538064003, 0.1090909019112587, 0.1702127605676651, 0.14814814925193787, 0.09836065024137497, 0.08163265138864517, 0.30188679695129395, 0.22857142984867096, 0.07407406717538834, 0.13333332538604736, 0.09302324801683426, 0.2631579041481018, 0.07999999821186066, 0.11764705181121826, 0.260869562625885, 0.13636362552642822, 0.11764705181121826, 0.11999999731779099, 0, 0.1355932205915451, 0.09090908616781235, 0.09756097197532654, 0.21276594698429108, 0.1702127605676651, 0, 0.1304347813129425, 0.04444443807005882, 0.2083333283662796 ]
BygdyxHFDS
true
[ "Meta-learning curiosity algorithms by searching through a rich space of programs yields novel mechanisms that generalize across very different reinforcement-learning domains." ]
[ "Many machine learning algorithms represent input data with vector embeddings or discrete codes.", "When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the the inputs’ learned representations.", "While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces.", "We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives.", "We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization.", "We have introduced a new evaluation method called TRE for generating graded judgments about compositional structure in representation learning problems where the structure of the observations is understood.", "TRE infers a set of primitive meaning representations that, when composed, approximate the observed representations, then measures the quality of this approximation.", "We have applied TRE-based analysis to four different problems in representation learning, relating compositionality to learning dynamics, linguistic compositionality, similarity and generalization.Many interesting questions regarding compositionality and representation learning remain open.", "The most immediate is how to generalize TRE to the setting where oracle derivations are not available; in this case Equation 2 must be solved jointly with an unsupervised grammar induction problem BID25 .", "Beyond this, it is our hope that this line of research opens up two different kinds of new work: better understanding of existing machine learning models, by providing a new set of tools for understanding their representational capacity; and better understanding of problems, by better understanding the kinds of data distributions and loss functions that give rise to compositionalor non-compositional representations of observations." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.05405404791235924, 0.23529411852359772, 0.3103448152542114, 0.2800000011920929, 0.42307692766189575, 0.35999998450279236, 0.1818181723356247, 0.2745097875595093, 0.1071428507566452, 0.19718308746814728 ]
HJz05o0qK7
true
[ "This paper proposes a simple procedure for evaluating compositional structure in learned representations, and uses the procedure to explore the role of compositionality in four learning problems." ]
[ "In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem.", "The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled constrained programming algorithm as a building block in the architecture to enforce constraints.", "With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (29.7% improvement in some cases in F1 scores and even larger improvement for pseudoknotted structures) and runs as efficient as the fastest algorithms in terms of inference time.", "Ribonucleic acid (RNA) is a molecule playing essential roles in numerous cellular processes and regulating expression of genes (Crick, 1970) .", "It consists of an ordered sequence of nucleotides, with each nucleotide containing one of four bases: Adenine (A), Guanine (G), Cytosine (C) and Uracile (U).", "This sequence of bases can be represented as", "x := (x 1 , . . . , x L ) where x i ∈ {A, G, C, U }, which is known as the primary structure of RNA.", "The bases can bond with one another to form a set of base-pairs, which defines the secondary structure.", "A secondary structure can be represented by a binary matrix A * where A * ij = 1 if the i, j-th bases are paired (Fig 1) .", "Discovering the secondary structure of RNA is important for understanding functions of RNA since the structure essentially affects the interaction and reaction between RNA and other cellular components.", "Although secondary structure can be determined by experimental assays (e.g. X-ray diffraction), it is slow, expensive and technically challenging.", "Therefore, computational prediction of RNA secondary structure becomes an important task in RNA research and is useful in many applications such as drug design (Iorns et al., 2007) .", "(ii) Pseudo-knot", "(i) Nested Structure Research on computational prediction of RNA secondary structure from knowledge of primary structure has been carried out for decades.", "Most existing methods assume the secondary structure is a result of energy minimization, i.e., A * = arg min A E x (A).", "The energy function is either estimated by physics-based thermodynamic experiments (Lorenz et al., 2011; Markham & Zuker, 2008) or learned from data (Do et al., 2006) .", "These approaches are faced with a common problem that the search space of all valid secondary structures is exponentially-large with respect to the length L of the sequence.", "To make the minimization tractable, it is often assumed the base-pairing has a nested structure (Fig 2 left) , and the energy function factorizes pairwisely.", "With this assumption, dynamic programming (DP) based algorithms can iteratively find the optimal structure for subsequences and thus consider an enormous number of structures in time O(L 3 ).", "Although DP-based algorithms have dominated RNA structure prediction, it is notable that they restrict the search space to nested structures, which excludes some valid yet biologically important RNA secondary structures that contain 'pseudoknots', i.e., elements with at least two non-nested base-pairs (Fig 2 right) .", "Pseudoknots make up roughly 1.4% of base-pairs (Mathews & Turner, 2006) , and are overrepresented in functionally important regions (Hajdin et al., 2013; Staple & Butcher, 2005) .", "Furthermore, pseudoknots are present in around 40% 
of the RNAs.", "They also assist folding into 3D structures (Fechter et al., 2001 ) and thus should not be ignored.", "To predict RNA structures with pseudoknots, energy-based methods need to run more computationally intensive algorithms to decode the structures.", "In summary, in the presence of more complex structured output (i.e., pseudoknots), it is challenging for energy-based approaches to simultaneously take into account the complex constraints while being efficient.", "In this paper, we adopt a different viewpoint by assuming that the secondary structure is the output of a feed-forward function, i.e., A * = F θ (x), and propose to learn θ from data in an end-to-end fashion.", "It avoids the second minimization step needed in energy function based approach, and does not require the output structure to be nested.", "Furthermore, the feed-forward model can be fitted by directly optimizing the loss that one is interested in.", "Despite the above advantages of using a feed-forward model, the architecture design is challenging.", "To be more concrete, in the RNA case, F θ is difficult to design for the following reasons:", "(i) RNA secondary structure needs to obey certain hard constraints (see details in Section 3), which means certain kinds of pairings cannot occur at all (Steeg, 1993) .", "Ideally, the output of F θ needs to satisfy these constraints.", "(ii) The number of RNA data points is limited, so we cannot expect that a naive fully connected network can learn the predictive information and constraints directly from data.", "Thus, inductive biases need to be encoded into the network architecture.", "(iii) One may take a two-step approach, where a post-processing step can be carried out to enforce the constraints when F θ predicts an invalid structure.", "However, in this design, the deep network trained in the first stage is unaware of the post-processing stage, making less effective use of the potential prior knowledge encoded in the constraints.", "In this paper, we present an end-to-end deep learning solution which integrates the two stages.", "The first part of the architecture is a transformer-based deep model called Deep Score Network which represents sequence information useful for structure prediction.", "The second part is a multilayer network called Post-Processing Network which gradually enforces the constraints and restrict the output space.", "It is designed based on an unrolled algorithm for solving a constrained optimization.", "These two networks are coupled together and learned jointly in an end-to-end fashion.", "Therefore, we call our model E2Efold.", "By using an unrolled algorithm as the inductive bias to design Post-Processing Network, the output space of E2Efold is constrained (see Fig 3 for an illustration), which makes it easier to learn a good model in the case of limited data and also reduces the overfitting issue.", "Yet, the constraints encoded in E2Efold are flexible enough such that pseudoknots are included in the output space.", "In summary, E2Efold strikes a nice balance between model biases for learning and expressiveness for valid RNA structures.", "We conduct extensive experiments to compare E2Efold with state-of-the-art (SOTA) methods on several RNA benchmark datasets, showing superior performance of E2Efold including:", "• being able to predict valid RNA secondary structures including pseudoknots;", "• running as efficient as the fastest algorithm in terms of inference time;", "• producing structures that are visually close to the true 
structure;", "• better than previous SOTA in terms of F1 score, precision and recall.", "Although in this paper we focus on RNA secondary structure prediction, which presents an important and concrete problem where E2Efold leads to significant improvements, our method is generic and can be applied to other problems where constraints need to be enforced or prior knowledge is provided.", "We imagine that our design idea of learning unrolled algorithm to enforce constraints can also be transferred to problems such as protein folding and natural language understanding problems (e.g., building correspondence structure between different parts in a document).", "We propose a novel DL model, E2Efold, for RNA secondary structure prediction, which incorporates hard constraints in its architecture design.", "Comprehensive experiments are conducted to show the superior performance of E2Efold, no matter on quantitative criteria, running time, or visualization.", "Further studies need to be conducted to deal with the RNA types with less samples.", "Finally, we believe the idea of unrolling constrained programming and pushing gradient through post-processing can be generic and useful for other constrained structured prediction problems.", "Here we explain the difference between our approach and other works on unrolling optimization problems.", "First, our view of incorporating constraints to reduce output space and to reduce sample complexity is novel.", "Previous works (Hershey et al., 2014; Belanger et al., 2017; Ingraham et al., 2018) did not discuss these aspects.", "The most related work which also integrates constraints is OptNet (Amos & Kolter, 2017) , but its very expensive and can not scale to the RNA problem.", "Therefore, our proposed approach is a simple and effective one.", "Second, compared to (Chen et al., 2018; Shrivastava et al., 2019) , our approach has a different purpose of using the algorithm.", "Their goal is to learn a better algorithm, so they commonly make their architecture more flexible than the original algorithm for the room of improvement.", "However, we aim at enforcing constraints.", "To ensure that constraints are nicely incorporated, we keep the original structure of the algorithm and only make the hyperparameters learnable.", "Finally, although all works consider end-to-end training, none of them can directly optimize the F1 score.", "We proposed a differentiable loss function to mimic the F1 score/precision/recall, which is effective and also very useful when negative samples are much fewer than positive samples (or the inverse)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3829787075519562, 0.42553192377090454, 0.1269841194152832, 0.051282044500112534, 0.0476190410554409, 0, 0.1818181723356247, 0.2702702581882477, 0.1860465109348297, 0.25, 0.10256409645080566, 0.21739129722118378, 0.20512819290161133, 0.1860465109348297, 0, 0.1395348757505417, 0.0952380895614624, 0.2083333283662796, 0.2222222238779068, 0.04347825422883034, 0.13793103396892548, 0, 0.1666666567325592, 0.2083333283662796, 0.24561403691768646, 0.19999998807907104, 0.17142856121063232, 0.1249999925494194, 0.277777761220932, 0.31111109256744385, 0.19999998807907104, 0.12765957415103912, 0.19999998807907104, 0.27272728085517883, 0.1395348757505417, 0.1764705777168274, 0.2857142686843872, 0.15789473056793213, 0.25, 0.1249999925494194, 0.07999999821186066, 0.29999998211860657, 0.1764705777168274, 0.1666666567325592, 0.09999999403953552, 0.19999998807907104, 0.19354838132858276, 0.13333332538604736, 0.0624999962747097, 0.3050847351551056, 0.24561403691768646, 0.5128204822540283, 0.10256409645080566, 0.1875, 0.0952380895614624, 0.05882352590560913, 0.11764705181121826, 0, 0.21739129722118378, 0, 0.1538461446762085, 0.23255813121795654, 0.07999999821186066, 0.21052631735801697, 0.05714285373687744, 0.12765957415103912 ]
S1eALyrYDH
true
[ "A DL model for RNA secondary structure prediction, which uses an unrolled algorithm in the architecture to enforce constraints." ]
[ "Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns.", "Instead, many experimental results on synaptic plasticity can be summarized as three-factor learning rules involving eligibility traces of the local neural activity and a third factor.", "We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three factor learning rules when derived for biophysical spiking neuron models.", "When tested on the TIMIT speech recognition benchmark, it is competitive with BPTT both for training artificial LSTM networks and spiking RNNs.", "Further analysis suggests that the diversity of learning signals and the consideration of slow internal neural dynamics are decisive to the learning efficiency of e-prop.", "The brain seems to be able to solve tasks such as counting, memorizing and reasoning which require efficient temporal processing capabilities.", "It is natural to model this with recurrent neural networks (RNNs), but their canonical training algorithm called backpropagation through time (BPTT) does not appear to be compatible with learning mechanisms observed in the brain.", "There, long-term changes of synaptic efficacies depend on the local neural activity.", "It was found that the precise timing of the electric pulses (i.e. spikes) emitted by the pre-and post-synaptic neurons matters, and these spike-timing dependent plasticity (STDP) changes can be conditioned or modulated by a third factor that is often thought to be a neuromodulator (see [1, 2] for reviews).", "Looking closely at the relative timing, the third factor affects the plasticity even if it arrives with a delay.", "This suggests the existence of local mechanisms that retain traces of the recent neural activity during this temporal gap and they are often referred to as eligibility traces [2] .", "To verify whether three factor learning rules can implement functional learning algorithms, researchers have simulated how interesting learnt behaviours can emerge from them [1, 3, 4] .", "The third factor is often considered as a global signal emitted when a reward is received or predicted, and this alone can solve learning tasks of moderate difficulty, even in RNNs [4] .", "Yet in feed-forward networks, it was already shown that plausible learning algorithms inspired by backpropagation and resulting in neuron-specific learning signals largely outperform the rules based on a global third factor [5, 6, 7] .", "This suggests that backpropagation provides important details that are not captured by all three factor learning rules.", "Here we aim at a learning algorithm for RNNs that is general and efficient like BPTT but remains plausible.", "A major plausibility issue of BPTT is that it requires to propagate errors backwards in time or to store the entire state space trajectory raising questions on how and where this is performed in the brain [8] .", "We suggest instead a rigorous re-analysis of gradient descent in RNNs that leads to a gradient computation relying on a diversity of learning signals (i.e. 
neuron-specific third factors) and a few eligibility traces per synapse.", "We refer to this algorithm as eligibility propagation (e-prop).", "When derived with spiking neurons, e-prop fits under the three factor learning rule framework and is qualitatively compatible with experimental data [2] .", "To test its learning efficiency, we applied e-prop to artificial Long Short-Term Memory (LSTM) networks [9] , and Long short-term memory Spiking Neural Networks (LSNNs) [10] (spiking RNNs combining short and long realistic time constants).", "We found that (1) it is competitive with BPTT on the TIMIT speech recognition benchmark, and (2) it can solve nontrivial temporal credit assignment problems with long delays.", "We are not aware of any comparable achievements with previous three factor learning rules.", "Real-time recurrent learning (RTRL) [11] computes the same loss gradients as BPTT in an online fashion but requires many more operations.", "Eventhough the method is online, one may wonder where can it be implemented in the brain if it requires a machinery bigger than the network itself.", "Recent works [12, 13, 6] have suggested that eligibility traces can be used to approximate RTRL.", "This was shown to be feasible if the neurons do not have recurrent connections [6] , if the recurrent connections are ignored during learning [12] or if the network dynamics are approximated with a trained estimator [13] .", "However these algorithms were derived for specific neuron models without long-short term memory, making it harder to tackle challenging RNN benchmark tasks (no machine learning benchmarks were considered in [6, 12] ).", "Other mathematical methods [14, 15] , have suggested approximations to RTRL which are compatible with complex neuron models.", "Yet those methods lead to gradient estimates with a high variance [15] or requiring heavier computations when the network becomes large [14, 11] .", "This issue was solved in e-prop, as the computational and memory costs are the same (up to constant factor) as for running any computation with the RNN.", "This reduction of the computational load arises from an essential difference between e-prop and RTRL: e-prop computes the same loss gradients but only propagates forward in time the terms that can be computed locally.", "This provides a new interpretation of eligibility traces that is mathematically grounded and generalizes to a broad class of RNNs.", "Our empirical results show that such traces are sufficient to approach the performance of BPTT despite a simplification of the non-local learning signal, but we believe that more complex strategies for computing a learning signals can be combined with e-prop to yield even more powerful online algorithms.", "A separate paper presents one such example to enable one-shot learning in recurrent spiking neural networks [8] ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07999999821186066, 0.2916666567325592, 0.23529411852359772, 0.22727271914482117, 0.1904761791229248, 0.0952380895614624, 0.18518517911434174, 0.11764705181121826, 0.1515151411294937, 0.10256409645080566, 0.1666666567325592, 0.04347825422883034, 0.11538460850715637, 0.14814814925193787, 0.10526315122842789, 0.24390242993831635, 0.2181818187236786, 0.2641509473323822, 0.25806450843811035, 0.3255814015865326, 0.1090909019112587, 0.2916666567325592, 0.1666666567325592, 0.1395348757505417, 0.04444443807005882, 0.15789473056793213, 0.11538460850715637, 0.15094339847564697, 0.14999999105930328, 0.08888888359069824, 0.1304347813129425, 0.11320754140615463, 0.25, 0.16129031777381897, 0.10256409645080566 ]
SkxJ4QKIIS
true
[ "We present eligibility propagation an alternative to BPTT that is compatible with experimental data on synaptic plasticity and competes with BPTT on machine learning benchmarks." ]
[ "Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems.", "RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features.", "In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features.", "The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior.", "We present results of this approach on synthetic environments and six Atari games.", "The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy.", "We also show that these finite policy representations lead to improved interpretability.", "Deep reinforcement learning (RL) and imitation learning (IL) have demonstrated impressive performance across a wide range of applications.", "Unfortunately, the learned policies are difficult to understand and explain, which limits the degree that they can be trusted and used in high-stakes applications.", "Such explanations are particularly problematic for policies represented as recurrent neural networks (RNNs) BID16 BID14 , which are increasingly used to achieve state-of-the-art performance BID15 BID21 .", "This is because RNN policies use internal memory to encode features of the observation history, which are critical to their decision making, but extremely difficult to interpret.", "In this paper, we take a step towards comprehending and explaining RNN policies by learning more compact memory representations.Explaining RNN memory is challenging due to the typical use of high-dimensional continuous memory vectors that are updated through complex gating networks (e.g. 
LSTMs, GRUs BID10 BID5 ).", "We hypothesize that, in many cases, the continuous memory is capturing and updating one or more discrete concepts.", "If exposed, such concepts could significantly aid explainability.", "This motivates attempting to quantize the memory and observation representation used by an RNN to more directly capture those concepts.", "In this case, understanding the memory use can be approached by manipulating and analyzing the quantized system.", "Of course, not all RNN policies will have compact quantized representations, but many powerful forms of memory usage can be captured in this way.Our main contribution is to introduce an approach for transforming an RNN policy with continuous memory and continuous observations to a finite-state representation known as a Moore Machine.", "To accomplish this we introduce the idea of Quantized Bottleneck Network (QBN) insertion.", "QBNs are simply auto-encoders, where the latent representation is quantized.", "Given a trained RNN, we train QBNs to encode the memory states and observation vectors that are encountered during the RNN operation.", "We then insert the QBNs into the trained RNN policy in place of the \"wires\" that propagated the memory and observation vectors.", "The combination of the RNN and QBN results in a policy represented as a Moore Machine Network (MMN) with quantized memory and observations that is nearly equivalent to the original RNN.", "The MMN can be used directly or fine-tuned to improve on inaccuracies introduced by QBN insertion.While training quantized networks is often considered to be quite challenging, we show that a simple approach works well in the case of QBNs.", "In particular, we demonstrate that \"straight through\" gradient estimators as in BID1 BID6 are quite effective.We present experiments in synthetic domains designed to exercise different types of memory use as well as benchmark grammar learning problems.", "Our approach is able to accurately extract the ground-truth MMNs, providing insight into the RNN memory use.", "We also did experiments on 6 Atari games using RNNs that achieve state-of-the-art performance.", "We show that in most cases it is possible to extract near-equivalent MMNs and that the MMNs can be surprisingly small.", "Further, the extracted MMNs give insights into the memory usage that are not obvious based on just observing the RNN policy in action.", "For example, we identify games where the RNNs do not use memory in a meaningful way, indicating the RNN is implementing purely reactive control.", "In contrast, in other games, the RNN does not use observations in a meaningful way, which indicates that the RNN is implementing an open-loop controller." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.04999999329447746, 0.14999999105930328, 0.1428571343421936, 0.1818181723356247, 0.13333332538604736, 0.1249999925494194, 0.10810810327529907, 0.0476190410554409, 0.13333332538604736, 0.08888888359069824, 0.0923076868057251, 0.052631575614213943, 0, 0.051282044500112534, 0.0555555522441864, 0.12121211737394333, 0.12121211737394333, 0.06666666269302368, 0.09756097197532654, 0.10256409645080566, 0.1702127605676651, 0.13793103396892548, 0.07407406717538834, 0.0555555522441864, 0.1764705777168274, 0.051282044500112534, 0.09756097197532654, 0.09302324801683426, 0.0952380895614624 ]
S1gOpsCctm
true
[ "Extracting a finite state machine from a recurrent neural network via quantization for the purpose of interpretability with experiments on Atari." ]
[ "Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies.", "On-policy methods bring many benefits, such as ability to evaluate each resulting policy.", "However, they usually discard all the information about the policies which existed before.", "In this work, we propose adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to the on-policy algorithms.", "To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies.", "The method uses trust region optimisation, while avoiding some of the common problems of the algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as the trainable covariance matrix instead of the fixed one.", "In many cases, the method not only improves the results comparing to the state-of-the-art trust region on-policy learning algorithms such as ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG. ", "The past few years have been marked by active development of reinforcement learning methods.", "Although the mathematical foundations of reinforcement learning have been known long before BID23 , starting from 2013, the novel deep learning techniques allowed to solve vision based discrete control tasks such as Atari 2600 games BID15 as well as continuous control problems BID12 .", "Many of the leading state-of-the-art reinforcement learning methods share the actor-critic architecture BID5 .", "Actorcritic methods separate the actor, providing a policy, and the critic, providing an approximation for the expected discounted cumulative reward or some derived quantities such as advantage functions BID2 .", "However, despite improvements, state-of-the-art reinforcement learning still suffers from poor sample efficiency and extensive parameterisation.", "For most real-world applications, in contrast to simulations, there is a need to learn in real time and over a limited training period, while minimising any risk that would cause damage to the actor or the environment.Reinforcement learning algorithms can be divided into two groups: on-policy and off-policy learning.", "On-policy approaches (e. g., SARSA BID18 , ACKTR BID28 ) evaluate the target policy by assuming that future actions will be chosen according to it, hence the exploration strategy must be incorporated as a part of the policy.", "Off-policy methods (e. 
g., Qlearning BID27 , DDPG BID12 ) separate the exploration strategy, which modifies the policy to explore different states, from the target policy.The off-policy methods commonly use the concept of replay buffers to memorise the outcomes of the previous policies and therefore exploit the information accumulated through the previous iterations BID13 .", "BID15 combined this experience replay mechanism with Deep Q-Networks (DQN), demonstrating end-to-end learning on Atari 2600 games.", "One limitation of DQN is that it can only operate on discrete action spaces.", "BID12 proposed an extension of DQN to handle continuous action spaces based on the Deep Deterministic Policy Gradient (DDPG).", "There, exponential smoothing of the target actor and critic weights has been introduced to ensure stability of the rewards and critic predictions over the subsequent iterations.", "In order to improve the variance of policy gradients, BID20 proposed a Generalised Advantage Function.", "combined this advantage function learning with a parallelisation of exploration using differently trained actors in their Asynchronous Advantage Actor Critic model (A3C); however, BID26 demonstrated that such parallelisation may also have negative impact on sample efficiency.", "Although some work has been performed on improvement of exploratory strategies for reinforcement learning BID8 , but it still does not solve the fundamental restriction of inability to evaluate the actual policy, neither it removes the necessity to provide a separate exploratory strategy as a separate part of the method.In contrast to those, state-of-the-art on-policy methods have many attractive properties: they are able to evaluate exactly the resulting policy with no need to provide a separate exploration strategy.", "However, they suffer from poor sample efficiency, to a larger extent than off-policy reinforcement learning.", "TRPO method BID19 has introduced trust region policy optimisation to explicitly control the speed of policy evolution of Gaussian policies over time, expressed in a form of Kullback-Leibler divergence, during the training process.", "Nevertheless, the original TRPO method suffered from poor sample efficiency in comparison to off-policy methods such as DDPG.", "One way to solve this issue is by replacing the first order gradient descent methods, standard for deep learning, with second order natural gradient (Amari, 1998).", "BID28 used a Kroneckerfactored Approximate Curvature (K-FAC) optimiser BID14 in their ACKTR method.", "PPO method proposes a number of modifications to the TRPO scheme, including changing the objective function formulation and clipping the gradients.", "BID26 proposed another approach in their ACER algorithm: in this method, the target network is still maintained in the off-policy way, similar to DDPG BID12 , while the trust region constraint is built upon the difference between the current and the target network.Related to our approach, recently a group of methods has appeared in an attempt to get the benefits of both groups of methods.", "BID7 propose interpolated policy gradient, which uses the weighted sum of both stochastic BID24 and deterministic policy gradient BID22 .", "BID17 propose an off-policy trust region method, Trust-PCL, which exploits off-policy data within the trust regions optimisation framework, while maintaining stability of optimisation by using relative entropy regularisation.While it is a common practice to use replay buffers for the off-policy reinforcement learning, their 
existing concept is not used in combination with the existing on-policy scenarios, which results in discarding all policies but the last.", "Furthermore, many on-policy methods, such as TRPO BID19 , rely on stochastic policy gradient BID24 , which is restricted by stationarity assumptions, in a contrast to those based on deterministic policy gradient BID22 , like DDPG BID12 .", "In this article, we describe a novel reinforcement learning algorithm, allowing the joint use of replay buffers with trust region optimisation and leading to sample efficiency improvement.", "The contributions of the paper are given as follows:1.", "a reinforcement learning method, enabling replay buffer concept along with on-policy data;", "2. theoretical insights into the replay buffer usage within the on-policy setting are discussed;", "3. we show that, unlike the state-of-the-art methods as ACKTR BID28 , PPO (Schulman et al., 2017) and TRPO BID19 , a single non-adaptive set of hyperparameters such as the trust region radius is sufficient for achieving better performance on a number of reinforcement learning tasks.As we are committed to make sure the experiments in our paper are repeatable and to further ensure their acceptance by the community, we will release our source code shortly after the publication.", "The paper combines replay buffers and on-policy data for reinforcement learning.", "Experimental results on various tasks from the MuJoCo suite BID25 show significant improvements compared to the state of the art.", "Moreover, we proposed a replacement of the heuristically calculated trust region parameters, to a single fixed hyperparameter, which also reduces the computational expences, and a trainable diagonal covariance matrix.The proposed approach opens the door to using a combination of replay buffers and trust regions for reinforcement learning problems.", "While it is formulated for continuous tasks, it is possible to reuse the same ideas for discrete reinforcement learning tasks, such as ATARI games." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6829268336296082, 0, 0.12903225421905518, 0.2631579041481018, 0.277777761220932, 0.07843136787414551, 0.1599999964237213, 0.24242423474788666, 0.17543859779834747, 0.25806450843811035, 0.08888888359069824, 0.23529411852359772, 0.12903225421905518, 0.11320754140615463, 0.16129031777381897, 0.0555555522441864, 0.060606054961681366, 0.10526315122842789, 0.14999999105930328, 0.11764705181121826, 0.07407406717538834, 0.15189872682094574, 0.1764705777168274, 0.1249999925494194, 0.10810810327529907, 0.09302324801683426, 0, 0.15789473056793213, 0.0882352888584137, 0.1621621549129486, 0.19178082048892975, 0.07843136787414551, 0.260869562625885, 0.1428571343421936, 0.19354838132858276, 0.1875, 0.1428571343421936, 0.3333333432674408, 0.1621621549129486, 0.17241378128528595, 0.1538461446762085 ]
B1MB5oRqtQ
true
[ "We investigate the theoretical and practical evidence of on-policy reinforcement learning improvement by reusing the data from several consecutive policies." ]
[ "Convolutional Neural Networks (CNN) have been successful in processing data signals that are uniformly sampled in the spatial domain (e.g., images).", "However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss.", "Moreover, signals can exist in different topological structures as, for example, points, lines, surfaces and volumes.", "It has been challenging to analyze signals with mixed topologies (for example, point cloud with surface mesh).", "To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms (NUFT) to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error.", "The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum.", "Our representation has four distinct advantages: (1) the process causes no spatial sampling error during initial sampling, (2) the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, (3) it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad-hoc fashion, and (4) the representation allows weighted meshes where each element has a different weight (i.e., texture) indicating local properties.", "We achieve good results on-par with state-of-the-art for 3D shape retrieval task, and new state-of-the-art for point cloud to surface reconstruction task.", "We present a unifying and novel geometry representation for utilizing Convolutional Neural Networks (CNNs) on geometries represented on weighted simplex meshes (including textured point clouds, line meshes, polygonal meshes, and tetrahedral meshes) which preserve maximal shape information based on the Fourier transformation.", "Most methods that leverage CNNs for shape learning preprocess these shapes into uniform-grid based 2D images (rendered multiview images) or 3D images (binary voxel or Signed Distance Function (SDF)).", "However, rendered 2D images do not preserve the 3D topologies of the original shapes due to occlusions and the loss of the third spatial dimension.", "Binary voxels and SDF representations under low resolution suffer big aliasing errors and under high resolution become memory inefficient.", "Loss of information in the input bottlenecks the effectiveness of the downstream learning process.", "Moreover, it is not clear how a weighted mesh where each element is weighted by a different scalar or vector (i.e., texture) can be represented by binary voxels and SDF.", "Mesh and graph based CNNs perform learning on the manifold physical space or graph spectrum, but generality across topologies remains challenging.In contrast to methods that operate on uniform sampling based representations such as voxel-based and view-based models, which suffer significant representational errors, we use analytical integration to precisely sample in the spectral domain to avoid sample aliasing errors.", "Unlike graph spectrum based methods, our method naturally generalize across input data structures of varied topologies.", "Using our representation, CNNs can be directly applied in the corresponding physical domain obtainable by inverse Fast Fourier Transform (FFT) due to the equivalence of the spectral and 
physical domains.", "This allows for the use of powerful uniform Cartesian grid based CNN backbone architectures (such as DLA BID40 , ResNet (He et al., 2016) ) for the learning task on arbitrary geometrical signals.", "Although the signal is defined on a simplex mesh, it is treated as a signal in the Euclidean space instead of on a graph, differentiating our framework from graph-based spectral learning techniques which have significant difficulties generalizing across topologies and unable to utilize state-of-the-art Cartesian CNNs.We evaluate the effectiveness of our shape representation for deep learning tasks with three experiments: a controlled MNIST toy example, the 3D shape retrieval task, and a more challenging 3D point cloud to surface reconstruction task.", "In a series of evaluations on different tasks, we show the unique advantages of this representation, and good potential for its application in a wider range of shape learning problems.", "We achieve state-of-the-art performance among non-pre-trained models for the shape retrieval task, and beat state-of-the-art models for the surface reconstruction task.The key contributions of our work are as follows:• We develop mathematical formulations for performing Fourier Transforms of signals defined on a simplex mesh, which generalizes and extends to all geometries in all dimensions.", "(Sec.3)• We analytically show that our approach computes the frequency domain representation precisely, leading to much lower overall representational errors.", "(Sec. 3)• We empirically show that our representation preserves maximal shape information compared to commonly used binary voxel and SDF representations.", "(Sec. 4.1)• We show that deep learning models using CNNs in conjunction with our shape representation achieves state-of-the-art performance across a range of shape-learning tasks including shape retrieval (Sec. 4.2) and point to surface reconstruction (Sec. 4.3) DISPLAYFORM0 Index of the n-th element among a total of N elements Ω j n Domain of n-th element of order j x Cartesian space coordinate vector.", "DISPLAYFORM1 Imaginary number unit Shape learning involves the learning of a mapping from input geometrical signals to desired output quantities.", "The representation of geometrical signals is key to the learning process, since on the one hand the representation determines the learning architectures, and, on the other hand, the richness of information preserved by the representation acts as a bottleneck to the downstream learning process.", "While data representation has not been an open issue for 2D image learning, it is far from being agreed upon in the existing literature for 3D shape learning.", "The varied shape representations used in 3D machine learning are generally classified as multiview images BID28 BID26 BID2 , volumetric voxels BID38 BID14 BID37 BID0 , point clouds BID19 BID24 BID36 , polygonal meshes BID3 BID34 BID15 BID12 , shape primitives BID42 BID39 , and hybrid representations (Dai & Nießner, 2018) .Our", "proposed representation is closest to volumetric voxel representation, since the inverse Fourier Transform of the spectral signal in physical domain is a uniform grid implicit representation of the shape. However", ", binary voxel representation suffers from significant aliasing errors during the uniform sampling step in the Cartesian space BID16 . Using", "boolean values for de facto floating point numbers during CNN training is a waste of information processing power. 
Also,", "the primitive-in-cell test for binarization requires arbitrary grouping in cases such as having multiple points or planes in the same cell BID32 . Signed", "Distance Function (SDF) or Truncated Signed Distance Function (TSDF) Canelhas, 2017) provides localization for the shape boundary, but is still constrained to linear surface localization due to the linear interpolation process for recovering surfaces from grids. Our proposed", "representation under Fourier basis can find nonlinear surface boundaries, achieving subgrid-scale accuracy (See FIG1 ).", "We present a general representation for multidimensional signals defined on simplicial complexes that is versatile across geometrical deep learning tasks and maximizes the preservation of shape information.", "We develop a set of mathematical formulations and algorithmic tools to perform the transformations efficiently.", "Last but not least, we illustrate the effectiveness of the NUFT representation with a well-controlled example (MNIST polygon), a classic 3D task (shape retrieval) and a difficult and mostly unexplored task by deep learning (point to surface reconstruction), achieving new state-of-the-art performance in the last task.", "In conclusion, we offer an alternative representation for performing CNN based learning on geometrical signals that shows great potential in various 3D tasks, especially tasks involving mixed-topology signals." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.039215680211782455, 0.10526315122842789, 0.08888888359069824, 0, 0.1818181723356247, 0.04255318641662598, 0.11881187558174133, 0.16326530277729034, 0.11940298229455948, 0.1428571343421936, 0.11999999731779099, 0, 0.14999999105930328, 0.07017543166875839, 0.09999999403953552, 0.04444443807005882, 0.1428571343421936, 0.13333332538604736, 0.1914893537759781, 0.178571417927742, 0.18666666746139526, 0.03999999538064003, 0.07999999821186066, 0.14457830786705017, 0.1249999925494194, 0.1355932205915451, 0.2142857164144516, 0.10666666179895401, 0.14814814925193787, 0.0833333283662796, 0.1249999925494194, 0.07999999821186066, 0.03333332762122154, 0.09090908616781235, 0.2857142686843872, 0.13636362552642822, 0.23529411852359772, 0.1428571343421936 ]
B1G5ViAqFm
true
[ "We use non-Euclidean Fourier Transformation of shapes defined by a simplicial complex for deep learning, achieving significantly better results than point-based sampling techiques used in current 3D learning literature." ]
[ "Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences.", "Traditional score-based casual discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function.", "While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions.", "Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring.", "Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards.", "The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity.", "In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward.", "We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint.", "Discovering and understanding causal mechanisms underlying natural phenomena are important to many disciplines of sciences.", "An effective approach is to conduct controlled randomized experiments, which however is expensive or even impossible in certain fields such as social sciences (Bollen, 1989) and bioinformatics (Opgen-Rhein and Strimmer, 2007) .", "Causal discovery methods that infer causal relationships from passively observable data are hence attractive and have been an important research topic in the past decades (Pearl, 2009; Spirtes et al., 2000; Peters et al., 2017) .", "A major class of such causal discovery methods are score-based, which assign a score S(G), typically computed with the observed data, to each directed graph G and then search over the space of all Directed Acyclic Graphs (DAGs) for the best scoring: min G S(G), subject to G ∈ DAGs.", "(1)", "While there have been well-defined score functions such as the Bayesian Information Criterion (BIC) or Minimum Description Length (MDL) score (Schwarz, 1978; Chickering, 2002) and the Bayesian Gaussian equivalent (BGe) score (Geiger and Heckerman, 1994) for linear-Gaussian models, Problem (1) is generally NP-hard to solve (Chickering, 1996; Chickering et al., 2004) , largely due to the combinatorial nature of its acyclicity constraint with the number of DAGs increasing superexponentially in the number of graph nodes.", "To tackle this problem, most existing approaches rely on local heuristics to enforce the acyclicity.", "For example, Greedy Equivalence Search (GES) enforces acyclicity one edge at a time, explicitly checking for the acyclicity constraint when an edge is added.", "GES is known to find the global minimizer with infinite samples under suitable assumptions (Chickering, 2002; Nandy et al., 2018) , but this is not guaranteed in the finite sample regime.", "There are also hybrid methods that use constraint-based approaches to reduce the search space before applying score-based methods, e.g., the max-min hill climbing method (Tsamardinos et al., 2006) .", "However, this methodology lacks a principled way of choosing a problem-specific combination of score functions and 
search strategies.", "Recently, Zheng et al. (2018) introduced a smooth characterization for the acyclicity constraint, and Problem (1) can be formulated as a continuous optimization problem w.r.t. the weighted graph adjacency matrix by picking a proper loss function, e.g., the least squares loss.", "Subsequent works Yu et al. (2019) and Lachapelle et al. (2019) have also adopted the evidence lower bound and the negative log-likelihood as loss functions, respectively, and used Neural Networks (NNs) to model the causal relationships.", "Note that the loss functions in these methods must be carefully chosen in order to apply continuous optimization methods.", "Unfortunately, many effective score functions, e.g., the generalized score function proposed by Huang et al. (2018) and the independence based score function given by , either cannot be represented in closed forms or have very complicated equivalent loss functions, and thus cannot be easily combined with this approach.", "We propose to use Reinforcement Learning (RL) to search for the DAG with the best score according to a predefined score function, as outlined in Figure 1 .", "The insight is that an RL agent with stochastic policy can determine automatically where to search given the uncertainty information of the learned policy, which gets updated promptly by the stream of reward signals.", "To apply RL to causal discovery, we use an encoder-decoder NN model to generate directed graphs from the observed data, which are then used to compute rewards consisting of the predefined score function as well as two penalty terms to enforce acyclicity.", "We resort to policy gradient and stochastic optimization methods to train the weights of the NNs, and our output is the graph that achieves the best reward, among all graphs generated in the training process.", "Experiments on both synthetic and real datasets show that our approach has a much improved search ability without sacrificing any flexibility in choosing score functions.", "In particular, the proposed approach using BIC as score function outperforms GES with the same score function on linear non-Gaussian acyclic model (LiNGAM) and linear-Gaussian datasets, and also outperforms recent gradient based methods when the causal relationships are nonlinear." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.0624999962747097, 0.20512819290161133, 0.11764705181121826, 0.05128204822540283, 0.1111111044883728, 0.1249999925494194, 0.07547169178724289, 0.2448979616165161, 0.1875, 0.08695651590824127, 0.11764705181121826, 0.13333332538604736, 0.04999999701976776, 0.1249999925494194, 0, 0.043478257954120636, 0.08695651590824127, 0.060606054961681366, 0.035087715834379196, 0.1304347813129425, 0.11764705181121826, 0.035087715834379196, 0.09999999403953552, 0.0416666604578495, 0.1111111044883728, 0.1304347813129425, 0.2857142686843872, 0.11999999731779099 ]
S1g2skStPB
true
[ "We apply reinforcement learning to score-based causal discovery and achieve promising results on both synthetic and real datasets" ]
[ "The topic modeling discovers the latent topic probability of given the text documents.", "To generate the more meaningful topic that better represents the given document, we proposed a universal method which can be used in the data preprocessing stage.", "The method consists of three steps.", "First, it generates the word/word-pair from every single document.", "Second, it applies a two way parallel TF-IDF algorithm to word/word-pair for semantic filtering.", "Third, it uses the k-means algorithm to merge the word pairs that have the similar semantic meaning.\n\n", "Experiments are carried out on the Open Movie Database (OMDb), Reuters Dataset and 20NewsGroup Dataset and use the mean Average Precision score as the evaluation metric.", "Comparing our results with other state-of-the-art topic models, such as Latent Dirichlet allocation and traditional Restricted Boltzmann Machines.", "Our proposed data preprocessing can improve the generated topic accuracy by up to 12.99\\%.", "How the number of clusters and the number of word pairs should be adjusted for different type of text document is also discussed.\n", "After millennium, most collective information are digitized to form an immense database distributed across the Internet.", "Among all, text-based knowledge is dominant because of its vast availability and numerous forms of existence.", "For example, news, articles, or even Twitter posts are various kinds of text documents.", "For the human, it is difficult to locate one's searching target in the sea of countless texts without a well-defined computational model to organize the information.", "On the other hand, in this big data era, the e-commerce industry takes huge advantages of machine learning techniques to discover customers' preference.", "For example, notifying a customer of the release of \"Star Wars: The Last Jedi\" if he/she has ever purchased the tickets for \"Star Trek Beyond\"; recommending a reader \"A Brief History of Time\" from Stephen Hawking in case there is a \"Relativity: The Special and General Theory\" from Albert Einstein in the shopping cart on Amazon.", "The content based recommendation is achieved by analyzing the theme of the items extracted from its text description.Topic modeling is a collection of algorithms that aim to discover and annotate large archives of documents with thematic information BID0 .", "Usually, general topic modeling algorithms do not require any prior annotations or labeling of the document while the abstraction is the output of the algorithms.", "Topic modeling enables us to convert a collection of large documents into a set of topic vectors.", "Each entry in this concise representation is a probability of the latent topic distribution.", "By comparing the topic distributions, we can easily calculate the similarity between two different documents.", "BID25 Some topic modeling algorithms are highly frequently used in text-mining BID13 , preference recommendation BID27 and computer vision BID28 .", "BID0 Many of the traditional topic models focus on latent semantic analysis with unsupervised learning.", "Latent Semantic Indexing (LSI) BID11 applies Singular-Value Decomposition (SVD) BID6 to transform the term-document matrix to a lower dimension where semantically similar terms are merged.", "It can be used to report the semantic distance between two documents, however, it does not explicitly provide the topic information.", "The Probabilistic Latent Semantic Analysis (PLSA) BID9 model uses maximum likelihood estimation to extract 
latent topics and topic word distribution, while the Latent Dirichlet Allocation (LDA) BID1 model performs iterative sampling and characterization to search for the same information.The availability of many manually categorized online documents, such as Internet Movie Database (IMDb) movie review Inc. (1990), Wikipedia articles, makes the training and testing of topics models possible.", "All of the existing workds are based on the bag-of-words model, where a document is considered as a collection of words.", "The semantic information of words and interaction among objects are assumed to be unknown during the model construction.", "Such simple representation can be improved by recent research advances in natural language processing and word embedding.", "In this paper, we will explore the existing knowledge and build a topic model using explicit semantic analysis.The work studies the best data processing and feature extraction algorithms for topic modeling and information retrieval.", "We investigate how the available semantic knowledge, which can be obtained from language analysis or from existing dictionary such as WordNet, can assist in the topic modeling.Our main contributions are:• We redesign a new topic model which combines two types of text features to be the model input.•", "We apply the numerical statistic algorithm to determine the key elements for each document dynamically.•", "We apply a vector quantization method to merge and filter text unit based on the semantic meaning.•", "We significantly improve the accuracy of the prediction using our proposed model. The", "rest of the paper is structured as follows: In Section 2, we review the existing methods, from which we got the inspirations. This", "is followed in Section 3 by details about our topic models. Section", "4 describes our experimental steps and evaluate the results. Finally", ", Section 5 concludes this work.", "In this paper, we proposed a few techniques to processes the dataset and optimized the original RBM model.", "During the dataset processing part, first, we used a semantic dependency parser to extract the word pairs from each sentence of the text document.", "Then, by applying a two way parallel TF-IDF processing, we filtered the data in word level and word pair level.", "Finally, Kmeans clustering algorithm helped us merge the similar word pairs and remove the noise from the feature dictionary.", "We replaced the original word only RBM model by introducing word pairs.", "At the end, we showed that proper selection of K value and word pair generation techniques can significantly improve the topic prediction accuracy and the document retrieval performance.", "With our improvement, experimental results have verified that, compared to original word only RBM model, our proposed word/word pair combined model can improve the mAP score up to 10.48% in OMDb dataset, up to 1.11% in Reuters dataset and up to 12.99% in the 20NewsGroup dataset." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.875, 0.06666666269302368, 0.12121211737394333, 0.10526315122842789, 0.14999999105930328, 0.04347825422883034, 0.0476190410554409, 0.3589743673801422, 0.13636362552642822, 0.09999999403953552, 0, 0, 0.1702127605676651, 0.17391303181648254, 0.08571428060531616, 0.1355932205915451, 0.13636362552642822, 0.1538461446762085, 0.21052631735801697, 0.15789473056793213, 0.13636362552642822, 0.10256409645080566, 0.1249999925494194, 0.27272728085517883, 0.0731707289814949, 0.1428571343421936, 0.1428571343421936, 0.1463414579629898, 0.145454540848732, 0.2769230604171753, 0.20512820780277252, 0.2380952388048172, 0.1666666567325592, 0.09090908616781235, 0.11428570747375488, 0.05882352590560913, 0, 0.19512194395065308, 0.21739129722118378, 0.1904761791229248, 0.04878048226237297, 0.11428570747375488, 0.20408162474632263, 0.158730149269104 ]
Byni8NLHf
true
[ "We proposed a universal method which can be used in the data preprocessing stage to generate the more meaningful topic that better represents the given document" ]
[ "Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets.", "By leveraging the relationships among different tasks, multi-task learning framework can improve the performance significantly.", "However, most of the existing works are under the assumption that the predefined tasks are related to each other.", "Thus, their applications on real-world are limited, because rare real-world problems are closely related.", "Besides, the understanding of relationships among tasks has been ignored by most of the current methods.", "Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructed attention based dependency relationships among different tasks.", "At the same time, the dependency relationship can be used to guide what knowledge should be transferred, thus the performance of our model also be improved.", "To show the effectiveness of our model and the importance of considering multi-level dependency relationship, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods.", "Multi-task learning (Caruana, 1997) aims to train a single model on multiple related tasks jointly, so that useful knowledge learned from one task can be transferred to enhance the generalization performance of other tasks.", "Over the last few years, different types of multi-task learning mechanisms (Sener & Koltun, 2018; Guo & Farooq, 2018; Ish, 2016; Lon, 2015) have been proposed and proved better than single-task learning methods from natural language processing (Palmer et al., 2017) and computer vision (Cortes et al., 2015) to chemical study (Ramsundar et al., 2015) .", "Despite the success of multi-task learning, when applying to 'discrete' data (graph/text), most of the current multi-task learning frameworks (Zamir et al., 2018; Ish, 2016) only leverage the general task dependency with the assumption that the task dependency remains the same for (1) different data samples; and (2) different sub-structures (node/word) in one data sample (graph/text).", "However, this assumption is not always true in many real-world problems.", "(1) Different data samples may have different task dependency.", "For example, when we want to predict the chemical properties of a particular toxic molecule, despite the general task dependency, its representations learned from toxicity prediction tasks should be more significant than the other tasks.", "(2) Even for the same data sample, different sub-structures may have different task dependency.", "Take sentence classification as an example.", "Words like 'good' or 'bad' may transfer more knowledge from sentiment analysis tasks, while words like 'because' or 'so' may transfer more from discourse relation identification tasks.", "In this work, to accurately learn the task dependency in both general level and data-specific level, we propose a novel framework, 'Learning to Transfer via ModellIng mulTi-level Task dEpeNdency' (L2T-MITTEN).", "The general task dependency is learned as a parameterized weighted dependency graph.", "And the data-specific task dependency is learned with the position-wise mutual attention mechanism.", "The two-level task dependency can be used by our framework to improve the performance on multiple tasks.", "And the objective function of multi-task learning can further enhance the quality of the learned task dependency.", "By iteratively mutual 
enhancement, our framework can not only perform better on multiple tasks, but also can extract high-quality dependency structures at different levels, which can reveal some hidden knowledge of the datasets.", "Another problem is that to transfer task-specific representations between every task pair, the number of transfer functions will grow quadratically as the number of tasks increases, which is unaffordable.", "To solve this, we develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from.", "This decomposition method reduces the space complexity from quadratic to linear.", "We validate our multi-task learning framework extensively on different tasks, including graph classication, node classification, and text classification.", "Our framework outperforms all the other state-ofthe-art (SOTA) multi-task methods.", "Besides, we show that L2T-MITTEN can be used as an analytic tool to extract interpretable task dependency structures at different levels on real-world datasets.", "Our contributions in this work are threefold:", "• We propose a novel multi-task learning framework to learn to both general task dependency and data-specific task dependency.", "The learned task dependency structures can be mutually enhanced with the objective function of multi-task learning.", "• We develop a decomposition method to reduce the space complexity needed by transfer functions from quadratic to linear.", "• We conduct extensive experiments on different real-world datasets to show the effectiveness of our framework and the importance of modelling multi-level task dependency.", "We propose L2T-MITTEN, a novel multi-task learning framework that (1) employs the positionwise mutual attention mechanism to learn the multi-level task dependency; (2) transfers the taskspecific representations between tasks with linear space-efficiency; and (3) uses the learned multilevel task dependency to guide the inference.", "We design three experimental settings where training data is sufficient, imbalanced or deficient, with multiple graph/text datasets.", "Experimental results demonstrate the superiority of our method against both classical and SOTA baselines.", "We also show that our framework can be used as an analytical tool to extract the task dependency structures at different levels, which can reveal some hidden knowledge of tasks and of datasets A DATASET SUMMARY Figure 4 , in the Encoder Block, we use several layers of graph convolutional layers (Kipf & Welling, 2016) followed by the layer normalization (Ba et al., 2016) .", "In the Readout Block, for graph-level task, we use set-to-set (Vinyals et al., 2015) as the global pooling operator to extract the graph-level representation which is later fed to a classifier; while for node-level task, we simply eliminate the global pooling layer and feed the node-level representation directly to the classifier.", "Figure 4: Graph convolutional networks architecture.", "Note that in node-level task, the Set2Set layer (global pooling) is eliminated." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.31578946113586426, 0.14999999105930328, 0, 0.15789473056793213, 0.42307692766189575, 0.260869562625885, 0.15686273574829102, 0.2142857164144516, 0.17142856121063232, 0.2028985470533371, 0, 0.12121211737394333, 0.1428571343421936, 0.1621621549129486, 0, 0.13333332538604736, 0.2641509473323822, 0.11428570747375488, 0.1111111044883728, 0.24390242993831635, 0.21052631735801697, 0.2181818187236786, 0.2083333283662796, 0.1702127605676651, 0.11428570747375488, 0.2857142686843872, 0.1764705777168274, 0.1249999925494194, 0, 0.5, 0.19999998807907104, 0.2380952388048172, 0.30434781312942505, 0.4193548262119293, 0.04878048226237297, 0.10526315122842789, 0.2716049253940582, 0.19672130048274994, 0, 0.0555555522441864 ]
BklhsgSFvB
true
[ "We propose a novel multi-task learning framework which extracts multi-view dependency relationship automatically and use it to guide the knowledge transfer among different tasks." ]
[ " We design simple and quantifiable testing of global translation-invariance in deep learning models trained on the MNIST dataset.", "Experiments on convolutional and capsules neural networks show that both models have poor performance in dealing with global translation-invariance; however, the performance improved by using data augmentation.", "Although the capsule network is better on the MNIST testing dataset, the convolutional neural network generally has better performance on the translation-invariance.", "Convolutional neural networks (CNN) have achieved state-of-the-art performance than the human being on many computer vision tasks BID6 ; BID2 .", "The deep learning community trend to believe that the success of CNN mainly due to two key features in CNN, reduced computation cost with weight sharing in convolutional layers and generalization with local invariance in subsampling layers BID7 ; BID8 .", "Due to convolutional layers are 'place-coded' equivariant and max-pooling layers are local invariant BID1 , CNN has to learn different models for different viewpoints which need big data and expensive cost.More Generalization model should be able to train on a limited range of viewpoints and getting good performance on a much more wider range.", "Capsule network is robust in dealing with different viewpoints BID3 BID9 ; BID4 .", "Capsules are a group of neurons which includes the pose, colour, lighting and deformation of the visual entity.", "Capsule network aims for 'rate-coded' equivariance because it's the weights that code viewpoint-invariant knowledge, not the neural activities.", "Viewpoint changes in capsule network are linear effects on the pose matrices of the parts and the whole between different capsules layers.", "However, it still unclear whether capsule networks be able to generalize for global translation invariance.Visualize and Quantify the translation-invariance in deep learning model are essential for understanding the architectural choices and helpful for developing Generalization model that is invariant to viewpoint changes.", "An analysis using translation-sensitivity map for MNIST digit dataset has been used to investigate translation invariance in CNN BID5 .", "In this paper, we introduce a simple method to test the performance of global translation-invariance in convolutional and capsule neural network models trained on the MNIST dataset.", "We introduce a simple GTI testing dataset for deep learning models trained on MNIST dataset.", "The goal is to get a better understanding of the ability of CNN and CapsNet to dealing with global translational invariance.", "Although the current version of CapsNet could not handle global translational invariance without data augmentation, we still believe CapsNet architecture potentially better than CNN on dealing with global translational invariance because capsules could train to learn all viewpoint no matter it receives the information for the centre or the edge.", "Our testing method is sample Figure 5: GTI dataset accuracy of models trained on CNN and CapsNet with different amount of random shifting in MNIST training dataset.and quantifiable, and it easy to implement for other datasets of computer vision tasks by taking a clear and correct labelled image from each class and apply the translational shifting to cover all possible cases." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.2857142686843872, 0.1666666567325592, 0, 0.06666666269302368, 0.17777776718139648, 0.07407407462596893, 0.17391303181648254, 0.1538461446762085, 0.07407406717538834, 0.20000000298023224, 0.1702127605676651, 0.13793103396892548, 0.2222222238779068, 0, 0.3448275923728943, 0.1538461446762085, 0.1269841194152832 ]
SJlgOjAqYQ
true
[ "Testing of global translational invariance in Convolutional and Capsule Networks" ]
[ "Gaussian processes are ubiquitous in nature and engineering.", "A case in point is a class of neural networks in the infinite-width limit, whose priors correspond to Gaussian processes.", "Here we perturbatively extend this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors.", "The methodology developed herein allows us to track the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, reminiscent of renormalization-group flow.", "We further develop a perturbative prescription to perform Bayesian inference with weakly non-Gaussian priors." ]
[ 0, 0, 0, 0, 1 ]
[ 0.06666666269302368, 0.24390242993831635, 0.1621621549129486, 0.21276594698429108, 0.277777761220932 ]
HygP3TVFvS
false
[ "We develop an analytical method to study Bayesian inference of finite-width neural networks and find that the renormalization-group flow picture naturally emerges." ]
[ "Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity.", "In this paper, we aim to provide a theoretical understanding on what mainly helps with the distillation.", "Our answer is \"early stopping\".", "Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping.", "This can be justified by a new concept, Anisotropic In- formation Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later.", "Motivated by the recent development on theoretically analyzing overparame- terized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel(NTK).", "AIR facilities a new understanding of distillation.", "With that, we further utilize distillation to refine noisy labels.", "We propose a self-distillation al- gorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels.", "We also demonstrate, both theoret- ically and empirically, that self-distillation can benefit from more than just early stopping.", "Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss.", "The theoretical result ensures the learned neural network enjoy a margin on the training data which leads to better generalization.", "Empirically, we achieve better testing accuracy and entirely avoid early stopping which makes the algorithm more user-friendly.\n", "Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing LeCun et al. (2015) .", "Among these tasks, image classification is considered as one of the fundamental tasks since classification networks are commonly used as base networks for other problems.", "In order to achieve higher accuracy using a network with similar complexity as the base network, distillation has been proposed, which aims to utilize the prediction of one (teacher) network to guide the training of another (student) network.", "In Hinton et al. (2015) , the authors suggested to generate a soft target by a heavy-duty teacher network to guide the training of a light-weighted student network.", "More interestingly, Furlanello et al. (2018) ; Bagherinezhad et al. (2018) proposed to train a student network parameterized identically as the teacher network.", "Surprisingly, the student network significantly outperforms the teacher network.", "Later, it was suggested by Zagoruyko & Komodakis (2016a) ; Huang & Wang (2017) ; Czarnecki et al. (2017) to transfer knowledge of representations, such as attention maps and gradients of the classifier, to help with the training of the student network.", "In this work, we focus on the distillation utilizing the network outputs Hinton et al. (2015) ; Furlanello et al. (2018) ; Yang et al. (2018a) ; Bagherinezhad et al. (2018) ; Yang et al. (2018b) .", "To explain the effectiveness of distillation, Hinton et al. 
(2015) suggested that instead of the hard labels (i.e one-hot vectors), the soft labels generated by the pre-trained teacher network provide extra information, which is called the \"Dark Knowledge\".", "The \"Dark knowledge\" is the knowledge encoded by the relative probabilities of the incorrect outputs.", "In Hinton et al. (2015) ; Furlanello et al. (2018) ; Yang et al. (2018a) , the authors pointed out that secondary information, i.e the semantic similarity between different classes, is part of the \"Dark Knowledge\", and Bagherinezhad et al. (2018) observed that the \"Dark Knowledge\" can help to refine noisy labels.", "In this paper, we would like to answer the following question: can we theoretically explain how neural networks learn the Dark Knowledge?", "Answering this question will help us to understand the regularization effect of distillation.", "In this work, we assume that the teacher network is overparameterized, which means that it can memorize all the labels via gradient descent training Du et al. (2018b; a) ; Oymak & Soltanolkotabi (2018) ; Allen-Zhu et al. (2018) .", "In this case, if we train the overparameterized teacher network until convergence, the network's output coincides exactly with the ground truth hard labels.", "This is because the logits corresponding to the incorrect classes are all zero, and hence no \"Dark knowledge\" can be extracted.", "Thus, we claim that the core factor that enables an overparameterized network to learn \"Dark knowledge\" is early stopping.", "What's more, Arpit et al. (2017) ; Rahaman et al. (2018) ; Xu et al. (2019) observed that \"Dark knowledge\" represents the discrepancy of convergence speed of different types of information during the training of the neural network.", "Neural network tends to fit informative information, such as simple pattern, faster than non-informative and unwanted information such as noise.", "Similar phenomenon was observed in the inverse scale space theory for image restoration Scherzer & Groetsch (2001) ; Burger et al. 
(2006) ; Xu & Osher (2007) ; Shi & Osher (2008) .", "In our paper, we call this effect Anisotropic Information Retrieval (AIR).", "With the aforementioned interpretation of distillation, We further utilize AIR to refine noisy labels by introducing a new self-distillation algorithm.", "To extract anisotropic information, we sequentially extract knowledge from the output of the network in the previous epoch to supervise the training in the next epoch.", "By dynamically adjusting the strength of the supervision, we can theoretically prove that the proposed self-distillation algorithm can recover the correct labels, and empirically the algorithm achieves the state-of-the-art results on Fashion MNIST and CIFAR10.", "The benefit brought by our theoretical study is twofold.", "Firstly, the existing approach using large networks ; Zhang & Sabuncu (2018) often requires a validation set to early terminate the network training.", "However, our analysis shows that our algorithm can sustain long training without overfitting the noise which makes the proposed algorithm more user-friendly.", "Secondly, our analysis is based on an 2 -loss of the clean labels which enables the algorithm to generate a trained network with a bigger margin and hence generalize better.", "This paper provided an understanding of distillation using overparameterized neural networks.", "We observed that such neural networks posses the property of Anisotropic Information Retrieval (AIR), which means the neural network tends to fit the infomrative information (i.e. the eigenspaces associated with the largest few eigenvalues of NTK) first and the non-informative information later.", "Through AIR, we further observed that distillation of the Dark Knowledge is mainly due to early stopping.", "Based on this new understanding, we proposed a new self-distillation algorithm for noisy label refinery.", "Both theoretical and empirical justifications of the performance of the new algorithm were provided.", "Our analysis is based on the assumption that the teacher neural network is overparameterized.", "When the teacher network is not overparameterized, the network will be biased towards the label even without early stopping.", "It is still an interesting and unclear problem that whether the bias can provide us with more information.", "For label refinery, our analysis is mostly based on the symmetric noise setting.", "We are interested in extending our analysis to the asymmetric setting.", "A PROOF DETAILS A.1", "NEURAL NETWORK PROPERTIES As preliminaries, we first discuss some properties of the neural network.", "We begin with the jacobian of the one layer neural network x → v φ(W x), the Jacobian matrix with respect to W takes the form", "First we borrow Lemma 6.6, 6.7, 6.8 from Oymak & Soltanolkotabi (2018) and Theorem 6.7, 6.8 from .", "T be a data matrix made up of data with unit Euclidean norm.", "Assuming that λ(X) > 0, the following properties hold.", ", at random Gaussian initialization W 0 ∼ N (0, 1) k×d , with probability at least 1 − δ, we have", "T in whichx i corresponds to the center of cluster including x i .", "What's more, we define the matrix of cluster center C = [c 1 , c 2 , . . . 
, c K ]", "T .", "Assuming that λ(C) > 0, the following properties hold.", "•", ", at random Gaussian initialization W 0 ∼ N (0, 1) k×d , with probability at least 1 − δ, we have", "• range(J(W,X)) ⊂ S + for any parameter matrix W .", "Then, we gives out the perturbation analysis of the Jacobian matrix.", "Lemma 3.", "Let X be a -clusterable data matrix with its center matrixX.", "For parameter matrices W,W , we have", "Proof.", "We bound J(W, X) − J(W ,X) by", "The first term is bounded by Lemma 1.", "As to the second term, we bound it by", "Combining the inequality above, we get", "Lemma 4.", "Let X be a -clusterable data matrix with its center matrixX.", "We assume W 1 , W 2 have a upper bound c √ k.", "Then for parameter matrices W 1 , W 2 ,W 1 ,W 2 , we have", "Proof.", "By the definition of average Jacobian, we have", "A.2", "PROVE OF THE THEOREM First, we introduce the proof idea of our theorem.", "Our proof of the theorem divides the learning process into two stages.", "During the first stage, we aim to prove that the neural network will give out the right classification, i.e. the 0-1-loss converges to 0.", "The proof in this part is modified from .", "Furthermore, we proved that training 0-1-loss will keep 0 until the second stage starts and the margin at the first stage will larger than 1−2ρ 2 .", "During the second stage, we prove that the neural networks start to further enlarge the margin and finally the 2 loss starts to converge to zero.", "(2019) has shown that this dynamic can be illustrated by the average Jacobian.", "Definition 3.", "We define the average Jacobian for two parameters W 1 and W 2 and data matrix X as", "The residualr = f (θ) − y, r = f (θ) − y obey the following equation r = (I − ηC(θ))r", "In our proof, we project the residual to the following subspace Definition 4.", "Let {x i } n i=1 be a -clusterable dataset and {x i } n i=1 be the associated cluster centers, that is,x i = c l iff x i is from lth cluster.", "We define the support subspace S + as a subspace of dimension K, dictated by the cluster membership as follows.", "Let Λ l ⊂ {1, 2, · · · , n} be the set of coordinates i such that = c l .", "Then S + is characterized by", "Definition 5.", "We define the minimum eigenvalue of a matrix B on a subspace S σ min (B, S) = min", "where P S is the projection to the space S.", "Recall the generation process of the dataset Definition 6.", "(Clusterable Dataset Descriptions)", "• We assume that {x i } i∈[n] contains points with unit Euclidean norm and has K clusters.", "Let n l be the number of points in the lth cluster.", "Assume that number of data in each cluster is balanced in the sense that n l ≥ c low n K for constant c low > 0.", "• For each of the K clusters, we assume that all the input data lie within the Euclidean ball B(c l , ), where c l is the center with unit Euclidean norm and > 0 is the radius.", "• A dataset satisfying the above assumptions is called an -clusterable dataset." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.2978723347187042, 0.05714285373687744, 0.25, 0.09999999403953552, 0.15686273574829102, 0.1621621549129486, 0.09999999403953552, 0.15686273574829102, 0.1666666567325592, 0.16949151456356049, 0.12244897335767746, 0.1666666567325592, 0.0833333283662796, 0.1538461446762085, 0.16393442451953888, 0.11320754140615463, 0.08163265138864517, 0.054054051637649536, 0.0937499925494194, 0.1538461446762085, 0.1269841194152832, 0.1395348757505417, 0.11428570747375488, 0.1599999964237213, 0.3255814015865326, 0.158730149269104, 0.15686273574829102, 0.07999999821186066, 0.25, 0.10344827175140381, 0, 0.10526315122842789, 0.1463414579629898, 0.1599999964237213, 0.16326530277729034, 0.17543859779834747, 0.10256409645080566, 0.11538460850715637, 0.08163265138864517, 0.17241378128528595, 0.09756097197532654, 0.1538461446762085, 0.3404255211353302, 0.1818181723356247, 0.1428571343421936, 0.1428571343421936, 0.17391303181648254, 0.1666666567325592, 0.09302324801683426, 0.1463414579629898, 0, 0.13636362552642822, 0.1538461446762085, 0.04444444179534912, 0.1428571343421936, 0.10256409645080566, 0.07999999821186066, 0.1428571343421936, 0.12244897335767746, 0.10256409645080566, 0.07999999821186066, 0.04999999701976776, 0.14999999105930328, 0.09756097197532654, 0.054054051637649536, 0.052631575614213943, 0.052631575614213943, 0.10256409645080566, 0.1111111119389534, 0.09756097197532654, 0.09302324801683426, 0.09756097197532654, 0.15789473056793213, 0.1395348757505417, 0.19512194395065308, 0.11764705181121826, 0.15789473056793213, 0.11538460850715637, 0.11764705181121826, 0.1395348757505417, 0.1304347813129425, 0.04444444179534912, 0.0952380895614624, 0.145454540848732, 0.1702127605676651, 0.12244897335767746, 0.0555555522441864, 0.1702127605676651, 0.10526315122842789, 0.15789473056793213, 0, 0.1249999925494194, 0.1463414579629898, 0.23076923191547394, 0.19354838132858276, 0.09756097197532654 ]
HJlF3h4FvB
true
[ "theoretically understand the regularization effect of distillation. We show that early stopping is essential in this process. From this perspective, we developed a distillation method for learning with corrupted Label with theoretical guarantees." ]
[ "Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. ", "This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations (\\textit{output} kernel) and the per-task model parameters at each round, assuming data arrives in a streaming fashion.", "We propose a novel algorithm called \\textit{Online Output Kernel Learning Algorithm} (OOKLA) for lifelong learning setting.", "To avoid the memory explosion, we propose a robust budget-limited versions of the proposed algorithm that efficiently utilize the relationship between the tasks to bound the total number of representative examples in the support set. ", "In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning.", "Our empirical results over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines.", "Instead of learning individual models, learning from multiple tasks leverages the relationships among tasks to jointly build better models for each task and thereby improve the transfer of relevant knowledge between the tasks, especially from information-rich tasks to information-poor ones.", "Unlike traditional multitask learning, where the tasks are presented simultaneously and an entire training set is available to the learner (Caruana (1998)), in lifelong learning the tasks arrives sequentially BID27 ).", "This paper considers a continuous lifelong learning setting in which both the tasks and the examples of the tasks arrive in an online fashion, without any predetermined order.Following the online setting, particularly from BID24 BID7 , at each round t, the learner receives an example from a task, along with the task identifier and predicts the output label for the example.", "Subsequently, the learner receives the true label and updates the model(s) as necessary.", "This process is repeated as we receive additional data from the same or different tasks.", "Our approach follows an error-driven update rule in which the model for a given task is updated only when the prediction for that task is in error.Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance.", "A lifelong learning agent must provide an efficient way to learn new tasks faster by utilizing the knowledge learned from the previous tasks and also not forgetting or significantly degrading performance on the old tasks.", "The goal of a lifelong learner is to minimize errors as compared to the full ideal hindsight learner, which has access to all the training data and no bounds on memory or computation.", "This paper addresses lifelong multitask learning by jointly re-estimating the inter-task relations from the data and the per-task model parameters at each round, assuming data arrives in a streaming fashion.", "We define the task relationship matrix as output kernels in Reproducing Kernel Hilbert Space (RKHS) on multitask examples.", "We propose a novel algorithm called Online Output Kernel Learning Algorithm (OOKLA) for lifelong learning setting.", "For a successful lifelong learning with kernels, we need to address two key challenges: (1) learn the relationships between the tasks (output kernel) efficiently from the data stream and (2) bound 
the size of the knowledge to avoid memory explosion.The key challenge in learning with a large number of tasks is to adaptively learn the model parameters and the task relationships, which potentially change over time.", "Without manageability-efficient updates at each round, learning the task relationship matrix automatically may impose a severe computational burden.", "In other words, we need to make predictions and update the models in an efficient real time manner.We propose simple and quite intuitive update rules for learning the task relationship matrix.", "When we receive a new example, the algorithm updates the output kernel when the learner made a mistake by computing the similarity between the new example and the set of representative examples (stored in the memory) that belongs to a specific task.", "If the two examples have similar (different) labels and high similarity, then the relationship between the tasks is increased (decreased) to reflect the positive (negative) correlation and vice versa.To avoid the memory explosion associated with the lifelong learning setting, we propose a robust budget-limited version of the proposed algorithm that efficiently utilizes the relationship between the tasks to bound the total number of representative examples in the support set.", "In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning.It is worth noting that the problem of lifelong multitask learning is closely related to online multitask learning.", "Although the objectives of both online multitask learning and lifelong learning are similar, one key difference is that the online multitask learning, unlike in the lifelong learning, may require that the number of tasks be specified beforehand.", "In recent years, online multitask learning has attracted extensive research attention BID0 ; BID10 ; BID16 BID7 ; BID24 BID17 .", "We evaluate our proposed methods with several state-of-the-art online learning algorithms for multiple tasks.", "Throughout this paper, we refer to our proposed method as online multitask learning or lifelong learning.There are many useful application areas for lifelong learning, including optimizing financial trading as market conditions evolve, email prioritization with new tasks or preferences emerging, personalized news, and spam filtering, with evolving nature of spam.", "Consider the latter, where some spam is universal to all users (e.g. financial scams), some messages might be useful to certain affinity groups, but spam to most others (e.g. 
announcements of meditation classes or other special interest activities), and some may depend on evolving user interests.", "In spam filtering each user is a \"task,\" and shared interests and dis-interests formulate the inter-task relationship matrix.", "If we can learn the matrix as well as improving models from specific spam/not-spam decisions, we can perform mass customization of spam filtering, borrowing from spam/not-spam feedback from users with similar preferences.", "The primary contribution of this paper is precisely the joint learning of inter-task relationships and its use in estimating per-task model parameters in a lifelong learning setting.", "We proposed a novel lifelong learning algorithm using output kernels.", "The proposed method efficiently learns both the model and the inter-task relationships at each iteration.", "Our update rules for learning the task relationship matrix, at each iteration, were motivated by the recent work in output kernel learning.In order to handle the memory explosion from an unbounded support set in the lifelong learning setting, we proposed a new budget maintenance scheme that utilizes the task relationship matrix to remove the least-useful (high confidence) example from the support set.", "In addition, we proposed a two-stage budget learning scheme based on the intuition that each task only requires a subset of the representative examples in the support set for efficient learning.", "It provides a competitive and efficient approach to handle large number of tasks in many real-life applications.The effectiveness of our algorithm is empirically verified over several benchmark datasets, outperforming several competitive baselines both in the unconstrained case and the budget-limited case, where selective forgetting was required." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.1538461446762085, 0.38461539149284363, 0.04999999701976776, 0.2857142686843872, 0.0714285671710968, 0.0952380895614624, 0.10526315122842789, 0.21052631735801697, 0, 0, 0.1599999964237213, 0.09756097197532654, 0.09999999403953552, 0.1621621549129486, 0.1428571343421936, 0.38461539149284363, 0.09836065024137497, 0.1428571343421936, 0.10256409645080566, 0.09302325546741486, 0.0952380895614624, 0.25, 0.1621621549129486, 0.1428571343421936, 0.25, 0.145454540848732, 0, 0.07407406717538834, 0, 0.1764705777168274, 0.699999988079071, 0, 0.17241379618644714, 0.1621621549129486, 0.07843136787414551 ]
H1Ww66x0-
true
[ "a novel approach for online lifelong learning using output kernels." ]
[ "Minecraft is a videogame that offers many interesting challenges for AI systems.", "In this paper, we focus in construction scenarios where an agent must build a complex structure made of individual blocks.", "As higher-level objects are formed of lower-level objects, the construction can naturally be modelled as a hierarchical task network.", "We model a house-construction scenario in classical and HTN planning and compare the advantages and disadvantages of both kinds of models.", "Minecraft is an open-world computer game, which poses interesting challenges for Artificial Intelligence BID0 BID12 , for example for the evaluation of reinforcement learning techniques BID21 .", "Previous research on planning in Minecraft focused on models to control an agent in the Minecraft world.", "Some examples include learning planning models from a textual description of the actions available to the agent and their preconditions and effects BID4 , or HTN models from observing players' actions BID15 .", ", on the other hand, focused on online goal-reasoning for an agent that has to navigate in the minecraft environment to collect resources and/or craft objects.", "They introduced several propositional, numeric BID7 and hybrid PDDL+ planning models BID8 .In", "contrast, we are interested in construction scenarios, where we generate instructions for making a given structure (e.g. a house) that is composed of atomic blocks. Our", "longterm goal is to design a natural-language system that is able to give instructions to a human user tasked with completing that construction. As", "a first step, in the present paper we consider planning methods coming up with what we call a construction plan, specifying the sequence of construction steps without taking into account the natural-language and dialogue parts of the problem.For the purpose of construction planning, the Minecraft world can be understood as a Blocksworld domain with a 3D environment. Blocks", "can be placed at any position having a non-empty adjacent position. However", ", while obtaining a sequence of \"put-block\" actions can be sufficient for an AI agent, communicating the plan to a human user requires more structure in order to formulate higher-level instructions like build-row, or build-wall. The objects", "being constructed (e.g. rows, walls, or an entire house) are naturally organized in a hierarchy where high-level objects are composed of lower-level objects. Therefore,", "the task of constructing a high-level object naturally translates into a hierarchical planning network (HTN) BID19 BID20 BID22 BID6 .We devise several", "models in both classical PDDL planning BID5 BID13 ) and hierarchical planning for a simple scenario where a house must be constructed. Our first baseline", "is a classical planning model that ignores the high-level objects and simply outputs a sequence of place-blocks actions. This is insufficient", "for our purposes since the resulting sequence of actions can hardly be described in natural language. However, it is a useful", "baseline to compare the other models. We also devise a second", "classical planning model, where the construction of high-level objects is encoded via auxiliary actions.HTN planning, on the other hand, allows to model the object hierarchy in a straightforward way, where there is a task for building each type of high-level object. The task of constructing", "each high-level object can be decomposed into tasks that construct its individual parts. 
Unlike in classical planning", ", where the PDDL language is supported by most/all planners, HTN planners have their own input language. Therefore, we consider specific", "models for two individual HTN planners: the PANDA planning system BID3 BID2 and SHOP2 BID14 .", "We have introduced several models of a construction scenario in the Minecraft game.", "Our experiments have shown that, even in the simplest construction scenario which is not too challenging from the point of view of the search, current planners may struggle when the size of the world increases.", "This is a serious limitation in the Minecraft domain, where worlds with millions of blocks are not unrealistic.Lifted planners like SHOP2 perform well.", "However, it must be noted that they follow a very simple search strategy, which is very effective on our models where any method decomposition always leads to a valid solution.", "However, it may be less effective when other constraints must be met and/or optimizing quality is required.", "For example, if some blocks are removed from the ground by the user, then some additional blocks must be placed as auxiliary structure for the main construction.", "Arguably, this could be easily fixed by changing the model so that whenever a block cannot be placed in a target location, an auxiliary tower of blocks is built beneath the location.", "However, this increases the burden of writing new scenarios since suitable task decompositions (along with good criteria of when to select each decomposition) have to be designed for all possible situations.This makes the SHOP2 model less robust to unexpected situations that were not anticipated by the domain modeler.", "PANDA, on the other hand, supports insertion of primitive actions BID9 , allowing the planner to consider placing additional blocks, e.g., to build supporting structures that do not correspond to any task in the HTN.", "This could help to increase the robustness of the planner in unexpected situations where auxiliary structures that have not been anticipated by the modeler are needed.", "However, this is currently only supported by the POCL-plan-based search component and considering all possibilities for task insertion significantly slows down the search and it runs out of memory in our scenarios.", "This may point out new avenues of research on more efficient ways to consider task insertion.In related Minecraft applications, cognitive priming has been suggested as a possible solution to keep the size of the world considered by the planner at bay BID17 .", "In construction scenarios, however, large parts of the environment can be relevant so incremental grounding approaches may be needed to consider different parts of the scenario at different points in the construction plan.Our models are still a simple prototype and they do not yet capture the whole complexity of the domain.", "We plan to extend them in different directions in order to capture how hard it is to describe actions or method decompositions in natural language.", "For example, while considering the position of the user is not strictly necessary, his visibility may be important because objects in his field of view are easier to describe in natural language.", "How to effectively model the field of vision is a challenging topic, which may lead to combinations with external solvers like in the planning modulo theories paradigm BID10 .Another", "interesting extension is to consider how easy it is to express the given action in natural language and for example by reducing 
action cost for placing blocks near objects that can be easily referred to. Such objects", "could be landmarks, e.g., blocks of a different type (\"put a stone block next to the blue block\") or just the previously placed block (e.g., \"Now, put another stone block on top of it\")." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12903225421905518, 0.1538461446762085, 0.15789473056793213, 0.9729729890823364, 0.1395348757505417, 0.3030303120613098, 0.30434781312942505, 0.0952380895614624, 0.1875, 0.13636362552642822, 0.052631575614213943, 0.21875, 0.06666666269302368, 0.14814814925193787, 0.1428571343421936, 0.25, 0.380952388048172, 0.3684210479259491, 0.19999998807907104, 0.3333333432674408, 0.290909081697464, 0.1666666567325592, 0.10256409645080566, 0.29411762952804565, 0.5, 0.1666666567325592, 0.23255813121795654, 0.08510638028383255, 0, 0.0476190410554409, 0.2083333283662796, 0.09677419066429138, 0.15686273574829102, 0.1395348757505417, 0.1666666567325592, 0.13793103396892548, 0.2295081913471222, 0.09999999403953552, 0.12765957415103912, 0.260869562625885, 0.11999999731779099, 0.12244897335767746 ]
BkgyvHSWFV
true
[ "We model a house-construction scenario in Minecraft in classical and HTN planning and compare the advantages and disadvantages of both kinds of models." ]
[ "Attacks on natural language models are difficult to compare due to their different definitions of what constitutes a successful attack.", "We present a taxonomy of constraints to categorize these attacks.", "For each constraint, we present a real-world use case and a way to measure how well generated samples enforce the constraint.", "We then employ our framework to evaluate two state-of-the art attacks which fool models with synonym substitution.", "These attacks claim their adversarial perturbations preserve the semantics and syntactical correctness of the inputs, but our analysis shows these constraints are not strongly enforced.", "For a significant portion of these adversarial examples, a grammar checker detects an increase in errors.", "Additionally, human studies indicate that many of these adversarial examples diverge in semantic meaning from the input or do not appear to be human-written.", "Finally, we highlight the need for standardized evaluation of attacks that share constraints.", "Without shared evaluation metrics, it is up to researchers to set thresholds that determine the trade-off between attack quality and attack success.", "We recommend well-designed human studies to determine the best threshold to approximate human judgement.", "Advances in deep learning have led to impressive performance on many tasks, but models still make mistakes.", "Models are particularly vulernable to adversarial examples, inputs designed to fool models (Szegedy et al., 2014) .", "Goodfellow et al. (2014) demonstrated that image classification models could be fooled by perturbations indistinguishable to humans.", "Due to the importance of natural language processing (NLP) tasks, a large body of research has focused on applying the concept of adversarial examples to text, including (Alzantot et al., 2018; Jin et al., 2019; Kuleshov et al., 2018; Zhang et al., 2019; Ebrahimi et al., 2017; Gao et al., 2018; Li et al., 2018; Samanta & Mehta, 2017; Jia & Liang, 2017; Iyyer et al., 2018; Papernot et al., 2016a) .", "The importance of tasks such as spam and plagiarism detection highlights the need for robust NLP models.", "However, there are fundamental differences between image and text data.", "Unlike images, two different sequences of text are never entirely indistinguishable.", "This raises the question: if indistinguishable perturbations aren't possible, what are adversarial examples in text?", "We observe that each work from recent literature has a slightly different definition of what constitutes an adversarial example in natural language.", "Comparing the success rate of two attacks is meaningless if the attacks use different methods to evaluate the same constraints or define different constraints altogether.", "In this paper, we build on Gilmer et al. (2018) to introduce a taxonomy of constraints specific to adversarial examples in natural language.", "To the best of our knowledge, our work provides the first comprehensive framework for categorizing and evaluating attack constraints in natural language.", "We discuss use cases and propose standardized evaluation methods for each of these constraints.", "We then apply our evaluation methods to the synonym-substitution based attacks of Jin et al. (2019) and Alzantot et al. 
(2018) .", "These attacks claimed to preserve the syntax and semantics of the original sentence, while remaining non-suspicious to a human interpreter.", "However, we find that most of their adversarial examples contain additional grammatical errors, and human surveys reveal that many adversarial examples also change the meaning of the sentence and/or do not appear to be written by humans.", "These results call into question the ubiquity of synonym-based adversarial examples and emphasize the need for more careful evaluation of attack approaches.", "Lastly, we discuss how previous works rely on arbitrary thresholds to determine the semantic similarity of two sentences.", "These thresholds can be tuned by the researcher to make their methods seem more successful with little penalty in quantitative metrics.", "Thus, we highlight the importance of standardized human evaluations to approximate the true threshold value.", "Any method that introduces a novel approach to measure semantic similarity should support their choice of threshold with defensible human studies.", "The three main contributions of this paper are:", "• We formally define and categorize constraints on adversarial examples in text, and introduce evaluation methods for each category.", "• Using these categorizations and evaluation methods, we quantitatively disprove claims that stateof-the-art synonym-based substitutions preserve semantics and grammatical correctness.", "• We show the sensitivity of attack success rate to changes in semantic similarity thresholds set by researchers.", "We assert that perturbations which claim semantic similarity should use standardized human evaluation studies with precise wording to determine an appropriate threshold.", "We introduced a framework for evaluating fulfillment of attack constraints in natural language.", "Applying this framework to synonym substitution attacks raised concerns about the semantic preservation, syntactic accuracy, and conspicuity of the adversarial examples they generate.", "Future work may expand our hierarchy to categorize and evaluate different attack constraints in natural language.", "Standardized terminology and evaluation metrics will make it easier for defenders to determine which attacks they must protect themselves from-and how.", "It remains to be seen how robust BERT is when subject to synonym attacks which rigorously preserve semantics and syntax.", "It is up to future research to determine how prevalent adversarial examples are throughout the broader space of paraphrases." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.1764705777168274, 0.1818181723356247, 0.09756097197532654, 0.1666666567325592, 0.1538461446762085, 0.25, 0.10810810327529907, 0.09090908616781235, 0.0555555522441864, 0.04878048226237297, 0.09999999403953552, 0.04878048226237297, 0.1764705777168274, 0.09756097197532654, 0.11764705181121826, 0.05714285373687744, 0.20512820780277252, 0.30434781312942505, 0.045454539358615875, 0.260869562625885, 0.3181818127632141, 0.15789473056793213, 0.09302324801683426, 0.1428571343421936, 0.178571417927742, 0.1818181723356247, 0, 0.04444443807005882, 0, 0.08888888359069824, 0, 0.2857142686843872, 0.09302324801683426, 0.0952380895614624, 0.08695651590824127, 0.4324324131011963, 0.17391303181648254, 0.19999998807907104, 0.08888888359069824, 0.04651162400841713, 0.1428571343421936 ]
BkxmKgHtwH
true
[ "We present a framework for evaluating adversarial examples in natural language processing and demonstrate that generated adversarial examples are often not semantics-preserving, syntactically correct, or non-suspicious." ]
[ "In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions.", "For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification.", "How to interpret black-box predictors is thus an important and active area of research. ", "A fundamental question is: how much can we trust the interpretation itself?", "In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different}interpretations.", "We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10.", "Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label.", "We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile.", "Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.", "Predictions made by machine learning algorithms play an important role in our everyday lives and can affect decisions in technology, medicine, and even the legal system (Rich, 2015; Obermeyer & Emanuel, 2016) .", "As the algorithms become increasingly complex, explanations for why an algorithm makes certain decisions are ever more crucial.", "For example, if an AI system predicts a given pathology image to be malignant, then the doctor would want to know what features in the image led the algorithm to this classification.", "Similarly, if an algorithm predicts an individual to be a credit risk, then the lender (and the borrower) might want to know why.", "Therefore having interpretations for why certain predictions are made is critical for establishing trust and transparency between the users and the algorithm (Lipton, 2016) .Having", "an interpretation is not enough, however. The explanation", "itself must be robust in order to establish human trust. Take the pathology", "predictor; an interpretation method might suggest that a particular section in an image is important for the malignant classification (e.g. that section could have high scores in saliency map). The clinician might", "then focus on that section for investigation, treatment or even look for similar features in other patients. It would be highly", "disconcerting if in an extremely similar image, visually indistinguishable from the original and also classified as malignant, a very different section is interpreted as being salient for the prediction. Thus, even if the", "predictor is robust (both images are correctly labeled as malignant), that the interpretation is fragile would still be highly problematic in deployment.Our contributions. The fragility of", "prediction in deep neural networks against adversarial attacks is an active area of research BID4 Kurakin et al., 2016; Papernot et al., 2016; Moosavi-Dezfooli et al., 2016) . 
In that setting,", "fragility is exhibited when two perceptively indistinguishable images are assigned different labels by the neural network. In this paper, we", "extend the definition of fragility to neural network interpretation. More precisely, we", "define the interpretation of neural network to be fragile if perceptively indistinguishable images that have the same prediction label by the neural network are given substantially different interpretations. We systematically", "The fragility of feature-importance maps. We generate feature-importance", "scores, also called saliency maps, using three popular interpretation methods: simple gradient (a), DeepLIFT (b) and integrated", "gradient (c).", "The top row shows the the original", "images", "and their saliency maps and the bottom row shows the perturbed images (using the center attack with = 8, as described in Section 3) and the corresponding saliency maps. In all three images, the predicted label", "has not changed due to perturbation; in fact the network's (SqueezeNet) confidence in the prediction has actually increased. However, the saliency maps of the perturbed", "images are meaningless.investigate two classes of interpretation methods: methods that assign importance scores to each feature (this includes simple gradient (Simonyan et al., 2013) , DeepLift (Shrikumar et al., 2017) , and integrated gradient (Sundararajan et al., 2017) ), as well as a method that assigns importances to each training example: influence functions (Koh & Liang, 2017) . For both classes of interpretations, we show", "that targeted perturbations can lead to dramatically different interpretations ( FIG0 ).Our findings highlight the fragility of interpretations", "of neural networks, which has not been carefully considered in literature. Fragility directly limits how much we can trust and learn", "from the interpretations. It also raises a significant new security concern. Especially", "in medical or economic applications, users often take", "the interpretation of a prediction as containing causal insight (\"this image is a malignant tumor likely because of the section with a high saliency score\"). An adversary could minutely manipulate the input to draw attention", "away from relevant features or onto his/her desired features. Such attacks might be especially hard to detect as the actual labels", "have not changed.While we focus on image data here because most of the interpretation methods have been motivated by images, the fragility of neural network interpretation could be a much broader problem. Fig. 2 illustrates the intuition that when the decision boundary in", "the", "input feature space is complex, as is the case with deep nets, a small perturbation in the input can push the example into a region with very different loss contours. Because the feature importance is closely related to the gradient", "which is perpendicular to the loss contours, the importance scores can also be dramatically different. We provide additional analysis of this in Section 5.", "Related works To the best of our knowledge, the notion of adversarial examples has not previously been studied in the context of interpretation of neural networks.", "Adversarial attacks to the input that changes the prediction of a network have been actively studied.", "Szegedy et al. 
(2013) demonstrated that it is relatively easy to fool neural networks into making very different predictions for test images that are visually very similar to each other.", "BID4 introduced the Fast Gradient Sign Method (FGSM) as a one-step prediction attack.", "This was followed by more effective iterative attacks (Kurakin et al., 2016) Interpretation of neural network predictions is also an active research area.", "Post-hoc interpretability (Lipton, 2016) is one family of methods that seek to \"explain\" the prediction without talking about the details of black-box model's hidden mechanisms.", "These included tools to explain predictions by networks in terms of the features of the test example (Simonyan et al., 2013; Shrikumar et al., 2017; Sundararajan et al., 2017; Zhou et al., 2016) , as well as in terms of contribution of training examples to the prediction at test time (Koh & Liang, 2017) .", "These interpretations have gained increasing popularity, as they confer a degree of insight to human users of what the neural network might be doing (Lipton, 2016) .Conclusion", "This paper demonstrates that interpretation of neural networks can be fragile in the specific sense that two similar inputs with the same predicted label can be given very different interpretations. We develop", "new perturbations to illustrate this fragility and propose evaluation metrics as well as insights on why fragility occurs. Fragility", "of neural network interpretation is orthogonal to fragility of the prediction-we demonstrate how perturbations can substantially change the interpretation without changing the predicted label. The two types", "of fragility do arise from similar factors, as we discuss in Section 5. Our focus is", "on the interpretation method, rather than on the original network, and as such we do not explore how interpretable is the original predictor. There is a separately", "line of research that tries to design simpler and more interpretable prediction models BID0 .Our main message is that", "robustness of the interpretation of a prediction is an important and challenging problem, especially as in many applications (e.g. many biomedical and social settings) users are as interested in the interpretation as in the prediction itself. Our results raise concerns", "on how interpretations of neural networks are sensitive to noise and can be manipulated. Especially in settings where", "the importance of individual or a small subset of features are interpreted, we show that these importance scores can be sensitive to even random perturbation. More dramatic manipulations", "of interpretations can be achieved with our targeted perturbations, which raise security concerns. We do not suggest that interpretations", "are meaningless, just as adversarial attacks on predictions do not imply that neural networks are useless. Interpretation methods do need to be used", "and evaluated with caution while applied to neural networks, as they can be fooled into identifying features that would not be considered salient by human perception.Our results demonstrate that the interpretations (e.g. saliency maps) are vulnerable to perturbations, but this does not imply that the interpretation methods are broken by the perturbations. This is a subtle but important distinction", ". Methods such as saliency measure the infinitesimal", "sensitivity of the neural network at a particular input x. After a perturbation, the input has changed tox =", "x + δ, and the salency now measures the sensitivity at the perturbed input. 
The saliency correctly captures the infinitesimal", "sensitivity at the two inputs; it's doing what it is supposed to do. The fact that the two resulting saliency maps are", "very different is fundamentally due to the network itself being fragile to such perturbations, as we illustrate with Fig. 2 .While we focus on image data (ImageNet and CIFAR-10", "), because these are the standard benchmarks for popular interpretation tools, this fragility issue can be wide-spread in biomedical, economic and other settings where neural networks are increasingly used. Understanding interpretation fragility in these applications", "and develop more robust methods are important agendas of research." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1071428507566452, 0.10526315122842789, 0.08695651590824127, 0.1395348757505417, 0.09677419066429138, 0.18518517911434174, 0.0714285671710968, 0.08163265138864517, 0.11538460850715637, 0.06557376682758331, 0.08163265138864517, 0.06896550953388214, 0.07843136787414551, 0.15094339847564697, 0.05128204822540283, 0.09090908616781235, 0.1355932205915451, 0.039215680211782455, 0.13333332538604736, 0.07017543166875839, 0.17543859779834747, 0.11764705181121826, 0.1860465109348297, 0.14035087823867798, 0.15789473056793213, 0.1249999925494194, 0.054054051637649536, 0.1355932205915451, 0.19607841968536377, 0.1463414579629898, 0.08163265138864517, 0.19230768084526062, 0.09302325546741486, 0, 0.13114753365516663, 0.039215680211782455, 0.14492753148078918, 0.06557376682758331, 0.1111111044883728, 0.23076923191547394, 0.1304347813129425, 0.10344827175140381, 0.09090908616781235, 0.072727270424366, 0.1111111044883728, 0.11764705181121826, 0.14035087823867798, 0.16949151456356049, 0.0416666604578495, 0.1111111044883728, 0.08510638028383255, 0.15094339847564697, 0.12244897335767746, 0.16393442451953888, 0.16326530277729034, 0.14035087823867798, 0.08163265138864517, 0.11538460850715637, 0.12195121496915817, 0.10526315122842789, 0.1702127605676651, 0.1249999925494194, 0.11764705181121826, 0.10344827175140381, 0.19672130048274994, 0.09756097197532654 ]
H1xJjlbAZ
true
[ "Can we trust a neural network's explanation for its prediction? We examine the robustness of several popular notions of interpretability of neural networks including saliency maps and influence functions and design adversarial examples against them." ]
[ "Stochastic AUC maximization has garnered an increasing interest due to better fit to imbalanced data classification.", "However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data.", "In this paper, we consider stochastic AUC maximization problem with a deep neural network as the predictive model.", "Building on the saddle point reformulation of a surrogated loss of AUC, the problem can be cast into a {\\it non-convex concave} min-max problem.", "The main contribution made in this paper is to make stochastic AUC maximization more practical for deep neural networks and big data with theoretical insights as well.", "In particular, we propose to explore Polyak-\\L{}ojasiewicz (PL) condition that has been proved and observed in deep learning, which enables us to develop new stochastic algorithms with even faster convergence rate and more practical step size scheme.", "An AdaGrad-style algorithm is also analyzed under the PL condition with adaptive convergence rate.", "Our experimental results demonstrate the effectiveness of the proposed algorithms.", "Deep learning has been witnessed with tremendous success for various tasks, including computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016; Ren et al., 2015) , speech recognition (Hinton et al., 2012; Mohamed et al., 2012; Graves, 2013) , natural language processing (Bahdanau et al., 2014; Sutskever et al., 2014; Devlin et al., 2018) , etc.", "From an optimization perspective, all of them are solving an empirical risk minimization problem in which the objective function is a surrogate loss of the prediction error made by a deep neural network in comparison with the ground-truth label.", "For example, for image classification task, the objective function is often chosen as the cross entropy between the probability distribution calculated by forward propagation of a convolutional neural network and the vector encoding true label information (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016) , where the cross entropy is a surrogate loss of the misclassification rate.", "However, when the data is imbalanced, this formulation is not reasonable since the data coming from minor class have little effect in this case and the model is almost determined by the data from the majority class.", "To address this issue, AUC maximization has been proposed as a new learning paradigm (Zhao et al., 2011) .", "Statistically, AUC (short for Area Under the ROC curve) is defined as the probability that the prediction score of a positive example is higher than that of a negative example (Hanley & McNeil, 1982; 1983) .", "Compared with misclassification rate and its corresponding surrogate loss, AUC is more suitable for imbalanced data setting (Elkan, 2001) .", "Several online or stochastic algorithms for time based on a new sampled/received training data.", "Instead of storing all examples in the memory, Zhao et al. (2011) employ reservoir sampling technique to maintain representative samples in a buffer, based on which their algorithms update the model.", "To get optimal regret bound, their buffer size needs to be O( √ n), where n is the number of received training examples.", "Gao et al. 
(2013) design a new algorithm which is not buffer-based.", "Instead, their algorithm needs to maintain the first-order and second-order statistics of the received data to compute the stochastic gradient, which is prohibitive for high dimensional data.", "Based on a novel saddle-point reformulation of a surrogate loss of AUC proposed by (Ying et al., 2016) , there are several studies (Ying et al., 2016; Liu et al., 2018; Natole et al., 2018) trying to design stochastic primal-dual algorithms.", "Ying et al. (2016) employ the classical primal-dual stochastic gradient (Nemirovski et al., 2009 ) and obtain O(1/ √ t) convergence rate.", "Natole et al. (2018) add a strongly convex regularizer, invoke composite mirror descent (Duchi et al., 2010 ) and achieve O(1/t) convergence rate.", "Liu et al. (2018) leverage the structure of the formulation, design a multi-stage algorithm and achieve O(1/t) convergence rate without strong convexity assumptions.", "However, all of them only consider learning a linear model, which results in a convex objective function.", "Non-Convex Min-max Optimization.", "Stochastic optimization of non-convex min-max problems have received increasing interests recently (Rafique et al., 2018; Lin et al., 2018; Sanjabi et al., 2018; Lu et al., 2019; Jin et al., 2019) .", "When the objective function is weakly convex in the primal variable and is concave in the dual variable, Rafique et al. (2018) design a proximal guided algorithm in spirit of the inexact proximal point method (Rockafellar, 1976) , which solves a sequence of convexconcave subproblems constructed by adding a quadratic proximal term in the primal variable with a periodically updated reference point.", "Due to the potential non-smoothness of objective function, they show the convergence to a nearly-stationary point for the equivalent minimization problem.", "In the same vein as (Rafique et al., 2018) , Lu et al. (2019) design an algorithm by adopting the block alternating minimization/maximization strategy and show the convergence in terms of the proximal gradient.", "When the objective is weakly convex and weakly concave, Lin et al. (2018) propose a proximal algorithm which solves a strongly monotone variational inequality in each epoch and establish its convergence to stationary point.", "Sanjabi et al. (2018) consider non-convex non-concave min-max games where the inner maximization problem satisfies a PL condition, based on which they design a multi-step deterministic gradient descent ascent with convergence to a stationary point.", "It is notable that our work is different in that", "(i) we explore the PL condition for the outer minimization problem instead of the inner maximization problem;", "(ii) we focus on designing stochastic algorithms instead of deterministic algorithms.", "Leveraging PL Condition for Minimization.", "PL condition is first introduced by Polyak (Polyak, 1963) , which shows that gradient descent is able to enjoy linear convergence to a global minimum under this condition.", "Karimi et al. 
(2016) show that stochastic gradient descent, randomized coordinate descent, greedy coordinate descent are able to converge to a global minimum with faster rates under the PL condition.", "If the objective function has a finite-sum structure and satisfies PL condition, there are several non-convex SVRG-style algorithms (Reddi et al., 2016; Lei et al., 2017; Nguyen et al., 2017; Zhou et al., 2018; Li & Li, 2018; Wang et al., 2018) , which are guaranteed to converge to a global minimum with a linear convergence rate.", "However, the stochastic algorithms in these works are developed for a minimization problem, and hence is not applicable to the min-max formulation for stochastic AUC maximization.", "To the best of our knowledge, Liu et al. (2018) is the only work that leverages an equivalent condition to the PL condition (namely quadratic growth condition) to develop a stochastic primal-dual algorithm for AUC maximization with a fast rate.", "However, as mentioned before their algorithm and analysis rely on the convexity of the objective function, which does not hold for AUC maximization with a deep neural network.", "Finally, we notice that PL condition is the key to many recent works in deep learning for showing there is no spurious local minima or for showing global convergence of gradient descent and stochastic gradient descent methods (Hardt & Ma, 2016; Li & Yuan, 2017; Arora et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018b; a; Li & Liang, 2018; Allen-Zhu et al., 2018; Zou et al., 2018; Zou & Gu, 2019) .", "Using the square loss, it has also been proved that the PL condition holds globally or locally for deep linear residual network (Hardt & Ma, 2016) , deep linear network, one hidden layer neural network with Leaky ReLU activation (Charles & Papailiopoulos, 2017; Zhou & Liang, 2017) .", "Several studies (Li & Yuan, 2017; Arora et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018b; Li & Liang, 2018) consider the trajectory of (stochastic) gradient descent on learning neural networks, and their analysis imply the PL condition in a certain form.", "For example, Du et al. (2018b) show that when the width of a two layer neural network is sufficiently large, a global optimum would lie in the ball centered at the initial solution, in which PL condition holds.", "Allen-Zhu et al. 
(2018) extend this insight further to overparameterized deep neural networks with ReLU activation, and show that the PL condition holds for a global minimum around a random initial solution.", "In this paper, we consider the stochastic AUC maximization problem when the predictive model is a deep neural network.", "By building on the saddle point reformulation and exploring the Polyak-Łojasiewicz condition in deep learning, we have proposed two algorithms with state-of-the-art complexities for the stochastic AUC maximization problem.", "We have also demonstrated the efficiency of our proposed algorithms on several benchmark datasets, and the experimental results indicate that our algorithms converge faster than other baselines.", "One may consider extending the analysis techniques to other problems with the min-max formulation.", "[Appendix omitted: the remaining extracted sentences were fragments of the proofs (Lemma 2, Lemma 3, Theorem 3 and their supporting inequalities); the equations themselves are not recoverable from the extraction.]" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.3461538553237915, 0.4680851101875305, 0.08163265138864517, 0.3571428656578064, 0.15625, 0.1395348757505417, 0.10526315122842789, 0.05714285373687744, 0.32258063554763794, 0.1538461446762085, 0.1090909019112587, 0.1249999925494194, 0.14035087823867798, 0.1249999925494194, 0.1395348757505417, 0.10344827175140381, 0.038461532443761826, 0.04878048226237297, 0.1538461446762085, 0.19672130048274994, 0.07999999821186066, 0, 0.039215680211782455, 0.08888888359069824, 0, 0, 0.10810810327529907, 0.12765957415103912, 0.10344827175140381, 0.06666666269302368, 0.16129031777381897, 0, 0.1818181723356247, 0.10256409645080566, 0.05882352590560913, 0.07407406717538834, 0.1428571343421936, 0.138888880610466, 0.26923075318336487, 0.1875, 0.3571428656578064, 0.09876542538404465, 0.19999998807907104, 0.08955223113298416, 0.1904761791229248, 0.1666666567325592, 0.42553192377090454, 0.4285714328289032, 0.11320754140615463, 0.0952380895614624, 0, 0.1395348757505417, 0.0952380895614624, 0.1304347813129425, 0.039215680211782455, 0.04081632196903229, 0, 0, 0.05128204822540283, 0, 0, 0.05714285373687744, 0, 0, 0.054054051637649536, 0, 0.08163265138864517, 0, 0.10810810327529907, 0.05714285373687744, 0.09302324801683426, 0.10810810327529907, 0, 0, 0.10810810327529907, 0.09999999403953552, 0.05714285373687744, 0, 0.0555555522441864, 0, 0, 0, 0, 0.07999999821186066, 0, 0.09090908616781235, 0.04878048226237297, 0, 0, 0, 0.05128204822540283, 0, 0, 0.05714285373687744, 0, 0.0555555522441864, 0, 0.0833333283662796, 0, 0.10810810327529907, 0.05714285373687744, 0.09302324801683426, 0.10810810327529907, 0, 0.0625, 0, 0.054054051637649536, 0, 0.05128204822540283 ]
HJepXaVYDr
true
[ "The paper designs two algorithms for the stochastic AUC maximization problem with state-of-the-art complexities when using deep neural network as predictive model, which are also verified by empirical studies." ]
[ "Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute.", "The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation.", "Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical.", "Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward.", "Unfortunately, without tricks like resetting to points along the trajectory, HER might take a very long time to discover how to reach certain areas of the state-space.", "In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms.", "Furthermore, our method can be used when only trajectories without expert actions are available, which can leverage kinestetic or third person demonstration.", "Reinforcement Learning (RL) has shown impressive results in a plethora of simulated tasks, ranging from attaining super-human performance in video-games BID18 BID35 and board-games (Silver et al., 2017) , to learning complex locomotion behaviors BID34 BID4 .", "Nevertheless, these successes are shyly echoed in real world robotics (Riedmiller et BID36 .", "This is due to the difficulty of setting up the same learning environment that is enjoyed in simulation.", "One of the critical assumptions that are hard to obtain in the real world are the access to a reward function.", "Self-supervised methods have the power to overcome this limitation.A very versatile and reusable form of self-supervision for robotics is to learn how to reach any previously observed state upon demand.", "This problem can be formulated as training a goal-conditioned policy BID14 BID27 that seeks to obtain the indicator reward of having the observation exactly match the goal.", "Such a reward does not require any additional instrumentation of the environment beyond the sensors the robot already has.", "But in practice, this reward is never observed because in continuous spaces like the ones in robotics, the exact same observation is never observed twice.", "Luckily, if we are using an off-policy RL algorithm BID17 BID11 , we can \"relabel\" a collected trajectory by replacing its goal by a state actually visited during that trajectory, therefore observing the indicator reward as often as we wish.", "This method was introduced as Hindsight Experience Replay BID0 or HER.In theory these approaches could learn how to reach any goal, but the breadth-first nature of the algorithm makes that some areas of the space take a long time to be learned BID7 .", "This is specially challenging when there are bottlenecks between different areas of the statespace, and random motion might not traverse them easily BID5 .", "Some practical examples of this are pick-and-place, or navigating narrow corridors between rooms, as illustrated in Fig. 
5 in appendix depicting the diverse set of environments we work with.", "In both cases a specific state needs to be reached (grasp the object, or enter the corridor) before a whole new area of the space is discovered (placing the object, or visiting the next room).", "This problem could be addressed by engineering a reward that guides the agent towards the bottlenecks, but this defeats the purpose of trying to learn without direct reward supervision.", "In this work we study how to leverage a few demonstrations that traverse those bottlenecks to boost the learning of goal-reaching policies.Learning from Demonstrations, or Imitation Learning (IL), is a well-studied field in robotics BID15 BID25 BID2 .", "In many cases it is easier to obtain a few demonstrations from an expert than to provide a good reward that describes the task.", "Most of the previous work on IL is centered around trajectory following, or doing a single task.", "Furthermore it is limited by the performance of the demonstrations, or relies on engineered rewards to improve upon them.", "In this work we study how IL methods can be extended to the goal-conditioned setting, and show that combined with techniques like HER it can outperform the demonstrator without the need of any additional reward.", "We also investigate how the different methods degrade when the trajectories of the expert become less optimal, or less abundant.", "Finally, the method we develop is able to leverage demonstrations that do not include the expert actions.", "This is very convenient in practical robotics where demonstrations might have been given by a motion planner, by kinestetic demonstrations (moving the agent externally, and not by actually actuating it), or even by another agent.", "To our knowledge, this is the first framework that can boost goal-conditioned policy learning with only state demonstrations.", "Hindsight relabeling can be used to learn useful behaviors without any reward supervision for goal-conditioned tasks, but they are inefficient when the state-space is large or includes exploration bottlenecks.", "In this work we show how only a few demonstrations can be leveraged to improve the convergence speed of these methods.", "We introduce a novel algorithm, goal-GAIL, that converges faster than HER and to a better final performance than a naive goal-conditioned GAIL.", "We also study the effect of doing expert relabeling as a type of data augmentation on the provided demonstrations, and demonstrate it improves the performance of our goal-GAIL as well as goal-conditioned Behavioral Cloning.", "We emphasize that our goal-GAIL method only needs state demonstrations, without using expert actions like other Behavioral Cloning methods.", "Finally, we show that goal-GAIL is robust to sub-optimalities in the expert behavior." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.13333332538604736, 0.08695651590824127, 0, 0.17391303181648254, 0.04255318641662598, 0.178571417927742, 0, 0.10344827175140381, 0, 0.05128204822540283, 0.04999999329447746, 0.07692307233810425, 0.0833333283662796, 0.04999999329447746, 0.0476190410554409, 0.06896550953388214, 0.1269841194152832, 0.08695651590824127, 0.03999999538064003, 0.039215680211782455, 0.08163265138864517, 0.10344827175140381, 0.08888888359069824, 0.04999999329447746, 0.1463414579629898, 0.1090909019112587, 0.09999999403953552, 0.05128204822540283, 0.11320754140615463, 0.1463414579629898, 0.11538460850715637, 0.09090908616781235, 0.3333333432674408, 0.19607841968536377, 0.0476190410554409, 0.0555555522441864 ]
HkglHcSj2N
true
[ "We tackle goal-conditioned tasks by combining Hindsight Experience Replay and Imitation Learning algorithms, showing faster convergence than the first and higher final performance than the second." ]
[ "Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to `effortlessly' extract desired representations from many large-scale datasets.", "However, generalization bounds for this setting is still missing.\n", "In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss which utilizes the \\emph{Herbst Argument} for the log-Sobolev inequality to bound the moment generating function of the learners risk.", "Deep neural networks are ubiquitous across disciplines and often achieve state of the art results (e.g., Krizhevsky et al. (2012) ; Simonyan & Zisserman (2014) ; He et al. (2016) ).", "Albeit neural networks are able to encode highly complex input-output relations, in practice, they do not tend to overfit (Zhang et al., 2016) .", "This tendency to not overfit has been investigated in numerous works on generalization bounds (Langford & Shawe-Taylor, 2002; Langford & Caruana, 2002; Bartlett et al., 2017a; 2019; McAllester, 2003; Germain et al., 2016; Dziugaite & Roy, 2017) .", "Indeed, many generalization bounds apply to neural networks.", "However, most of these bounds assume that the loss function is bounded (Bartlett et al., 2017a; Neyshabur et al., 2017; Dziugaite & Roy, 2017) .", "Unfortunately, this assumption excludes the popular negative log-likelihood (NLL) loss, which is instrumental to Bayesian neural networks that have been used extensively to calibrate model performance and provide uncertainty measures to the model prediction.", "In this work we introduce a new PAC-Bayesian generalization bound for NLL loss of deep neural networks.", "Our work utilizes the Herbst argument for the logarithmic-Sobolev inequality (Ledoux, 1999) in order to bound the moment-generating function of the model risk.", "Broadly, our PACBayesian bound is comprised of two terms: The first term is dominated by the norm of the gradients with respect to the input and it describes the expressivity of the model over the prior distribution.", "The second term is the KL-divergence between the learned posterior and the prior, and it measures the complexity of the learning process.", "In contrast, bounds for linear models or bounded loss functions lack the term that corresponds to the expressivity of the model over the prior distribution and therefore are the same when applied to shallow and deep models.", "We empirically show that our PAC-Bayesian bound is tightest when we learn the mean and variance of each parameter separately, as suggested by Blundell et al. 
(2015) in the context of Bayesian neural networks (BNNs).", "We also show that the proposed bound offers several insights regarding model architecture, optimization, and prior distribution selection.", "We demonstrate that such optimization minimizes the gap between the risk and the empirical risk compared to the standard Bernoulli dropout and other Bayesian inference approximations, while being consistent with the theoretical findings.", "Additionally, we explore in-distribution and out-of-distribution examples to show that such optimization produces better uncertainty estimates than the baseline.", "PAC-Bayesian bounds for the NLL loss function are intimately related to learning Bayesian inference (Germain et al., 2016) .", "Recently, many works have applied various posteriors in Bayesian neural networks.", "Gal & Ghahramani (2015) ; Gal (2016) introduce a Bayesian inference approximation using Monte Carlo (MC) dropout, which approximates a Gaussian posterior using Bernoulli dropout.", "Srivastava et al. (2014) introduced Gaussian dropout, which effectively creates a Gaussian posterior that couples the mean and the variance of the learned parameters.", "Kingma et al. (2015) explored the relation of this posterior to log-uniform priors, while Blundell et al. (2015) suggest taking a full Bayesian perspective and learning separately the mean and the variance of each parameter.", "Our work uses the bridge between PAC-Bayesian bounds and Bayesian inference, as described by Germain et al. (2016) , to find the optimal prior parameters in the PAC-Bayesian setting and apply them in the Bayesian setting.", "Most of the literature regarding Bayesian modeling revolves around a two-step formalism (Bernardo & Smith, 2009) : (1) a prior is specified for the parameters of the deep net; (2) given the training data, the posterior distribution over the parameters is computed and used to quantify predictive uncertainty.", "Since exact Bayesian inference is computationally intractable for neural networks, approximations are used, including MacKay (1992); Hernández-Lobato & Adams (2015); Hasenclever et al. (2017); Balan et al. (2015) ; Springenberg et al. (2016) .", "In this study we follow this two-step formalism; in particular, we follow a similar approach to Blundell et al. (2015) , in which we learn the mean and standard deviation for each parameter of the model using variational Bayesian practice.", "Our experimental validation emphasizes the importance of learning both the mean and the variance.", "In the following study we present a new PAC-Bayesian generalization bound for learning a deep net using the NLL loss function.", "The proof relies on bounding the log-partition function using the squared norm of the gradients with respect to the input.", "Experimental validation shows that the resulting bound provides insight for better model optimization and prior distribution search.", "We demonstrate that learning the mean and STD of all parameters, together with optimizing the prior over the parameters, leads to better uncertainty estimates than the baselines and makes it harder to overfit." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.072727270424366, 0.1599999964237213, 0.23255813121795654, 0.08888888359069824, 0, 0, 0, 0.052631575614213943, 0, 0.3125, 0.05714285373687744, 0, 0, 0.13333332538604736, 0.0833333283662796, 0.060606054961681366, 0.0476190447807312, 0, 0.1764705777168274, 0, 0.05405404791235924, 0.05405404791235924, 0.04651162400841713, 0.04651162400841713, 0.07407406717538834, 0.045454539358615875, 0.0833333283662796, 0, 0.29411762952804565, 0, 0.0624999962747097, 0.09756097197532654 ]
HkgR8erKwB
true
[ "We derive a new PAC-Bayesian Bound for unbounded loss functions (e.g. Negative Log-Likelihood). " ]
[ "Data augmentation techniques, e.g., flipping or cropping, which systematically enlarge the training dataset by explicitly generating more training samples, are effective in improving the generalization performance of deep neural networks.", "In the supervised setting, a common practice for data augmentation is to assign the same label to all augmented samples of the same source.", "However, if the augmentation results in large distributional discrepancy among them (e.g., rotations), forcing their label invariance may be too difficult to solve and often hurts the performance.", "To tackle this challenge, we suggest a simple yet effective idea of learning the joint distribution of the original and self-supervised labels of augmented samples.", "The joint learning framework is easier to train, and enables an aggregated inference combining the predictions from different augmented samples for improving the performance.", "Further, to speed up the aggregation process, we also propose a knowledge transfer technique, self-distillation, which transfers the knowledge of augmentation into the model itself.", "We demonstrate the effectiveness of our data augmentation framework on various fully-supervised settings including the few-shot and imbalanced classification scenarios." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.1599999964237213, 0.19999998807907104, 0.12244897335767746, 0.2857142686843872, 0.1395348757505417, 0.2380952388048172, 0.5641025304794312 ]
SkliR1SKDS
false
[ "We propose a simple self-supervised data augmentation technique which improves performance of fully-supervised scenarios including few-shot learning and imbalanced classification." ]
[ "Long short-term memory (LSTM) networks allow to exhibit temporal dynamic behavior with feedback connections and seem a natural choice for learning sequences of 3D meshes.", "We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes.", "To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape.", "A two branch LSTM based network architecture is chosen to learn the representations and dynamics of the crash during the simulation.", "The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer.", "It uses an encoder LSTM to map an input sequence into a fixed length vector representation.", "On this representation one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed.", "The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics.", "Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well.", "Data driven virtual product design is nowadays an essential tool in the automotive industry saving time and resources during the development process.", "For a new car model, numerical crash simulations are performed where design parameters are changed to study their effects on physical and functional properties of the car such as firewall intrusion, weight, or cost (Fang et al., 2017) .", "Since one simulation run takes a couple of hours on a compute cluster, running a large number of simulation is not feasible.", "Therefore, a system that is able to use a limited dataset and predict new simulations would make the development process faster and more efficient.", "The rise of deep neural networks (DNNs) in recent years encourages further research and industrial usages.", "Besides manifold research for autonomous driving, it is natural for the automotive industry to seek and evaluate the possible applications of DNNs also in the product design stages.", "As an example, we investigate car crash tests, in which for example the plate thickness of certain parts strongly influences the bending behavior of structural beams and as a result also the intrusion of the firewall into the passenger compartment.", "Here, numerical crash simulations for different variations of such thicknesses are used as a dataset for learning.", "The aim is to design a system based on a DNN architecture that learns the crash behavior and would be able to imitate the crash dynamics.", "Car crash simulations are based on a mathematical model of the plastic deformations and other physical and mechanical effects.", "They are defined on a computing mesh of currently up to three million points and up to a hundred time steps are stored.", "Each data instance is a simulation run-of pre-selected parts and/or time steps-that is very high dimensional.", "Working with this data directly exasperates any machine learning (ML) method, but a transformation of this data presented in IzaTeran & Garcke (2019) allows to obtain a new representation that uses only a small number of coefficients to represent the high resolution numerical solutions.", "The transformed representation is 
employed here to compress the mesh geometries into feature sets suitable for neural networks, while avoiding the direct handling of geometries in the machine learning method.", "This way, a network designed for video prediction and embedding, based on a long short-term memory (LSTM) architecture (Srivastava et al., 2015), can be adapted for mesh data.", "Since an LSTM is a recurrent neural network that exhibits temporal dynamic behavior through feedback connections, it is a natural choice for learning the 3D sequences.", "The aim is that the network learns the observed crash behavior, including translation, rotation, or deformation of the parts in the model.", "Since the contribution of this paper is using DNNs for analyzing car crash data, the related works are categorized into a group of publications in which DNNs are extended for 3D graphics and one that concerns the use of ML techniques for analyzing car crash simulations.", "For the latter, one typically uses different embedding techniques to obtain a low-dimensional representation of the intrinsic underlying data space and to cluster simulations with similar characteristics together (Bohn et al., 2013; Diez, 2018; Garcke & Iza-Teran, 2015; Iza-Teran & Garcke, 2019; Le Guennec et al., 2018).", "The majority of publications about 3D DNNs tried to extend CNNs to 3D space and focus on description learning and shape correspondence, also known as geometric deep learning (Monti et al., 2017; Litany et al., 2017; Halimi et al., 2018; Maturana & Scherer, 2015; Su et al., 2015; Wang et al., 2017), and some developed CNN filters for unorganized point clouds (Qi et al., 2017a; b).", "This very active line of research is so far very compute-intensive, and to our knowledge there is no extension of ConvLSTM to 3D space, but for prediction one would need an LSTM (or GAN) approach.", "However, a couple of very recent works introduce new feature sets and architectures for mesh embedding using autoencoders and LSTM (Tan et al., 2018b; Qiao et al., 2018; Tan et al., 2018a).", "The feature representation uses local shape deformations obtained by solving an optimization problem at each node and a global optimization to compensate for rotations.", "They have shown that, after training the network, a sequence of 3D shapes can be generated as an animation by performing operations in the latent space.", "The bidirectional LSTM architecture is shown to outperform autoencoders (Tan et al., 2018a).", "An LSTM-based learning network has also been proposed in Qiao et al. 
(2018), where the obtained feature representation is then taken as the temporal data to be fed into a CNN that takes the features and represents them in a lower-dimensional latent space.", "This information is subsequently fed into the LSTM module.", "Video frame prediction has been at the center of researchers' attention for a while, but there have been only very few extensions of these works to the 3D case so far.", "The problem is addressed here by introducing spectral coefficients to encode functions on the geometry, together with a two-branch LSTM-based architecture without any convolutional layer, which has already proven to be feasible for video embedding and future frame prediction.", "The employed LBO basis and the resulting spectral coefficients provide a trade-off between accuracy and required computational resources.", "We encode the 3D shapes by a set of features using the eigenvectors of the LBO.", "For empirical evaluation, a dataset is employed from a set of numerical simulations of a car during a crash under different design conditions, i.e. plate thickness variations.", "The appearance of a bifurcation during the crash in the dataset motivates an error analysis done for both groups, to see how well the network performs in the presence of a bifurcation.", "In both branches, the network is able to produce very good predictions, while we observe different error localisations for reconstruction versus prediction.", "Moreover, the 2D visualization of the reconstruction branch shows the bifurcation as two clusters.", "In any case, from a relatively small amount of data, the proposed network using spectral coefficients is able to learn complex dynamical structural mechanical behaviors.", "Future work could go toward scaling the pipeline for learning the crash dynamics of the entire car and larger mesh sizes, which increases the needed computational effort.", "On the other hand, one might be able to use a smaller number of eigenvectors by not simply selecting the first few, but those with a large variance in the spectral coefficients of the data set.", "Furthermore, in practical settings, re-meshing of the parts can take place; here, using spectral coefficients can ease this step, since one can encode shapes with different numbers of vertices as fixed-size feature vectors, as long as the geometry is (approximately) isometric.", "Still, there is the overall question if and how a trained network can be evaluated for changed geometries (a relevant question for any 3D DNN approach introduced so far) or different crash setups.", "Moreover, adding design parameters could also improve the accuracy but requires modifications of the network architecture.", "For practical applications, as each crash simulation requires hours of heavy computation running computational solvers on a large cluster, a system that is able to learn the representation of experiments from very little training data and generate the predicted simulation results for new design parameters would save many resources.", "Moreover, the ultimate goal of research along this direction would be a data-driven system that receives very little information about the simulation (like design parameters) and outputs the crash sequences with minimum error.", "Another application of the current system could be as a feasibility detector while running the simulation on the compute cluster.", "Using the network, one could check if the simulation goes well or if for some reason it should be terminated.", "From the current stage of the system, one 
would be able to generate the parts of the future simulation simply by extrapolating the learned spectral coefficients, using a few initial time steps, which are already computed on the cluster, as inputs.", "If the distance between the network predictions and the simulation gets very large over the iterations, the simulation can be terminated since it failed the feasibility check.", "Further, related works such as Qiao et al. (2018) introduce a specific feature set and LSTM autoencoders, where also a graph convolution operation is required.", "This approach could be applied to car crash data under the assumption that the local optimization can still be applied for large deformations such as the ones occurring in our applications.", "Further, the resulting features are long vectors, which results in 8 hours of learning on a CPU/GPU system for a data set similar in size to ours, whereas we need 30 minutes.", "Nevertheless, a comparison of these two approaches will be worthwhile future work.", "A APPENDIX: figure panels for time steps 6 to 10." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.1818181723356247, 0.15789473056793213, 0.6486486196517944, 0.1818181723356247, 0.12121211737394333, 0.1904761791229248, 0.08510638028383255, 0.1538461446762085, 0.10256409645080566, 0.2181818187236786, 0.0555555522441864, 0.14999999105930328, 0.11764705181121826, 0.1395348757505417, 0.1538461446762085, 0.23529411852359772, 0.3499999940395355, 0.3333333134651184, 0.10810810327529907, 0, 0.1428571343421936, 0.09302324801683426, 0.17777776718139648, 0.1860465109348297, 0.2702702581882477, 0.22641508281230927, 0.12903225421905518, 0.0882352888584137, 0.1599999964237213, 0.1304347813129425, 0.09756097197532654, 0.1395348757505417, 0.12121211737394333, 0.20000000298023224, 0.14814814925193787, 0.13333332538604736, 0.24137930572032928, 0.11428570747375488, 0.19354838132858276, 0.1904761791229248, 0.1860465109348297, 0.09999999403953552, 0.2666666507720947, 0.1395348757505417, 0.2380952388048172, 0.07999999821186066, 0.072727270424366, 0.20408162474632263, 0.1818181723356247, 0.158730149269104, 0.1599999964237213, 0.11764705181121826, 0.0555555522441864, 0.07547169178724289, 0.1538461446762085, 0.0952380895614624, 0.09302324801683426, 0.04255318641662598, 0.13333332538604736, 0.07407406717538834 ]
BklekANtwr
true
[ "A two branch LSTM based network architecture learns the representation and dynamics of 3D meshes of numerical crash simulations." ]
[ "The purpose of an encoding model is to predict brain activity given a stimulus.", "In this contribution, we attempt at estimating a whole brain encoding model of auditory perception in a naturalistic stimulation setting.", "We analyze data from an open dataset, in which 16 subjects watched a short movie while their brain activity was being measured using functional MRI.", "We extracted feature vectors aligned with the timing of the audio from the movie, at different layers of a Deep Neural Network pretrained on the classification of auditory scenes.", "fMRI data was parcellated using hierarchical clustering on 500 parcels, and encoding models were estimated using a fully connected neural network with one hidden layer, trained to predict the signals for each parcel from the DNN features.", "Individual encoding models were successfully trained and predicted brain activity on unseen data, in parcels located in the superior temporal lobe, as well as dorsolateral prefrontal regions, which are usually considered as areas involved in auditory and language processing.", "Taken together, this contribution extends previous attempts on estimating encoding models, by showing the ability to model brain activity using a generic DNN (ie not specifically trained for this purpose) to extract auditory features, suggesting a degree of similarity between internal DNN representations and brain activity in naturalistic settings.", "One important motivation for incorporating machine learning in neuroscientific discovery is the establishment of predictive models, as opposed to models based on statistical inference [1] .", "While the latter are unable to generalize to a new dataset, the former aim at sucessful generalization.", "In particular, encoding models aim at predicting brain activity given a model of the stimulus presented to the subject.", "A successful model should enable generalization to unseen data, enabling a better understanding of the underlying brain functions.", "Furthermore, an accurate encoding model could potentially be used to enhance machine learning, by providing an auxiliary source of training data, as recent evidence suggest that actual brain activity can guide machine learning [2] .", "In this study, we tested whether a pretrained network could be used to estimate encoding models, in the case of naturalistic auditory perception.", "We were able to train encoding models on individual subjects to predict brain activity using the deepest layers of SoundNet, using less than 20 minutes of fMRI data.", "The obtained models best predicted the activity in brain areas that are part of a language-related network.", "However, the current study has the following limitations.", "First, we extracted features from the auditory part of the stimuli, while the modeled brain activity involves many other brain functions, namely visual perception, as well as higher level cognitive functions such as memory and emotional responses.", "This probably explains why we obtain R 2 = 0.5 in the best case.", "Providing a richer stimuli representation using more general purpose feature extractors would probably enable a more complete model of brain activity.", "Second, we estimated brain parcellations on single subject data using only 20 minutes of MRI, which might not be enough to obtain a reliable set of ROIs [6] .", "Further studies should use either more repetitions on each subject, or attempt at learning parcellations across subjects, after having spatially normalized each individual to a template.", "Third, we 
did not find a clear relationship between the spatial extent of our encoding models and the SoundNet layer they were based on.", "This could be due to the fact that SoundNet was trained independently of the brain data, and was never optimized for encoding models.", "One possible avenue would be to perform fine-tuning, or to retrain from scratch, in order to optimize the estimation of encoding models.", "Finally, in our approach we ignored the temporal dynamics of both the feature vectors and the fMRI data, as well as the dependencies between ROIs implied by brain connectivity.", "In future studies, we will consider the use of recurrent neural networks, as well as graph representation learning [7], in order to tackle those issues." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3030303120613098, 0.2631579041481018, 0.3181818127632141, 0.23255813121795654, 0.14814814925193787, 0.22641508281230927, 0.22580644488334656, 0.09090908616781235, 0.05882352590560913, 0.21621620655059814, 0.1621621549129486, 0.15686273574829102, 0.1904761791229248, 0.22727271914482117, 0.277777761220932, 0, 0.23529411852359772, 0.05882352590560913, 0.21052631735801697, 0.1304347813129425, 0.045454539358615875, 0.15789473056793213, 0.19999998807907104, 0.14999999105930328, 0.22727271914482117, 0.09090908616781235 ]
SyxENQtL8H
true
[ "Feature vectors from SoundNet can predict brain activity of subjects watching a movie in auditory and language related brain regions." ]
[ "In this paper, we describe the \"implicit autoencoder\" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions.", "We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning.", "Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder.", "Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution.", "For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images.", "We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning.", "Deep generative models have achieved remarkable success in recent years.", "One of the most successful models is the generative adversarial network (GAN) BID7 , which employs a two player min-max game.", "The generative model, G, samples the noise vector z ∼ p(z) and generates the sample G(z).", "The discriminator, D(x), is trained to identify whether a point x comes from the data distribution or the model distribution; and the generator is trained to maximally confuse the discriminator.", "The cost function of GAN is DISPLAYFORM0 GANs can be viewed as a general framework for learning implicit distributions BID18 BID12 .", "Implicit distributions are probability distributions that are obtained by passing a noise vector through a deterministic function that is parametrized by a neural network.", "In the probabilistic machine learning problems, implicit distributions trained with the GAN framework can learn distributions that are more expressive than the tractable distributions trained with the maximum-likelihood framework.Variational autoencoders (VAE) BID13 BID20 are another successful generative models that use neural networks to parametrize the posterior and the conditional likelihood distributions.", "Both networks are jointly trained to maximize a variational lower bound on the data log-likelihood.", "One of the limitations of VAEs is that they learn factorized distributions for both the posterior and the conditional likelihood distributions.", "In this paper, we propose the \"implicit autoencoder\" (IAE) that uses implicit distributions for learning more expressive posterior and conditional likelihood distributions.", "Learning a more expressive posterior will result in a tighter variational bound; and learning a more expressive conditional likelihood distribution will result in a global vs. 
local decomposition of information between the prior and the conditional likelihood.", "This enables the latent code to capture only the information that we care about, such as the high-level and abstract information, while the remaining low-level information of the data is separately captured by the noise vector of the implicit decoder. Implicit distributions have been previously used for learning generative models in works such as adversarial autoencoders (AAE) BID16, adversarial variational Bayes (AVB) (Mescheder et al., 2017), ALI (Dumoulin et al., 2016), BiGAN BID5, and other works such as BID12 BID22.", "The global vs. local decomposition of information has also been studied in previous works such as PixelCNN autoencoders (van den Oord et al., 2016), PixelVAE BID9, variational lossy autoencoders BID4, PixelGAN autoencoders BID15, or other works such as BID2 BID8 BID0.", "In the next section, we first propose the IAE and then establish its connections with the related works.", "In this paper, we proposed the implicit autoencoder, which is a generative autoencoder that uses implicit distributions to learn expressive variational posterior and conditional likelihood distributions.", "We showed that in IAEs, the information of the data distribution is decomposed between the prior and the conditional likelihood.", "When using a low-dimensional Gaussian distribution for the global code, we showed that the IAE can disentangle high-level and abstract information from the low-level and local statistics.", "We also showed that by using a categorical latent code, we can learn discrete factors of variation and perform clustering and semi-supervised learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.2978723347187042, 0.25531914830207825, 0.5, 0.2857142686843872, 0.2666666507720947, 0.2857142686843872, 0.05882352590560913, 0.1818181723356247, 0.1538461446762085, 0.1666666567325592, 0.2222222238779068, 0.1428571343421936, 0.380952388048172, 0.10256409645080566, 0.4390243887901306, 0.4444444477558136, 0.3265306055545807, 0.16091953217983246, 0.03278687968850136, 0.14999999105930328, 0.5416666865348816, 0.3414634168148041, 0.2448979616165161, 0.3478260934352875 ]
HyMRaoAqKX
true
[ "We propose a generative autoencoder that can learn expressive posterior and conditional likelihood distributions using implicit distributions, and train the model using a new formulation of the ELBO." ]
[ "Strong inductive biases allow children to learn in fast and adaptable ways.", "Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another.", "In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption.", "Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation.", "We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.", "Children are remarkable learners, and thus their inductive biases should interest machine learning researchers.", "To help learn the meaning of new words efficiently, children use the \"mutual exclusivity\" (ME) bias -the assumption that once an object has one name, it does not need another (Markman & Wachtel, 1988) (Figure 1 ).", "In this paper, we examine whether or not standard neural networks demonstrate the mutual exclusivity bias, either as a built-in assumption or as a bias that develops through training.", "Moreover, we examine common benchmarks in machine translation and object recognition to determine whether or not a maximally efficient learner should use mutual exclusivity.", "The mutual exclusivity task used in cognitive development research (Markman & Wachtel, 1988) .", "Children tend to associate the novel word (\"dax\") with the novel object (right).", "When children endeavour to learn a new word, they rely on inductive biases to narrow the space of possible meanings.", "Children learn an average of about 10 new words per day from the age of one until the end of high school (Bloom, 2000) , a feat that requires managing a tractable set of candidate meanings.", "A typical word learning scenario has many sources of ambiguity and uncertainty, including ambiguity in the mapping between words and referents.", "Children hear multiple words and see multiple objects within a single scene, often without clear supervisory signals to indicate which word goes with which object (Smith & Yu, 2008) .", "The mutual exclusivity assumption helps to resolve ambiguity in how words maps to their referents.", "Markman & Wachtel (1988) examined scenarios like Figure 1 that required children to determine the referent of a novel word.", "For instance, children who know the meaning of \"cup\" are presented with two objects, one which is familiar (a cup) and another which is novel (an unusual object).", "Given these two objects, children are asked to \"Show me a dax,\" where \"dax\" is a novel nonsense word.", "Markman and Wachtel found that children tend to pick the novel object rather than the familiar object.", "Although it is possible that the word \"dax\" could be another word for referring to cups, children predict that the novel word refers to the novel object -demonstrating a \"mutual exclusivity\" bias that familiar objects do not need another name.", "This is only a preference; with enough evidence, children must eventually override this bias to learn hierarchical categories: a Dalmatian can be called a \"Dalmatian,\" a \"dog\", or a \"mammal\" (Markman & Wachtel, 1988; Markman, 1989) .", "As an often useful but sometimes misleading cue, the ME bias guides children when learning the words of their native language.", "It is instructive to compare word learning in children and machines, since word learning is also a 
widely studied problem in machine learning and artificial intelligence.", "There has been substantial", "(a)", "(b)", "Figure 2: Evaluating mutual exclusivity in a feedforward", "(a) and seq2seq", "(b) neural network.", "(a) After training on a set of known objects, a novel label (\"dax\") is presented as a one-hot input vector.", "The network maps this vector to a one-hot output vector representing the predicted referent, through an intermediate embedding layer and an optional hidden layer (not shown).", "A representative output vector produced by a trained network is shown, placing almost all of the probability mass on known outputs.", "(b) A similar setup for mapping sequences of labels to their referents.", "During the test phase a novel label \"dax\" is presented and the ME Score at that output position is computed.", "recent progress in object recognition, much of which is attributed to the success of deep neural networks and the availability of very large datasets (LeCun et al., 2015) .", "But when only one or a few examples of a novel word are available, deep learning algorithms lack human-like sample efficiency and flexibility (Lake et al., 2017) .", "Insights from cognitive science and cognitive development can help bridge this gap, and ME has been suggested as a psychologically-informed assumption relevant to machine learning .", "In this paper, we examine standard neural networks to understand if they have an ME bias.", "Moreover, we analyze whether or not ME is a good assumption in lifelong variants of common translation and object recognition tasks.", "The results show that standard neural networks fail to reason by mutual exclusivity when trained in a variety of typical settings.", "The models fail to capture the perfect one-to-one mapping (ME bias) seen in the synthetic data, predicting that new symbols map to familiar outputs in a many-to-many fashion.", "Although our focus is on neural networks, this characteristic is not unique to this model class.", "We posit it more generally affects flexible models trained to maximize log-likelihood.", "In a trained network, the optimal activation value for an unused output node is zero: for any given training example, increasing value of an unused output simply reduces the available probability mass for the Name Languages Sentence Pairs Vocabulary Size IWSLT'14 (Freitag et al., 2014) Eng.-Vietnamese", "∼133K 17K(en), 7K(vi) WMT'14 Eng.-German ∼4.5", "M 50K(en), 50K(de) WMT'15 (Luong & Manning, 2016) Eng.-Czech ∼15.8", "M 50K(en), 50K(cs) target output. Using other", "loss functions could result in different outcomes, but we also did not find that weight decay and entropy regularization of reasonable values could fundamentally alter the use of novel outputs. 
In the next", "section, we investigate if the lack of ME could hurt performance on common learning tasks such as machine translation and image classification.", "Children use the mutual exclusivity (ME) bias to learn the meaning of new words efficiently, yet standard neural networks learn very differently.", "Our results show that standard deep learning algorithms lack the ability to reason with ME, including feedforward networks and recurrent sequenceto-sequence models trained to maximize log-likelihood with common regularizers.", "Beyond simply lacking this bias, these networks learn an anti-ME bias, preferring to map novel inputs to familiar and frequent (rather than unfamiliar) output classes.", "Our results also show that these characteristics The plots show the probability that a new input image belongs to an unseen class P (N |t), as a function of the number of images t seen so far during training (blue), with its standard deviation.", "This measure is contrasted with the ME score of a neural network classifier trained through a similar run of the dataset (orange).", "are poorly matched to more realistic lifelong learning scenarios where novel classes can appear at any point, as demonstrated in the translation and classification experiments presented here.", "Neural nets may be currently stymied by their lack of ME bias, ignoring a powerful assumption about the structure of learning tasks.", "Mutual exclusivity is relevant elsewhere in machine learning.", "Recent work has contrasted the ability of humans and neural networks to learn compositional instructions from just one or a few examples, finding that neural networks lack the ability to generalize systematically (Lake & Baroni, 2018; .", "The authors suggest that people rely on ME in these learning situations , and thus few-shot learning approaches could be improved by utilizing this bias as well.", "In our analyses, we show that neural networks tend to learn the opposite bias, preferring to map novel inputs to familiar outputs.", "More generally, ME can be generalized from applying to \"novel versus familiar\" stimuli to instead handling \"rare versus frequent\" stimuli (e.g., in translation, rare source words may map to rare target words).", "The utility of reasoning by ME could be extended to early stages of epoch based learning too.", "For example, during epoch-based learning, neural networks take longer to acquire rare stimuli and patterns of exceptions (McClelland & Rogers, 2003) , often mishandling these items for many epochs by mapping them to familiar responses.", "Another direction for future work is studying how the ME bias should interact with hierarchical categorization tasks.", "We posit that the ME assumption will be increasingly important as learners tackle more continual, lifelong, and large-scale learning challenges (Mitchell et al., 2018) .", "Mutual exclusivity is an open challenge for deep neural networks, but there are promising avenues for progress.", "The ME bias will not be helpful for every problem, but it is equally clear that the status quo is sub-optimal: models should not have a strong anti-ME bias regardless of the task and dataset demands.", "Ideally, a model would decide autonomously how strongly to use ME (or not) based on the demands of the task.", "For instance, in our synthetic example, an ideal learner would discover the one-to-one correspondence and use this perfect ME bias as a meta-strategy.", "If the dataset has more many-to-one correspondences, it would adopt another meta-strategy.", 
"This meta-strategy could even change depending on the stage of learning, yet such an approach is not currently available for training models.", "Previous cognitive models of word learning have found ways to incorporate the ME bias (Kachergis et al., 2012; McMurray et al., 2012; Frank et al., 2009; Lambert et al., 2005) , although in ways that do not generalize to training deep neural networks.", "While successful in some domains, these models are highly simplified or require built-in mechanisms for implementing ME, making them so far impractical for use in realistic settings.", "As outlined above, it would be ideal to acquire a ME bias via meta learning or learning to learn (Allen et al., 2019; Snell et al., 2017) , with the advantage of calibrating the bias to the dataset itself rather than assuming its strength a priori.", "For example, the meta learning model of Santoro et al. (2016) seems capable of learning an ME bias, although it was not specifically probed in this way.", "Recent work by Lake (2019) demonstrated that neural nets can learn to reason by ME if trained explicitly to do so, showing these abilities are within the repertoire of modern tools.", "However acquiring ME is just one step toward the goal proposed here: using ME to facilitate efficient lifelong learning or large-scale classification and translation.", "In conclusion, standard deep neural networks do not naturally reason by mutual exclusivity, but designing them to do so could lead to faster and more flexible learners.", "There is a compelling case for building models that learn through mutual exclusivity." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15789473056793213, 0.29629629850387573, 0.1702127605676651, 0.1818181723356247, 0.08510638028383255, 0.09999999403953552, 0.19354838132858276, 0.307692289352417, 0.19999998807907104, 0.1538461446762085, 0.1621621549129486, 0.17777776718139648, 0.14035087823867798, 0.13333332538604736, 0.07547169178724289, 0.19999998807907104, 0.1304347813129425, 0.038461532443761826, 0.045454539358615875, 0.09756097197532654, 0.10526315122842789, 0.10344827175140381, 0.1304347813129425, 0.1304347813129425, 0, 0.1764705777168274, 0, 0.06896551698446274, 0.045454539358615875, 0.08163265138864517, 0.04255318641662598, 0.052631575614213943, 0.045454539358615875, 0.1538461446762085, 0.037735845893621445, 0.12244897335767746, 0.1904761791229248, 0.08510638028383255, 0.2978723347187042, 0.15686273574829102, 0.09999999403953552, 0.052631575614213943, 0.030303025618195534, 0, 0, 0, 0.1071428507566452, 0.1666666567325592, 0.52173912525177, 0.18867923319339752, 0.12244897335767746, 0.1846153736114502, 0.08888888359069824, 0.2641509473323822, 0.1702127605676651, 0.1764705777168274, 0.14035087823867798, 0.1538461446762085, 0.30434781312942505, 0.072727270424366, 0.0952380895614624, 0.06666666269302368, 0.09302324801683426, 0.11764705181121826, 0.0952380895614624, 0.06896550953388214, 0.13333332538604736, 0.20408162474632263, 0.052631575614213943, 0.0833333283662796, 0.19999998807907104, 0.07843136787414551, 0.158730149269104, 0.15686273574829102, 0.1818181723356247, 0.16326530277729034, 0.15686273574829102, 0.1538461446762085 ]
S1lvn0NtwH
true
[ "Children use the mutual exclusivity (ME) bias to learn new words, while standard neural nets show the opposite bias, hindering learning in naturalistic scenarios such as lifelong learning." ]
[ "Cortical neurons process and integrate information on multiple timescales.", "In addition, these timescales or temporal receptive fields display functional and hierarchical organization.", "For instance, areas important for working memory (WM), such as prefrontal cortex, utilize neurons with stable temporal receptive fields and long timescales to support reliable representations of stimuli.", "Despite of the recent advances in experimental techniques, the underlying mechanisms for the emergence of neuronal timescales long enough to support WM are unclear and challenging to investigate experimentally.", "Here, we demonstrate that spiking recurrent neural networks (RNNs) designed to perform a WM task reproduce previously observed experimental findings and that these models could be utilized in the future to study how neuronal timescales specific to WM emerge.", "Previous studies have shown that higher cortical areas such as prefrontal cortex operate on a long timescale, measured as the spike-count autocorrelation decay constant at rest [1] .", "These long timescales have been hypothesized to be critical for performing working memory (WM) computations [2, 3] , but it is experimentally challenging to probe the underlying circuit mechanisms that lead to stable temporal properties.", "Recurrent neural network (RNN) models trained to perform WM tasks could be a useful tool if these models also utilize units with long heterogeneous timescales and capture previous experimental findings.", "However, such RNN models have not yet been identified.", "In this study, we construct a spiking RNN model to perform a WM task and compare the emerging timescales with the timescales derived from the prefrontal cortex of rhesus monkeys trained to perform similar WM tasks.", "We show that both macaque prefrontal cortex and the RNN model utilize units/neurons with long timescales during delay period to sustain stimulus information.", "In addition, the number of units with long timescales was significantly reduced in the RNN model trained to perform a non-WM task, further supporting the idea that neuronal timescales are task-specific and functionally organized.", "In this study, we employed a spiking RNN model of WM to investigate if the model exhibits and utilizes heterogeneous timescales for prolonged integration of information.", "We validated the model using an experimental dataset obtained from rhesus monkeys trained on WM tasks: the model and the primate prefrontal cortex both displayed similar heterogeneous neuronal timescales and incorporated units/neurons with long timescales to maintain stimulus information.", "The timescales from the RNN model trained on a non-WM task (Go-NoGo task) were markedly shorter, since units with long timescales were not required to support the simple computation.", "Future works include characterizing the network dynamics and the circuit motifs of the DMS RNN model to elucidate connectivity structures required to give rise to the diverse, stable temporal receptive fields specific to WM." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.2448979616165161, 0.1304347813129425, 0.2857142686843872, 0.1702127605676651, 0.18518517911434174, 0.23999999463558197, 0, 0.23999999463558197, 0.22727271914482117, 0.1538461446762085, 0.13333332538604736, 0.2181818187236786, 0.1702127605676651, 0.04081632196903229 ]
B1em4mFL8H
true
[ "Spiking recurrent neural networks performing a working memory task utilize long heterogeneous timescales, strikingly similar to those observed in prefrontal cortex." ]
[ "Conventional deep learning classifiers are static in the sense that they are trained on\n", "a predefined set of classes and learning to classify a novel class typically requires\n", "re-training.", "In this work, we address the problem of Low-shot network-expansion\n", "learning.", "We introduce a learning framework which enables expanding a pre-trained\n", "(base) deep network to classify novel classes when the number of examples for the\n", "novel classes is particularly small.", "We present a simple yet powerful distillation\n", "method where the base network is augmented with additional weights to classify\n", "the novel classes, while keeping the weights of the base network unchanged.", "We\n", "term this learning hard distillation, since we preserve the response of the network\n", "on the old classes to be equal in both the base and the expanded network.", "We\n", "show that since only a small number of weights needs to be trained, the hard\n", "distillation excels for low-shot training scenarios.", "Furthermore, hard distillation\n", "avoids detriment to classification performance on the base classes.", "Finally, we\n", "show that low-shot network expansion can be done with a very small memory\n", "footprint by using a compact generative model of the base classes training data\n", "with only a negligible degradation relative to learning with the full training set.", "In many real life scenarios, a fast and simple classifier expansion is required to extend the set of classes that a deep network can classify.", "For example, consider a cleaning robot trained to recognize a number of objects in a certain environment.", "If the environment is modified with an additional novel object, it is desired to be able to update the classifier by taking only a few images of that object and expand the robot classifier.", "In such a scenario, the update should be a simple procedure, based on a small collection of images captured in a non-controlled setting.", "Furthermore, such a low-shot network update should be fast and without access the entire training set of previously learned data.", "A common solution to classifier expansion is fine-tuning the network BID6 .", "However fine-tuning requires keeping a large amount of base training data in memory, in addition to collecting sufficient examples of the novel classes.", "Otherwise, fine-tuning can lead to degradation of the network accuracy on the base classes, also known as catastrophic forgetting BID0 .", "In striking contrast, for some tasks, humans are capable of instantly learning novel categories.", "Using one or only a few training examples humans are able to learn a novel class, without compromising previously learned abilities or having access to training examples from all previously learned classes.We consider the classifier expansion problem under the following constraints:1.", "Low-shot: very few samples of the novel classes are available.", "2. No forgetting: preserving classification performance on the base classes.", "3. 
Small memory footprint: no access to the base classes training data. In this work we introduce a low-shot network expansion technique, augmenting the capability of an existing (base) network trained on base classes by training additional parameters that enable it to classify novel classes.", "The expansion of the base network with additional parameters is performed in the last layers of the network. To satisfy the low-shot along with the no-forgetting constraints, we present a hard distillation framework.", "Distillation in neural networks BID5 is a process for training a target network to imitate another network.", "A loss function is added to the target network so that its output matches the output of the mimicked network.", "In standard soft distillation, the trained network is allowed to deviate from the mimicked network.", "Hard distillation, in contrast, enforces as a hard constraint that the output of the trained network for base classes matches the output of the mimicked network.", "We achieve hard distillation by keeping the weights of the base network intact, and learning only the newly added weights.", "Network expansion with hard distillation yields a larger network, distilling the knowledge of the base network in addition to providing augmented capacity to classify novel classes.", "We show that in the case of low-shot (only 1-15 examples of a novel class), hard distillation outperforms soft distillation.", "Moreover, since the number of additional parameters in the expanded network is small, the inference time of the new network is nearly identical to that of the base network. To maintain a small memory footprint, we refrain from saving the entire training set.", "Instead, we present a compact generative model, consisting of a collection of generative models fitted in the feature space to each of the base classes.", "We use a Gaussian Mixture Model (GMM) with a small number of mixtures, and show that it inflicts only minimal degradation in classification accuracy.", "Sampling from the generative GMM model is fast, reducing the low-shot training time and allowing fast expansion of the network. We define a benchmark for low-shot network expansion.", "The benchmark is composed of a series of tests of increasing complexity, ranging from simple tasks, where base and novel classes are from different domains, to difficult tasks, where base and novel classes are from the same domain and share objective visual similarities.", "We perform a comprehensive set of experiments on this challenging benchmark, comparing the performance of the proposed method to alternative methods. To summarize, the main contributions of the paper are: 1.", "A novel hard-distillation solution to a low-shot classifier expansion problem", "2. GMM as a sufficient generative model to represent base classes in a feature space", "3. 
A new benchmark for the low-shot classifier expansion problem 2 RELATED WORKS A common solution to the class-incremental learning problem is to use a Nearest-Neighbors (NN) based classifier in feature space.", "A significant advantage of a NN-based classifier is that it can be easily extended to classify a novel class, even when only a single example of the class is available (one-shot learning).", "However NN-based classifiers require keeping in the memory significant amount of training data from the base classes.", "BID7 proposed to use Nearest Class Mean (NCM) classifier, where each class is represented by a single prototype example which is the mean feature vector of all class examples.", "One major disadvantage of NCM and NN-based methods is that they are based on a fixed feature representation of the data.", "To overcome this problem BID7 proposed to learn a new distance function in the feature space using metric learning.The ideas of metric learning combined with the NN classifier resonate with recent work by on Matching Networks for one-shot learning, where both feature representation and the distance function are learned end-to-end with attention and memory augmented networks.", "The problem we consider in this paper is different from the one discussed by .", "We aim to expand existing deep classifier trained on large dataset to classify novel classes, rather than to create a general mechanism for one-shot learning.", "BID3 presented an innovative low-shot learning mechanism, where they proposed a Squared Gradient Magnitude regularization technique for an improved fixed feature representation learning designed for low-shot scenarios.", "They also introduced techniques to hallucinate additional training examples for novel data classes.", "In contrast, we present a method which aims to maximize performance in low-shot network expansion given a fixed representation, allowing expanding the representation based on novel low-shot data.", "Furthermore, in our work, we demonstrate the ability to expand the network without storing the entire base classes training data.Recently, BID9 proposed iCaRL -(Incremental Classifier and Representation Learning), to solve the class-incremental learning problem.", "iCaRL is based on Nearest-Mean-of-Exemplars classifier, similar to the NCM classifier of BID7 .", "In the iCaRL method, the feature representation is updated and the class means are recomputed from a small stored number of representative examples of the base classes.", "During the feature representation update, the network parameters are updated by minimizing a combined classification and distillation loss.", "The iCaRL method was introduced as a class-incremental learning method for large training sets.", "In Section 4 we discuss its adaptation to low-shot network expansion and compare it to our method.", "BID11 proposed the Progressive Network for adding new tasks without affecting the performance of old tasks.", "They propose freezing the parameters that were trained on old tasks and expand the network with a additional layers when training a new task.", "BID15 proposed the Progressive learning technique which solves the problem of online sequential learning in extreme learning machines paradigm (OS-ELM).", "The purpose of their work is to incrementally learn the last fully-connected layer of the network.", "When a sample from a novel class arrives, the last layer is expanded with additional parameters.", "The Progressive learning solution updates the last layer only sequentially and only 
works in the ELM framework (does not update internal layers of the network).", "In another work BID14 proposed an incremental learning technique which augments the base network with additional parameters in last fully connected layer to classify novel classes.", "Similar to iCaRL, they perform soft distillation by learning all parameters of the network.", "Instead of keeping historical training data, they propose phantom sampling -hallucinating data from past distribution modeled with Generative Adversarial Networks.In this work we propose a solution that borrows ideas from freeze-and-expand paradigm, improved feature representation learning, network distillation and modeling past data with a generative model.", "We propose to apply expansion to the last fully connected layer of a base network to enable classification on novel classes, and to deeper layers to extend and improve the feature representation.", "However, in contrast to other methods BID9 ; BID15 , we do not retrain the base network parameters, but only the newly introduced weights of the expansion.Moreover, the extended feature representation is learned from samples of base and novel classes.", "In contrast to BID3 , where the improved feature representation is learned from simulating low-shot scenarios on the base classes only, before the actual novel data is available.", "Finally, in order to avoid keeping all historical training data, we use Gaussian Mixture Model of the feature space as a generative model for base classes." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.1666666567325592, 0.8571428656578064, 0.09999999403953552, 0.1666666567325592, 0, 0, 0.08695651590824127, 0.1904761791229248, 0.43478259444236755, 0.0833333283662796, 0.1538461446762085, 0, 0, 0.09999999403953552, 0, 0.1666666567325592, 0.17391303181648254, 0.17142856121063232, 0.07692307233810425, 0.09999999403953552, 0.19354838132858276, 0.12903225421905518, 0.09090908616781235, 0.1249999925494194, 0.13333332538604736, 0.23999999463558197, 0.08888888359069824, 0.1904761791229248, 0.0952380895614624, 0.2083333283662796, 0.1621621549129486, 0, 0.14814814925193787, 0.1666666567325592, 0.13793103396892548, 0.1428571343421936, 0.11764705181121826, 0.13793103396892548, 0.1395348757505417, 0.19354838132858276, 0.0624999962747097, 0.11764705181121826, 0.0952380895614624, 0.17142856121063232, 0.0952380895614624, 0, 0.15789473056793213, 0.10256409645080566, 0.14814814925193787, 0.10526315122842789, 0.12903225421905518, 0.17241379618644714, 0.3199999928474426, 0.05882352590560913, 0.05882352590560913, 0, 0.1621621549129486, 0.190476194024086, 0.1666666567325592, 0.1764705777168274, 0.0714285671710968, 0.0833333283662796, 0.14814814925193787, 0.1599999964237213, 0.060606054961681366, 0.2857142686843872, 0.1599999964237213, 0.07692307233810425, 0.1818181723356247, 0.1621621549129486, 0.23999999463558197, 0.1538461446762085, 0.10810810327529907, 0.12765957415103912, 0.1111111044883728, 0.1621621549129486 ]
SJw03ceRW
true
[ " In this paper, we address the problem of Low-shot network-expansion learning" ]
[ "People ask questions that are far richer, more informative, and more creative than current AI systems.", "We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network.", "From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings.", "We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data.", "People can ask rich, creative questions to learn efficiently about their environment.", "Question asking is central to human learning yet it is a tremendous challenge for computational models.", "There is always an infinite set of possible questions that one can ask, leading to challenges both in representing the space of questions and in searching for the right question to ask.", "Machine learning has been used to address aspects of this challenge.", "Traditional methods have used heuristic rules designed by humans (Heilman & Smith, 2010; Chali & Hasan, 2015) , which are usually restricted to a specific domain.", "Recently, neural network approaches have also been proposed, including retrieval methods which select the best question from past experience (Mostafazadeh et al., 2016 ) and encoder-decoder frameworks which map visual or linguistic inputs to questions (Serban et al., 2016; Mostafazadeh et al., 2016; Yuan et al., 2017; Yao et al., 2018) .", "While effective in some settings, these approaches do not consider settings where the questions are asked about partially unobservable states.", "Besides, these methods are heavily data-driven, limiting the diversity of generated questions and requiring large training sets for different goals and contexts.", "There is still a large gap between how people and machines ask questions.", "Recent work has aimed to narrow this gap by taking inspiration from cognitive science.", "For instance, Lee et al. (2018) incorporates aspects of \"theory of mind\" (Premack & Woodruff, 1978) in question asking by simulating potential answers to the questions, but the approach relies on imperfect agents for natural language understanding which may lead to error propagation.", "Related to our approach, Rothe et al. 
(2017) proposed a powerful question-asking framework by modeling questions as symbolic programs, but their algorithm relies on hand-designed program features and requires expensive calculations to ask questions.", "We use \"neural program generation\" to bridge symbolic program generation and deep neural networks, bringing together some of the best qualities of both approaches.", "Symbolic programs provide a compositional \"language of thought\" (Fodor, 1975) for creatively synthesizing which questions to ask, allowing the model to construct new ideas based on familiar building blocks.", "Compared to natural language, programs are precise in their semantics, have clearer internal structure, and require a much smaller vocabulary, making them an attractive representation for question answering systems as well (Johnson et al., 2017; Yi et al., 2018; Mao et al., 2019) .", "However, there has been much less work using program synthesis for question asking, which requires searching through infinitely many questions (where many questions may be informative) rather than producing a single correct answer to a question.", "Deep neural networks allow for rapid question-synthesis using encoder-decoder modeling, eliminating the need for the expensive symbolic search and feature evaluations in Rothe et al. (2017) .", "Together, the questions can be synthesized quickly and evaluated formally for quality groundtruth board partly revealed board example questions", "How long is the red ship?", "(size Red)", "Is purple ship horizontal?", "(== (orient Purple)", "H) Do all three ships have the same size?", "(=== (map (λ x (size x)) (set AllShips)))", "Figure 1: The Battleship task.", "Blue, red, and purple tiles are ships, dark gray tiles are water, and light gray tiles are hidden.", "The agent can see a partly revealed board, and should ask a question to seek information about the hidden board.", "Example questions and translated programs are shown on the right.", "We recommend viewing the figures in color.", "(e.g. the expected information gain), which as we show can be used to train question asking systems using reinforcement learning.", "In this paper, we develop a neural program generation model for asking questions in an informationsearch game similar to \"Battleship\" used in previous work (Gureckis & Markant, 2009; Rothe et al., 2017; .", "The model uses a convolutional encoder to represent the game state, and a Transformer decoder (Vaswani et al., 2017) for generating questions.", "Building on the work of Rothe et al. 
(2017) , the model uses a grammar-enhanced question asking framework, such that questions as programs are formed through derivation using a context free grammar.", "Importantly, we show that the model can be trained from human demonstrations of good questions using supervised learning, along with a data augmentation procedure that leverages previous work to produce additional human-like questions for training.", "Our model can also be trained without such demonstrations using reinforcement learning.", "We evaluate the model on several aspects of human question asking, including reasoning about optimal questions in synthetic scenarios, density estimation based on free-form question asking, and creative generation of genuinely new questions.", "To summarize, our paper makes three main contributions:", "1) We propose a neural network for modeling human question-asking behavior,", "2) We propose a novel reinforcement learning framework for generating creative human-like questions by exploiting the power of programs, and", "3) We evaluate different properties of our methods extensively through three different experiments.", "We train our model in a fully supervised fashion.", "Accuracy for the counting and missing tile tasks is shown in Figure 3 .", "The full neural program generation model shows strong reasoning abilities, achieving high accuracy for both the counting and missing tile tasks, respectively.", "We also perform ablation analysis of the encoder filters of the model, and provide the results in Appendix D.", "The results for the compositionality task are summarized in Table 1 .", "When no training data regarding the held out question type is provided, the model cannot generalize to situations systematically different from training data, exactly as pointed out in previous work on the compositional skills of encoder-decoder models (Lake & Baroni, 2018) .", "However, when the number of additional training data increases, the model quickly incorporates the new question type while maintaining high accuracy on the familiar question tasks.", "On the last row of Table 1 , we compare our model with another version where the decoder is replaced by two linear transformation operations which directly classify the ship type and location (details in Appendix B.1).", "This model has 33.0% transfer accuracy on compositional scenarios never seen during training.", "This suggests that the model has the potential to generalize to unseen scenarios if the task can be decomposed to subtasks and combined together.", "We evaluate the log-likelihood of reference questions generated by our full model as well as some lesioned variants of the full model, including a model without pretraining, a model with the Transformer decoder replaced by an LSTM decoder, a model with the convolutional encoder replaced by a simple MLP encoder, and a model that only has a decoder (unconditional language model).", "Though the method from Rothe et al. 
(2017) also works on this task, here we cannot compare with their method for two reasons.", "One is that our dataset is constructed using their method, so the likelihood of their method should be an upper bound in our evaluation setting.", "Additionally, they can only approximate the log-likelihood due to an intractable normalizing constant, and thus it difficult to directly compare with our methods.", "Two different evaluation sets are used, one is sampled from the same process on new boards, the other is a small set of questions collected from human annotators.", "In order to calculate the log-likelihood of human questions, we use translated versions of these questions that were used in previous work (Rothe et al., 2017) , and filtered some human questions that score poorly according to the generative model used for training the neural network (Appendix B.2).", "A summary of the results is shown in Table 2a .", "The full model performs best on both datasets, suggesting that pretraining, the Transformer decoder, and the convolutional encoder are all important components of the approach.", "However, we find that the model without an encoder performs reasonably well too, even out-performing the full model with a LSTM-decoder on the human-produced questions.", "This suggests that while contextual information from the board leads to improvements, it is not the most important factor for predicting human questions.", "To further investigate the role of contextual information and whether or not the model can utilize it effectively, we conduct another analysis.", "Intuitively, if there is little uncertainty about the locations of the ships, observing the board is critical since there are fewer good questions to ask.", "To examine this factor, we divide the scenarios based on the entropy of the hypothesis space of possible ship locations into a low entropy set (bottom 30%), medium entropy set (40% in the middle), and high entropy set (top 30%).", "We evaluate different models on the split sets of sampled data and report the results in Table 2b .", "When the entropy is high, it is easier to ask a generally good question like \"how long is the red ship\" without information of the board, so the importance of the encoder is reduced.", "If entropy is low, the models with access to the board has substantially higher log-likelihood than the model without encoder.", "Also, the first experiment (section 5.1) would be impossible without an encoder.", "Together, this implies that our model can capture important context-sensitive characteristics of how people ask questions.", "The models are evaluated on 2000 randomly sampled boards, and the results are shown in Table 3 .", "Note that any ungrammatical questions are excluded when we calculate the number of unique questions.", "First, when the text-based model is evaluated on new contexts, 96.3% of the questions it generates were included in the training data.", "We also find that the average EIG and the ratio of EIG>0 is worse than the supervised model trained on programs.", "Some of these deficiencies are due to the very limited text-based training data, but using programs instead can help overcome these limitations.", "With the program-based framework, we can sample new boards and questions to create a much larger dataset with executable program representations.", "This self-supervised training helps to boost performance, especially when combined with grammar-enhanced RL.", "From the table, the grammar-enhanced RL model is able to generate informative and 
creative questions.", "It can be trained from scratch without examples of human questions, and produces many novel questions with high EIG.", "In contrast, the supervised model rarely produces new questions beyond the training set.", "The sequence-level RL model is also comparatively weak at generating novel questions, perhaps because it is also pre-trained on human questions.", "It also more frequently generates ungrammatical questions.", "We also provide examples in Figure 4 to show the diversity of questions generated by the grammar enhanced model, and more in the supplementary materials.", "Figure 4a shows novel questions the model produces, which includes clever questions such as \"Where is the bottom right of all the purple and blue tiles?\" or \"What is the size of the blue ship minus the purple ship?\", while it can also sometimes generates meaningless questions such as \"Is the blue ship shorter than itself?\"", "Additional examples of generated questions are provided in Appendix B. Is any ship two tiles long?", "(> (++ (map (lambda x (== (size x) 2)) (set AllShips))) 0)", "Are there any ships in row 1?", "(> (++ (map (lambda y (and (== (rowL y) 1) (not (== (color y) Water)))) (set AllTiles))) 0)", "Is part of a ship on tile 4-6?", "(not (== (color 4-6)", "Water)) What is the size of the blue ship?", "(setSize (coloredTiles Blue))", "What is the size of the purple ship?", "(size Purple)", "Which column is the first part of the blue ship?", "(colL (topleft (coloredTiles Blue)))", "What is the orientation of the blue ship?", "With the grammar enhanced framework, we can also guide the model to ask different types of questions, consistent with the goal-directed nature and flexibility of human question asking.", "The model can be queried for certain types of questions by providing different start conditions to the model.", "Instead of starting derivation from the start symbol \"A\", we can start derivation from a intermediate state such as \"B\" for Boolean questions or a more complicated \"(and B B)\" for composition of two Boolean questions.", "In Figure 4b , we show examples where the model is asked to generate four specific types of questions: true/false questions, number questions, location-related questions, and compositional true/false questions.", "We see that the model can flexibly adapt to new constraints and generate meaningful questions.", "In Figure 4c , we compare the model generated questions with human questions, each randomlysampled from the model outputs and the human dataset.", "These examples again demonstrate that our model is able to generate clever and human-like questions.", "However, we also find that people sometimes generate questions with quantifiers such as \"any\" and \"all\", which are operationalized in program form with lambda functions.", "These questions are complicated in representation and not favored by our model, showing a current limitation in our model's capacity.", "We introduce a neural program generation framework for question asking task under partially unobservable settings, which is able to generate creative human-like questions with human question demonstrations by supervised learning or without demonstrations by grammar-enhanced reinforcement learning.", "Programs provide models with a \"machine language of thought\" for compositional thinking, and neural networks provide an efficient means of question generation.", "We demonstrate the effectiveness of our method in extensive experiments covering a range of human question asking 
abilities.", "The current model has important limitations.", "It cannot generalize to systematically different scenarios, and it sometimes generates meaningless questions.", "We plan to further explore the model's compositional abilities in future work.", "Another promising direction is to model question asking and question answering jointly within one framework, which could guide the model to a richer sense of the question semantics.", "Besides, allowing the agent to iteratively ask questions and try to win the game is another interesting future direction.", "We would also like to use our framework in dialog systems and open-ended question asking scenarios, allowing such systems to synthesize informative and creative questions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.3396226465702057, 0.2181818187236786, 0.4000000059604645, 0.20512820780277252, 0.1904761791229248, 0.25925925374031067, 0.10526315122842789, 0.11538460850715637, 0.2028985470533371, 0.04255318641662598, 0.1249999925494194, 0.14999999105930328, 0.04878048226237297, 0.1492537260055542, 0.20338982343673706, 0.2448979616165161, 0.2181818187236786, 0.11940298229455948, 0.16949151456356049, 0.15686273574829102, 0.13636362552642822, 0, 0, 0, 0, 0, 0, 0.05128204822540283, 0.21739129722118378, 0.10810810327529907, 0.05882352590560913, 0.2083333283662796, 0.20338982343673706, 0.20408162474632263, 0.24561403691768646, 0.36666667461395264, 0.1538461446762085, 0.2545454502105713, 0, 0.21052631735801697, 0.25531914830207825, 0.10256409645080566, 0.2222222238779068, 0.04999999701976776, 0.12244897335767746, 0.1395348757505417, 0, 0.1249999925494194, 0.12244897335767746, 0.158730149269104, 0.04878048226237297, 0.21276594698429108, 0.2647058665752411, 0.04081632196903229, 0.08163265138864517, 0.16326530277729034, 0.1538461446762085, 0.23529411852359772, 0.054054051637649536, 0.1599999964237213, 0.2448979616165161, 0.16326530277729034, 0.2083333283662796, 0.1666666567325592, 0.10344827175140381, 0.13636362552642822, 0.22641508281230927, 0.17777776718139648, 0.04999999701976776, 0.23255813121795654, 0.04651162400841713, 0.1463414579629898, 0.1249999925494194, 0.260869562625885, 0.1249999925494194, 0.25, 0.09999999403953552, 0.24390242993831635, 0.3478260934352875, 0.1538461446762085, 0.1304347813129425, 0.05882352590560913, 0.2448979616165161, 0.20895521342754364, 0.1395348757505417, 0, 0, 0, 0.11428570747375488, 0, 0.05714285373687744, 0, 0.05882352590560913, 0.0555555522441864, 0, 0.05882352590560913, 0.3461538553237915, 0.22727271914482117, 0.1818181723356247, 0.2641509473323822, 0.380952388048172, 0.21739129722118378, 0.3333333432674408, 0.23529411852359772, 0.13333332538604736, 0.5, 0.2978723347187042, 0.27272728085517883, 0.060606058686971664, 0.14999999105930328, 0.10256409645080566, 0.3199999928474426, 0.13636362552642822, 0.2448979616165161 ]
SylR-CEKDS
true
[ "We introduce a model of human question asking that combines neural networks and symbolic programs, which can learn to generate good questions with or without supervised examples." ]
[ "The classification of images taken in special imaging environments except air is the first challenge in extending the applications of deep learning.", "We report on an UW-Net (Underwater Network), a new convolutional neural network (CNN) based network for underwater image classification.", "In this model, we simulate the visual correlation of background attention with image understanding for special environments, such as fog and underwater by constructing an inception-attention (I-A) module.", "The experimental results demonstrate that the proposed UW-Net achieves an accuracy of 99.3% on underwater image classification, which is significantly better than other image classification networks, such as AlexNet, InceptionV3, ResNet and Se-ResNet.", "Moreover, we demonstrate the proposed IA module can be used to boost the performance of the existing object recognition networks.", "By substituting the inception module with the I-A module, the Inception-ResnetV2 network achieves a 10.7% top1 error rate and a 0% top5 error rate on the subset of ILSVRC-2012, which further illustrates the function of the background attention in the image classifications.", "Underwater images and videos contain a lot of valuable information for many underwater scientific researches (Klausner & Azimi-Sadjadi, 2019; Peng et al., 2018) .", "However, the image analysis systems and classification algorithms designed for natural images (Redmon & Farhadi, 2018; He et al., 2017) cannot be directly applied to the underwater images due to the complex distortions existed in underwater images (e.g., low contrast, blurring, non-uniform brightness, non-uniform color casting and noises) and there is, to the best of our knowledge, no model for underwater image classification.", "Except for the inevitable distortions exhibited in underwater images, there are other three key problems for the classification of underwater images: (1) the background in underwater images taken in different environments are various; (2) the salient objects such as ruins, fish, diver exist not only in underwater environment, but also in air.", "The features extracted from the salient objects cannot be relied on primarily in the classification of underwater images; and (3) since the classification of underwater images is only a dualistic classification task, the structure of the designed network should be simple to avoid the over-fitting.", "Increasing the depth and width of a CNN can usually improve the performance of the model, but is more prone to cause over-fitting when the training dataset is limited, and needs more computational resource (LeCun et al., 2015; Srivastava et al., 2014) .", "To remit this issue, (Szegedy et al., 2015) proposed the inception module, which simultaneously performs the multi-scale convolution and pooling on a level of CNN to output multi-scale features.", "In addition, the attention mechanism (Chikkerur et al., 2010; Borji & Itti, 2012) is proposed and applied in the recent deep models which takes the advantage that human vision pays attention to different parts of the image depending on the recognition tasks Zhu et al., 2018; Ba et al., 2014) .", "Although these strategies play an important role in advancing the field of image classifications, we find that the large-scale features such as the background area play a more important role in the visual attention mechanism when people understanding of underwater images, which is unlike the attention mechanism applied in natural scene image classification (Xiao et al., 2015; Fu 
et al., 2017) .", "In this paper, we propose an underwater image classification network, called UW-Net.", "The overview network structure is shown in Fig. 1 .", "Unlike other models, the UW-Net pays more attention to the Figure 1 : The structure of the UW-Net.", "The bottom part is the output of the eighth layer in the I-A module.", "The red area represents a higher response of features for the underwater image classification.", "As shown, our I-A module concerns more about the background regions of underwater images.", "background features of images by construct the inception-attention (I-A) modules and thus achieves better performance.", "The contributions of this paper are as follows:", "(i) to the best of our knowledge, it is the first CNN-based model for underwater image classification;", "(ii) an inception-attention module is proposed, which joints the multi-dimensional inception module with the attention module to realize the multiple weighting of the output of various scales of features;", "(iii) this work is a first attempt to simulate the visual correlation between understanding images and background areas through I-A modules.", "The rest of the paper is organized as follows: Section 2 introduces the related work.", "The proposed UW-Net is described in Section 3.", "Section 4 illustrates the experimental results and analysis, and we summarize this paper in Section 5.", "A new underwater image classification network UW-Net is proposed in this work, wherein an inception-attention module is constructed.", "In this model, we simulate the visual correlation between understanding images and background areas through I-A modules, which joint the multidimensional inception module with the attention module to realize the multiple weighting of the output of various scales of features.", "The 100% accuracy on the training set and 99.3% accuracy on the testing set of the UW-Net is achieved benefiting from the refinement of the usefulness of multiscale features by the I-A module.", "In the future, we will try to improve the performance of other underwater image visual analysis models by introducing the proposed I-A module." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07692307233810425, 0.07999999821186066, 0.22857142984867096, 0, 0, 0, 0.06451612710952759, 0.035087715834379196, 0.042553190141916275, 0, 0, 0, 0.040816325694322586, 0.1111111119389534, 0, 0, 0, 0, 0.0952380895614624, 0, 0, 0, 0.08695651590824127, 0, 0.1428571343421936, 0, 0, 0, 0.0833333283662796, 0.09999999403953552, 0, 0.0714285671710968 ]
HklCmaVtPS
true
[ "A visual understanding mechanism for special environment" ]
[ "Deep networks have achieved impressive results across a variety of important tasks.", "However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. ", "We propose \\emph{Fortified Networks}, a simple transformation of existing networks, which “fortifies” the hidden layers in a deep network by identifying when the hidden states are off of the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well.", "Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks and our experiments (i) demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; (ii) suggest that our improvements are not primarily due to the problem of deceptively good results due to degraded quality in the gradient signal (the gradient masking problem) and (iii) show the advantage of doing this fortification in the hidden layers instead of the input space. ", "We demonstrate improvements in adversarial robustness on three datasets (MNIST, Fashion MNIST, CIFAR10), across several attack parameters, both white-box and black-box settings, and the most widely studied attacks (FGSM, PGD, Carlini-Wagner). ", "We show that these improvements are achieved across a wide variety of hyperparameters. ", "The success of deep neural networks across a variety of tasks has also driven applications in domains where reliability and security are critical, including self-driving cars BID6 , health care, face recognition BID25 , and the detection of malware BID17 .", "Security concerns arise when an agent using such a system could benefit from the system performing poorly.", "Reliability concerns come about when the distribution of input data seen during training can differ from the distribution on which the model is evaluated.Adversarial examples BID11 result from attacks on neural network models, applying small perturbations to the inputs that change the predicted class.", "Such perturbations can be small enough to be unnoticeable to the naked eye.", "It has been shown that gradient-based methods allow one to find modifications of the input that often change the predicted class BID26 BID11 .", "More recent work demonstrated that it is possible to create modifications such that even when captured through a camera, they change the predicted class with high probability BID7 .Some", "of the most prominent classes of defenses against adversarial examples include feature squeezing BID29 , adapted encoding of the input (Jacob BID14 , and distillation-related approaches BID20 . Existing", "defenses provide some robustness but most are not easy to deploy. In addition", ", many have been shown to be providing the illusion of defense by lowering the quality of the gradient signal, without actually providing improved robustness BID1 . 
Still others", "require training a generative model directly in the visible space, which is still difficult today even on relatively simple datasets.Our work differs from the approaches using generative models in the input space in that we instead employ this robustification on the distribution of the learned hidden representations, which makes the The plot on the right shows direct experimental evidence for this hypothesis: we added fortified layers with different capacities to MLPs trained on MNIST, and display the value of the total reconstruction errors for adversarial examples divided by the total reconstruction errors for clean examples. A high value", "indicates success at detecting adversarial examples. Our results", "support the central motivation for fortified networks: that off-manifold points can much more easily be detected in the hidden space (as seen by the relatively constant ratio for the autoencoder in hidden space) and are much harder to detect in the input space (as seen by this ratio rapidly falling to zero as the input-space autoencoder's capacity is reduced).identification", "of off-manifold examples easier. We do this by", "training denoising autoencoders on top of the hidden layers of the original network. We call this", "method Fortified Networks.We demonstrate that Fortified Networks (i) can be generically", "added into an existing network; (ii) robustify the network", "against adversarial attacks and (iii) provide a reliable signal", "of the existence of input data that do not lie on the manifold on which it the network trained.In the sections that follow, we discuss the intuition behind the fortification of hidden layers and lay out some of the method's salient properties. Furthermore, we evaluate our proposed", "approach on MNIST, Fashion-MNIST, CIFAR10 datasets against whitebox and blackbox attacks.", "Protecting against adversarial examples could be of paramount importance in mission-critical applications.", "We have presented Fortified Networks, a simple method for the robustification of existing deep neural networks.", "Our method is practical, as fortifying an existing network entails introducing DAEs between the hidden layers of the network, which can be automated.", "Furthermore, the DAE reconstruction error at test time is a reliable signal of distribution shift, which can result in examples unlike those encountered during training.", "High error can signify either adversarial attacks or significant domain shift; both are important cases for the analyst or system to be aware of.", "Moreover, fortified networks are efficient: since not every layer needs to be fortified to achieve improvements, fortified networks are an efficient way to improve robustness to adversarial examples.", "For example, we have shown improvements on ResNets where only two fortified layers are added, and thus the change to the computational cost is very slight.", "Finally, fortified networks are effective, as they improve results on adversarial defense on three datasets (MNIST, Fashion MNIST, and CIFAR10), across a variety of attack parameters (including the most widely used ε values), across three widely studied attacks (FGSM, PGD, Carlini-Wagner L2), and in both the black-box and white-box settings.A EXPERIMENTAL SETUP All attacks used in this work were carried out using the Cleverhans BID21 ) library.A.1", "WHITE-BOX ATTACKS Our convolutional models (Conv, in the tables) have 2 strided convolutional layers with 64 and 128 filters followed by an unstrided conv layer with 128 
filters.", "We use ReLU activations between layers then followed by a single fully connected layer.", "The convolutional and fully-connected DAEs have a single bottleneck layer with leaky ReLU activations with some ablations presented in the table below.With white-box PGD attacks, we used only convolutional DAEs at the first and last conv layers with Gaussian noise of σ = 0.01 whereas with FGSM attacks we used a DAE only at the last fully connected layer.", "The weight on the reconstruction error λ rec and adversarial cost λ adv were set to 0.01 in all white-box attack experiments.", "We used the Adam optimizer with a learning rate of 0.001 to train all models.The table below lists results a few ablations with different activation functions in the autoencoder Our black-box results are based on a fully-connected substitute model (input-200-200-output) , which was subsequently used to attack a fortified convolutional network.", "The CNN was trained for 50 epochs using adversarial training, and the predictions of the trained CNN were used to train the substitute model.", "6 iterations of Jacobian data augmentation were run during training of the substitute, with λ = 0.1.", "The test set data holdout for the adversary was fixed to 150 examples.", "The learning rate was set to 0.003 and the Adam optimizer was used to train both models.", "TAB0 : More attack steps to uncover gradient masking effects." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.2857142686843872, 0.3396226465702057, 0.1818181723356247, 0.1666666567325592, 0.06451612710952759, 0.07547169178724289, 0.060606054961681366, 0.145454540848732, 0.1428571343421936, 0.10526315122842789, 0.13333332538604736, 0.09756097197532654, 0.06666666269302368, 0.1463414579629898, 0.17777778208255768, 0.07999999821186066, 0.16393442451953888, 0.07999999821186066, 0.25806450843811035, 0, 0.07692307233810425, 0.07692307233810425, 0.15094339847564697, 0, 0.13793103396892548, 0.060606054961681366, 0.10256409645080566, 0.1428571343421936, 0.14999999105930328, 0.10526315122842789, 0.0952380895614624, 0.07999999821186066, 0.19512194395065308, 0.06451612710952759, 0.0952380895614624, 0.20512819290161133, 0.16129031777381897, 0.1621621549129486, 0.23529411852359772, 0.19999998807907104, 0.1818181723356247, 0.07407406717538834 ]
SkgVRiC9Km
true
[ "Better adversarial training by learning to map back to the data manifold with autoencoders in the hidden states. " ]
[ "Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset.", "In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training.", "In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value.", "This result is contrary to the conclusions of recent related works such as (Soudry et al., 2018), and we identify the reason for this contradiction.", "In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class.", "We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin.", "The results reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduces a new direction to make neural networks robust against them.", "Training neural networks is challenging and involves making several design choices.", "Among these are the architecture of the network, the training loss function, the optimization algorithm used for training, and their hyperparameters, such as the learning rate and the batch size.", "Most of these design choices influence the solution obtained by the training procedure and have been studied in detail BID9 BID4 BID5 Wilson et al., 2017; BID17 BID19 .", "Nevertheless, one choice has been mostly taken for granted when the network is trained for a classification task: the training loss function.Cross-entropy loss function is almost the sole choice for classification tasks in practice.", "Its prevalent use is backed theoretically by its association with the minimization of the Kullback-Leibler divergence between the empirical distribution of a dataset and the confidence of the classifier for that dataset.", "Given the particular success of neural networks for classification tasks BID11 BID18 BID5 , there seems to be little motivation to search for alternatives for this loss function, and most of the software developed for neural networks incorporates an efficient implementation for it, thereby facilitating its use.Recently there has been a line of work analyzing the dynamics of training a linear classifier with the cross-entropy loss function BID15 b; BID7 .", "They specified the decision boundary that the gradient descent algorithm yields on linearly separable datasets and claimed that this solution achieves the maximum margin.1", "However, these claims were observed not to hold in the simple experiments we ran.", "For example, FIG6 displays a case where the cross-entropy minimization for a linear classifier leads to a decision boundary which attains an extremely poor margin and is nearly orthogonal to the solution given by the hard-margin support vector machine (SVM).We", "set out to understand this discrepancy between the claims of the previous works and our observations on the simple experiments. 
We", "can summarize our contributions as follows.", "We compare our results with related works and discuss their implications for the following subjects.Adversarial examples.", "State-of-the-art neural networks have been observed to misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset (Szegedy et al., 2013; BID3 MoosaviDezfooli et al., 2017; .", "Our results reveal that the combination of gradient methods, cross-entropy loss function and the low-dimensionality of the training dataset (at least in some domain) has a responsibility for this problem.", "Note that SVM with the radial basis function was shown to be robust against adversarial examples, and this was attributed to the high nonlinearity of the radial basis function in BID3 .", "Given that the SVM uses neither the cross entropy loss function nor the gradient descent algorithm for training, we argue that the robustness of SVM is no surprise -independent of its nonlinearity.", "Lastly, effectiveness of differential training for neural networks against adversarial examples is our ongoing work.", "The activations feeding into the soft-max layer could be considered as the features for a linear classifier.", "Plot shows the cumulative variance explained for these features as a function of the number of principle components used.", "Almost all the variance in the features is captured by the first 20 principle components out of 84, which shows that the input to the soft-max layer resides predominantly in a low-dimensional subspace.Low-dimensionality of the training dataset.", "As stated in Remark 3, as the dimension of the affine subspace containing the training dataset gets very small compared to the dimension of the input space, the training algorithm will become more likely to yield a small margin for the classifier.", "This observation confirms the results of BID13 , which showed that if the set of training data is projected onto a low-dimensional subspace before feeding into a neural network, the performance of the network against adversarial examples is improved -since projecting the inputs onto a low-dimensional domain corresponds to decreasing the dimension of the input space.", "Even though this method is effective, it requires the knowledge of the domain in which the training points are low-dimensional.", "Because this knowledge will not always be available, finding alternative training algorithms and loss functions that are suited for low-dimensional data is still an important direction for future research.Robust optimization.", "Using robust optimization techniques to train neural networks has been shown to be effective against adversarial examples BID12 BID0 .", "Note that these techniques could be considered as inflating the training points by a presumed amount and training the classifier with these inflated points.", "Consequently, as long as the cross-entropy loss is involved, the decision boundaries of the neural network will still be in the vicinity of the inflated points.", "Therefore, even though the classifier is robust against the disturbances of the presumed magnitude, the margin of the classifier could still be much smaller than what it could potentially be.Differential training.", "We introduced differential training, which allows the feature mapping to remain trainable while ensuring a large margin between different classes of points.", "Therefore, this method combines the benefits of neural networks with 
those of support vector machines.", "Even though moving from 2N training points to N 2 seems prohibitive, it points out that a true classification should in fact be able to differentiate between the pairs that are hardest to differentiate, and this search will necessarily require an N 2 term.", "Some heuristic methods are likely to be effective, such as considering only a smaller subset of points closer to the boundary and updating this set of points as needed during training.", "If a neural network is trained with this procedure, the network will be forced to find features that are able to tell apart between the hardest pairs.Nonseparable data.", "What happens when the training data is not linearly separable is an open direction for future work.", "However, as stated in Remark 4, this case is not expected to arise for the state-of-the-art networks, since they have been shown to achieve zero training error even on randomly generated datasets (Zhang et al., 2017) , which implies that the features represented by the output of their penultimate layer eventually become linearly separable.", "A PROOF OF THEOREM 1Theorem 1 could be proved by using Theorem 2, but we provide an independent proof here.", "Gradient descent algorithm with learning rate δ on the cross-entropy loss (1) yields DISPLAYFORM0 1 + e −w x + δỹ e −w ỹ 1 + e −w ỹ .Ifw(0", ") = 0, thenw(t) = p(t)x + q(t)ỹ for all t ≥ 0, wherė DISPLAYFORM1 Then we can writeα Lemma 2. If b", "< 0, then there exists t 0 ∈ (0, ∞) such that DISPLAYFORM2 Proof. Note", "that DISPLAYFORM3 which implies that DISPLAYFORM4 as long as DISPLAYFORM5 By using Lemma 2, DISPLAYFORM6 Proof. Solving", "the set of equations DISPLAYFORM7 , DISPLAYFORM8 Proof. Note thatż", "≥ a/2 andv ≥ c/2; therefore, DISPLAYFORM9 if either side exists. Remember thaṫ", "DISPLAYFORM10 We can compute f (w) = 2acw + bcw 2 + ab b 2 w 2 + 2abw + a 2 . The function", "f is strictly increasing and convex for w > 0. We have DISPLAYFORM11", "Therefore, when b ≥ a, the only fixed point of f over [0, ∞) is the origin, and when a > b, 0 and (a − b)/(c − b) are the only fixed points of f over [0, ∞). Figure 4 shows the curves", "over whichu = 0 andẇ = 0. Since lim t→∞ u = lim t→∞", "w, the only points (u, w) can converge to are the fixed points of f . Remember thaṫ DISPLAYFORM12", "so when a > b, the origin (0, 0) is unstable in the sense of Lyapunov, and (u, w) cannot converge to it. Otherwise, (0, 0) is the only", "fixed point, and it is stable. As a result, DISPLAYFORM13 Figure", "4: Stationary points of function f . DISPLAYFORM14 Proof. From Lemma 6", ", DISPLAYFORM15 Consequently", ", DISPLAYFORM16 which gives the same solution as Lemma 5: DISPLAYFORM17 Proof. We can obtain a lower bound for square", "of the denominator as DISPLAYFORM18 DISPLAYFORM19 As a result, Then, we can write w as DISPLAYFORM20 Remember, by definition, w SVM = arg min w 2 s.t. w, x i + y j ≥ 2 ∀i ∈ I, ∀j ∈ J.Since the vector u also satisfies u, x i + y j = w, x i + y j ≥ 2 for all i ∈ I, j ∈ J, we have u ≥ w SVM = 1 γ . As a result, the margin obtained by minimizing", "the cross-entropy loss is DISPLAYFORM21" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23999999463558197, 0.36666667461395264, 0.5625, 0.11764705181121826, 0.2641509473323822, 0.3181818127632141, 0.23076923191547394, 0, 0.11999999731779099, 0.1111111044883728, 0.11538460850715637, 0.23529411852359772, 0.14814814925193787, 0.2083333283662796, 0.09999999403953552, 0.25806450843811035, 0.2222222238779068, 0, 0.09302324801683426, 0.19354838132858276, 0.30188679695129395, 0.1599999964237213, 0.19230768084526062, 0.04878048226237297, 0.1904761791229248, 0.1860465109348297, 0.35087719559669495, 0.28070175647735596, 0.23529411852359772, 0.1818181723356247, 0.1071428507566452, 0.045454539358615875, 0.21739129722118378, 0.17391303181648254, 0.1599999964237213, 0.25, 0.14999999105930328, 0.1249999925494194, 0.15094339847564697, 0.19230768084526062, 0.0476190410554409, 0.1818181723356247, 0.1304347813129425, 0.1666666567325592, 0, 0.04878048226237297, 0.09756097197532654, 0.1111111044883728, 0.052631575614213943, 0.09090908616781235, 0.05128204822540283, 0.10526315122842789, 0, 0.1428571343421936, 0.1599999964237213, 0.054054051637649536, 0.054054051637649536, 0, 0.1304347813129425, 0.1538461446762085, 0.19354838132858276 ]
ByfbnsA9Km
true
[ "We show that minimizing the cross-entropy loss by using a gradient method could lead to a very poor margin if the features of the dataset lie on a low-dimensional subspace." ]
[ "The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. ", "However, RNN still has a limited capacity to manipulate long-term memory. ", "To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms.", "In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM).", "The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. ", "Moreover, the rotational unit also serves as associative memory.", "We evaluate our model on synthetic memorization, question answering and language modeling tasks. ", "RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. ", "RUM’s performance in the bAbI Question Answering task is comparable to that of models with attention mechanism.", "We also improve the state-of-the-art result to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which is to signify the applications of RUM to real-world sequential data.", "The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation.", "Recurrent neural networks are widely used in a variety of machine learning applications such as language modeling BID7 ), machine translation BID5 ) and speech recognition BID11 ).", "Their flexibility of taking inputs of dynamic length makes RNN particularly useful for these tasks.", "However, the traditional RNN models such as Long Short-Term Memory (LSTM, BID12 ) and Gated Recurrent Unit (GRU, BID5 ) exhibit some weaknesses that prevent them from achieving human level performance:", "1) limited memory-they can only remember a hidden state, which usually occupies a small part of a model;", "2) gradient vanishing/explosion BID4 ) during training-trained with backpropagation through time the models fail to learn long-term dependencies.Several ways to address those problems are known.", "One solution is to use soft and local attention mechanisms BID5 ), which is crucial for most modern applications of RNN.", "Nevertheless, researchers are still interested in improving basic RNN cell models to process sequential data better.", "Numerous works BID7 ; BID2 ) use associative memory to span a large memory space.", "For example, a practical way to implement associative memory is to set weight matrices as trainable structures that change according to input instances for training.", "Furthermore, the recent concept of unitary or orthogonal evolution matrices BID0 ; BID14 ) also provides a theoretical and empirical solution to the problem of memorizing long-term dependencies.Here, we propose a novel RNN cell that resolves simultaneously those weaknesses of basic RNN.", "The Rotational Unit of Memory is a modified gated model whose rotational operation acts as associative memory and is strictly an orthogonal matrix.", "We tested our model on several benchmarks.", "RUM is able to solve the synthetic Copying Memory task while traditional LSTM and GRU fail.", "For synthetic Recall task, RUM exhibits a stronger ability to remember sequences, hence outperforming state-of-the-art RNN models such as Fastweight RNN BID2 ) 
and WeiNet (Zhang & Zhou (2017) ).", "By using RUM we achieve the state-of-the-art result in the real-world Character Level Penn Treebank task.", "RUM also outperforms all basic RNN models in the bAbI question answering task.", "This performance is competitive with that of memory networks, which take advantage of attention mechanisms.Our contributions are as follows:1.", "We develop the concept of the Rotational Unit that combines the memorization advantage of unitary/orthogonal matrices with the dynamic structure of associative memory; 2.", "The Rotational Unit of Memory serves as the first phase-encoded model for Recurrent Neural Networks, which improves the state-of-the-art performance of the current frontier of models in a diverse collection of sequential task.", "We proposed a novel RNN architecture: Rotational Unit of Memory.", "The model takes advantage of the unitary and associative memory concepts.", "RUM outperforms many previous state-of-the-art models, including LSTM, GRU, GORU and NTM in synthetic benchmarks: Copying Memory and Associative Recall tasks.", "Additionally, RUM's performance in real-world tasks, such as question answering and language modeling, is competetive with that of advanced architectures, some of which include attention mechanisms.", "We claim the Rotational Unit of Memory can serve as the new benchmark model that absorbs all advantages of existing models in a scalable way.", "Indeed, the rotational operation can be applied to many other fields, not limited only to RNN, such as Convolutional and Generative Adversarial Neural Networks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3255814015865326, 0.13793103396892548, 0.1764705777168274, 0.3333333134651184, 0.1666666567325592, 0.07692307233810425, 0.12903225421905518, 0.12903225421905518, 0.23529411852359772, 0.2222222238779068, 0.14999999105930328, 0.1860465109348297, 0.19354838132858276, 0.12765957415103912, 0.1818181723356247, 0.0952380895614624, 0.1621621549129486, 0.24242423474788666, 0.06451612710952759, 0.04999999329447746, 0.1818181723356247, 0.1538461446762085, 0.0833333283662796, 0.060606054961681366, 0.13333332538604736, 0.1249999925494194, 0.3333333432674408, 0.1111111044883728, 0.1111111044883728, 0.4444444477558136, 0.29629629850387573, 0.2142857164144516, 0.1621621549129486, 0.1428571343421936, 0.29999998211860657, 0.04999999329447746 ]
Sk4w0A0Tb
true
[ "A novel RNN model which outperforms significantly the current frontier of models in a variety of sequential tasks." ]
[ "While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics.", "One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search (MCTS)—a model-based method—with deep-Q networks (DQNs)—a model-free method.", "MCTS requires generating rollouts, which is computationally expensive.", "In this paper, we propose to simulate roll-outs, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment.", "Our proposed algorithm, generative adversarial tree search (GATS), simulates rollouts up to a specified depth using both a GAN- based dynamics model and a reward predictor.", "GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states.", "Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical results show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions.", "However, GATS fails to outperform DQNs in 4 out of 5 games.", "Notably, in these experiments, MCTS has only short rollouts (up to tree depth 4), while previous successes of MCTS have involved tree depth in the hundreds.", "We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling.", "The earliest and best-publicized applications of deep reinforcement learning (DRL) involve Atari games (Mnih et al., 2015) and the board game of Go (Silver et al., 2016) , where experience is inexpensive because the environments are simulated.", "In such scenarios, DRL can be combined with Monte-Carlo tree search (MCTS) methods (Kearns et al., 2002; Kocsis & Szepesvári, 2006) for planning, where the agent executes roll-outs on the simulated environment (as far as computationally feasible) to finds suitable policies.", "However, for RL problems with long episodes, e.g. 
Go, MCTS can be very computationally expensive.", "In order to speed up MCTS for Go and learn an effective policy, Alpha Go (Silver et al., 2016 ) employs a depth-limited MCTS with the depth in the hundreds on their Go emulator and use an estimated Q-function to query the value of leaf nodes.", "However, in real-world applications, such as robotics (Levine et al., 2016) and dialogue systems (Lipton et al., 2016) , collecting samples often takes considerable time and effort.", "In such scenarios, the agent typically cannot access either the environment model or a corresponding simulator.Recently, generative adversarial networks (GANs) BID15 have emerged as a popular tool for synthesizing realistic-seeming data, especially for high-dimensional domains, including images and audio.", "Unlike previous approaches to image generation, which typically produced blurry images due to optimizing an L1 or L2 objective, GANs produces crisp images.", "Since theire original conception as an unsupervised method, GANs have been extended for conditional generation, e.g., generating an image conditioned on a label (Mirza & Osindero, 2014; Odena et al., 2016) or the next frame in a video given a context window (Mathieu et al., 2015) .", "Recently, the PIX2PIX approach has demonstrated impressive results on a range of image-to-image transduction tasks (Isola et al., 2017) .In", "this work, we propose and analyze generative adversarial tree search (GATS), a new DRL algorithm that utilizes samples from the environment to learn both a Q-function approximator, a near-term reward predictor, and a GAN-based model of the environment's dynamics (state transitions). Together", ", the dynamics model and reward predictor constitute a learned simulator on which MCTS can be performed. GATS leverages", "PIX2PIX GANs to learn a generative dynamics model (GDM) that efficiently learns the dynamics of the environment, producing images that agree closely with the actual observed transactions and are also visually crisp. We thoroughly", "study various image transduction models, arriving ultimately at a GDM that converges quickly (compared to the DQN), and appears from our evaluation to be reasonably robust to subtle distribution shifts, including some that destroy a DQN policy. We also train", "a reward predictor that converges quickly, achieving negligible error (over 99% accuracy). GATS bridges", "model-based and model-free reinforcement learning, using the learned dynamics and reward predictors to simulate roll-outs in combination with a DQN. Specifically", ", GATS deploys the MCTS method for planning over a bounded tree depth and uses the DQN algorithm to estimate the Q-function as a value for the leaf states (Mnih et al., 2015; Van Hasselt et al., 2016) .One notable", "aspect of the GATS algorithm is its flexibility, owing to consisting of a few modular building blocks: (i) value learning", ": we deployed DQN and DDQN (ii) planning: we", "use pure Monte Carlo sampling; (iii) a reward predictor", ": we used a simple 3-class classifier; (iv) dynamics model: we", "propose the GDM architecture. Practically, one can swap", "in other methods for any among these blocks and we highlight some alternatives in the related work. 
Thus, GATS constitutes a", "general framework for studying the trade-offs between model-based and model-free reinforcement learning.", "Discussion of negative results In this section, we enumerate several hypotheses for why GATS under-performs DQN despite near-perfect modeling, and discuss several attempts to improve GATS based on these hypotheses.", "The following are shown in TAB1 .", "DISPLAYFORM0 Replay Buffer: The agent's decision under GATS sometimes differs from that of the learned Q model.", "Therefore, it is important that we allow the Q-learner to observe the outcome of important outcomes in the generated MCTS states.", "To address this problem, we tried storing the samples generated in tree search and use them to further train the Q-model.", "We studied two scenarios:", "(i) using plain DQN with no generated samples and", "(ii) using Dyna-Q to train the Q function on the generated samples in MCTS.", "However, these techniques did not improve the performance of GATS.Optimizer: Since the problem is slightly different from DQN, especially in the Dyna-Q setting with generated frames, we tried a variety of different learning rates and minibatch sizes to tune the Q-learner." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0, 0, 0, 0, 0, 0.09999999403953552, 0, 0, 0, 0.04878048598766327, 0.03999999538064003, 0.07999999821186066, 0.0416666641831398, 0, 0, 0, 0.038461536169052124, 0.13333332538604736, 0, 0.0714285671710968, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.1666666567325592, 0, 0, 0, 0, 0, 0, 0.09090908616781235, 0 ]
BJl4f2A5tQ
true
[ "Surprising negative results on Model Based + Model deep RL" ]
[ "We present a novel architecture of GAN for a disentangled representation learning.", "The new model architecture is inspired by Information Bottleneck (IB) theory thereby named IB-GAN.", "IB-GAN objective is similar to that of InfoGAN but has a crucial difference; a capacity regularization for mutual information is adopted, thanks to which the generator of IB-GAN can harness a latent representation in disentangled and interpretable manner.", "To facilitate the optimization of IB-GAN in practice, a new variational upper-bound is derived.", "With experiments on CelebA, 3DChairs, and dSprites datasets, we demonstrate that the visual quality of samples generated by IB-GAN is often better than those by β-VAEs.", "Moreover, IB-GAN achieves much higher disentanglement metrics score than β-VAEs or InfoGAN on the dSprites dataset.", "Learning good representations for data is one of the essential topics in machine learning community.", "Although any strict definition for it may not exist, the consensus about the useful properties of good representations has been discussed throughout many studies BID9 Lake et al., 2017; BID10 .", "A disentanglement, one of those useful properties of representation, is often described as a statistical independence or factorization; each independent factor is expected to be semantically well aligned with the human intuition on the data generative factor (e.g. a chair-type from azimuth on Chairs dataset BID6 , or age from azimuth on CelebA dataset (Liu et al., 2015) ).", "The learned representation distilling each important factors of data into a single independent direction is hard to be done but highly valuable for many other downstream tasks (Ridgeway, 2016; Higgins et al., 2017b; .Many", "models have been proposed for disentangled representation learning (Hinton et al., 2011; Kingma et al., 2014; Reed et al., 2014; Narayanaswamy et al., 2017; BID13 . Despite", "their impressive results, they either require knowledge of ground-truth generative factors or weak-supervision (e.g. domain knowledge or partial labels). In contrast", ", among many unsupervised approaches BID14 Kingma & Welling, 2013; Rezende et al., 2014; Springenberg, 2015; BID15 ), yet the two most successful approaches for the independent factor learning are β-VAE BID20 and InfoGAN BID12 . BID20 demonstrate", "that encouraging the KL-divergence term of Variational autoencoder (VAE) objective (Kingma & Welling, 2013; Rezende et al., 2014) by multiplying a constant β > 1 induces a high-quality disentanglement of latent factors. As follow-up research", ", BID10 provide a theoretical justification of the disentangling effect of β-VAE in the context of Information Bottleneck theory BID25 BID24 BID12 propose another fully unsupervised approach based on Generative Adversarial Network (GAN) BID18 . He achieves the goal", "by enforcing the generator to learn disentangled representations through increasing the mutual information (MI) between the generated samples and the latent representations. Although InfoGAN can", "learn to disentangle representations for relatively simple datasets (e.g. MNIST, 3D Chairs), it struggles to do so on more complicated datasets such as CelebA. 
Moreover, the disentangling", "performance of the learned representations from InfoGAN is known as not good as the performance of the β-VAE and its variant models BID20 Kim & Mnih, 2018; BID11 .Stimulated by the success of", "β-VAE models BID10 BID20 Kim & Mnih, 2018; BID11 BID17 with the Information Bottleneck theory BID5 BID0 ) in disentangled representations learning task, we hypothesize that the weakness of InfoGAN in the representation learning may originate from that it can only maximize the mutual information but lacks any constraining mechanisms. In other words, InfoGAN misses", "the term upper-bounding the mutual information from the perspective of IB theory.We present a novel unsupervised model named IB-GAN (Information Bottleneck GAN) for learning disentangled representations based on IB theory. We propose a new architecture", "of GANs from IB theory so that the training objective involves an information capacity constraint that InfoGAN lacks but β-VAE has. We also derive a new variational", "approximation algorithm to optimize IB-GAN objective in practice. Thanks to the information regularizer", ", the generator can use the latent representations in a manner that is both more interpretable and disentangled than InfoGANThe contributions of this work are summarized as follows:1. IB-GAN is a new GAN-based model for", "fully unsupervised learning of disentangled representations. To the best of our knowledge, there", "is no other unsupervised GAN-based model for this sake except the InfoGAN's variants BID20 Kim & Mnih, 2018) .2. Our work is the first attempt to utilize", "the IB theory into the GAN-based deep generative model. IB-GAN can be seen as an extension to the", "InfoGAN, supplementing an information constraining regularizer that InfoGAN misses.3. IB-GAN surpasses state-of-the-art disentanglement", "scores of BID20 BID16 on dSprites dataset (Matthey et al., 2017) . The quality of generated samples by IB-GAN on 3D", "Chairs BID6 and CelebA (Liu et al., 2015) is also much realistic compared to that of the existing β-VAE variants of the same task.", "Connection to rate-distortion theory.", "Information Bottleneck theory is a generalization of the rate-distortion theory BID25 authors, 2019) , in which the rate R is the code length per data sample to be transmitted through a noisy channel, and the distortion D represents the approximation error of reconstructing the input from the source code authors, 2019; Shannon et al., 1951) .", "The goal of RD-theory is minimizing D without exceeding a certain level of rate R, can be formulated as min R,D D + βR, where β ∈ [0, ∞] decides a theoretical achievable optimal frontier in the auto-encoding limit .Likewise", ", z and r in IB-GAN can be treated as an input and the encoding of the input, respectively. The distortion", "D is minimized by optimizing the variational reconstructor q φ (z|x(r)) to predict the input z from its encoding r, that is equivalent to maximizing I L (z, G(z)). The minimization", "of rate R is related minimizing the KL(e ψ (r|z)||m(r)) which measures the in-efficiency (or excess rate) of the representation encoder e ψ (r|z) in terms of how much it deviates from the prior m(r).Disentanglement-promoting", "behavior. The disentanglement-promoting", "behavior of β-VAE is encouraged by the variational upper-bound of MI term (i.e. KL(q(z|x)||p(z))). 
Since p(z) is often a factored", "Gaussian distribution, the KL-divergence term is decomposed into the form containing a total correlation term (Hoffman & Johnson, 2016; Kim & Mnih, 2018; BID11 BID17 BID10 , which essentially enforces the encoder to output statistically factored representations (Kim & Mnih, 2018; BID11 . Nevertheless, in IB-GAN, a noise", "input z is fed into the representation encoder e ψ (r|z) instead of the image x. Therefore, the disentangling mechanism", "of IB-GAN must be different from those of β-VAEs.From the formulation of the Eq.(11), we could obtain another important insight: the GAN loss in IB-GAN can be seen as the secondary capacity regularizer over the noisy channel since the discriminator of GAN is the JS-divergence (or the reverse KL-divergence) between the generator and the empirical data distribution p(x) in its optimal BID18 BID22 . Hence, λ controls the information compression", "level of z in the its encoding x = G(r(z)) 6 . In other words, the GAN loss in IB-GAN is a second", "rate constraint in addition to the first rate constraint KL(e ψ (r|z)||m(r)) in the context of the rate-distortion theorem.Therefore, we describe the disentanglement-promoting behavior of IB-GAN regarding the ratedistortion theorem. Here, the goal is to deliver the input source z through", "the noisy channel using the coding r and x. We want to use compact encoding schemes for r and x. (1", ") The efficient encoding scheme for r is defined by", "minimizing KL(e ψ (r|z)||m(r)) with the factored Gaussian prior m(r), which promotes statistical independence of the r. (2) The efficient encoding scheme for x is defined by", "minimizing the divergence between G(z) and the data distribution p(x) via the discriminator; this promote the encoding x to be the realistic image. (3) Maximizing I L (z, G(z)) in IB-GAN indirectly maximize", "I(r, G(r)) since I(z, G(z)) ≤ I(r, G(r)). In other words, maximizing the lower-bound of MI will increases", "the statistical dependency between the coding r and G(r), while these encoding need to be efficient in terms of their rate. Therefore, a single independent changes in r must be coordinated", "with the variations of a independent image factor.How to choose hyperparameters. Although setting any positive values for λ and β is possible , we", "set β ∈ [0, 1] and fix λ = 1. We observe that, in the most of the cases, IU (r, R(z)) collapses", "to 0 when β > 0.75 in the experiments with dSprites. Although λ is another interesting hyperparameter that can control", "the rate of x (i.e. the divergence of the G(z) from p(x)), we aims to support the usefulness of IB-GAN in the disentangled representation learning tasks, and thus we focus on the effect of β ∈ [0, 1.2] on the I U (r, R(z)) while fixing λ = 1. More discussion on the hyperparameter setting will be discussed in", "Appendix. (Kim & Mnih, 2018; BID16 . Our model's scores are obtained from 32", "random seeds, with a peak", "score of (0.91, 0.78). The baseline scores except InfoGAN are referred to BID17 . We use", "DCGAN (Radford et al., 2016) with batch normalization (Ioffe", "& Szegedy, 2015) as our base model for the generator and the discriminator. We let the reconstructor share the same frontend feature with the discriminator", "for efficient use of parameters as in the InfoGAN BID12 . Also, the MLP-based representation encoder is used before the generator. We train", "the model using RMSProp BID23 optimizer with momentum of 0.9. The minibatch", "size is 64 in all experiments. 
Lastly, we constrain true and synthetic", "images to be normalized as [−1, 1]. Almost identical", "architectural configurations for the generator, discriminator, reconstructor", ", and representation encoder are used in all experiments except that the numbers of parameters are changed depending on the datasets. We defer more details on the models and experimental settings to Appendix.", "The proposed IB-GAN is a novel unsupervised GAN-based model for learning disentangled representation.", "We made a crucial modification on the InfoGAN's objective inspired by the IB theory and β-VAE; specifically, we developed an information capacity constraining term between the generator and the latent representation.", "We also derived a new variational approximation technique for optimizing IB-GAN.", "Our experimental results showed that IB-GAN achieved the state-of-the-art performance on disentangled representation learning.", "The qualitatively generated samples of IB-GAN often had better quality than those of β-VAE on CelebA and 3D Chairs.", "IB-GAN attained higher quantitative scores than β-VAE and InfoGAN with disentanglement metrics on dSprites dataset.There are many possible directions for future work.", "First, our model can be naturally extended to adapt a discrete latent representation, as discussed in section 3.3.", "Second, many extensions of β-VAE have been actively proposed such as BID10 Kim & Mnih, 2018; BID11 BID17 , most of which are complementary for the IB-GAN objective.", "Further exploration toward this direction could be another interesting next topic.", "Reconstruction of input noise z.", "The resulting architecture of IB-GAN is partly analogous to that of β-VAE since both are derived from the IB theory.", "However, β-VAE often generates blurry output images due to the large β > 1 ( Kim & Mnih, 2018; BID11 BID17 since setting β > 1 typically increases the distortion .", "Recently, demonstrates the possibility of achieving small distortion with the minimum rate by adopting a complex auto-regressive decoder in β-VAE and by setting β < 1.", "However, their experiment is performed on relatively small dataset (e.g. MNIST, Omniglot).In", "contrast, IB-GAN may not suffer from this shortcoming since the generator in IB-GAN learns to generate image by minimizing the rate. Moreover", ", it does not rely on any probabilistic modeling assumption of the decoder unlike VAEs and can inherit all merits of InfoGANs (e.g. producing images of good quality by an implicit decoder, and an adaptation of categorical distribution). One downside", "of our model would be the introduction of additional capacity control parameter λ. Although, we", "fixed λ = 1 in all of our experiment, which could also affect the convergence or the generalization ability of the generator. Further investigation", "on this subject could be an interesting future work.Behaviors of IB-GAN according to β. If β is too large such", "that the KL-divergence term is almost zero, then there would be no difference between the samples from the representation encoder e ψ (r|z) and the distortion prior m(r). Then, both representation", "r and generated data x contain no information about z at all, resulting in that the signal from the reconstructor is meaningless to the generator. In this case, the IB-GAN", "reduces to a vanilla GAN with an input r ∼ p(r).Maximization of variational", "lower-bound. Maximizing the variational", "lower-bound of generative MI has been employed in IM algorithm BID2 and InfoGAN BID12 . 
Recently, the lower-bound", "of MI, named GILBO, was proposed as a data-independent measure for the complexity of the learned representations of trained generative models. They discover that the optimal lower-bound", "of the generative MI correlates well with common image quality metrics of generative models (e.g. INCEPTION Salimans et al. (2016) or FID Heusel et al. (2017)). In this work, we discover a new way of", "upper-bounding the generative MI based on the causal structure of the deep learning architecture, and show the effectiveness of the upper-bound by measuring the disentanglement of the learned representation. Implementation of IB-GAN. Since the representation encoder e ψ (", "r|z) is stochastic, the reparametrization trick (Kingma & Welling, 2013) is needed to backpropagate gradient signals for training the encoder model. The representation r can be embedded", "along with an extra discrete code c ∼ p(c) before entering the generator (i.e. G(r, c)), and accordingly the reconstructor network becomes q(r, c|x) so as to predict the discrete code c as well. In this way, it is straightforward to", "introduce a discrete representation into IB-GAN, which is not an easy task in β-VAE based models. Theoretically, we can choose any number for the dimensions of r and z. However, the disentangled representation", "of IB-GAN is learned via the representation encoder e ψ (r|z). To obtain the representation r back from", "the real data x, we first sample z using the learned reconstructor q φ (z|x), and feed it to the representation encoder e ψ (r|z). Therefore, we typically choose a smaller", "dimension for r than for z. For more details on the architecture of", "IB-GAN, please refer to Appendix E. Related Work. Many extensions of β-VAE BID20", "have been proposed.", "BID10 modify β-VAE's objective such that the KL", "term is minimized toward a specific target constant C instead of scaling the term by β. Kim & Mnih (2018) and BID11 demonstrate, using the", "ELBO surgery (Hoffman & Johnson, 2016; Makhzani & Frey, 2017), that minimizing the KL-divergence enforces factorization of the marginal encoder, and thus promotes the independence of the learned representation. However, a high value of β can decrease the MI term", "too much, and thus often leads to worse reconstruction fidelity compared to the standard VAE. Hence, they introduce a total correlation (BID26) based", "regularization to overcome the reconstruction and disentanglement trade-off. These approaches could be complementary to IB-GAN, since", "the objective of IB-GAN also involves the KL term. This exploration could be an interesting future work." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5925925970077515, 0.3333333134651184, 0.2083333283662796, 0.19999998807907104, 0.1463414579629898, 0, 0.19354838132858276, 0.08695651590824127, 0.0615384578704834, 0.15686273574829102, 0.2222222238779068, 0.0555555522441864, 0.07999999821186066, 0.11999999731779099, 0.20000000298023224, 0.10810810327529907, 0.0476190410554409, 0.0952380895614624, 0.21875, 0.3913043439388275, 0.1428571343421936, 0, 0.2083333283662796, 0.2142857164144516, 0.04878048226237297, 0, 0, 0.11428570747375488, 0.05128204822540283, 0, 0.1355932205915451, 0.07547169178724289, 0.05714285373687744, 0.045454539358615875, 0.08695651590824127, 0, 0.17142856121063232, 0.03703703358769417, 0.11764705181121826, 0.08571428060531616, 0.1666666567325592, 0.08695651590824127, 0.060606054961681366, 0.1538461446762085, 0.1428571343421936, 0, 0.0624999962747097, 0.09302324801683426, 0.19512194395065308, 0.052631575614213943, 0, 0.16129031777381897, 0, 0.0952380895614624, 0.060606054961681366, 0, 0.0555555522441864, 0.1666666567325592, 0.06896550953388214, 0.0714285671710968, 0, 0.08695651590824127, 0.09090908616781235, 0.3448275923728943, 0.1860465109348297, 0.2222222238779068, 0.19999998807907104, 0.05882352590560913, 0.05128204822540283, 0.05882352590560913, 0.09302324801683426, 0, 0.0952380895614624, 0.11428570747375488, 0, 0.14999999105930328, 0, 0.0555555522441864, 0.07692307233810425, 0.13333332538604736, 0.05405404791235924, 0.0555555522441864, 0.04651162400841713, 0, 0.19999998807907104, 0, 0.060606054961681366, 0.1621621549129486, 0.1666666567325592, 0.1818181723356247, 0.0952380895614624, 0, 0.25, 0.1249999925494194, 0.13636362552642822, 0.1428571343421936, 0.0714285671710968, 0, 0, 0.10526315122842789, 0.11999999731779099, 0.052631575614213943, 0, 0.060606054961681366 ]
ryljV2A5KX
true
[ "Inspired by Information Bottleneck theory, we propose a new architecture of GAN for a disentangled representation learning" ]
[ "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs.", "As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters.", "We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic.", "Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs.", "In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance.", "We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "Generative Adversarial Networks (GANs; BID10 provide a powerful method for general-purpose generative modeling of datasets.", "Given examples from some distribution, a GAN attempts to learn a generator function, which maps from some fixed noise distribution to samples that attempt to mimic a reference or target distribution.", "The generator is trained to trick a discriminator, or critic, which tries to distinguish between generated and target samples.", "This alternative to standard maximum likelihood approaches for training generative models has brought about a rush of interest over the past several years.", "Likelihoods do not necessarily correspond well to sample quality BID13 , and GAN-type objectives focus much more on producing plausible samples, as illustrated particularly directly by Danihelka et al. (2017) .", "This class of models has recently led to many impressive examples of image generation (e.g. Huang et al., 2017a; Jin et al., 2017; Zhu et al., 2017) .GANs", "are, however, notoriously tricky to train (Salimans et al., 2016) . This", "might be understood in terms of the discriminator class. BID10", "showed that, when the discriminator is trained to optimality among a rich enough function class, the generator network attempts to minimize the Jensen-Shannon divergence between the generator and target distributions. This", "result has been extended to general f -divergences by Nowozin et al. (2016) . According", "to BID1 , however, it is likely that both the GAN and reference probability measures are supported on manifolds within a larger space, as occurs for the set of images in the space of possible pixel values. These manifolds", "might not intersect at all, or at best might intersect on sets of measure zero. In this case, the", "Jensen-Shannon divergence is constant, and the KL and reverse-KL divergences are infinite, meaning that they provide no useful gradient for the generator to follow. This helps to explain", "some of the instability of GAN training.The lack of sensitivity to distance, meaning that nearby but non-overlapping regions of high probability mass are not considered similar, is a long-recognized problem for KL divergence-based discrepancy measures (e.g. Gneiting & Raftery, 2007, Section 4.2) . 
It is natural to address", "this problem using Integral Probability Metrics (IPMs; Müller, 1997) : these measure the distance between probability measures via the largest discrepancy in expectation over a class of \"well behaved\" witness functions. Thus, IPMs are able to signal", "proximity in the probability mass of the generator and reference distributions. (Section 2 describes this framework", "in more detail.) BID1 proposed to use the Wasserstein", "distance between distributions as the discriminator, which is an integral probability metric constructed from the witness class of 1-Lipschitz functions. To implement the Wasserstein critic,", "Arjovsky et al. originally proposed weight clipping of the discriminator network, to enforce k-Lipschitz smoothness. Gulrajani et al. (2017) improved on", "this result by directly constraining the gradient of the discriminator network at points between the generator and reference samples. This new Wasserstein GAN implementation", ", called WGAN-GP, is more stable and easier to train.A second integral probability metric used in GAN variants is the maximum mean discrepancy (MMD), for which the witness function class is a unit ball in a reproducing kernel Hilbert space (RKHS). Generative adversarial models based on", "minimizing the MMD were first considered by Li et al. (2015) and Dziugaite et al. (2015) . These works optimized a generator to minimize", "the MMD with a fixed kernel, either using a generic kernel on image pixels or by modeling autoencoder representations instead of images directly. BID9 instead minimized the statistical power", "of an MMD-based test with a fixed kernel. Such approaches struggle with complex natural", "images, where pixel distances are of little value, and fixed representations can easily be tricked, as in the adversarial examples of BID10 .Adversarial training of the MMD loss is thus an", "obvious choice to advance these methods. Here the kernel MMD is defined on the output of", "a convolutional network, which is trained adversarially. Recent notable work has made use of the IPM representation", "of the MMD to employ the same witness function regularization strategies as BID1 and Gulrajani et al. (2017) , effectively corresponding to an additional constraint on the MMD function class. Without such constraints, the convolutional features are unstable", "and difficult to train BID9 . Li et al. (2017b) essentially used the weight clipping strategy of", "Arjovsky et al., with additional constraints to encourage the kernel distribution embeddings to be injective. 1 In light of the observations by Gulrajani et al., however, we use", "a gradient constraint on the MMD witness function in the present work (see Sections 2.1 and 2.2).2 Bellemare et al. (2017) 's method, the Cramér GAN, also used the gradient", "constraint strategy of Gulrajani et al. in their discriminator network. As we discuss in Section 2.3, the Cramér GAN discriminator is related to", "the energy distance, which is an instance of the MMD (Sejdinovic et al., 2013) , and which can therefore use a gradient constraint on the witness function. Note, however, that there are important differences between the Cramér GAN", "critic and the energy distance, which make it more akin to the optimization of a scoring rule: we provide further details in Appendix A. 
Weight clipping and gradient constraints are not the only approaches possible: variance features (Mroueh et al., 2017) and constraints (Mroueh & Sercu, 2017) can work, as can other optimization strategies (Berthelot et al., 2017; Li et al., 2017a) .Given that both the Wasserstein distance and the MMD are integral probability", "metrics, it is of interest to consider how they differ when used in GAN training. Bellemare et al. (2017) showed that optimizing the empirical Wasserstein distance", "can lead to biased gradients for the generator, and gave an explicit example where optimizing with these biased gradients leads the optimizer to incorrect parameter values, even in expectation. They then claim that the energy distance does not suffer from these problems. As", "our main theoretical contribution, we substantially clarify the bias situation", "in Section 3. First, we show (Theorem 1) that the natural maximum mean discrepancy estimator, including", "the estimator of energy distance, has unbiased gradients when used \"on top\" of a fixed deep network representation. The generator gradients obtained from a trained representation, however, will be biased relative", "to the desired gradients of the optimal critic based on infinitely many samples. This situation is exactly analogous to WGANs: the generator's gradients with a fixed critic are", "unbiased, but gradients from a learned critic are biased with respect to the supremum over critics.MMD GANs, though, do have some advantages over Wasserstein GANs. Certainly we would not expect the MMD on its own to perform well on raw image data, since these", "data lie on a low dimensional manifold embedded in a higher dimensional pixel space. Once the images are mapped through appropriately trained convolutional layers, however, they can", "follow a much simpler distribution with broader support across the mapped domain: a phenomenon also observed in autoencoders (Bengio et al., 2013) . In this setting, the MMD with characteristic kernels BID4 shows strong discriminative performance", "between distributions. To achieve comparable performance, a WGAN without the advantage of a kernel on the transformed space", "requires many more convolutional filters in the critic. In our experiments (Section 5), we find that MMD GANs achieve the same generator performance as WGAN-GPs", "with smaller discriminator networks, resulting in GANs with fewer parameters and computationally faster training. Thus, the MMD GAN discriminator can be understood as a hybrid model that plays to the strengths of both", "the initial convolutional mappings and the kernel layer that sits on top." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.1875, 0.15789473056793213, 0.12121211737394333, 0.2857142686843872, 0.0476190410554409, 0, 0.04878048226237297, 0, 0, 0, 0.0476190410554409, 0, 0, 0, 0, 0.038461532443761826, 0, 0, 0.032258059829473495, 0, 0, 0, 0.04999999329447746, 0, 0.10256409645080566, 0.06779660284519196, 0.05405404791235924, 0.09302324801683426, 0.06666666269302368, 0.04444443807005882, 0.0624999962747097, 0.05882352590560913, 0.04081632196903229, 0, 0.0476190410554409, 0.09090908616781235, 0.05128204822540283, 0.07843136787414551, 0.053333330899477005, 0.04651162400841713, 0.03703703358769417, 0.14814814925193787, 0, 0, 0.14999999105930328, 0.14035087823867798, 0, 0.07999999821186066, 0, 0.1463414579629898, 0.21276594698429108, 0 ]
r1lUOzWCW
true
[ "Explain bias situation with MMD GANs; MMD GANs work with smaller critic networks than WGAN-GPs; new GAN evaluation metric." ]
[ "We extend the Consensus Network framework to Transductive Consensus Network (TCN), a semi-supervised multi-modal classification framework, and identify its two mechanisms: consensus and classification.", "By putting forward three variants as ablation studies, we show both mechanisms should be functioning together.", "Overall, TCNs outperform or align with the best benchmark algorithms when only 20 to 200 labeled data points are available." ]
[ 1, 0, 0 ]
[ 0.2666666507720947, 0, 0 ]
HyeWvcQOKm
false
[ "A semi-supervised multi-modal classification framework, TCN, that outperforms various benchmarks." ]
[ "Separating mixed distributions is a long standing challenge for machine learning and signal processing.", "Applications include: single-channel multi-speaker separation (cocktail party problem), singing voice separation and separating reflections from images.", "Most current methods either rely on making strong assumptions on the source distributions (e.g. sparsity, low rank, repetitiveness) or rely on having training samples of each source in the mixture.", "In this work, we tackle the scenario of extracting an unobserved distribution additively mixed with a signal from an observed (arbitrary) distribution.", "We introduce a new method: Neural Egg Separation - an iterative method that learns to separate the known distribution from progressively finer estimates of the unknown distribution.", "In some settings, Neural Egg Separation is initialization sensitive, we therefore introduce GLO Masking which ensures a good initialization.", "Extensive experiments show that our method outperforms current methods that use the same level of supervision and often achieves similar performance to full supervision.", "Humans are remarkably good at separating data coming from a mixture of distributions, e.g. hearing a person speaking in a crowded cocktail party.", "Artificial intelligence, on the the hand, is far less adept at separating mixed signals.", "This is an important ability as signals in nature are typically mixed, e.g. speakers are often mixed with other speakers or environmental sounds, objects in images are typically seen along other objects as well as the background.", "Understanding mixed signals is harder than understanding pure sources, making source separation an important research topic.Mixed signal separation appears in many scenarios corresponding to different degrees of supervision.", "Most previous work focused on the following settings:Full supervision: The learner has access to a training set including samples of mixed signals {y i } ∈ Y as well as the ground truth sources of the same signals {b i } ∈ B and {x i } ∈ X (such that y i = x i + b i ).", "Having such strong supervision is very potent, allowing the learner to directly learn a mapping from the mixed signal y i to its sources (x i , b i ).", "Obtaining such strong supervision is typically unrealistic, as it requires manual separation of mixed signals.", "Consider for example a musical performance, humans are often able to separate out the different sounds of the individual instruments, despite never having heard them play in isolation.", "The fully supervised setting does not allow the clean extraction of signals that cannot be observed in isolation e.g. music of a street performer, car engine noises or reflections in shop windows.", "GLO vs. Adversarial Masking: GLO Masking as a stand alone technique usually performed worse than Adversarial Masking.", "On the other hand, finetuning from GLO masks was far better than finetuning from adversarial masks.", "We speculate that mode collapse, inherent in adversarial training, makes the adversarial masks a lower bound on the X source distribution.", "GLOM can result in models that are too loose (i.e. 
that also encode samples outside of X).", "But as an initialization for NES finetuning, it is better to have a model that is too loose than a model that is too tight. Supervision Protocol: Supervision is important for source separation.", "Completely blind source separation is not well specified, and simply using general signal statistics is unlikely to yield competitive results.", "Obtaining full supervision by providing a labeled mask for training mixtures is unrealistic, and even synthetic supervision in the form of a large training set of clean samples from each source distribution might be unavailable, as some sounds are never observed on their own (e.g. the sounds of car wheels).", "Our setting significantly reduces the required supervision to specifying whether a given sound sample contains the unobserved source or not.", "Such supervision can be provided quite easily and inexpensively.", "To further increase sample efficiency, we hypothesize that it would be possible to label only a limited set of examples as containing the target sound or not, and to use this seed dataset to finetune a deep sound classifier that extracts more examples from an unlabeled dataset.", "We leave this investigation to future work.", "In this paper we proposed a novel method, Neural Egg Separation, for separating mixtures of observed and unobserved distributions.", "We showed that careful initialization using GLO Masking improves results in challenging cases.", "Our method achieves much better performance than other methods and is usually competitive with full supervision." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0, 0, 0.23529411852359772, 0.1538461446762085, 0, 0.1111111044883728, 0.0555555522441864, 0.14814814925193787, 0.23255813121795654, 0.0952380895614624, 0.09999999403953552, 0.04999999701976776, 0.13793103396892548, 0.09756097197532654, 0.13333332538604736, 0, 0.07407406717538834, 0.060606054961681366, 0.1249999925494194, 0.10256409645080566, 0, 0.10344827175140381, 0, 0, 0.07547169178724289, 0, 0.06451612710952759, 0.07407406717538834, 0.20689654350280762 ]
SkelJnRqt7
true
[ "An iterative neural method for extracting signals that are only observed mixed with other signals" ]
[ "In health, machine learning is increasingly common, yet neural network embedding (representation) learning is arguably under-utilized for physiological signals. ", "This inadequacy stands out in stark contrast to more traditional computer science domains, such as computer vision (CV), and natural language processing (NLP). ", "For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models (i.e., representation models), which map patient data to an output embedding. ", "Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the \"stacked\" models (i.e., feature embedding model followed by prediction model). ", "PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferal of neural networks to create physiological signal embeddings.", "2) We present a tractable method to obtain feature attributions through stacked models. ", "We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models.", "3) PHASE was extensively tested in a cross-hospital setting including publicly available data. ", "In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use.", "Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings and offer recommendations when transference is ineffective.", "Representation learning (i.e., learning embeddings) BID14 has been applied to medical images and clinical text (Tajbakhsh et al., 2016; BID16 BID13 ) but has been under-explored for time series physiological signals in electronic health records.", "This paper introduces the PHASE (PHysiologicAl Signal Embeddings) framework to learn embeddings of physiological signals FIG1 ), which can be used for various prediction tasks FIG1 , and has been extensively tested in terms of its transferability using data from multiple hospitals ( FIG1 ).", "In addition, this paper introduces an interpretability method to compute per-sample feature attributions of the original features (i.e., not embeddings) for a prediction result in a tricky \"stacked\" model situation (i.e., embedding model followed by prediction model) ( FIG1 ).Based", "on computer vision (CV) and natural language processing (NLP), exemplars of representation learning, physiological signals are well suited to embeddings. In particular", ", CV and NLP share two notable traits with physiological signals. The first is", "consistency. For CV, the", "domain has consistent features: edges, colors, and other visual attributes. For NLP, the", "domain is a particular language with semantic relationships consistent across bodies of text. For sequential", "signals, physiological patterns are arguably consistent across individuals. The second attribute", "is complexity. Across these three domains", ", each particular domain is sufficiently complex such that learning embeddings is non-trivial. 
Together, consistency and", "complexity suggest that for a particular domain, every research group independently spends a significant time to learn embeddings that may ultimately be Figure 1: The PHASE framework, which consists of embedding learning, prediction, interpretation, and transference. The checkered patterns denote", "that a model is being trained in the corresponding stage, whereas solid colors denote fixed weights/models. The red side of the LSTM denotes", "the hidden layer we will use to generate embeddings. In (c), the size of the black circles", "on the left represent the feature attributions being assigned to the original input features. The signals and the outputs of the LSTMs", "are vectors. Multiple connections into a single XGB model", "are simply concatenated. More details on the experimental setup can be", "found in Sections 4.1 and 6.1.quite similar. In order to avoid this negative externality,", "NLP and CV have made great progress on standardizing their embeddings; in health, physiological signals are a natural next step.Furthermore, physiological signals have unique properties that make them arguably better suited to representation learning than traditional CV and NLP applications. First, physiological signals are typically generated", "in the health domain, which is constrained by patient privacy concerns. These concerns make sharing data between hospitals next", "to impossible; however, sharing models between hospitals is intuitively safer and generally accepted. Second, a key component to successful transfer learning", "is a community of researchers that work on related problems. According to Faust et al. (2018) , there were at least", "fifty-three research publications using deep learning methods for physiological signals in the past ten years. Additionally, we discuss particular examples of neural", "networks for physiological signals in Section 2.2. These varied applications of neural networks imply that", "there is a large community of machine learning research scientists working on physiological signals, a community that could one day work collaboratively to help patients by sharing models.Although embedding learning has many aforementioned advantages, it makes interpretation more difficult. Naive applications of existing interpretation methods (", "Shrikumar et al., 2016; Sundararajan et al., 2017; do not work for models trained using learned embeddings, because they will assign attributions to the embeddings. Feature attributions assigned to embeddings will be meaningless", ", because the embeddings do not map to any particular input feature. Instead, each embedding is a complicated, potentially non-linear", "combination of the original raw physiological signals. In a health domain, inability to meaningfully interpret your model", "is unsatisfactory. Healthcare providers and patients alike generally want to know the", "reasoning behind predictions/diagnoses. Interpretability can enhance both scientific discovery as well as", "provide credibility to predictive models. In order to provide a principled methodology for mapping embedding", "attributions back into physiological signal attributions, we provide a proof that justifies PHASE's Shapley value framework in Section 3.3. 
This framework generalizes across arbitrary stacked models and currently", "encompasses neural network models (e.g., linear models, neural networks) and tree-based models (e.g., gradient boosting machines and random forests).In the following sections, we discuss previous related work (Section 2) and", "describe the PHASE framework (Section 3). In Section 4, we first evaluate how well our neural network embeddings make", "accurate predictions (Section 4.2.1). Second, we evaluate whether transferring these embedding learners still enables", "accurate predictions across three different hospitals separated by location and across hospital departments (Section 4.2.2). Lastly, we present a visualization of our methodology for providing Shapley value", "feature attributions through stacked models in Section 4.2.3.", "This paper presents PHASE, a new approach to machine learning with physiological signals based on transferring embedding learners.", "PHASE has potentially far-reaching impacts, because neural networks inherently create an embedding before the final output layer.", "As discussed in Section 2.2, there is a large body of research independently working on neural networks for physiological signals.", "PHASE offers a potential method of collaboration by analyzing partially supervised univariate networks as semi-private ways to share meaningful signals without sharing data sets.In the results section we offer several insights into transference of univariate LSTM embedding functions.", "First, closeness of upstream (LSTM) and downstream prediction tasks is indeed important for both predictive performance and transference.", "For performance, we found that predicting the minimum of the future five minutes was sufficient for the LSTMs to generate good embeddings.", "For transference, predicting the minimum of the next five minutes was sufficient to transfer across similar domains (operating room data from an academic medical center and a trauma center) when predicting hypoxemia.", "However when attempting to utilize a representation from Hospital P, we found that the difference between operating rooms and intensive care units was likely too large to provide good predictions.", "Two solutions to this include fine tuning the Min LSTM models as well as acknowledging the large amount of domain shift and training specific LSTM embedding models with a particular downstream prediction in mind.", "Last but not least, this paper introduced a way to obtain feature attributions for stacked models of neural networks and trees.", "By showing that Shapley values may be computed as the mean over single reference Shapley values, this model stacking framework generalizes to all models for which single reference Shapley values can be obtained, which was quantitatively verified in Section 4.2.3.We intend to release code pertinent to training the LSTM models, obtaining embeddings, predicting with XGB models, and model stacking feature attributions -submitted as a pull request to the SHAP github (https://github.com/slundberg/shap).", "Additionally, we intend to release our embedding models, which we primarily recommend for use in forecasting \"hypo\" predictions.In the direction of future work, it is important to carefully consider representation learning in health -particularly in light of model inversion attacks as discussed in Fredrikson et al. 
(2015) .", "To this end, future work in making precise statements about the privacy of models deserves attention, for which one potential avenue may be differential privacy (Dwork, 2008) .", "Other important areas to explore include extending these results to higher sampling frequencies.", "Our data was sampled once per minute, but higher resolution data may beget different neural network architectures.", "Lastly, further work may include quantifying the relationship between domain shifts in hospitals and PHASE and determining other relevant prediction tasks for which embeddings can be applied (e.g., \"hyper\" predictions, doctor action prediction, etc.", "Labels For hypoxemia, a particular time point t is labelled to be one if the minimum of the next five minutes is hypoxemic (min(SaO t+1:t+6 2 ) ≤ 92).", "All points where the current time step is currently hypoxemic are ignored (SaO t 2 ≤ 92).", "Additionally we ignore time points where the past ten minutes were all missing or the future five minutes were all missing.", "Hypocapnia and hypotension are only labelled for hospitals 0 and 1.", "Additionally, we have stricter label conditions.", "We labeled the current time point t to be one if (min(S t−10:t ) > T ) and the minimum of the next five minutes is \"hypo\" (min(S t+1:t+5 ) ≤ T ).", "We labeled the current time point t to be zero if (min(S t−10:t ) > T ) and the minimum of the next ten minutes is not \"hypo\" (min(S t+1:t+10 ) > T ).", "All other time points were not considered.", "For hypocapnia, the threshold T = 34 and the signal S is ETCO 2 .", "For hypotension the threshold is T = 59 and the signal S is NIBPM.", "Additionally we ignore time points where the past ten minutes were all missing or the future five minutes were all missing.", "As a result, we have different sample sizes for different prediction tasks (reported in TAB7 ).", "For Min predictions, the label is the value of min(S t+1:t+5 ), points without signal for in the future five minutes are ignored.", "For Auto predictions, the label is all the time points: S t−59:t .", "The sample sizes for Min and Auto are the same and are reported in Table 3.", "Table 3 : Sample sizes for the Min and Auto predictions for training the LSTM autoencoders.", "For the autoencoders we utilize the same data, without looking at the labels.", "We only utilize the 15 features above the line in both hospitals ( Figure 5 ) for training our models.", "(2000) , implemented in the Keras library with a Tensorflow back-end.", "We train our networks with either regression (Auto and Min embeddings) or classification (Hypox) objectives.", "For regression, we optimize using Adam with an MSE loss function.", "For classification we optimize using RMSProp with a binary cross-entropy loss function (additionally, we upsample to maintain balanced batches during training).", "Our model architectures consist of two hidden layers, each with 200 LSTM cells with dense connections between all layers.", "We found that important steps in training LSTM networks for our data are to impute missing values by the training mean, standardize data, and to randomize sample ordering prior to training (allowing us to sample data points in order without replacement).", "To prevent overfitting, we utilized dropouts between layers as well as recurrent dropouts for the LSTM nodes.", "Using a learning rate of 0.001 gave us the best final results.", "The LSTM models were run to convergence (until their validation accuracy did not improve for five rounds of batch stochastic 
gradient descent).", "In order to train these models, we utilize three GPUs (GeForce GTX 1080 Ti graphics cards).", "DISPLAYFORM0" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0555555522441864, 0.04878048226237297, 0.1071428507566452, 0.12121211737394333, 0.09999999403953552, 0.25, 0.19999998807907104, 0.0624999962747097, 0.09756097197532654, 0.1428571343421936, 0.07692307233810425, 0.13333332538604736, 0.17543859779834747, 0.09999999403953552, 0.1249999925494194, 0, 0.06451612710952759, 0.12121211737394333, 0, 0, 0.12121211737394333, 0.1818181723356247, 0.04999999329447746, 0.060606054961681366, 0.0555555522441864, 0.07407406717538834, 0, 0.05882352590560913, 0.0714285671710968, 0, 0.15789473056793213, 0.051282044500112534, 0.04999999329447746, 0.0624999962747097, 0.06666666269302368, 0.1304347813129425, 0.10526315122842789, 0.05714285373687744, 0.06666666269302368, 0, 0.19354838132858276, 0.31111109256744385, 0.08695651590824127, 0.05405404791235924, 0, 0.260869562625885, 0.1428571343421936, 0.1111111044883728, 0, 0.10256409645080566, 0.1090909019112587, 0.2857142686843872, 0.10526315122842789, 0.0833333283662796, 0.08510638028383255, 0.2083333283662796, 0.25641024112701416, 0.15584415197372437, 0.03333332762122154, 0.09090908616781235, 0, 0, 0.15094339847564697, 0.04444443807005882, 0, 0, 0.1428571343421936, 0, 0.045454539358615875, 0.045454539358615875, 0, 0.12903225421905518, 0.13333332538604736, 0, 0.1818181723356247, 0.1538461446762085, 0, 0.1249999925494194, 0.1249999925494194, 0, 0.10810810327529907, 0.13793103396892548, 0.12121211737394333, 0.06896550953388214, 0.10526315122842789, 0.0555555522441864, 0.07843136787414551, 0.060606054961681366, 0.06451612710952759, 0.09999999403953552, 0 ]
SygInj05Fm
true
[ "Physiological signal embeddings for prediction performance and hospital transference with a general Shapley value interpretability method for stacked models." ]
[ "We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients.", "Since the dictionary and coefficients, parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex.", "This was a major challenge until recently, when provable algorithms for dictionary learning were proposed.", "Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients.", "Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients.", "This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest.", "To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately.", "Our algorithm, NOODL, is also scalable and amenable for large scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations.", "Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques.", "Sparse models avoid overfitting by favoring simple yet highly expressive representations.", "Since signals of interest may not be inherently sparse, expressing them as a sparse linear combination of a few columns of a dictionary is used to exploit the sparsity properties.", "Of specific interest are overcomplete dictionaries, since they provide a flexible way of capturing the richness of a dataset, while yielding sparse representations that are robust to noise; see BID13 ; Chen et al. (1998); Donoho et al. 
(2006) .", "In practice however, these dictionaries may not be known, warranting a need to learn such representations -known as dictionary learning (DL) or sparse coding BID14 .", "Formally, this entails learning an a priori unknown dictionary A ∈ R n×m and sparse coefficients x * (j) ∈ R m from data samples y (j) ∈ R n generated as DISPLAYFORM0 This particular model can also be viewed as an extension of the low-rank model BID15 .", "Here, instead of sharing a low-dimensional structure, each data vector can now reside in a separate low-dimensional subspace.", "Therefore, together the data matrix admits a union-of-subspace model.", "As a result of this additional flexibility, DL finds applications in a wide range of signal processing and machine learning tasks, such as denoising (Elad and Aharon, 2006) , image inpainting BID12 , clustering and classification (Ramirez et al., 2010; BID16 BID17 BID18 2019b; a) , and analysis of deep learning primitives (Ranzato et al., 2008; BID0 ; see also Elad (2010) , and references therein.Notwithstanding the non-convexity of the associated optimization problems (since both factors are unknown), alternating minimization-based dictionary learning techniques have enjoyed significant success in practice.", "Popular heuristics include regularized least squares-based BID14 BID8 BID12 BID9 BID7 , and greedy approaches such as the method of optimal directions (MOD) (Engan et al., 1999) and k-SVD (Aharon et al., 2006) .", "However, dictionary learning, and matrix factorization models in general, are difficult to analyze in theory; see also BID10 .To", "this end, motivated from a string of recent theoretical works BID1 BID4 Geng and Wright, 2014) , provable algorithms for DL have been proposed recently to explain the success of aforementioned alternating minimization-based algorithms (Agarwal et al., 2014; Arora et al., 2014; BID20 . However", ", these works exclusively focus on guarantees for dictionary recovery. On the", "other hand, for applications of DL in tasks such as classification and clusteringwhich rely on coefficient recovery -it is crucial to have guarantees on coefficients recovery as well.Contrary to conventional prescription, a sparse approximation step after recovery of the dictionary does not help; since any error in the dictionary -which leads to an error-in-variables (EIV) (Fuller, 2009 ) model for the dictionary -degrades our ability to even recover the support of the coefficients (Wainwright, 2009) . Further", ", when this error is non-negligible, the existing results guarantee recovery of the sparse coefficients only in 2 -norm sense (Donoho et al., 2006) . 
As a result", ", there is a need for scalable dictionary learning techniques with guaranteed recovery of both factors.", "we note that Arora15(''biased'') and Arora15(''unbiased'') incur significant bias, while NOODL converges to A * linearly.", "NOODL also converges for significantly higher choices of sparsity k, i.e., for k = 100 as shown in panel (d), beyond k = O( √ n), indicating a potential for improving this bound.", "Further, we observe that Mairal '09 exhibits significantly slow convergence as compared to NOODL.", "Also, in panels (a-ii), (b-ii), (c-ii) and (d-ii) we show the corresponding performance of NOODL in terms of the error in the overall fit ( Y − AX F / Y F ), and the error in the coefficients and the dictionary, in terms of relative Frobenius error metric discussed above.", "We observe that the error in dictionary and coefficients drops linearly as indicated by our main result.", "We present NOODL, to the best of our knowledge, the first neurally plausible provable online algorithm for exact recovery of both factors of the dictionary learning (DL) model.", "NOODL alternates between:", "(a) an iterative hard thresholding (IHT)-based step for coefficient recovery, and", "(b) a gradient descent-based update for the dictionary, resulting in a simple and scalable algorithm, suitable for large-scale distributed implementations.", "We show that once initialized appropriately, the sequence of estimates produced by NOODL converge linearly to the true dictionary and coefficients without incurring any bias in the estimation.", "Complementary to our theoretical and numerical results, we also design an implementation of NOODL in a neural architecture for use in practical applications.", "In essence, the analysis of this inherently non-convex problem impacts other matrix and tensor factorization tasks arising in signal processing, collaborative filtering, and machine learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3181818127632141, 0.19354838132858276, 0.32258063554763794, 0.13793103396892548, 0.1249999925494194, 0.29411762952804565, 0.21739129722118378, 0.045454539358615875, 0.1818181723356247, 0, 0.1904761791229248, 0.11999999731779099, 0.1463414579629898, 0.2142857164144516, 0.1249999925494194, 0.23999999463558197, 0.1573033630847931, 0.08695651590824127, 0.05882352590560913, 0.1818181723356247, 0.2142857164144516, 0.1599999964237213, 0.1428571343421936, 0.4375, 0, 0.12765957415103912, 0, 0.08163265138864517, 0.1818181723356247, 0.6000000238418579, 0, 0.07407406717538834, 0.1764705777168274, 0.1904761791229248, 0.15789473056793213, 0.14999999105930328 ]
HJeu43ActQ
true
[ "We present a provable algorithm for exactly recovering both factors of the dictionary learning model. " ]
[ "Real-world Relation Extraction (RE) tasks are challenging to deal with, either due to limited training data or class imbalance issues.", "In this work, we present Data Augmented Relation Extraction (DARE), a simple method to augment training data by properly finetuning GPT2 to generate examples for specific relation types.", "The generated training data is then used in combination with the gold dataset to train a BERT-based RE classifier.", "In a series of experiments we show the advantages of our method, which leads in improvements of up to 11 F1 score points compared to a strong baseline.", "Also, DARE achieves new state-of-the-art in three widely used biomedical RE datasets surpassing the previous best results by 4.7 F1 points on average.", "Relation Extraction (RE) is the task of identifying semantic relations from text, for given entity mentions in it.", "This task, along with Named Entity Recognition, has become increasingly important recently due to the advent of knowledge graphs and their applications.", "In this work, we focus on supervised RE (Zeng et al., 2014; Lin et al., 2016; Wu et al., 2017; Verga et al., 2018) , where relation types come from a set of predefined categories, as opposed to Open Information Extraction approaches that represent relations among entities using their surface forms (Banko et al., 2007; Fader et al., 2011) .", "RE is inherently linked to Natural Language Understanding in the sense that a successful RE model should manage to capture adequately well language structure and meaning.", "So, almost inevitably, the latest advances in language modelling with Transformer-based architectures (Radford et al., 2018a; Devlin et al., 2018; Radford et al., 2018b) have been quickly employed to also deal with RE tasks (Soares et al., 2019; Lin et al., 2019; Shi and Lin, 2019; Papanikolaou et al., 2019) .", "These recent works have mainly leveraged the discriminative power of BERT-based models to improve upon the state-of-the-art.", "In this work we take a step further and try to assess whether the text generating capabilities of another language model, GPT-2 (Radford et al., 2018b) , can be applied to augment training data and deal with class imbalance and small-sized training sets successfully.", "Specifically, given a RE task we finetune a pretrained GPT-2 model per each relation type and then use the resulting finetuned models to generate new training samples.", "We then combine the generated data with the gold dataset and finetune a pretrained BERT model (Devlin et al., 2018) on the resulting dataset to perform RE.", "We conduct extensive experiments, studying different configurations for our approach and compare DARE against two strong baselines and the stateof-the-art on three well established biomedical RE benchmark datasets.", "The results show that our approach yields significant improvements against the rest of the approaches.", "To the best of our knowledge, this is the first attempt to augment training data with GPT-2 for RE.", "In Table 1 we show some generated examples with GPT-2 models finetuned on the datasets that are used in the experiments (refer to Section 4).", "In the following, we provide a brief overview of related works in Section 2, we then describe our approach in Section 3, followed by our experimental results (Section 4) and the conclusions (Section 5).", "We have presented DARE, a novel method to augment training data in Relation Extraction.", "Given a gold RE dataset, our approach proceeds by finetuning a pre-trained GPT-2 model per 
relation type and then uses the finetuned models to generate new training data.", "We combine sampled subsets of the synthetic data with the gold dataset to finetune an ensemble of BERT-based RE classifiers.", "In a series of experiments we show empirically that our method is particularly suited to class-imbalanced or limited-data settings, recording improvements of up to 11 F1 score points over two strong baselines.", "We also report new state-of-the-art performance on three biomedical RE benchmarks.", "Our work can be extended, with minor modifications, to other Natural Language Understanding tasks, a direction that we would like to address in future work." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1599999964237213, 0.24242423474788666, 0.07999999821186066, 0, 0, 0.1666666567325592, 0.0714285671710968, 0.036363635212183, 0, 0.045454543083906174, 0, 0.08695651590824127, 0.0624999962747097, 0.06451612710952759, 0, 0, 0.1666666567325592, 0.13333332538604736, 0, 0.20000000298023224, 0.060606058686971664, 0.07407406717538834, 0.04999999701976776, 0, 0.06666666269302368 ]
rJedNwij2r
true
[ "Data Augmented Relation Extraction with GPT-2" ]
[ "This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN).", "The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs.", "To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form.", "Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis.", "As this makes the inversion of the information matrix intractable - an operation that is required for full Bayesian analysis, we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme.", "We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods.", "Whenever machine learning methods are used for safety-critical applications such as medical image analysis or autonomous driving, it is crucial to provide a precise estimation of the failure probability of the learned predictor.", "Therefore, most of the current learning approaches return distributions rather than single, most-likely predictions.", "For example, DNNs trained for classification usually use the softmax function to provide a distribution over predicted class labels.", "Unfortunately, this method tends to severely underestimate the true failure probability, leading to overconfident predictions (Guo et al., 2017) .", "The main reason for this is that neural networks are typically trained with a principle of maximum likelihood, neglecting their epistemic or model uncertainty with the point estimates.", "A widely known work by Gal (2016) shows that this can be mitigated by using dropout at test time.", "This so-called Monte-Carlo dropout (MC-dropout) has the advantage that it is relatively easy to use and therefore very popular in practice.", "However, MC-dropout also has significant drawbacks.", "First, it requires a specific stochastic regularization during training.", "This limits its use on already well trained architectures, because current networks are often trained with other regularization techniques such as batch normalization.", "Moreover, it uses a Bernoulli distribution to represent the complex model uncertainty, which in return, leads to an underestimation of the predictive uncertainty.", "Several strong alternatives exist without these drawbacks.", "Variational inference Kingma et al., 2015; Graves, 2011) and expectation propagation (Herandez-Lobato & Adams, 2015) are such examples.", "Yet, these methods use a diagonal covariance matrix which limits their applicability as the model parameters are often highly correlated.", "Building upon these, Sun et al. (2017) ; Louizos & Welling (2016) ; Zhang et al. (2018) ; Ritter et al. 
(2018a) show that the correlations between the parameters can also be computed efficiently by decomposing the covariance matrix of MND into Kronecker products of smaller matrices.", "However, not all matrices can be Kronecker decomposed and thus, these simplifications usually induce crude approximations (Bae et al., 2018) .", "As the dimensionality of statistical manifolds are prohibitively too large in DNNs, more expressive, efficient but still easy to use ways of representing such high dimensional distributions are required.", "To tackle this challenge, we propose to represent the model uncertainty in sparse information form of MND.", "As a first step, we devise a new Laplace Approximation (LA) for DNNs, in which we improve the state-of-the-art Kronecker factored approximations of the Hessian (George et al., 2018) by correcting the diagonal variance in parameter space.", "We show that these can be computed efficiently, and that the information matrix of the resulting parameter posterior is more accurate in terms of the Frobenius norm.", "In this way the model uncertainty is approximated in information form of the MND.", "counts [-] Figure 1: Main idea.", "(a) Covariance matrix Σ for DNNs is intractable to infer, store and sample (an example taken from our MNIST experiments).", "(b) Our main insight is that the spectrum (eigenvalues) of information matrix (inverse of covariance) tend to be sparse.", "(c) Exploiting this insight a Laplace Approximation scheme is devised which applies a spectral sparsification (LRA) while keeping the diagonals exact.", "With this formulation, the complexity becomes tractable for sampling while producing more accurate estimates.", "Here, the diagonal elements (nodes in graphical interpretation) corresponds to information content in a parameter whereas the corrections (links) are the off-diagonals.", "As this results in intractable inverse operation for sampling, we further propose a novel low-rank representation of the resulting Kronecker factorization, which paves the way to applications on large network structures trained on realistically sized data sets.", "To realize such sparsification, we propose a novel algorithm that enables a low-rank approximation of the Kronecker factored eigenvalue decomposition, and we demonstrate an associated sampling computations.", "Our experiments demonstrate that our approach is effective in providing more accurate uncertainty estimates and calibration on considered benchmark data sets.", "A detailed theoretical analysis is also provided for further insights.", "We summarize our main contributions below.", "• A novel Laplace Approximation scheme with a diagonal correction to the eigenvalue rescaled approximations of the Hessian, as a practical inference tool (section 2.2).", "• A novel low-rank representation of Kronecker factored eigendecomposition that preserves Kronecker structure (section 2.3).", "This results in a sparse information form of MND.", "• A novel algorithm to enable a low rank approximation (LRA) for the given representation of MND (algorithm 1) and derivation of a memory-wise tractable sampler (section B.2).", "• Both theoretical (section C) and experimental results (section 4) showing the applicability of our approach.", "In our experiments, we showcase the state-of-the-art performance within the class of Bayesian Neural Networks that are scalable and training-free.", "To our knowledge we explore a sparse information form to represent the model uncertainty of DNNs for the first time.", "Figure 1 depicts 
our main idea which we provide more rigorous formulation next.", "We address an effective approach of representing model uncertainty in deep neural networks using Multivariate Normal Distribution, which has been thought computationally intractable so far.", "This is achieved by designing its novel sparse information form.", "With one of the most expressive representation of model uncertainty in current Bayesian deep learning literature, we show that uncertainty can be estimated more accurately than existing methods.", "For future works, we plan to demonstrate a real world application of this approach, pushing beyond the validity of concepts." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0624999962747097, 0, 0, 0.1538461446762085, 0.0555555522441864, 0, 0.10526315122842789, 0.0952380895614624, 0.07692307233810425, 0, 0.05882352590560913, 0, 0, 0, 0, 0, 0, 0, 0.07692307233810425, 0, 0, 0, 0, 0, 0.04999999701976776, 0, 0, 0, 0.07407406717538834, 0, 0, 0.0952380895614624, 0, 0.0476190447807312, 0.0624999962747097, 0, 0.11764705181121826, 0, 0.06451612710952759, 0, 0, 0.11764705926179886, 0, 0, 0.07692307233810425, 0, 0.0624999962747097, 0, 0.12121211737394333, 0 ]
Bkxd9JBYPH
true
[ "An approximate inference algorithm for deep learning" ]
[ "The ever-increasing size of modern datasets combined with the difficulty of obtaining label information has made semi-supervised learning of significant practical importance in modern machine learning applications.", "In comparison to supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data.", "In order to utilize manifold information provided by unlabeled data, we propose a novel regularization called the tangent-normal adversarial regularization, which is composed by two parts.", "The two parts complement with each other and jointly enforce the smoothness along two different directions that are crucial for semi-supervised learning.", "One is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against the noise causing the observed data deviating from the underlying data manifold. ", "Both of the two regularizers are achieved by the strategy of virtual adversarial training.", "Our method has achieved state-of-the-art performance on semi-supervised learning tasks on both artificial dataset and practical datasets.", "The recent success of supervised learning (SL) models, like deep convolutional neural networks, highly relies on the huge amount of labeled data.", "However, though obtaining data itself might be relatively effortless in various circumstances, to acquire the annotated labels is still costly, limiting the further applications of SL methods in practical problems.", "Semi-supervised learning (SSL) models, which requires only a small part of data to be labeled, does not suffer from such restrictions.", "The advantage that SSL depends less on well-annotated datasets makes it of crucial practical importance and draws lots of research interests.", "The common setting in SSL is that we have access to a relatively small amount of labeled data and much larger amount of unlabeled data.", "And we need to train a classifier utilizing those data.", "Comparing to SL, the main challenge of SSL is how to make full use of the huge amount of unlabeled data, i.e., how to utilize the marginalized input distribution p(x) to improve the prediction model i.e., the conditional distribution of supervised target p(y|x).", "To solve this problem, there are mainly three streams of research.The first approach, based on probabilistic models, recognizes the SSL problem as a specialized missing data imputation task for classification problem.", "The common scheme of this method is to establish a hidden variable model capturing the relationship between the input and label, and then applies Bayesian inference techniques to optimize the model BID10 Zhu et al., 2003; BID21 .", "Suffering from the estimation of posterior being either inaccurate or computationally inefficient, this approach performs less well especially in high-dimensional dataset BID10 .The", "second line tries to construct proper regularization using the unlabeled data, to impose the desired smoothness on the classifier. One", "kind of useful regularization is achieved by adversarial training BID8 , or virtual adversarial training (VAT) when applied to unlabeled data BID15 . Such", "regularization leads to robustness of classifier to adversarial examples, thus inducing smoothness of classifier in input space where the observed data is presented. 
The", "input space being high dimensional, though, the data itself is concentrated on a underlying manifold of much lower dimensionality BID2 BID17 Chapelle et al., 2009; BID22 . Thus", "directly performing VAT in input space might overly regularize and does potential harm to the classifier. Another", "kind of regularization called manifold regularization aims to encourage invariance of classifier on manifold BID25 BID0 BID18 BID11 BID22 , rather than in input space as VAT has done. Such manifold", "regularization is implemented by tangent propagation BID25 BID11 or manifold Laplacian norm BID0 BID13 , requiring evaluating the Jacobian of classifier (with respect to manifold representation of data) and thus being highly computationally inefficient.The third way is related to generative adversarial network (GAN) BID7 . Most GAN based", "approaches modify the discriminator to include a classifier, by splitting the real class of original discriminator into K subclasses, where K denotes the number of classes of labeled data BID24 BID19 BID5 BID20 . The features extracted", "for distinguishing the example being real or fake, which can be viewed as a kind of coarse label, have implicit benefits for supervised classification task. Besides that, there are", "also works jointly training a classifier, a discriminator and a generator BID14 .Our work mainly follows", "the second line. We firstly sort out three", "important assumptions that motivate our idea:The manifold assumption The observed data presented in high dimensional space is with high probability concentrated in the vicinity of some underlying manifold of much lower dimensionality BID2 BID17 Chapelle et al., 2009; BID22 . We denote the underlying", "manifold as M. We further assume that the classification task concerned relies and only relies on M BID22 . The noisy observation assumption", "The observed data x can be decomposed into two parts as x = x 0 + n, where x 0 is exactly supported on the underlying manifold M and n is some noise independent of x 0 BID1 BID21 . With the assumption that the classifier", "only depends on the underlying manifold M, the noise part might have undesired influences on the learning of the classifier. The semi-supervised learning assumption", "If two points x 1 , x 2 ∈ M are close in manifold distance, then the conditional probability p(y|x 1 ) and p(y|x 2 ) are similar BID0 BID22 BID18 . In other words, the true classifier, or", "the true condition distribution p(y|X) varies smoothly along the underlying manifold M. 
DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 DISPLAYFORM9", "We present the tangent-normal adversarial regularization, a novel regularization strategy for semisupervised learning, composing of regularization on the tangent and normal space separately.", "The tangent adversarial regularization enforces manifold invariance of the classifier, while the normal adversarial regularization imposes robustness of the classifier against the noise contained in the observed data.", "Experiments on artificial dataset and multiple practical datasets demonstrate that our approach outperforms other state-of-the-art methods for semi-supervised learning.", "The performance of our method relies on the quality of the estimation of the underlying manifold, hence the breakthroughs on modeling data manifold could also benefit our 
strategy for semi-supervised learning, which we leave as future work.", "represent two different classes.", "The observed data is sampled as x = x 0 + n, where x 0 is uniformly sampled from M and n ∼ N (0, 2 −2 ).", "We sample 6 labeled training data, 3 for each class, and 3, 000 unlabeled training data, as shown in FIG9 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.20512819290161133, 0.35555556416511536, 0.1463414579629898, 0.145454540848732, 0.25, 0.2222222238779068, 0.19512194395065308, 0.0833333283662796, 0.19512194395065308, 0.09999999403953552, 0.0952380895614624, 0.06666666269302368, 0.11538460850715637, 0.19607841968536377, 0.11320754140615463, 0.09302324801683426, 0.1621621549129486, 0.1463414579629898, 0.19512194395065308, 0.2083333283662796, 0.05405404791235924, 0.1702127605676651, 0.1875, 0.11999999731779099, 0.2083333283662796, 0.05882352590560913, 0.1428571343421936, 0.13793103396892548, 0.19999998807907104, 0.178571417927742, 0.307692289352417, 0.07843136787414551, 0.4390243887901306, 0.24390242993831635, 0.1538461446762085, 0.3199999928474426, 0, 0, 0.052631575614213943 ]
BJxz5jRcFm
true
[ "We propose a novel manifold regularization strategy based on adversarial training, which can significantly improve the performance of semi-supervised learning." ]
[ "Universal approximation property of neural networks is one of the motivations to use these models in various real-world problems.", "However, this property is not the only characteristic that makes neural networks unique as there is a wide range of other approaches with similar property.", "Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm which allows an efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from a large amount of data in different domains.", "Despite their abundant use in practice, neural networks are still not well understood and a broad range of ongoing research is to study the interpretability of neural networks.", "On the other hand, topological data analysis (TDA) relies on strong theoretical framework of (algebraic) topology along with other mathematical tools for analyzing possibly complex datasets.", "In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and common neural network training framework.", "We introduce the notion of automatic subdivisioning and devise a particular type of neural networks for regression tasks: Simplicial Complex Networks (SCNs).", "SCN's architecture is defined with a set of bias functions along with a particular policy during the forward pass which alternates the common architecture search framework in neural networks.", "We believe the view of SCNs can be used as a step towards building interpretable deep learning models.", "Finally, we verify its performance on a set of regression problems.", "It is well-known that under mild assumptions on the activation function, a neural network with one hidden layer and a finite number of neurons can approximate continuous functions.", "This characteristic of neural networks is generally referred to as the universal approximation property.", "There are various theoretical universal approximators.", "For example, a result of the Stone-Weierstrass theorem Stone (1948) ; Cotter (1990) is that multivariate polynomials are dense in the space of continuous real valued functions defined over a hypercube.", "Another example is that the reproducing kernel Hilbert space (RKHS) associated with kernel functions with particular properties can be dense in the same space of functions.", "Kernel functions with this property are called universal kernels Micchelli et al. 
(2006) .", "A subsequent result of this theory is that the set of functions generated by a Gaussian process regression with an appropriate kernel can approximate any continuous function over a hypercube with arbitrary precision.", "Although multivariate polynomials and Gaussian processes also have this approximation property, each has practical limitations that cause neural networks to be used more often in practice compared to these approaches.", "For instance, polynomial interpolations may result a model that overfits to the data and suffers from a poor generalization, and Gaussian processes often become computationally intractable for a large number of training data Bernardo et al..", "Neural networks, with an efficient structure for gradient computation using backpropagation, can be trained using gradient based optimization for large datasets in a tractable time.", "Moreover, in contrast to existing polynomial interpolations, neural networks generalize well in practice.", "Theoretical and empirical understanding of the generalization power of neural networks is an ongoing research Novak et al. (2018) ; Neyshabur et al. (2017) .", "Topological Data Analysis (TDA), a geometric approach for data analysis, is a growing field which provides statistical and algorithmic methods to analyze the topological structures of data often referred to as point clouds.", "TDA methods mainly relied on deterministic methods until recently where w l,0 w l,1 ...", "... statistical approaches were proposed for this purpose Carriere et al. (2017); Chazal & Michel (2017) .", "In general, TDA methods assume a point cloud in a metric space with an inducing distance (e.g. Euclidean, Hausdorff, or Wasserstein distance) between samples and build a topological structure upon point clouds.", "The topological structure is then used to extract geometric information from data Chazal & Michel (2017) .", "These models are not trained with gradient based approaches and they are generally limited to predetermined algorithms whose application to high dimensional spaces may be challenging Chazal (2016) .", "In this work, by leveraging geometrical perspective of TDA, we provide a class of restricted neural networks that preserve the universal approximation property and can be trained using a forward pass and the backpropagation algorithm.", "Motivated by the approximation theorem used to develop our method, Simplicial Complex Network (SCN) is chosen to refer these models.", "SCNs do not require an activation function and architecture search in the way that conventional neural networks do.", "Their hidden units are conceptually well defined, in contrast to feed-forward neural networks for which the role of a hidden unit is yet an ongoing problem.", "SCNs are discussed in more details in later sections.", "Our contribution can be summarized in building a novel class of neural networks which we believe can be used in the future for developing deep models that are interpretable, and robust to perturbations.", "The rest of this paper is organized as follows: Section 2 is specified for the explanation of SCNs and their training procedure.", "In section 3, related works are explained.", "Sections 4, 5, and 6 are specified to experiments, limitations, and conclusion.", "In this work, we have used techniques from topological data analysis to build a class of neural network architectures with the universal approximation property which can be trained using the common neural network training framework.", "Topological data analysis methods 
are based on the geometrical structure of the data and have strong theoretical analysis.", "SCNs are made using the geometrical view of TDA and we believe that they can be used as a step towards building interpretable deep learning models.", "Most of the experiments in the paper are synthetic.", "More practical applications of the paper is considered as an immediate continual work.", "Moreover, throughout this work, bias functions of the simplest kinds (constant parameters) were used.", "We mentioned earlier that a bias function may be an arbitrary function of its input to keep the universal approximation property of SCNs.", "A natural idea is to use common neural network architectures as the bias function.", "In this case, backpropagation can be continued to the bias function parameters as well.", "This is also considered as another continuation of this work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.12121211737394333, 0.10526315122842789, 0.0714285671710968, 0.04999999701976776, 0.19999998807907104, 0.05128204822540283, 0.1666666567325592, 0.14999999105930328, 0.12121211737394333, 0, 0.0952380895614624, 0.13793103396892548, 0, 0.09302324801683426, 0.1666666567325592, 0.0714285671710968, 0.17777776718139648, 0.045454539358615875, 0.08510638028383255, 0.10810810327529907, 0, 0.0555555522441864, 0.08888888359069824, 0, 0.06666666269302368, 0.08888888359069824, 0, 0.04878048226237297, 0.08695651590824127, 0.11764705181121826, 0.1249999925494194, 0.09999999403953552, 0, 0.13333332538604736, 0.11428570747375488, 0, 0, 0.12765957415103912, 0.06666666269302368, 0.09756097197532654, 0.08695651590824127, 0.0714285671710968, 0.06896550953388214, 0.2222222238779068, 0.20689654350280762, 0.13793103396892548, 0 ]
SJlRDCVtwr
true
[ "A novel method for supervised learning through subdivisioning the input space along with function approximation." ]
[ "The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks.", "However, despite the proliferation of such methods there is currently no study of their robustness to adversarial attacks.", "We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks.", "We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks.", "We further show that our attacks are transferable since they generalize to many models, and are successful even when the attacker is restricted.", "Unsupervised node embedding (network representation learning) approaches are becoming increasingly popular and achieve state-of-the-art performance on many network learning tasks BID5 .", "The goal is to embed each node in a low-dimensional feature space such that the graph's structure is captured.", "The learned embeddings are subsequently used for downstream tasks such as link prediction, node classification, community detection, and visualization.", "Among the variety of proposed approaches, techniques based on random walks (RWs) (Perozzi et al.; Grover & Leskovec) are highly successful since they incorporate higher-order relational information.", "Given the increasing popularity of these method, there is a strong need for an analysis of their robustness.", "In particular, we aim to study the existence and effects of adversarial perturbations.", "A large body of research shows that traditional (deep) learning methods can easily be fooled/attacked: even slight deliberate data perturbations can lead to wrong results BID17 BID28 BID6 BID12 BID26 BID10 .So", "far, however, the question of adversarial perturbations for node embeddings has not been addressed. This", "is highly critical, since especially in domains where graph embeddings are used (e.g. the web) adversaries are common and false data is easy to inject: e.g. spammers might create fake followers on social media or fraudsters might manipulate friendship relations in social networks. Can", "node embedding approaches be easily fooled? The", "answer to this question is not immediately obvious. On", "one hand, the relational (non-i.i.d.) nature of the data might improve robustness since the embeddings are computed for all nodes jointly rather than for individual nodes in isolation. On", "the other hand, the propagation of information might also lead to cascading effects, where perturbations in one part of the graph might affect many other nodes in another part of the graph.Compared to the existing works on adversarial attacks our work significantly differs in various aspects. First", ", by operating on plain graph data, we do not perturb the features of individual instances but rather their interaction/dependency structure. Manipulating", "the structure (the graph) is a highly realistic scenario. For example,", "one can easily add or remove fake friendship relations on a social network, or write fake reviews to influence graph-based recommendation engines. Second, the", "node embedding works are typically trained in an unsupervised and transductive fashion. This means", "that we cannot rely on a single end-task that our attack might exploit to find appropriate perturbations, and we have to handle a challenging poisoning attack where the model is learned after the attack. 
That is, the", "model cannot be assumed to be static as in most other adversarial attack works. Lastly, since", "graphs are discrete classical gradient-based approaches BID28 for finding adversarial perturbations that were designed for continuous data are not well suited. Particularly", "for RW-based methods, the gradient computation is not directly possible since they are based on a non-differentiable sampling procedure. How to design", "efficient algorithms that are able to find adversarial perturbations in such a challenging -discrete and combinatorial -graph domain?We propose a principled", "strategy for adversarial attacks on unsupervised node embeddings. Exploiting results from", "eigenvalue perturbation theory BID35 we are able to efficiently solve a challenging bi-level optimization problem associated with the poisoning attack. We assume an attacker with", "full knowledge about the data and the model, thus, ensuring reliable vulnerability analysis in the worst case. Nonetheless, our experiments", "on transferability demonstrate that our strategy generalizes -attacks learned based on one model successfully fool other models as well.Overall, we shed light on an important problem that has not been studied so far. We show that node embeddings", "are sensitive to adversarial attacks. Relatively few changes are needed", "to significantly damage the quality of the embeddings even in the scenario where the attacker is restricted. Furthermore, our work highlights", "that more work is needed to make node embeddings robust to adversarial perturbations and thus readily applicable in production systems.", "We demonstrate that node embeddings are vulnerable to adversarial attacks which can be efficiently computed and have a significant negative effect on node classification and link prediction.", "Furthermore, successfully poisoning the system is possible with relatively small perturbations and under restriction.", "More importantly, our attacks generalize -the adversarial edges are transferable across different models.", "Future work includes modeling the knowledge of the attacker, attacking other network representation learning methods, and developing effective defenses against such attacks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.07407406717538834, 0.1538461446762085, 0.1249999925494194, 0.0624999962747097, 0.12903225421905518, 0.0714285671710968, 0.13793103396892548, 0.10526315122842789, 0, 0, 0, 0.1599999964237213, 0.07999999821186066, 0.11764705181121826, 0, 0.052631575614213943, 0.08888888359069824, 0.0624999962747097, 0, 0.0624999962747097, 0.1666666567325592, 0.05128204822540283, 0, 0, 0.1249999925494194, 0, 0.4761904776096344, 0.1764705777168274, 0, 0.17777776718139648, 0.10526315122842789, 0.0714285671710968, 0.13793103396892548, 0.22857142984867096, 0, 0.08695651590824127, 0.06451612710952759 ]
Sye7qoC5FQ
true
[ "Adversarial attacks on unsupervised node embeddings based on eigenvalue perturbation theory." ]
[ "Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains.", "However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge.", "We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework.", "The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks.", "In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain.", "We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains.", "Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions.", "Hierarchical reinforcement learning methods hold the promise of faster learning in complex state spaces and better transfer across tasks, by exploiting planning at multiple levels of detail BID0 .", "A taxi driver, for instance, ultimately must execute a policy in the space of torques and forces applied to the steering wheel and pedals, but planning directly at this low level is beset by the curse of dimensionality.", "Algorithms like HAMS, MAXQ, and the options framework permit powerful forms of hierarchical abstraction, such that the taxi driver can plan at a higher level, perhaps choosing which passengers to pick up or a sequence of locations to navigate to BID19 BID3 BID13 .", "While these algorithms can overcome the curse of dimensionality, they require the designer to specify the set of higher level actions or subtasks available to the agent.", "Choosing the right subtask structure can speed up learning and improve transfer across tasks, but choosing the wrong structure can slow learning BID17 BID1 .", "The choice of hierarchical subtasks is thus critical, and a variety of work has sought algorithms that can automatically discover appropriate subtasks.One line of work has derived subtasks from properties of the agent's state space, attempting to identify states that the agent passes through frequently BID18 .", "Subtasks are then created to reach these bottleneck states (van Dijk & Polani, 2011; BID17 BID4 .", "In a domain of rooms, this style of analysis would typically identify doorways as the critical access points that individual skills should aim to reach (Şimşek & Barto, 2009 ).", "This technique can rely only on passive exploration of the agent, yielding subtasks that do not depend on the set of tasks to be performed, or it can be applied to an agent as it learns about a particular ensemble of tasks, thereby suiting the learned options to a particular task set.Another line of work converts the target MDP into a state transition graph.", "Graph clustering techniques can then identify connected regions, and subtasks can be placed at the borders between connected regions BID11 .", "In a rooms domain, these connected regions might correspond to rooms, with their borders again picking out doorways.", 
"Alternately, subtask states can be identified by their betweenness, counting the number of shortest paths that pass through each specific node (Şimşek & Barto, 2009; BID17 .", "Other recent work utilizes the eigenvectors of the graph laplacian to specify dense rewards for option policies that are defined over the full state space BID10 .", "Finally, other methods have grounded subtask discovery in the information each state reveals about the eventual goal (van Dijk & Polani, 2011) .", "Most of these approaches aim to learn options with a single or low number of termination states, can require high computational expense BID17 , and have not been widely used to generate multiple levels of hierarchy (but see BID24 ; BID12 ).Here", "we describe a novel subtask discovery algorithm based on the recently introduced Multitask linearly-solvable Markov decision process (MLMDP) framework BID14 , which learns a basis set of tasks that may be linearly combined to solve tasks that lie in the span of the basis BID21 . We show", "that an appropriate basis can naturally be found through non-negative matrix factorization BID8 BID3 , yielding intuitive decompositions in a variety of domains. Moreover", ", we show how the technique may be iterated to learn deeper hierarchies of subtasks.In line with a number of prior methods, BID17 BID12 our method operates in the batch off-line setting; with immediate application to probabilistic planning. The subtask", "discovery method introduced in BID10 , which also utilizes matrix factorization techniques to discover subtasks albeit from a very different theoretical foundation, is notable for its ability to operate in the online RL setting, although it is not immediately clear how the approach taken therein might achieve a deeper hierarchical architecture, or enable immediate generalization to novel tasks.", "We present a novel subtask discovery mechanism based on the low rank approximation of the desirability basis afforded by the LMDP framework.", "The new scheme reliably uncovers intuitive decompositions in a variety of sample domains.", "Unlike methods based on pure state abstraction, the proposed scheme is fundamentally dependent on the task ensemble, recovering different subtask representations for different task ensembles.", "Moreover, by leveraging the stacking procedure for hierarchical MLMDPs, the subtask discovery mechanism may be straightforwardly iterated to yield powerful hierarchical abstractions.", "Finally, the unusual construction allows us to analytically probe a number of natural questions inaccessible to other methods; we consider specifically a measure of the quality of a set of subtasks, and the equivalence of different sets of subtasks.A current drawback of the approach is its reliance on a discrete, tabular, state space.", "Scaling to high dimensional problems will require applying state function approximation schemes, as well as online estimation of Z directly from experience.", "These are avenues of current work.", "More abstractly, the method might be extended by allowing for some concept of nonlinear regularized composition allowing more complex behaviours to be expressed by the hierarchy." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.060606054961681366, 0.1249999925494194, 0.6842105388641357, 0.1111111044883728, 0.1818181723356247, 0.1395348757505417, 0.0624999962747097, 0.045454539358615875, 0.11538460850715637, 0.178571417927742, 0.04999999329447746, 0.10526315122842789, 0.1071428507566452, 0, 0.08510638028383255, 0.05970148742198944, 0.0555555522441864, 0.0555555522441864, 0.09090908616781235, 0.0952380895614624, 0.1538461446762085, 0.03448275476694107, 0.41379308700561523, 0.0476190410554409, 0.1090909019112587, 0.20000000298023224, 0.42105263471603394, 0.06451612710952759, 0.1538461446762085, 0.2631579041481018, 0.06779660284519196, 0, 0, 0.09999999403953552 ]
ry80wMW0W
true
[ "We present a novel algorithm for hierarchical subtask discovery which leverages the multitask linear Markov decision process framework." ]
[ "This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems.", "We introduce a new memory augmented neural model in which the memory is not resettable (i.e the information stored in the memory after processing an input example is kept for the next seen examples).", "We used deep reinforcement learning to train a memory controller agent to store useful memories.", "Our model was able to outperform hand-crafted solver on Binary Linear Programming (Binary LP).", "The proposed model is tested on different Binary LP instances with large number of variables (up to 1000 variables) and constrains (up to 700 constrains).", "An intelligent agent with a long-term memory processes raw data (as images, speech and natural language sentences) and then transfer these data streams into knowledge.", "The knowledge stored in the long-term memory can be used later in inference either by retrieving segments of memory during recalling, matching stored concepts with new raw data (e.g. image classification tasks) or solving more complex mathematical problems that require memorizing either the method of solving a problem or simple steps during solving.", "For example, the addition of long-digit numbers requires memorizing both the addition algorithm and the carries produced from the addition operations BID28 .In", "neural models, the weights connecting the layers are considered long term memories encoding the algorithm that transform inputs to outputs. Other", "neural models as recurrent neural networks (RNNs) introduce a short-term memory encoded as hidden states of previous inputs BID12 BID7 .In memory", "augmented neural networks (MANNs), a controller writes memories projected from its hidden state to a memory bank (usually in the form of a matrix), the controller then reads from the memory using some addressing mechanisms and generates a read vector which will be fed to the controller in the next time step BID6 . The memory", "will contain information about each of the input sequence tokens and the controller enriches its memory capacity by using the read vector form the previous time step.Unfortunately, In MANNs the memory is not a long-term memory and is re-settable when new examples are processed, making it unable to capture general knowledge about the inputs domain. In context", "of natural language processing, one will need general knowledge to answer open-ended questions that do not rely on temporal information only but also on general knowledge from previous input streams. In long-digits", "multiplication, it will be easier to store some intermediate multiplication steps as digit by digit multiplications and use them later when solving other instances than doing the entire multiplication digit by digit each time from scratch.Neural networks have a large capacity of memorizing, a long-term persistent memory will even increase the network capacity to memorize but will decrease the need for learning coarse features of the inputs that requires more depth.Storing features of the inputs will create shortcut paths for the network to learn the correct targets. Such a network", "will no longer need to depend on depth to learn good features of the inputs but instead will depend on stored memory features. In other words", "a long-term memory can provide intermediate answers to the network. 
Unlike regular", "MANNs and RNNs, a long-term memory can provide shortcut connections to both inputs features and previous time steps inputs.Consider when the memory contains the output of previous examples, the network would cheat from the memory to provide answers. Training such", "a network will focus on two stages: (1) Learning to find similarities between memory vectors and current input data, (2) learning to transform memory vectors into meaningful representations for producing the final output.The No Free Lunch Theorem of optimization BID25 states that: any two algorithms are equivalent when their performance is averaged across all possible problems, this means that an algorithm that solve certain classes of problems efficiently will be incompetent in other problems. In the setting", "of combinatorial optimization, there is no algorithm able to do better than a random strategy in expectation. The only way an", "algorithm outperforms another is to be specialized to a certain class of optimization problems BID0 . Learning optimization", "algorithms from scratch using pairs of input-output examples is a way to outperform other algorithms on certain classes. It is further interesting", "to investigate the ability of learned models to generate better solutions than hand crafted solvers.The focus of this paper is on designing neural models to solve Binary Linear Programming (or 0-1 Integer Programming) which is a special case of Integer Linear Programming problems where all decision variables are binary. The 0-1 integer programming", "is one of Krap's 21 NP-complete problems introduced in BID9 . The goal of Binary LP is to", "optimize a linear function under certain constrains. It is proved by BID3 that Binary", "LP expresses the complexity class NP (i.e any problem in the complexity class NP can be modeled as Binary LP instance).The standard form of a Binary LP", "problem is: DISPLAYFORM0 where c and b are vectors and A is a matrix.We propose a general framework for long-term memory neural models that uses reinforcement learning to store memories from a neural network. A long-term memory is not resettable", "and may or may not store hidden states from individual time steps. Instead a long term memory stores information", "that is considered to be useful for solving similar instances. The controller that decides to write memories", "follows a policy function that properly constructs the memory contents. We train and test this framework on synthetic", "data set of Binary LP instances. We analyze the model capability of generalization", "to more complex instances beyond the training data set.", "This paper introduced a long term memory coupled with a neural network, that is able to memorize useful input features to solve similar instances.", "We applied LTMN model to solve Binary LP instances.", "The LTMN was able to learn from supervised targets provided by a handcrafted solver, and generate better solutions than the solver.", "The LTMN model was able to generalize to more complex instances beyond those in the training set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1111111044883728, 0.3333333432674408, 0.23529411852359772, 0.1764705777168274, 0.2790697515010834, 0.1395348757505417, 0.1249999925494194, 0.052631575614213943, 0.10256409645080566, 0.10256409645080566, 0.13114753365516663, 0.20895521342754364, 0.08163265138864517, 0.20930232107639313, 0.1463414579629898, 0.375, 0.23529411852359772, 0.17977528274059296, 0.1463414579629898, 0.17142856121063232, 0.14999999105930328, 0.2222222238779068, 0.22857142984867096, 0.1764705777168274, 0.1904761791229248, 0.37735849618911743, 0.15789473056793213, 0.22857142984867096, 0.21052631735801697, 0.375, 0.20689654350280762, 0.2857142686843872, 0.48275861144065857, 0.1463414579629898, 0.2222222238779068 ]
Bk_fs6gA-
true
[ "We propose a memory network model to solve Binary LP instances where the memory information is perseved for long-term use. " ]
[ "Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition.", "Performance has further been improved by leveraging unlabeled data, often in the form of a language model.", "In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task.", "We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying", "i) faster convergence and better generalization, and", "ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.", "Sequence-to-sequence (Seq2Seq) BID1 models have achieved state-of-the-art results on many natural language processing problems including automatic speech recognition BID2 BID4 , neural machine translation , conversational modeling and many more.", "These models learn to generate a variable-length sequence of tokens (e.g. texts) from a variable-length sequence of input data (e.g. speech or the same texts in another language).", "With a sufficiently large labeled dataset, vanilla Seq2Seq can model sequential mapping well, but it is often augmented with a language model to further improve the fluency of the generated text.Because language models can be trained from abundantly available unsupervised text corpora which can have as many as one billion tokens BID13 BID19 , leveraging the rich linguistic information of the label domain can considerably improve Seq2Seq's performance.", "A standard way to integrate language models is to linearly combine the score of the task-specific Seq2Seq model with that of an auxiliary langauge model to guide beam search BID5 BID20 .", "BID10 proposed an improved algorithm called Deep Fusion that learns to fuse the hidden states of the Seq2Seq decoder and a neural language model with a gating mechanism, after the two models are trained independently.While this approach has been shown to improve performance over the baseline, it has a few limitations.", "First, because the Seq2Seq model is trained to output complete label sequences without a language model, its decoder learns an implicit language model from the training labels, taking up a significant portion of the decoder capacity to learn redundant information.", "Second, the residual language model baked into the Seq2Seq decoder is biased towards the training labels of the parallel corpus.", "For example, if a Seq2Seq model fully trained on legal documents is later fused with a medical language model, the decoder still has an inherent tendency to follow the linguistic structure found in legal text.", "Thus, in order to adapt to novel domains, Deep Fusion must first learn to discount the implicit knowledge of the language.In this work, we introduce Cold Fusion to overcome both these limitations.", "Cold Fusion encourages the Seq2Seq decoder to learn to use the external language model during training.", "This means that Seq2Seq can naturally leverage potentially limitless unsupervised text data, making it particularly proficient at adapting to a new domain.", "The latter is especially important in practice as the domain from which the model is trained can be different from the real world use case for which it is deployed.", "In our experiments, Cold Fusion can almost completely transfer to a new domain for the speech recognition task with 10 times less data.", 
"Additionally, the decoder only needs to learn task relevant information, and thus trains faster.The paper is organized as follows: Section 2 outlines the background and related work.", "Section 3 presents the Cold Fusion method.", "Section 4 details experiments on the speech recognition task that demonstrate Cold Fusion's generalization and domain adaptation capabilities.2", "BACKGROUND AND RELATED WORK 2.1 SEQUENCE-TO-SEQUENCE MODELS A basic Seq2Seq model comprises an encoder that maps an input sequence x = (x 1 , . . . , x T ) into an intermediate representation h, and a decoder that in turn generates an output sequence y = (y 1 , . . . , y K ) from h BID21 .", "The decoder can also attend to a certain part of the encoder states with an attention mechanism.", "The attention mechanism is called hybrid attention BID7 , if it uses both the content and the previous context to compute the next context.", "It is soft if it computes the expectation over the encoder states BID1 as opposed to selecting a slice out of the encoder states.For the automatic speech recognition (ASR) task, the Seq2Seq model is called an acoustic model (AM) and maps a sequence of spectrogram features extracted from a speech signal to characters.", "In this work, we presented a new general Seq2Seq model architecture where the decoder is trained together with a pre-trained language model.", "We study and identify architectural changes that are vital for the model to fully leverage information from the language model, and use this to generalize better; by leveraging the RNN language model, Cold Fusion reduces word error rates by up to 18% compared to Deep Fusion.", "Additionally, we show that Cold Fusion models can transfer more easily to new domains, and with only 10% of labeled data nearly fully transfer to the new domain." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14814814925193787, 0.12765957415103912, 0.1090909019112587, 0.3478260934352875, 0.1111111119389534, 0.5416666865348816, 0.10344827175140381, 0.18518517911434174, 0.23255813121795654, 0.25, 0.23999999463558197, 0.158730149269104, 0.12765957415103912, 0.16129031777381897, 0.17241378128528595, 0.13636362552642822, 0.26923075318336487, 0.07407406717538834, 0.4150943458080292, 0.0714285671710968, 0.054054051637649536, 0.12244897335767746, 0.10958903282880783, 0.21276594698429108, 0.07999999821186066, 0.14084506034851074, 0.19999998807907104, 0.1818181723356247, 0.4727272689342499 ]
rybAWfx0b
true
[ "We introduce a novel method to train Seq2Seq models with language models that converge faster, generalize better and can almost completely transfer to a new domain using less than 10% of labeled data." ]
[ "A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks.", "Two distinct research paradigms have studied this question.", "Meta-learning views this problem as learning a prior over model parameters that is amenable for fast adaptation on a new task, but typically assumes the set of tasks are available together as a batch.", "In contrast, online (regret based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally train only a single model without any task-specific adaptation.", "This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning.", "We propose the follow the meta leader (FTML) algorithm which extends the MAML algorithm to this setting.", "Theoretically, this work provides an O(logT) regret guarantee for the FTML algorithm.", "Our experimental evaluation on three different large-scale tasks suggest that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.", "Two distinct research paradigms have studied how prior tasks or experiences can be used by an agent to inform future learning.", "Meta-learning (Schmidhuber, 1987) casts this as the problem of learning to learn, where past experience is used to acquire a prior over model parameters or a learning procedure.", "Such an approach, where we draw upon related past tasks and form associated priors, is particularly crucial to effectively learn when data is scarce or expensive for each task.", "However, meta-learning typically studies a setting where a set of meta-training tasks are made available together upfront as a batch.", "In contrast, online learning (Hannan, 1957 ) considers a sequential setting where tasks are revealed one after another, but aims to attain zero-shot generalization without any task-specific adaptation.", "We argue that neither setting is ideal for studying continual lifelong learning.", "Metalearning deals with learning to learn, but neglects the sequential and non-stationary nature of the world.", "Online learning offers an appealing theoretical framework, but does not generally consider how past experience can accelerate adaptation to a new task.", "In this work, we motivate and present the online meta-learning problem setting, where the agent simultaneously uses past experiences in a sequential setting to learn good priors, and also adapt quickly to the current task at hand.Our contributions: In this work, we first formulate the online meta-learning problem setting.", "Subsequently, we present the follow the meta-leader (FTML) algorithm which extends MAML (Finn et al., 2017) to this setting.", "FTML is analogous to follow the leader in online learning.", "We analyze FTML and show that it enjoys a O(log T ) regret guarantee when competing with the best metalearner in hindsight.", "In this endeavor, we also provide the first set of results (under any assumptions) where MAML-like objective functions can be provably and efficiently optimized.", "We also develop a practical form of FTML that can be used effectively with deep neural networks on large scale tasks, and show that it significantly outperforms prior methods in terms of learning efficiency on vision-based sequential learning problems with the MNIST, CIFAR, and PASCAL 3D+ datasets.", "In this paper, we introduced the 
online meta-learning problem statement, with the aim of connecting the fields of meta-learning and online learning.", "Online meta-learning provides, in some sense, a more natural perspective on the ideal real-world learning procedure.", "An intelligent agent interacting with a constantly changing environment should utilize streaming experience to both master the task at hand, and become more proficient at learning new tasks in the future.", "We summarize prior work related to our setting in Appendix D. For the online meta-learning setting, we proposed the FTML algorithm and showed that it enjoys logarithmic regret.", "We then illustrated how FTML can be adapted to a practical algorithm.", "Our experimental evaluations demonstrated that the proposed practical variant outperforms prior methods." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25641024112701416, 0, 0.1666666567325592, 0.17391303181648254, 0.5714285373687744, 0.32258063554763794, 0.06896550953388214, 0.15789473056793213, 0.10526315122842789, 0.2380952388048172, 0.08888888359069824, 0.11428570747375488, 0.17777776718139648, 0.3448275923728943, 0.3125, 0.10256409645080566, 0.2222222238779068, 0.1666666567325592, 0.29629629850387573, 0.1538461446762085, 0.1463414579629898, 0.17241379618644714, 0.3529411852359772, 0.12121211737394333, 0.17391303181648254, 0.27272728085517883, 0.13793103396892548, 0.06896550953388214 ]
HkxzljA4_N
true
[ "We introduce the online meta learning problem setting to better capture the spirit and practice of continual lifelong learning." ]
[ "We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.", "We employ two methods—taking the maximum attention weight and computing the maximum spanning tree—to extract implicit dependency relations from the attention weights of each layer/head, and compare them to the ground-truth Universal Dependency (UD) trees.", "We show that, for some UD relation types, there exist heads that can recover the dependency type significantly better than baselines on parsed English text, suggesting that some self-attention heads act as a proxy for syntactic structure.", "We also analyze BERT fine-tuned on two datasets—the syntax-oriented CoLA and the semantics-oriented MNLI—to investigate whether fine-tuning affects the patterns of their self-attention, but we do not observe substantial differences in the overall dependency relations extracted using our methods.", "Our results suggest that these models have some specialist attention heads that track individual dependency types, but no generalist head that performs holistic parsing significantly better than a trivial baseline, and that analyzing attention weights directly may not reveal much of the syntactic knowledge that BERT-style models are known to learn.", "Pretrained Transformer models like OpenAI GPT BID9 and BERT BID1 have shown stellar performance on language understanding tasks.", "BERT and BERTbased models significantly improve the state-ofthe-art on many tasks such as constituency parsing BID5 , question answering BID11 , and have attained top positions on the GLUE leaderboard .", "As BERT becomes a staple component of many NLP models, many researchers have attempted to analyze the linguistic knowledge that BERT has learned by analyzing the BERT model BID3 or training probing classifiers on the contextualized embeddings of BERT BID12 .BERT", ", as a Transformer-based language model, computes the hidden representation at each layer for each token by attending to all the tokens in an input sentence. The", "attention heads of Transformer have been claimed to capture the syntactic structure of the sentences BID13 . Intuitively", ", for a given token, some specific tokens in the sentence would be more linguistically related to it than the others, and therefore the selfattention mechanism should be expected to allocate more weight to the linguistically related tokens in computing the hidden state of the given token. In this work", ", we aim to investigate the hypothesis that syntax is implicitly encoded by BERT's self-attention heads. We use two", "relation extraction methods to extract dependency relations from all the self-attention heads of BERT. We analyze", "the resulting dependency relations to investigate whether the attention heads of BERT implicitly track syntactic dependencies significantly better than chance, and what type of dependency relations BERT learn.We extract the dependency relations from the self-attention heads instead of the contextualized embeddings of BERT. In contrast", "to probing models, our dependency extraction methods require no further training. Our experiments", "suggest that the attention heads of BERT encode most dependency relation types with substantially higher accuracy than our baselines-a randomly initialized Transformer and relative positional baselines. Finetuning BERT", "on the syntax-oriented CoLA does not appear to impact the accuracy of extracted dependency relations. 
However, when fine-tuned", "on the semantics-oriented MNLI dataset, there is a slight improvement in accuracy for longer-term clausal relations and a slight loss in accuracy for shorter-term relations. Overall, while BERT models", "obtain non-trivial accuracy for some dependency types such as nsubj, obj, nmod, aux, and conj, they do not substantially outperform the trivial right-branching trees in terms of undirected unlabeled attachment scores (UUAS). Therefore, although the attention", "heads of BERT reflect a small number of dependency relation types, it does not reflect the full extent of the significant amount of syntactic knowledge BERT is shown to learn by the previous probing work.", "In this work, we investigate whether the attention heads of BERT exhibit the implicit syntax depen- dency by extracting and analyzing the dependency relations from the attention heads of BERT at all layers.", "We use two simple dependency relation extraction methods that require no additional training, and observe that there are attention heads of BERT that track more than 75% of the dependency types with higher accuracy than our baselines.", "However, the hypothesis that the attention heads of BERT track the dependency syntax is not well-supported as the linguistically uninformed baselines outperform BERT on nearly 25% of the dependency types.", "Additionally, BERT's performance in terms of UUAS is only slightly higher than that of the trivial right-branching trees, suggesting that the dependency syntax learned by the attention heads is trivial.", "Additionally, we observe that fine-tuning on the CoLA and MNLI does not affect the pattern of self-attention, although the fine-tuned models shows different performance from BERT on the GLUE benchmark." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.05882352590560913, 0.05128204822540283, 0, 0.04255318641662598, 0.036363635212183, 0.0714285671710968, 0.0555555522441864, 0.04651162400841713, 0, 0, 0, 0.06896550953388214, 0.07692307233810425, 0.0952380895614624, 0, 0.054054051637649536, 0, 0.060606054961681366, 0, 0.052631575614213943, 0.1111111044883728, 0.0476190447807312, 0.12121211737394333, 0.05882352590560913, 0.0555555522441864 ]
rJgoYekkgB
true
[ "Attention weights don't fully expose what BERT knows about syntax." ]
[ "State-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploring local appearance knowledge.", "However, most of these methods do not well exploit facial structures and identity information, and struggle to deal with facial images that exhibit large pose variation and misalignment.", "In this paper, we propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.", "Firstly, the 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge.", "Secondly, the Spatial Attention Mechanism is used to better exploit this hierarchical information (i.e. intensity similarity, 3D facial structure, identity content) for the super-resolution problem.", "Extensive experiments demonstrate that the proposed algorithm achieves superior face super-resolution results and outperforms the state-of-the-art.", "Face images provide crucial clues for human observation as well as computer analysis (Fasel & Luettinb, 2003; Zhao et al., 2003) .", "However, the performance of most face image tasks, such as face recognition and facial emotion detection (Han et al., 2018; Thies et al., 2016) , degrades dramatically when the resolution of a facial image is relatively low.", "Consequently, face super-resolution, also known as face hallucination, was coined to restore a low-resolution face image to its high-resolution counterpart.", "A multitude of deep learning methods (Zhou & Fan, 2015; Yu & Porikli, 2016; Zhu et al., 2016; Cao et al., 2017; Dahl et al., 2017a; Yu et al., 2018b) have been successfully applied in face Super-Resolution (SR) problems and achieve state-of-the-art results.", "But super-resolving arbitrary facial images, especially at high magnification factors, is still an open and challenging problem due to the ill-posed nature of the SR problem and the difficulty in learning and integrating strong priors into a face hallucination model.", "Some researches (Grm et al., 2018; Yu et al., 2018a; Ren et al., 2019) on exploiting the face priors to assist neural networks to capture more facial details have been proposed recently.", "A face hallucination model incorporating identity priors is presented in Grm et al. (2018) .", "But the identity prior is extracted only from the multi-scale up-sampling results in the training procedure and therefore cannot provide enough extra priors to guide the network to achieve a better result.", "Yu et al. 
(2018a) employ facial component heatmaps to encourage the upsampling stream to generate super-resolved faces with higher-quality details, especially for large pose variations.", "Although heatmaps can provide global component regions, it cannot learn the reconstruction of detailed edges, illumination or expression priors.", "Besides, all of these aforementioned face SR approaches ignore facial structure and identity recovery.", "In contrast to previous methods, we propose a novel face super-resolution method that embeds 3D face structures and identity priors.", "Firstly, a deep 3D face reconstruction branch is set up to explicitly obtain 3D face render priors which facilitate the face super-resolution branch.", "Specifically, the 3D face render prior is generated by the ResNet-50 network (He et al., 2016) .", "It contains rich hierarchical information, such as low-level (e.g., sharp edge, illumination) and perception level (e.g., identity).", "The Spatial Attention Mechanism is proposed here to adaptively integrate the 3D facial prior into the network.", "Specifically, we employ the Spatial Feature Transform (SFT) (Wang et al., 2018) to generate affine transformation parameters for spatial feature modulation.", "Afterwards, it encourages the network to learn the spatial interdepenencies of features between 3D facial priors and input images after adding the attention module into the network.", "The main contributions of this paper are:", "1. A novel face SR model is proposed by explicitly exploiting facial structure in the form of facial-prior estimation.", "The estimated 3D facial prior provides not only spatial information of facial components but also their visibility information, which are ignored by the pixel-level content.", "2. We propose a feature-fusion-based network to better extract and integrate the face rendered priors by employing the Spatial Attention Mechanism (SAM).", "3. We qualitatively and quantitatively explore multi-scale face super-resolution, especially at very low input resolutions.", "The proposed network achieves better SR criteria and superior visual quality compared to state-of-the-art face SR methods.", "In this paper, we proposed a novel network that incorporates 3D facial priors of rendered faces and identity knowledge.", "The 3D rendered branch utilizes the face rendering loss to encourage a highquality guided image providing clear spatial locations of facial components and other hierarchical information (i.e., expression, illumination, and face pose).", "To well exploit 3D priors and consider the channel correlation between priors and inputs, the Spatial Attention Mechanism is presented by employing the Spatial Feature Transform and Attention block.", "The comprehensive experimental results have demonstrated that the proposed method can deliver the better performance and largely decrease artifacts in comparison with the state-of-the-art methods by using significantly fewer parameters." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.13636362552642822, 0.800000011920929, 0.31578946113586426, 0.13636362552642822, 0.1764705777168274, 0, 0.20408162474632263, 0.1111111044883728, 0.03703703358769417, 0.18518517911434174, 0.1702127605676651, 0.12121211737394333, 0.12765957415103912, 0.09302324801683426, 0.10526315122842789, 0.12121211737394333, 0.4736841917037964, 0.3684210479259491, 0.17142856121063232, 0.05405404791235924, 0.17142856121063232, 0.04878048226237297, 0.1904761791229248, 0, 0.2631579041481018, 0.1860465109348297, 0.29999998211860657, 0.11764705181121826, 0.05714285373687744, 0.3684210479259491, 0.19607841968536377, 0.1463414579629898, 0.12765957415103912 ]
HJeOHJHFPH
true
[ "We propose a novel face super resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures." ]
[ "We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning.", "On labeled examples, the model is trained with standard cross-entropy loss.", "On an unlabeled example, the model first performs inference (acting as a \"teacher\") to produce soft targets.", "The model then learns from these soft targets (acting as a ``\"student\").", "We deviate from prior work by adding multiple auxiliary student prediction layers to the model.", "The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image).", "The students can learn from the teacher (the full model) because the teacher sees more of each example.", "Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data.", "When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN.", "We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data.", "On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.\n", "Deep learning classifiers work best when trained on large amounts of labeled data.", "However, acquiring labels can be costly, motivating the need for effective semi-supervised learning techniques that leverage unlabeled examples during training.", "Many semi-supervised learning algorithms rely on some form of self-labeling.", "In these approaches, the model acts as both a \"teacher\" that makes predictions about unlabeled examples and a \"student\" that is trained on the predictions.", "As the teacher and the student have the same parameters, these methods require an additional mechanism for the student to benefit from the teacher's outputs.One approach that has enjoyed recent success is adding noise to the student's input BID0 BID50 .", "The loss between the teacher and the student becomes a consistency cost that penalizes the difference between the model's predictions with and without noise added to the example.", "This trains the model to give consistent predictions to nearby data points, encouraging smoothness in the model's output distribution with respect to the input.", "In order for the student to learn effectively from the teacher, there needs to be a sufficient difference between the two.", "However, simply increasing the amount of noise can result in unrealistic data points sent to the student.", "Furthermore, adding continuous noise to the input makes less sense when the input consists of discrete tokens, such in natural language processing.We address these issues with a new method we call Cross-View Training (CVT).", "Instead of only training the full model as a student, CVT adds auxiliary softmax layers to the model and also trains them as students.", "The input to each student layer is a sub-network of the full model that sees a restricted view of the input example (e.g., only seeing part of an image), an idea reminiscent of cotraining BID1 .", "The full model is still used as the teacher.", "Unlike when using a large amount of input noise, CVT does not unrealistically alter examples during training.", "However, the student layers can still learn from the teacher because the teacher has a better, unrestricted view of the input.", "Meanwhile, the student 
layers improve the model's representations (and therefore the teacher) as they learn to make accurate predictions with a limited view of the input.", "Our method can be easily combined with adding noise to the students, but works well even when no noise is added.We propose variants of our method for Convolutional Neural Network (CNN) image classifiers, Bidirectional Long Short-Term Memory (BiLSTM) sequence taggers, and graph-based dependency parsers.", "For CNNs, each auxiliary softmax layer sees a region of the input image.", "For sequence taggers and dependency parsers, the auxiliary layers see the input sequence with some context removed.", "For example, one auxiliary layer is trained to make predictions without seeing any tokens to the right of the current one.We first evaluate Cross-View Training on semi-supervised CIFAR-10 and semi-supervised SVHN.", "When combined with Virtual Adversarial Training BID39 , CVT improves upon the current state-of-the-art on both datasets.", "We also train semi-supervised models on five tasks from natural language processing: English dependency parsing, combinatory categorical grammar supertagging, named entity recognition, text chunking, and part-of-speech tagging.", "We use the 1 billion word language modeling benchmark BID3 as a source of unlabeled data.", "CVT works substantially better than purely supervised training, resulting in models that improve upon or are competitive with the current state-of-the-art on every task.", "We consider these results particularly important because many recently proposed semi-supervised learning methods work best on continuous inputs and have only been evaluated on vision tasks BID0 BID50 BID26 BID59 .", "In contrast, CVT can handle discrete inputs such as language very effectively." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.13333332538604736, 0.0555555522441864, 0, 0.05882352590560913, 0.13333332538604736, 0.11428570747375488, 0.15789473056793213, 0.2222222238779068, 0.051282044500112534, 0.0952380895614624, 0.0624999962747097, 0.1538461446762085, 0.13793103396892548, 0.09999999403953552, 0.15094339847564697, 0.1463414579629898, 0.1538461446762085, 0.10810810327529907, 0.11428570747375488, 0.1538461446762085, 0.14999999105930328, 0.12244897335767746, 0.0714285671710968, 0.1111111044883728, 0.1666666567325592, 0.1904761791229248, 0.25806450843811035, 0.25, 0.3529411852359772, 0.1702127605676651, 0.1111111044883728, 0.17391303181648254, 0.11428570747375488, 0.09302324801683426, 0.1249999925494194, 0 ]
BJubPWZRW
true
[ "Self-training with different views of the input gives excellent results for semi-supervised image recognition, sequence tagging, and dependency parsing." ]
[ "I show how it can be beneficial to express Metropolis accept/reject decisions in terms of comparison with a uniform [0,1] value, and to then update this uniform value non-reversibly, as part of the Markov chain state, rather than sampling it independently each iteration.", "This provides a small improvement for random walk Metropolis and Langevin updates in high dimensions. ", "It produces a larger improvement when using Langevin updates with persistent momentum, giving performance comparable to that of Hamiltonian Monte Carlo (HMC) with long trajectories. ", "This is of significance when some variables are updated by other methods, since if HMC is used, these updates can be done only between trajectories, whereas they can be done more often with Langevin updates. ", "This is seen for a Bayesian neural network model, in which connection weights are updated by persistent Langevin or HMC, while hyperparameters are updated by Gibbs sampling.\n" ]
[ 1, 0, 0, 0, 0 ]
[ 0.2448979616165161, 0, 0.05714285373687744, 0.1463414579629898, 0 ]
Hked5J2EKr
false
[ "A non-reversible way of making accept/reject decisions can be beneficial" ]
[ "We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES).", "Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies.", "We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement.", "Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable.", "We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.", "Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data.", "Our focus in this paper is on meta-learning in reinforcement learning (RL) , where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world.", "A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (Finn et al., 2017; , a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment.", "We provide a formal description of MAML in Section 2.", "MAML has proven to be successful for many applications.", "However, implementing and running MAML continues to be challenging.", "One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML (Finn et al., 2017) did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, (Foerster et al., 2018) ) and further analysis of the credit assignment mechanism in MAML (Rothfuss et al., 2019) .", "Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML (Liu et al., 2019) , through careful adaptive hyperparameter tuning (Behl et al., 2019; Antoniou et al., 2019) and learning rate annealing (Loshchilov & Hutter, 2017) .", "To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms.", "We provide a detailed discussion of ES in Section 3.1.", "ES has several advantages:", "1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives.", "This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details).", "2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation.", "It does not use backpropagation, so it can be run on CPUs only.", "3. ES is highly flexible with different adaptation operators (Section 3.3).", "4. 
ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3).", "ES is also capable of learning linear and other compact policies (Section 4.2).", "On the point (4), a feature of ES algorithms is that exploration takes place in the parameter space.", "Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model.", "In the context of MAML, the notions of \"exploration\" and \"task identification\" have thus been shifted to the parameter space instead of the action space.", "This distinction plays a key role in the stability of the algorithm.", "One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies.", "Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode.", "While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles.", "This paper is organized as follows.", "In Section 2, we give a formal definition of MAML, and discuss related works.", "In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML.", "In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4).", "Additional material can be found in the Appendix.", "We have presented a new framework for MAML based on ES algorithms.", "The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement.", "ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators.", "In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments.", "ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable.", "but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML.", "(Rothfuss et al., 2019; Liu et al., 2019) argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the 'naive' estimator in Algorithm 4.", "Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively 'easier' tasks like ForwardBackward walking but possibly increase the exploration on four corners.", "We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2 .", "In this experiment, the two methods ran on servers with different number of workers available, so we measure the score by the total number of rollouts.", "We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but 
FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.12765957415103912, 0.19607841968536377, 0.17777776718139648, 0.2926829159259796, 0.21739129722118378, 0.1071428507566452, 0.1846153736114502, 0.29411762952804565, 0.12121211737394333, 0.12121211737394333, 0.12820512056350708, 0.09090908616781235, 0.0833333283662796, 0.22857142984867096, 0, 0, 0.09090908616781235, 0, 0.05405404791235924, 0.11428570747375488, 0.19512194395065308, 0.10526315122842789, 0.19512194395065308, 0.07843136787414551, 0.09302324801683426, 0.17142856121063232, 0.1428571343421936, 0.1428571343421936, 0.1599999964237213, 0, 0.10526315122842789, 0.31111109256744385, 0.17543859779834747, 0.1249999925494194, 0.3333333432674408, 0.13636362552642822, 0.21739129722118378, 0.1538461446762085, 0.17391303181648254, 0.1428571343421936, 0.09090908616781235, 0.072727270424366, 0.1538461446762085, 0.04347825422883034, 0.14705881476402283 ]
S1exA2NtDB
true
[ "We provide a new framework for MAML in the ES/blackbox setting, and show that it allows deterministic and linear policies, better exploration, and non-differentiable adaptation operators." ]
[ "Transforming one probability distribution to another is a powerful tool in Bayesian inference and machine learning.", "Some prominent examples are constrained-to-unconstrained transformations of distributions for use in Hamiltonian Monte-Carlo and constructing flexible and learnable densities such as normalizing flows.", "We present Bijectors.jl, a software package for transforming distributions implemented in Julia, available at github.com/TuringLang/Bijectors.jl.", "The package provides a flexible and composable way of implementing transformations of distributions without being tied to a computational framework. \n\n", "We demonstrate the use of Bijectors.jl on improving variational inference by encoding known statistical dependencies into the variational posterior using normalizing flows, providing a general approach to relaxing the mean-field assumption usually made in variational inference.", "When working with probability distributions in Bayesian inference and probabilistic machine learning, transforming one probability distribution to another comes up quite often.", "For example, when applying Hamiltonian Monte Carlo on constrained distributions, the constrained density is usually transformed to an unconstrained density for which the sampling is performed (Neal, 2012) .", "Another example is to construct highly flexible and learnable densities often referred to as normalizing flows (Dinh et al., 2014; Huang et al., 2018; Durkan et al., 2019) ; for a review see Kobyzev et al. (2019) .", "When a distribution P is transformed into some other distribution Q using some measurable function b, we write Q = b * P and say Q is the push-forward of P .", "When b is a differentiable bijection with a differentiable inverse, i.e. a diffeomorphism or a bijector (Dillon et al., 2017) , the induced or pushed-forward distribution Qit is obtained by a simple application of change of variables.", "Specifically, given a distribution P on some Ω ⊆ R d with density p : Ω → [0, ∞), and a bijector b : Ω →Ω for someΩ ⊆ R d , the induced or pushed forward distribution Q = b * P has density q(y", ") = p b −1 (", "y) |det J b −1", "(y)| or q b(x", ") = p(x", ") |det J b (x)|", "We presented Bijectors.jl, a framework for working with bijectors and thus transformations of distributions.", "We then demonstrated the flexibility of Bijectors.jl in an application of introducing correlation structure to the mean-field ADVI approach.", "We believe Bijectors.jl will be a useful tool for future research, especially in exploring normalizing flows and their place in variational inference.", "An interesting note about the NF variational posterior we constructed is that it only requires a constant number of extra parameters on top of what is required by mean-field normal VI.", "This approach can be applied in more general settings where one has access to the directed acyclic graph (DAG) of the generative model we want to perform inference.", "Then this approach will scale linearly with the number of unique edges between random variables.", "It is also possible in cases where we have an undirected graph representing a model by simply adding a coupling in both directions.", "This would be very useful for tackling issues faced when using mean-field VI and would be of interest to explore further.", "For related work we have mainly compared against Tensorflow's tensorflow probability, which is used by other known packages such pymc4, and PyTorch's torch.distributions, which is used by packages such as pyro.", "Other 
frameworks which make heavy use of such transformations using their own implementations are stan, pymc3, and so on.", "But in these frameworks the transformations are mainly used to transform distributions from constrained to unconstrained and vice versa with little or no integration between those transformation and the more complex ones, e.g. normalizing flows.", "pymc3 for example support normalizing flows, but treat them differently from the constrained-to-unconstrained transformations.", "This means that composition between standard and parameterized transformations is not supported.", "Of particular note is the bijectors framework in tensorflow probability introduced in (Dillon et al., 2017) .", "One could argue that this was indeed the first work to take such a drastic approach to the separation of the determinism and stochasticity, allowing them to implement a lot of standard distributions as a TransformedDistribution.", "This framework was also one of the main motivations that got the authors of Bijectors.jl interested in making a similar framework in Julia.", "With that being said, other than the name, we have not set out to replicate tensorflow probability and most of the direct parallels were observed after-the-fact, e.g. a transformed distribution is defined by the TransformedDistribution type in both frameworks.", "Instead we believe that Julia is a language well-suited for such a framework and therefore one can innovate on the side of implementation.", "For example in Julia we can make use of code-generation or meta-programming to do program transformations in different parts of the framework, e.g. the composition b • b −1 is transformed into the identity function at compile time." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2083333283662796, 0.25925925374031067, 0.3199999928474426, 0.23529411852359772, 0.40625, 0.2641509473323822, 0.1428571343421936, 0.158730149269104, 0.1428571343421936, 0.12903225421905518, 0.17910447716712952, 0, 0, 0, 0, 0, 0.3404255211353302, 0.3199999928474426, 0.29629629850387573, 0.19672130048274994, 0.24137930572032928, 0.12765957415103912, 0.15094339847564697, 0.19607841968536377, 0.03448275476694107, 0.15686273574829102, 0.2461538463830948, 0.1304347813129425, 0.045454543083906174, 0.1249999925494194, 0.19672130048274994, 0.19230768084526062, 0.19999998807907104, 0.25925925374031067, 0.1515151411294937 ]
BklKK1nEFH
true
[ "We present a software framework for transforming distributions and demonstrate its flexibility on relaxing mean-field assumptions in variational inference with the use of coupling flows to replicate structure from the target generative model." ]
[ "Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains.", "Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias.", "This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting.", "Our contributions include:", "i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence;", "ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation;", "iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks, and a novel bioimage dataset, Cell.", "Advances in technology have enabled large scale dataset generation by life sciences laboratories.", "These datasets contain information about overlapping but non-identical known and unknown experimental conditions.", "A challenge is how to best leverage information across multiple datasets on the same subject, and to make discoveries that could not have been obtained from any individual dataset alone.Transfer learning provides a formal framework for addressing this challenge, particularly crucial in cases where data acquisition is expensive and heavily impacted by experimental settings.", "One such field is automated microscopy, which can capture thousands of images of cultured cells after exposure to different experimental perturbations (e.g from chemical or genetic sources).", "A goal is to classify mechanisms by which perturbations affect cellular processes based on the similarity of cell images.", "In principle, it should be possible to tackle microscopy image classification as yet another visual object recognition task.", "However, two major challenges arise compared to mainstream visual object recognition problems BID51 .", "First, biological images are heavily impacted by experimental choices, such as microscope settings and experimental reagents.", "Second, there is no standardized set of labeled perturbations, and datasets often contain labeled examples for a subset of possible classes only.", "This has limited microscopy image classification to single datasets and does not leverage the growing number of datasets collected by the life sciences community.", "These challenges make it desirable to learn models across many microscopy datasets, that achieve both good robustness w.r.t. experimental settings and good class coverage, all the while being robust to the fact that datasets contain samples from overlapping but distinct class sets.Multi-domain learning (MDL) aims to learn a model of minimal risk from datasets drawn from distinct underlying distributions BID20 , and is a particular case of transfer learning BID46 .", "As such, it contrasts with the so-called domain adaptation (DA) problem BID7 BID5 BID22 BID46 .", "DA aims at learning a model with minimal risk on a distribution called \"target\" by leveraging other distributions called \"sources\".", "Notably, most DA methods assume that target classes are identical to source classes, or a subset thereof in the case of partial DA BID77 .The", "expected benefits of MDL, compared to training a separate model on each individual dataset, are two-fold. 
First", ", MDL leverages more (labeled and unlabeled) information, allowing better generalization while accommodating the specifics of each domain BID20 BID72 . Thus", ", MDL models have a higher chance of ab initio performing well on a new domain − a problem referred to as domain generalization BID44 or zero-shot domain adaptation BID74 . Second", ", MDL enables knowledge transfer between domains: in unsupervised and semi-supervised settings, concepts learned on one domain are applied to another, significantly reducing the need for labeled examples from the latter BID46 . Learning", "a single model from samples drawn from n distributions raises the question of available learning guarantees regarding the model error on each distribution. BID32 introduced", "the notion of H-divergence to measure the distance between source and target marginal distributions in DA. BID4 have shown", "that a finite sample estimate of this divergence can be used to bound the target risk of the learned model.The contributions of our work are threefold. First, we extend", "the DA guarantees to MDL (Sec. 3.1), showing that the risk of the learned model over all considered domains is upper bounded by the oracle risk and the sum of the H-divergences between any two domains. Furthermore, an", "upper bound on the classifier imbalance (the difference between the individual domain risk, and the average risk over all domains) is obtained, thus bounding the worst-domain risk. Second, we propose", "the approach Multi-domain Learning Adversarial Neural Network (MULANN), which extends Domain Adversarial Neural Networks (DANNs) BID22 to semi-supervised DA and MDL. Relaxing the DA assumption", ", MULANN handles the so-called class asymmetry issue (when each domain may contain varying numbers of labeled and unlabeled examples of a subset of all possible classes), through designing a new loss (Sec. 3.2). Finally, MULANN is empirically", "validated in both DA and MDL settings (Sec. 4), as it significantly outperforms the state of the art on three standard image benchmarks BID52 BID35 , and a novel bioimage benchmark, CELL, where the state of the art involves extensive domain-dependent pre-processing.Notation. Let X denote an input space and", "Y = {1, . . . , L} a set of classes. For i = 1, . . . , n, dataset S", "i is an iid sample drawn from distribution D i on X × Y. The marginal distribution of D i on X is denoted by D X i . Let H be a hypothesis space; for", "each h in H (h : X → Y) we define the risk under distribution D i as i (h) = P x,y∼Di (h(x) = y). h i (respectively h ) denotes", "the oracle", "hypothesis", "according to distribution D i (resp. with minimal total risk over all domains): DISPLAYFORM0 In the semi-supervised setting, the label associated with an instance might be missing. 
In the following, \"domain\" and \"distribution\" will", "be used interchangeably, and the \"classes of a domain\" denote the classes for which labeled or unlabeled examples are available in this domain.", "This paper extends the use of domain adversarial learning to multi-domain learning, establishing how the H-divergence can be used to bound both the risk across all domains and the worst-domain risk (imbalance on a specific domain).", "The stress is put on the notion of class asymmetry, that is, when some domains contain labeled or unlabeled examples of classes not present in other domains.", "Showing the significant impact of class asymmetry on the state of the art, this paper also introduces MULANN, where a new loss is meant to resist the contractive effects of the adversarial domain discriminator and to repulse (a fraction of) unlabeled examples from labeled ones in each domain.The merits of the approach are satisfactorily demonstrated by comparison to DANN and MADA on DIGITS, RoadSigns and OFFICE, and results obtained on the real-world CELL problem establish a new baseline for the microscopy image community.A perspective for further study is to bridge the gap between the proposed loss and importance sampling techniques, iteratively exploiting the latent representation to identify orphan samples and adapt the loss while learning.", "Further work will also focus on how to identify and preserve relevant domain-specific behaviours while learning in a domain adversarial setting (e.g., if different cell types have distinct responses to the same class of perturbations)." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.08888888359069824, 0.25, 0, 0.25, 0.4000000059604645, 0.15789473056793213, 0.06451612710952759, 0.06451612710952759, 0.1428571343421936, 0.04444443807005882, 0.10810810327529907, 0.0555555522441864, 0.06451612710952759, 0.060606054961681366, 0.15789473056793213, 0.14999999105930328, 0.1315789371728897, 0.12121211737394333, 0.0555555522441864, 0.2380952388048172, 0.11428570747375488, 0.10256409645080566, 0.17777776718139648, 0.19999998807907104, 0.09999999403953552, 0.2222222238779068, 0.13333332538604736, 0.12244897335767746, 0.09090908616781235, 0.3589743673801422, 0.18867924809455872, 0.13333332538604736, 0.11764705181121826, 0.04651162400841713, 0.08888888359069824, 0.1666666567325592, 0.25, 0.16326530277729034, 0.1395348757505417, 0.13592232763767242, 0.2222222238779068 ]
Sklv5iRqYX
true
[ "Adversarial Domain adaptation and Multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting." ]
[ "We introduce our Distribution Regression Network (DRN) which performs regression from input probability distributions to output probability distributions.", "Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions.", "On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art.", "Furthermore, DRN generalizes the conventional multilayer perceptron (MLP).", "In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution.", "The field of regression analysis is largely established with methods ranging from linear least squares to multilayer perceptrons.", "However, the scope of the regression is mostly limited to real valued inputs and outputs BID4 BID14 .", "In this paper, we perform distribution-todistribution regression where one regresses from input probability distributions to output probability distributions.Distribution-to-distribution regression (see work by BID17 ) has not been as widely studied compared to the related task of functional regression BID3 .", "Nevertheless, regression on distributions has many relevant applications.", "In the study of human populations, probability distributions capture the collective characteristics of the people.", "Potential applications include predicting voting outcomes of demographic groups BID5 and predicting economic growth from income distribution BID19 .", "In particular, distribution-to-distribution regression is very useful in predicting future outcomes of phenomena driven by stochastic processes.", "For instance, the Ornstein-Uhlenbeck process, which exhibits a mean-reverting random walk, has wide-ranging applications.", "In the commodity market, prices exhibit mean-reverting patterns due to market forces BID23 .", "It is also used in quantitative biology to model phenotypic traits evolution BID0 .Variants", "of the distribution regression task have been explored in literature BID18 . For the", "distribution-to-distribution regression task, BID17 proposed an instance-based learning method where a linear smoother estimator (LSE) is applied across the inputoutput distributions. However", ", the computation time of LSE scales badly with the size of the dataset. To that", "end, BID16 developed the Triple-Basis Estimator (3BE) where the prediction time is independent of the number of data by using basis representations of distributions and Random Kitchen Sink basis functions. BID9 proposed", "the Extrapolating the Distribution Dynamics (EDD) method which predicts the future state of a time-varying probability distribution given a sequence of samples from previous time steps. However, it is", "unclear how it can be used for the general case of regressing distributions of different objects.Our proposed Distribution Regression Network (DRN) is based on a completely different scheme of network learning, motivated by spin models in statistical physics and similar to artificial neural networks. In many variants", "of the artificial neural network, the network encodes real values in the nodes BID21 BID10 BID1 . DRN is novel in", "that it generalizes the conventional multilayer perceptron (MLP) by encoding a probability distribution in each node. Each distribution", "in DRN is treated as a single object which is then processed by the connecting weights. 
Hence, the propagation", "behavior in DRN is much richer, enabling DRN to represent distribution regression mappings with fewer parameters than MLP. We experimentally demonstrate", "that compared to existing methods, DRN achieves comparable or better regression performance with fewer model parameters. Figure 1 : (Left) An example", "DRN with multiple input probability distributions and multiple hidden layers mapping to an output probability distribution. (Right) A connection unit in", "the network", ", with 3 input nodes in layer l − 1 connecting to a node in layer l. Each node encodes a probability", "distribution, as illustrated by the probability density function P (l) k . The tunable parameters are the", "connecting weights and the bias parameters at the output node.", "The distribution-to-distribution regression task has many useful applications ranging from population studies to stock market prediction.", "In this paper, we propose our Distribution Regression Network which generalizes the MLP framework by encoding a probability distribution in each node.Our DRN is able to learn the regression mappings with fewer model parameters compared to MLP and 3BE.", "MLP has not been used for distribution-to-distribution regression in literature and we have adapted it for this task.", "Though both DRN and MLP are network-based methods, they encode the distribution very differently.", "By generalizing each node to encode a distribution, each distribution in DRN is treated as a single object which is then processed by the connecting weight.", "Thus, the propagation behavior in DRN is much richer, enabling DRN to represent the regression mappings with fewer parameters.", "In 3BE, the number of model parameters scales linearly with the number of projection coefficients of the distributions and number of Random Kitchen Sink features.", "In our experiments, DRN is able to achieve similar or better regression performance using less parameters than 3BE.", "Furthermore, the runtime for DRN is competitive with other methods (see comparison of mean prediction times in Appendix C).For", "future work, we look to extend DRN for variants of the distribution regression task such as distribution-to-real regression and distribution classification. Extensions", "may also be made for regressing multivariate distributions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2142857164144516, 0.06896550953388214, 0.07999999821186066, 0.19999998807907104, 0.1428571343421936, 0.13333332538604736, 0.2142857164144516, 0.1702127605676651, 0.09999999403953552, 0.0833333283662796, 0, 0.13793103396892548, 0.1538461446762085, 0.1599999964237213, 0.07692307233810425, 0.1666666567325592, 0.23529411852359772, 0.07999999821186066, 0.05128204822540283, 0.10810810327529907, 0.10526315122842789, 0.13793103396892548, 0.13793103396892548, 0.13793103396892548, 0.1875, 0.11764705181121826, 0.12903225421905518, 0.06896550953388214, 0.07407406717538834, 0.0952380895614624, 0.2142857164144516, 0.2857142686843872, 0.20689654350280762, 0.1538461446762085, 0.17142856121063232, 0.20689654350280762, 0.06666666269302368, 0.13333332538604736, 0.0624999962747097, 0.1875, 0 ]
ByYPLJA6W
true
[ "A learning network which generalizes the MLP framework to perform distribution-to-distribution regression" ]
[ "Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence.", "While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call.", "The time span between events can carry important information about the sequence dependence of human behaviors.", "In this work, we propose a set of methods for using time in sequence prediction.", "Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization.", "We also introduce two methods for using next event duration as regularization for training a sequence prediction model.", "We discuss these methods based on recurrent neural nets.", "We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks.", "The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings.", "Event sequence prediction is a task to predict the next event 1 based on a sequence of previously occurred events.", "Event sequence prediction has a broad range of applications, e.g., next word prediction in language modeling BID10 , next place prediction based on the previously visited places, or next app to launch given the usage history.", "Depending on how the temporal information is modeled, event sequence prediction often decomposes into the following two categories: discrete-time event sequence prediction and continuous-time event sequence prediction.Discrete-time event sequence prediction primarily deals with sequences that consist of a series of tokens (events) where each token can be indexed by its order position in the sequence.", "Thus such a sequence evolves synchronously in natural unit-time steps.", "These sequences are either inherently time-independent, e.g, each word in a sentence, or resulted from sampling a sequential behavior at an equally-spaced point in time, e.g., busy or not busy for an hourly traffic update.", "In a discrete-time event sequence, the distance between events is measured as the difference of their order positions.", "As a consequence, for discrete-time event sequence modeling, the primary goal is to predict what event will happen next.Continuous-time event sequence prediction mainly attends to the sequences where the events occur asynchronously.", "For example, the time interval between consecutive clinical visits of a patient may potentially vary largely.", "The duration between consecutive log-in events into an online service can change from time to time.", "Therefore, one primary goal of continuous-time event sequence prediction is to predict when the next event will happen in the near future.Although these two tasks focus on different aspects of a future event, how to learn a proper representation for the temporal information in the past is crucial to both of them.", "More specifically, even though for a few discrete-time event sequence prediction tasks 
(e.g., neural machine translation), they do not involve an explicit temporal information for each event (token), a proper representation of the position in the sequence is still of great importance, not to mention the more general cases where each event is particularly associated with a timestamp.", "For example, the next destination people want to go to often depends on what other places they have gone to and how long they have stayed in each place in the past.", "When the next clinical visit BID3 will occur for a patient depends on the time of the most recent visits and the respective duration between them.", "Therefore, the temporal information of events and the interval between them are crucial to the event sequence prediction in general.", "However, how to effectively use and represent time in sequence prediction still largely remains under explored.A natural and straightforward solution is to bring time as an additional input into an existing sequence model (e.g., recurrent neural networks).", "However, it is notoriously challenging for recurrent neural networks to directly handle continuous input that has a wide value range, as what is shown in our experiments.", "Alternatively, we are inspired by the fact that humans are very good at characterizing time span as high-level concepts.", "For example, we would say \"watching TV for a little while\" instead of using the exact minutes and seconds to describe the duration.", "We also notice that these high-level descriptions about time are event dependent.", "For example, watching movies for 30 minutes might feel much shorter than waiting in the line for the same amount of time.", "Thus, it is desirable to learn and incorporate these time-dependent event representations in general.", "Our paper offers the following contributions:• We propose two methods for time-dependent event representation in a neural sequence prediction model: time masking of event embedding and event-time joint embedding.", "We use the time span associated with an event to better characterize the event by manipulating its embedding to give a recurrent model additional resolving power for sequence prediction.•", "We propose to use next event duration as a regularizer for training a recurrent sequence prediction model. Specifically", ", we define two flavors of duration-based regularization: one is based on the negative log likelihood of duration prediction error and the other measures the cross entropy loss of duration prediction in a projected categorical space.• We evaluated", "these proposed methods as well as several baseline methods on five datasets (four are public). These datasets", "span a diverse range of sequence behaviors, including mobile app usage, song listening pattern, and medical history. The baseline methods", "include vanilla RNN models and those found in the recent literature. 
These experiments offer", "valuable findings about how these methods improve prediction accuracy in a variety of settings.", "We proposed a set of methods for leveraging the temporal information for event sequence prediction.", "Based on our intuition about how humans tokenize time spans as well as previous work on contextual representation of words, we proposed two methods for time-dependent event representation.", "They transform a regular event embedding with learned time masking and form time-event joint embedding based on learned soft one-hot encoding.", "We also introduced two methods for using next duration as a way of regularization for training a sequence prediction model.", "Experiments on a diverse range of real data demonstrate consistent performance gain by blending time into the event representation before it is fed to a recurrent neural network." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.16326530277729034, 0.13333332538604736, 0.10526315122842789, 0.3243243098258972, 0.23728813230991364, 0.3589743673801422, 0.19354838132858276, 0.5365853905677795, 0.25, 0.29999998211860657, 0.2222222238779068, 0.2461538463830948, 0.1249999925494194, 0.07407406717538834, 0.1538461446762085, 0.20408162474632263, 0.10526315122842789, 0, 0.317460298538208, 0.2571428418159485, 0.0833333283662796, 0.2222222238779068, 0.25, 0.10526315122842789, 0.1249999925494194, 0.04999999329447746, 0.1818181723356247, 0.1764705777168274, 0.0952380895614624, 0.2222222238779068, 0.40816324949264526, 0.20408162474632263, 0.25641024112701416, 0.1818181723356247, 0.277777761220932, 0.2857142686843872, 0.0555555522441864, 0.277777761220932, 0.3888888955116272, 0.2978723347187042, 0.19512194395065308, 0.3499999940395355, 0.2448979616165161 ]
SJDJNzWAZ
true
[ "Proposed methods for time-dependent event representation and regularization for sequence prediction; Evaluated these methods on five datasets that involve a range of sequence prediction tasks." ]
[ "Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired.", "This property makes these algorithms appealing for real world problems such as robot control.", "In practice, however, standard off-policy algorithms fail in the batch setting for continuous control.", "In this paper, we propose a simple solution to this problem.", "It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task.", "Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources.", "We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.", "Batch reinforcement learning (RL) (Ernst et al., 2005; Lange et al., 2011) is the problem of learning a policy from a fixed, previously recorded, dataset without the opportunity to collect new data through interaction with the environment.", "This is in contrast to the typical RL setting which alternates between policy improvement and environment interaction (to acquire data for policy evaluation).", "In many real world domains collecting new data is laborious and costly, both in terms of experimentation time and hardware availability but also in terms of the human labour involved in supervising experiments.", "This is especially evident in robotics applications (see e.g. Haarnoja et al. 2018b; Kalashnikov et al. 2018 for recent examples learning on robots).", "In these settings where gathering new data is expensive compared to the cost of learning, batch RL promises to be a powerful solution.", "There exist a wide class of off-policy algorithms for reinforcement learning designed to handle data generated by a behavior policy µ which might differ from π, the policy that we are interested in learning (see e.g. Sutton & Barto (2018) for an introduction).", "One might thus expect solving batch RL to be a straightforward application of these algorithms.", "Surprisingly, for batch RL in continuous control domains, however, Fujimoto et al. (2018) found that policies obtained via the naïve application of off-policy methods perform dramatically worse than the policy that was used to generate the data.", "This result highlights the key challenge in batch RL: we need to exhaustively exploit the information that is in the data but avoid drawing conclusions for which there is no evidence (i.e. we need to avoid over-valuing state-action sequences not present in the training data).", "As we will show in this paper, the problems with existing methods in the batch learning setting are further exacerbated when the provided data contains behavioral trajectories from different policies µ 1 , . . . , µ N which solve different tasks, or the same task in different ways (and thus potentially execute conflicting actions) that are not necessarily aligned with the target task that π should accomplish.", "We empirically show that previously suggested adaptations for off-policy learning (Fujimoto et al., 2018; Kumar et al., 2019) can be led astray by behavioral patterns in the data that are consistent (i.e. 
policies that try to accomplish a different task or a subset of the goals for the target task) but not relevant for the task at hand.", "This situation is more damaging than learning from noisy or random data where the behavior policy is sub-optimal but is not predictable, i.e. the randomness is not a correlated signal that will be picked up by the learning algorithm.", "We propose to solve this problem by restricting our solutions to 'stay close to the relevant data'.", "This is done by:", "1) learning a prior that gives information about which candidate policies are potentially supported by the data (while ensuring that the prior focuses on relevant trajectories),", "2) enforcing the policy improvement step to stay close to the learned prior policy.", "We propose a policy iteration algorithm in which the prior is learned to form an advantage-weighted model of the behavior data.", "This prior biases the RL policy towards previously experienced actions that also have a high chance of being successful in the current task.", "Our method enables stable learning from conflicting data sources and we show improvements on competitive baselines in a variety of RL tasks -including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.", "We also find that utilizing an appropriate prior is sufficient to stabilize learning; demonstrating that the policy evaluation step is implicitly stabilized when a policy iteration algorithm is used -as long as care is taken to faithfully evaluate the value function within temporal difference calculations.", "This results in a simpler algorithm than in previous work (Fujimoto et al., 2018; Kumar et al., 2019) .", "In this work, we considered the problem of stable learning from logged experience with off-policy RL algorithms.", "Our approach consists of using a learned prior that models the behavior distribution contained in the data (the advantage weighted behavior model) towards which the policy of an RL algorithm is regularized.", "This allows us to avoid drawing conclusions for which there is no evidence in the data.", "Our approach is robust to large amounts of sub-optimal data, and compares favourably to strong baselines on standard continuous control benchmarks.", "We further demonstrate that our approach can work in challenging robot manipulation domains -learning some tasks without ever seeing a single trajectory for them.", "A ALGORITHM A full algorithm listing for our procedure is given in Algorithm 1." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.22641508281230927, 0.04999999701976776, 0.09999999403953552, 0.1111111044883728, 0.3125, 0.21739129722118378, 0.23529411852359772, 0.3448275923728943, 0.2916666567325592, 0.14814814925193787, 0.1249999925494194, 0.2916666567325592, 0.3030303120613098, 0.19512194395065308, 0.23333333432674408, 0.19354838132858276, 0.1012658178806305, 0.21333332359790802, 0.23728813230991364, 0.1463414579629898, 0.06666666269302368, 0.16326530277729034, 0.21621620655059814, 0.43478259444236755, 0.25, 0.3103448152542114, 0.1875, 0.0476190410554409, 0.3255814015865326, 0.3333333134651184, 0.2380952388048172, 0.1304347813129425, 0.11999999731779099, 0.10256409645080566 ]
rke7geHtwH
true
[ "We develop a method for stable offline reinforcement learning from logged data. The key is to regularize the RL policy towards a learned \"advantage weighted\" model of the data." ]
[ "One of the main challenges in applying graph convolutional neural networks on gene-interaction data is the lack of understanding of the vector space to which they belong and also the inherent difficulties involved in representing those interactions on a significantly lower dimension, viz Euclidean spaces.", "The challenge becomes more prevalent when dealing with various types of heterogeneous data.", "We introduce a systematic, generalized method, called iSOM-GSN, used to transform ``multi-omic'' data with higher dimensions onto a two-dimensional grid.", "Afterwards, we apply a convolutional neural network to predict disease states of various types.", "Based on the idea of Kohonen's self-organizing map, we generate a two-dimensional grid for each sample for a given set of genes that represent a gene similarity network. ", "We have tested the model to predict breast and prostate cancer using gene expression, DNA methylation and copy number alteration, yielding prediction accuracies in the 94-98% range for tumor stages of breast cancer and calculated Gleason scores of prostate cancer with just 11 input genes for both cases.", "The scheme not only outputs nearly perfect classification accuracy, but also provides an enhanced scheme for representation learning, visualization, dimensionality reduction, and interpretation of the results.", "Large scale projects such as \"The Cancer Genome Atlas\" (TCGA) generate a plethora of multidimensional data by applying high-resolution microarrays and next generation sequencing.", "This leads to diverse multi-dimensional data in which the need for devising dimensionality reduction and representation learning methods to integrate and analyze such data arises.", "An earlier study by Shen et al. proposed algorithms iCluster (Shen et al., 2009a) and iCluster+ (Shen et al., 2009b) , which made use of the latent variable model and principal component analysis (PCA) on multi-omic data and aimed to cluster cancer data into sub-types; even though it performed well, it did not use multi-omics data.", "In another study, (Lyu and Haque, 2018) attempted to apply heatmaps as a dimensionality reduction scheme on gene expression data to deduce biological insights and then classify cancer types from a Pan-cancer cohort.", "However, the accuracy obtained by using that method was limited to 97% on Pan-cancer data, lacking the benefits of integrated multi-omics data.", "In a recent study (Choy et al., 2019) used self-Organizing maps (SOMs) to embed gene expression data into a lower dimensional map, while the works of (Bustamam et al., 2018; Mallick et al., 2019; Paul and Shill, 2018; Loeffler-Wirth et al., 2019) generate clusters using SOMs on gene expression data with different aims.", "In addition, the work of (Hopp et al., 2018) combines gene expression and DNA methylation to identify subtypes of cancer similar to those of (Roy et al., 2018) , which identifies modules of co-expressing genes.", "On the other hand, the work of (Kartal et al., 2018) uses SOMs to create a generalized regression neural network, while the model proposed in (Yoshioka and Dozono, 2018; Shah and Luo, 2017) uses SOMs to classify documents based on a word-tovector model.", "Apart from dimensionality reduction methods, attempts have been made by applying supervised deep machine learning, such as deepDriver (Luo et al., 2019) , which predicts candidate driver genes based on mutation-based features and gene similarity networks.", "Although these works have been devised to use embedding and 
conventional machine learning approaches, the use of deep neural networks on multi-omics data integration is still in its infancy.", "In addition, these methods are inadequate to generalize to multi-omics data to predict disease states. (Table 1: Distribution of the different Gleason groups considered for PRCA; Gleason Score / Number of Samples / Group: 3+4 / 147 / 34; 4+3 / 101 / 43; 4+5,5+4 / 139 / 9.)", "More specifically, none of these models combine the strength of SOMs for representation learning combined with the CNN for image classification as we do in this work.", "In this paper, a deep learning-based method is proposed, and is used to predict disease states by integrating multi-omic data.", "The method, which we call iSOM-GSN, leverages the power of SOMs to transform multi-omic data into a gene similarity network (GSN) by the use of gene expression data.", "Such data is then combined with other genomic features to improve prediction accuracy and help visualization.", "To our knowledge, this is the first deep learning model that uses SOMs to transform multi-omic data into a GSN for representation learning, and uses CNNs for classification of disease states or other clinical features.", "The main contributions of this work can be summarized as follows:", "• A deep learning method for prediction of tumor aggressiveness and progression using iSOM-GSN.", "• A new strategy to derive gene similarity networks via self-organizing maps.", "• Use of iSOM-GSN to identify relevant biomarkers without handcrafted feature engineering.", "• An enhanced scheme to interpret and visualize multi-dimensional, multi-omics data.", "• An efficient model for graph representation learning.", "This paper presents a framework that uses a self-organizing map and a convolutional neural network to conduct data integration, representation learning, dimensionality reduction, feature selection and classification simultaneously to harness the full potential of integrated high-dimensional large scale cancer genomic data.", "We have introduced a new way to create gene similarity networks, which can lead to novel gene interactions.", "We have also provided a scheme to visualize high-dimensional, multi-omics data onto a two-dimensional grid.", "In addition, we have devised an approach that could also be used to integrate other types of multi-omic data and predict any clinical aspects or states of diseases, such as laterality of the tumor, survivability, or cancer subtypes, just to mention a few.", "This work can also be extended to classify Pan-cancer data.", "Omics can be considered as a vector and more than three types of data (i.e., beyond RGB images) can be incorporated for classification.", "Apart from integrating multi-omics data, the proposed approach can be considered as an unsupervised clustering algorithm, because of the competitive learning nature of SOMs.", "We can also apply iSOM-GSN on other domains, such as predicting music genres for users based on their music preference.", "As a first step, we have applied the SOM to a Deezer dataset and the results are encouraging (Figure 14).", "Applications of iSOM-GSN can also be in drug response or re-purposing, prediction of passenger or oncogenes, revealing topics in citation networks, and other prediction tasks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24137930572032928, 0.12121211737394333, 0.10256409645080566, 0.23529411852359772, 0.2222222238779068, 0.1355932205915451, 0.17777776718139648, 0.1818181723356247, 0.2857142686843872, 0.1538461446762085, 0.11999999731779099, 0.19512194395065308, 0.16129031777381897, 0.1249999925494194, 0.178571417927742, 0.1071428507566452, 0.2978723347187042, 0.07999999821186066, 0.13333332538604736, 0.1818181723356247, 0.20512819290161133, 0.13636362552642822, 0.1111111044883728, 0.38461539149284363, 0.06451612710952759, 0.29411762952804565, 0.1875, 0.0624999962747097, 0.19354838132858276, 0.2857142686843872, 0.42105263471603394, 0.0555555522441864, 0.1764705777168274, 0.1666666567325592, 0.13333332538604736, 0.23255813121795654, 0.1428571343421936, 0.052631575614213943, 0.10810810327529907, 0.09756097197532654 ]
BJgRjTNtPH
true
[ "This paper presents a deep learning model that combines self-organizing maps and convolutional neural networks for representation learning of multi-omics data" ]
[ "State-of-the-art performances on language comprehension tasks are achieved by huge language models pre-trained on massive unlabeled text corpora, with very light subsequent fine-tuning in a task-specific supervised manner.", "It seems the pre-training procedure learns a very good common initialization for further training on various natural language understanding tasks, such that only few steps need to be taken in the parameter space to learn each task.", "In this work, using Bidirectional Encoder Representations from Transformers (BERT) as an example, we verify this hypothesis by showing that task-specific fine-tuned language models are highly close in parameter space to the pre-trained one.", "Taking advantage of such observations, we further show that the fine-tuned versions of these huge models, having on the order of $10^8$ floating-point parameters, can be made very computationally efficient.", "First, fine-tuning only a fraction of critical layers suffices.", "Second, fine-tuning can be adequately performed by learning a binary multiplicative mask on pre-trained weights, \\textit{i.e.} by parameter-sparsification.", "As a result, with a single effort, we achieve three desired outcomes: (1) learning to perform specific tasks, (2) saving memory by storing only binary masks of certain layers for each task, and (3) saving compute on appropriate hardware by performing sparse operations with model parameters. ", "One very puzzling fact about overparameterized deep neural networks is that sheer increases in dimensionality of the parameter space seldom make stochastic gradient-based optimization more difficult.", "Given an effective network architecture reflecting proper inductive biases, deeper and/or wider networks take just about the same, if not a lower, number of training iterations to converge, a number often by orders of magnitude smaller than the dimensionality of the parameter space.", "For example, ResNet-18 (parameter count 11.7M) and ResNet-152 (parameter count 60.2M) both train to converge, at similar convergence rates, in no more than 600K iterations on Imagenet (He et al., 2015) .", "Meaningful optimization seems to happen in only a very low-dimensional parameter subspace, viz. the span of those relatively few weight updates, with its dimensionality not ostensibly scaling with the model size.", "In other words, the network seems already perfectly converged along most of the parameter dimensions at initialization, suggesting that training only marginally alters a high-dimensional parameter configuration.", "This phenomenon is epitomized in fine-tuning of pre-trained models.", "Pre-training is a, often unsupervised, learning procedure that yields a good common initialization for further supervised learning of various downstream tasks.", "The better a pre-trained model is, the fewer iterations are required on average to fine-tune it to perform specific tasks, resulting in fine-tuned models hypothetically closer 1 to the pre-trained one in parameter space.", "However, better pre-trained models are, almost always, larger models (Hestness et al., 2017) , and nowhere is this trend more prominent than recent pretrained language models that achieved state-of-the-art natural language understanding performance, e.g. 
GPT-2 (Radford et al., 2019 ) has 1.5B parameters.", "Thus, a problem naturally arises hand-in-hand with an obvious hint to its solution: as pre-trained models get larger, on the one hand, computation of each fine-tuned model becomes more expensive in terms of both memory and compute for inference, while on the other hand, greater closeness between the pre-trained and fine-tuned models in the parameter space prescribes a higher degree of computational redundancy that could be potentially avoided.", "Additionally, there might exist more computationally efficient fine-tuned networks that are not necessarily close to, but cheaply attainable from, the pre-trained parameters, which are shared across all tasks.", "In this study, we seek to address these questions, using Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) and the General Language Understanding Evaluation (GLUE) benchmark tasks (Wang et al., 2018) as a working example.", "We first found that the fine-tuned and pre-trained parameters are both $L_1$-close and angular-close in parameter space, consistent with the small number of fine-tuning iterations separating them.", "Next, we demonstrated that there also exist good fine-tuned models that are $L_0$-close (i.e. having a small number of different components) to the pre-trained one.", "Further, we showed that there exist good fine-tuned parameters that are $L_0$-small (i.e. sparse, or having a large fraction of zero components).", "Finally, we successfully found fine-tuned language models that are both $L_0$-small and $L_0$-close to the pre-trained models.", "We remark the practical implications of these constraints.", "By forcing fine-tuned parameters to be $L_0$-close to the pre-trained ones, one only needs to store a small number of different weights per task, in addition to the common pre-trained weights, substantially saving parameter memory.", "By forcing fine-tuned parameters to be sparse, one potentially saves memory and compute, provided proper hardware acceleration of sparse linear algebraic operations.", "Surprisingly, our findings also reveal an abundance of good task-specific parameter configurations within a sparse $L_0$-vicinity of large pre-trained language models like BERT: a specific task can be learned by simply masking anywhere between 1% to 40% of the pre-trained weights to zero.", "See Figure 1 for an explanation of the $L_0$- and sparse $L_0$-vicinities.", "Figure 1: An illustration of the $L_0$-vicinity and the sparse $L_0$-vicinity of a pre-trained parameter in a three-dimensional parameter space.", "The $L_0$-vicinity is continuous and contains parameters that are $L_0$-close, whereas the sparse $L_0$-vicinity is a discrete subset of $L_0$-close parameters that are also $L_0$-small.", "We show that, due to surprisingly frequent occurrences of good parameter configurations in the sparse $L_0$-vicinity of large pre-trained language models, two techniques are highly effective in producing efficient fine-tuned networks to perform specific language understanding tasks: (1) optimizing only the most sensitive layers and (2) learning to sparsify parameters.", "In contrast to commonly employed post-training compression methods that have to trade off with performance degradation, our procedure of generating sparse networks is by itself an optimization process that learns specific tasks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1875, 0.04878048598766327, 0.1538461446762085, 0.060606058686971664, 0.2666666507720947, 0.07999999821186066, 0.040816325694322586, 0.0624999962747097, 0.04651162400841713, 0, 0.05714285373687744, 0.06451612710952759, 0.4000000059604645, 0.07692307233810425, 0.05714285373687744, 0.08695651590824127, 0.09836065024137497, 0, 0.04878048598766327, 0.12121211737394333, 0.12121211737394333, 0.06666666269302368, 0.1666666567325592, 0.1428571343421936, 0.052631575614213943, 0.0714285671710968, 0.1304347813129425, 0.10526315122842789, 0.08695651590824127, 0.0714285671710968, 0.07692307233810425, 0.0555555522441864 ]
SJx7004FPH
true
[ "Sparsification as fine-tuning of language models" ]
[ "We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning.", "It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost.", "Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize.", "We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails.", "We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.", "Training artificial agents to perform complex tasks is essential for many applications in robotics, video games and dialogue.", "If success on the task can be accurately described using a reward or cost function, reinforcement learning (RL) methods offer an approach to learning policies which has been shown to be successful in a wide variety of applications (Mnih et al., 2015; Hessel et al., 2018) However, in other cases the desired behavior may only be roughly specified and it is unclear how to design a reward function to characterize it.", "For example, training a video game agent to adopt more human-like behavior using RL would require designing a reward function which characterizes behaviors as more or less human-like, which is difficult.", "Imitation learning (IL) offers an elegant approach whereby agents are trained to mimic the demonstrations of an expert rather than optimizing a reward function.", "Its simplest form consists of training a policy to predict the expert's actions from states in the demonstration data using supervised learning.", "While appealingly simple, this approach suffers from the fact that the distribution over states observed at execution time can differ from the distribution observed during training.", "Minor errors which initially produce small deviations from the expert trajectories become magnified as the policy encounters states further and further from its training distribution.", "This phenomenon, initially noted in the early work of (Pomerleau, 1989) , was formalized in the work of (Ross & Bagnell, 2010) who proved a quadratic O( T 2 ) bound on the regret and showed that this bound is tight.", "The subsequent work of (Ross et al., 2011) showed that if the policy is allowed to further interact with the environment and make queries to the expert policy, it is possible to obtain a linear bound on the regret.", "However, the ability to query an expert can often be a strong assumption.", "In this work, we propose a new and simple algorithm called DRIL (Disagreement-Regularized Imitation Learning) to address the covariate shift problem in imitation learning, in the setting where the agent is allowed to interact with its environment.", "Importantly, the algorithm does not require any additional interaction with the expert.", "It operates by training an ensemble of policies on the demonstration data, and using the disagreement in their predictions as a cost which is optimized through RL together with a supervised behavioral cloning cost.", "The motivation is that the policies in the ensemble will tend to agree on the set of states covered by the expert, leading to low cost, but are more 
likely to disagree on states not covered by the expert, leading to high cost.", "The RL cost thus pushes the agent back towards the distribution of the expert, while the supervised cost ensures that it mimics the expert within the expert's distribution.", "Our theoretical results show that, subject to realizability and optimization oracle assumptions, our algorithm obtains a O( κ T ) regret bound for tabular MDPs, where κ is a measure which quantifies a tradeoff between the concentration of the demonstration data and the diversity of the ensemble outside the demonstration data.", "We evaluate DRIL empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning, often recovering expert performance with only a few trajectories.", "Addressing covariate shift has been a long-standing challenge in imitation learning.", "In this work, we have proposed a new method to address this problem by penalizing the disagreement between an ensemble of different policies sampled from the posterior.", "Importantly, our method requires no additional labeling by an expert.", "Our experimental results demonstrate that DRIL can often match expert performance while using only a small number of trajectories across a wide array of tasks, ranging from tabular MDPs to pixel-based Atari games and continuous control tasks.", "On the theoretical side, we have shown that our algorithm can provably obtain a low regret bound for tabular problems in which the κ parameter is low.", "There are multiple directions for future work.", "On the theoretical side, extending our analysis to continuous state spaces and characterizing the κ parameter on a larger array of problems would help to better understand the settings where our method can expect to do well.", "Empirically, there are many other settings in structured prediction (Daumé et al., 2009 ) where covariate shift is an issue and where our method could be applied.", "For example, in dialogue and language modeling it is common for generated text to become progressively less coherent as errors push the model off the manifold it was trained on.", "Our method could potentially be used to fine-tune language or translation models (Cho et al., 2014; Welleck et al., 2019) after training by applying our uncertainty-based cost function to the generated text.", "A PROOFS", "Proof.", "We will first show that for any π ∈ Π and U ⊆ S, we have", ". We can rewrite this as:", "We begin by bounding the first term:", "We next bound the second term:", "Now observe we can decompose the RL cost as follows:", "Putting these together, we get the following:", "Here we have used the fact that β(U) ≤ 1 since 0 ≤ π(a|s) ≤ 1 and α(U) ≥ s∈U", "Taking the minimum over subsets U ⊆ S, we get J exp (π) ≤ κJ alg (π).", "Proof.", "Plugging the optimal policy into J alg , we get:", "We will first bound Term 1:", "We will next bound Term 2:", "The last step follows from our optimization oracle assumption:", "Combining the bounds on the two terms, we get J alg (π ) ≤ 2 .", "Since π ∈ Π, the result follows.", "Theorem 1.", "Letπ be the result of minimizing J alg using our optimization oracle, and assume that", "Proof.", "By our optimization oracle and Lemma 2, we have", "Combining with Lemma 1, we get:", "Applying Theorem 1 from (Ross et al., 2011) , we get J(π) ≤ J(π ) + 3uκ T ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3571428656578064, 0.09756097197532654, 0.07692307233810425, 0.09999999403953552, 0.10526315122842789, 0.13793103396892548, 0.08695651590824127, 0.05128204822540283, 0.05882352590560913, 0.1875, 0, 0, 0.043478257954120636, 0, 0, 0.1818181723356247, 0, 0.1428571343421936, 0.09756097197532654, 0, 0.07843136787414551, 0.043478257954120636, 0.45454543828964233, 0.0555555522441864, 0, 0.043478257954120636, 0.1111111044883728, 0.1111111044883728, 0, 0.15789473056793213, 0.10256409645080566, 0, 0.07407406717538834, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.07692307233810425, 0, 0, 0 ]
rkgbYyHtwB
true
[ "Method for addressing covariate shift in imitation learning using ensemble uncertainty" ]
[ "We present and discuss a simple image preprocessing method for learning disentangled latent factors. \n", "In particular, we utilize the implicit inductive bias contained in features from networks pretrained on the ImageNet database. \n", "We enhance this bias by explicitly fine-tuning such pretrained networks on tasks useful for the NeurIPS2019 disentanglement challenge, such as angle and position estimation or color classification.\n", "Furthermore, we train a VAE on regionally aggregate feature maps, and discuss its disentanglement performance using metrics proposed in recent literature.", "Fully unsupervised methods, that is, without any human supervision, are doomed to fail for tasks such as learning disentangled representations (Locatello et al., 2018) .", "In this contribution, we utilize the implicit inductive bias contained in models pretrained on the ImageNet database (Russakovsky et al., 2014) , and enhance it by finetuning such models on challenge-relevant tasks such as angle and position estimation or color classification.", "In particular, our submission for challenge stage 2 builds on our submission from stage 1 1 , in which we employed pretrained CNNs to extract convolutional feature maps as a preprocessing step before training a VAE (Kingma and Welling, 2013) .", "Although this approach already results in partial disentanglement, we identified two issues with the feature vectors extracted this way.", "Firstly, the feature extraction network is trained on ImageNet, which is rather dissimilar to the MPI3d dataset used in the challenge.", "Secondly, the feature aggregation mechanism was chosen ad-hoc and likely does not retain all information needed for disentanglement.", "We attempt to fix these issues by finetuning the feature extraction network as well as learning the aggregation of feature maps from data by using the labels of the simulation datasets MPI3d-toy and MPI3d-realistic.", "On the public leaderboard (i.e. on MPI3D-real ), our best submission achieves the first rank on the FactorVAE (Kim and Mnih, 2018) , and DCI (Eastwood and Williams, 2018 ) metrics, with a large gap to the second-placed entry.", "See appendix A for a discussion of the results.", "Unsurprisingly, introducing prior knowledge simplifies the disentanglement task considerably, reflected in improved scores.", "To do so, our approach makes use of task-specific supervision obtained from simulation, which restricts its applicability.", "Nevertheless, it constitutes a demonstration that this type of supervision can transfer to better disentanglement on real world data, which was one of the goals of the challenge." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.06666666269302368, 0.12121211737394333, 0.0952380895614624, 0.0555555522441864, 0.04999999701976776, 0.07843136787414551, 0.11999999731779099, 0.1818181723356247, 0.1818181723356247, 0.12121211737394333, 0.380952388048172, 0.08163265138864517, 0.1666666567325592, 0.0714285671710968, 0.1875, 0.29999998211860657 ]
S1gHsYFhsB
true
[ "We use supervised finetuning of feature vectors to improve transfer from simulation to the real world" ]
[ "A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy.", "In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals.", "In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning.", "However, reinforcement learning agents have only recently been endowed with such capacity for hindsight.", "In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms.", "Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.", "In a traditional reinforcement learning setting, an agent interacts with an environment in a sequence of episodes, observing states and acting according to a policy that ideally maximizes expected cumulative reward.", "If an agent is required to pursue different goals across episodes, its goal-conditional policy may be represented by a probability distribution over actions for every combination of state and goal.", "This distinction between states and goals is particularly useful when the probability of a state transition given an action is independent of the goal pursued by the agent.Learning such goal-conditional behavior has received significant attention in machine learning and robotics, especially because a goal-conditional policy may generalize desirable behavior to goals that were never encountered by the agent BID17 BID3 Kupcsik et al., 2013; Deisenroth et al., 2014; BID16 BID29 Kober et al., 2012; Ghosh et al., 2018; Mankowitz et al., 2018; BID11 .", "Consequently, developing goal-based curricula to facilitate learning has also attracted considerable interest (Fabisch & Metzen, 2014; Florensa et al., 2017; BID20 BID19 . In hierarchical reinforcement learning, goal-conditional policies may enable agents to plan using subgoals, which abstracts the details involved in lower-level decisions BID10 BID26 Kulkarni et al., 2016; Levy et al., 2017) .In", "a typical sparse-reward environment, an agent receives a non-zero reward only upon reaching a goal state. Besides", "being natural, this task formulation avoids the potentially difficult problem of reward shaping, which often biases the learning process towards suboptimal behavior BID9 . Unfortunately", ", sparse-reward environments remain particularly challenging for traditional reinforcement learning algorithms BID0 Florensa et al., 2017) . For example,", "consider an agent tasked with traveling between cities. In a sparse-reward", "formulation, if reaching a desired destination by chance is unlikely, a learning agent will rarely obtain reward signals. At the same time,", "it seems natural to expect that an agent will learn how to reach the cities it visited regardless of its desired destinations.In this context, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended is called hindsight. This capacity was", "recently introduced by BID0 to off-policy reinforcement learning algorithms that rely on experience replay (Lin, 1992) . In earlier work,", "Karkus et al. 
(2016) introduced hindsight to policy search based on Bayesian optimization BID5 .In this paper, we", "demonstrate how hindsight can be introduced to policy gradient methods BID27 BID28 BID22 , generalizing this idea to a successful class of reinforcement learning algorithms BID13 Duan et al., 2016) .In contrast to previous", "work on hindsight, our approach relies on importance sampling BID2 . In reinforcement learning", ", importance sampling has been traditionally employed in order to efficiently reuse information obtained by earlier policies during learning BID15 BID12 Jie & Abbeel, 2010; BID7 . In comparison, our approach", "attempts to efficiently learn about different goals using information obtained by the current policy for a specific goal. This approach leads to multiple", "formulations of a hindsight policy gradient that relate to well-known policy gradient results.In comparison to conventional (goal-conditional) policy gradient estimators, our proposed estimators lead to remarkable sample efficiency on a diverse selection of sparse-reward environments.", "We introduced techniques that enable learning goal-conditional policies using hindsight.", "In this context, hindsight refers to the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended.", "Prior to our work, hindsight has been limited to off-policy reinforcement learning algorithms that rely on experience replay BID0 and policy search based on Bayesian optimization (Karkus et al., 2016) .In", "addition to the fundamental hindsight policy gradient, our technical results include its baseline and advantage formulations. These", "results are based on a self-contained goal-conditional policy framework that is also introduced in this text. Besides", "the straightforward estimator built upon the per-decision hindsight policy gradient, we also presented a consistent estimator inspired by weighted importance sampling, together with the corresponding baseline formulation. A variant", "of this estimator leads to remarkable comparative sample efficiency on a diverse selection of sparsereward environments, especially in cases where direct reward signals are extremely difficult to obtain. This crucial", "feature allows natural task formulations that require just trivial reward shaping.The main drawback of hindsight policy gradient estimators appears to be their computational cost, which is directly related to the number of active goals in a batch. This issue may", "be mitigated by subsampling active goals, which generally leads to inconsistent estimators. Fortunately, our", "experiments suggest that this is a viable alternative. Note that the success", "of hindsight experience replay also depends on an active goal subsampling heuristic (Andrychowicz et al., 2017, Sec. 4.5) .The inconsistent hindsight", "policy gradient estimator with a value function baseline employed in our experiments sometimes leads to unstable learning, which is likely related to the difficulty of fitting such a value function without hindsight. This hypothesis is consistent", "with the fact that such instability is observed only in the most extreme examples of sparse-reward environments. Although our preliminary experiments", "in using hindsight to fit a value function baseline have been successful, this may be accomplished in several ways, and requires a careful study of its own. 
Further experiments are also required", "to evaluate hindsight on dense-reward environments.There are many possibilities for future work besides integrating hindsight policy gradients into systems that rely on goal-conditional policies: deriving additional estimators; implementing and evaluating hindsight (advantage) actor-critic methods; assessing whether hindsight policy gradients can successfully circumvent catastrophic forgetting during curriculum learning of goal-conditional policies; approximating the reward function to reduce required supervision; analysing the variance of the proposed estimators; studying the impact of active goal subsampling; and evaluating every technique on continuous action spaces. Theorem A.1. The gradient ∇η(θ) of the", "expected return", "with respect to θ is given by DISPLAYFORM0 Proof. The partial derivative ∂η(θ)/∂θ j of the", "expected return η(θ) with respect to θ j is given by DISPLAYFORM1 The likelihood-ratio trick allows rewriting the previous equation as DISPLAYFORM2 Note that DISPLAYFORM3 Therefore, DISPLAYFORM4 A.2 THEOREM 3.1Theorem 3.1 (Goal-conditional", "policy gradient). The gradient ∇η(θ) of the expected return", "with respect to θ is given by DISPLAYFORM5 Proof. Starting from Eq. 17, the partial derivative", "∂η(θ)/∂θ j of η(θ) with respect to θ j is given by DISPLAYFORM6 The previous equation can be rewritten as DISPLAYFORM7 Let c denote an expectation inside Eq. 19 for t ≥ t. In that case, A t ⊥ ⊥ S t | S t , G, Θ, and", "so DISPLAYFORM8 Reversing the likelihood-ratio trick, DISPLAYFORM9 Therefore, the terms where t ≥ t can be dismissed from Eq. 19, leading to DISPLAYFORM10 The previous equation can be conveniently rewritten as DISPLAYFORM11 A.3 LEMMA A.1Lemma A.1. For every j, t, θ, and", "associated real-valued", "(baseline) function b DISPLAYFORM12 Proof. Letting c denote an expectation inside Eq. 24", ", DISPLAYFORM13 Reversing the likelihood-ratio trick, DISPLAYFORM14 A.4 THEOREM 3.2 Theorem 3.2 (Goal-conditional policy", "gradient, baseline formulation). For every t, θ, and associated real-valued (baseline", ") function b θ t , the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM15 Proof. The result is obtained by subtracting Eq. 24 from Eq.", "23. Importantly, for every combination of θ and t, it would", "also be possible to have a distinct baseline function for each parameter in θ.A.5 LEMMA A.2 Lemma A.2. The gradient ∇η(θ) of the expected", "return with", "respect to", "θ is given by DISPLAYFORM16 Proof. Starting from Eq. 23 and rearranging terms, DISPLAYFORM17", "By the definition of action-value function, DISPLAYFORM18 A.6 THEOREM 3.3Theorem 3.3 (Goal-conditional policy gradient,", "advantage formulation). The gradient ∇η(θ) of the expected return with respect to", "θ is given by DISPLAYFORM19 Proof. The result is obtained by choosing b θ t = V θ t and subtracting", "Eq. 24 from Eq. 29.A.7 THEOREM A.2For arbitrary j and θ, consider the following definitions", "of f and h. DISPLAYFORM20 DISPLAYFORM21 For every b j ∈ R, using Theorem 3.1 and", "the fact that DISPLAYFORM22 Proof. The result is an application of Lemma D.4. The following theorem relies", "on importance sampling, a traditional technique", "used to obtain estimates related to a random variable X ∼ p using samples from an arbitrary positive distribution q. This technique relies on the following equalities:" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.04651162400841713, 0.7199999690055847, 0.10810810327529907, 0.13333332538604736, 0.0476190410554409, 0.11764705181121826, 0.15094339847564697, 0.13793103396892548, 0.1111111044883728, 0.10526315122842789, 0.08695651590824127, 0, 0.05882352590560913, 0.045454539358615875, 0.5538461208343506, 0.0476190410554409, 0.09756097197532654, 0.145454540848732, 0, 0.15094339847564697, 0.2666666507720947, 0.11764705181121826, 0.060606054961681366, 0.782608687877655, 0.15094339847564697, 0.14999999105930328, 0.04999999329447746, 0.08163265138864517, 0.039215680211782455, 0.16129031777381897, 0.10810810327529907, 0.05882352590560913, 0.08888888359069824, 0.18518517911434174, 0.04651162400841713, 0.07547169178724289, 0.10752687603235245, 0.10256409645080566, 0.07017543166875839, 0.1875, 0.10256409645080566, 0.0615384578704834, 0.06451612710952759, 0.0555555522441864, 0.10526315122842789, 0, 0.11999999731779099, 0, 0.1249999925494194, 0, 0.10526315122842789, 0.17142856121063232, 0, 0.10256409645080566, 0, 0.09999999403953552, 0, 0.16326530277729034 ]
Bkg2viA5FQ
true
[ "We introduce the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended to policy gradient methods." ]
[ "Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets.", "However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements.", "Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive.", "We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure.", "In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved.", "Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning.", "In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable.", "Using CATER, we provide insights into some of the most recent state of the art deep video architectures.", "While deep features have revolutionized static image analysis, video descriptors have struggled to outperform classic hand-crafted descriptors (Wang & Schmid, 2013) .", "Though recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016) , simpler 2D models (Wang et al., 2016b) still routinely appear among top performers in video benchmarks such as the Kinetics Challenge at CVPR'17.", "This raises the natural question: are videos trivially understandable by simply averaging the predictions over a sampled set of frames?", "At some level, the answer must be no. Reasoning about high-level cognitive concepts such as intentions, goals, and causal relations requires reasoning over long-term temporal structure and order (Shoham, 1987; Bobick, 1997) .", "Consider, for example, the movie clip in Fig. 1 (a) , where an actor leaves the table, grabs a firearm from another room, and returns.", "Even though no gun is visible in the final frames, an observer can easily infer that the actor is surreptitiously carrying the gun.", "Needless to say, any single frame from the video seems incapable of supporting that inference, and one needs to reason over space and time in order to reach that conclusion.", "As a simpler instance of the problem, consider the cup-and-balls magic routine 1 , or the gamblingbased shell game 2 , as shown in Fig. 
1", "(b) .", "In these games, an operator puts a target object (ball) under one of multiple container objects (cups), and moves them about, possibly revealing the target at various times and recursively containing cups within other cups.", "The task at the end is to tell which of the cups is covering the ball.", "Even in its simplest instantiation, one can expect any human or computer system that solves this task to require the ability to model state of the world over long temporal horizons, reason about occlusion, understand the spatiotemporal implications of containment, etc.", "An important aspect of both our motivating examples is the adversarial nature of the task, where the operator in control is trying to make the observer fail.", "Needless to say, a frame by frame prediction model would be incapable of solving such tasks.", "Figure 1: Real world video understanding.", "Consider this iconic movie scene from The Godfather in", "(a), where the protagonist leaves the table, goes to the bathroom to extract a hidden firearm, and returns to the table presumably with the intentions of shooting a person.", "While the gun itself is visible in only a few frames of the whole clip, it is trivial for us to realize that the protagonist has it in the last frame.", "An even simpler instantiation of such a reasoning task could be the cup-and-ball shell game in", "(b) , where the task is to determine which of the cups contain the ball at the end of the trick.", "Can we design similarly hard tasks for computers?", "Given these motivating examples, why don't spatiotemporal models dramatically outperform their static counterparts for video understanding?", "We posit that this is due to limitations of existing video benchmarks.", "Even though video datasets have evolved from the small regime with tens of labels (Soomro et al., 2012; Kuehne et al., 2011; Schuldt et al., 2004) to large with hundreds of labels (Sigurdsson et al., 2016; Kay et al., 2017) , tasks have remained highly correlated to the scene and object context.", "For example, it is trivial to recognize a swimming action given a swimming pool in the background (He et al., 2016b) .", "This is further reinforced by the fact that state of the art pose-based action recognition models are outperformed by simpler frame-level models (Wang et al., 2016b) on the Kinetics (Kay et al., 2017) benchmark, with a difference of nearly 45% in accuracy!", "Sigurdsson et al. 
also found similar results for their Charades (Sigurdsson et al., 2016) benchmark, where adding ground truth object information gave the largest boosts to action recognition performance (Sigurdsson et al., 2017) .", "In this work, we take an alternate approach to developing a video understanding dataset.", "Inspired by the recent CLEVR dataset (Johnson et al., 2017) (that explores spatial reasoning in tabletop scenes) and inspired by the adversarial parlor games above (that require temporal reasoning), we introduce CATER, a diagnostic dataset for Compositional Actions and TEmporal Reasoning in dynamic tabletop scenes.", "We define three tasks on the dataset, each with an increasingly higher level of complexity, but set up as classification problems in order to be comparable to existing benchmarks for easy transfer of existing models and approaches.", "Specifically, we consider primitive action recognition, compositional action recognition, and adversarial target tracking under occlusion and containment.", "However, note that this does not limit the usability of our dataset to these tasks, and we provide full metadata with the rendered videos that can be used for more complex, structured prediction tasks like detection, tracking, forecasting, and so on.", "Our dataset does not model an operator (or hand) moving the tabletop objects, though this could be simulated as well in future variants, as in (Rogez et al., 2015) .", "Being synthetic, CATER can easily be scaled up in size and complexity.", "It also allows for detailed model diagnostics by controlling various dataset generation parameters.", "We use CATER to benchmark state-of-the-art video understanding models Hochreiter & Schmidhuber, 1997) , and show even the best models struggle on our dataset.", "We also uncover some insights into the behavior of these models by changing parameters such as the temporal duration of an occlusion, the degree of camera motion, etc., which are difficult to both tune and label in real-world video data.", "We use CATER to analyze several leading network designs on hard spatiotemporal tasks.", "We find most models struggle on our proposed dataset, especially on the snitch localization task which requires long term reasoning.", "Interestingly, average pooling clip predictions or short temporal cues (optical flow) perform rather poorly on CATER, unlike most previous benchmarks.", "Such temporal reasoning challenges are common in the real world (eg.", "Fig.", "1", "(a) ), and solving those would be the cornerstone of the next improvements in machine video understanding.", "We believe CATER would serve as an intermediary in building systems that will reason over space and time to understand actions.", "That said, CATER is, by no means, a complete solution to the video understanding problem.", "Like any other synthetic or simulated dataset, it should be considered in addition to real world benchmarks.", "While we have focused on classification tasks for simplicity, our fully-annotated dataset can be used for much richer parsing tasks such as spacetime action localization.", "One of our findings is that while high-level semantic tasks such as activity recognition may be addressable with current architectures given a richly labeled dataset, \"mid-level\" tasks such as tracking still pose tremendous challenges, particularly under long-term occlusions and containment.", "We believe addressing such challenges will enable broader temporal reasoning tasks that capture intentions, goals, and causal behavior.", "We 
analyze the top most confident", "a) correct and", "b) incorrect predictions on the test videos for localization task.", "For each video, we show the last frame, followed by a top-down view of the 6 × 6 grid.", "The grid is further overlayed with:", "1) the ground truth positions of the snitch over time, shown as the golden trail, which fades in color over time =⇒ brighter yellow depicts later positions; and", "2) the softmax prediction confidence scores for each location (black is low, white is high).", "The model has easiest time classifying the location when the snitch does not move much or moves early on in the video.", "Full video in supplementary." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09756097197532654, 0.10526315122842789, 0.10256409645080566, 0.2857142686843872, 0.25531914830207825, 0.2083333283662796, 0.13333332538604736, 0.10810810327529907, 0.09999999403953552, 0.0624999962747097, 0.04999999329447746, 0.11538460850715637, 0.04444443807005882, 0.04999999329447746, 0.12765957415103912, 0.04651162400841713, 0.037735845893621445, 0.05882352590560913, 0.13793103396892548, 0.04651162400841713, 0.2222222238779068, 0.14814814925193787, 0, 0.1395348757505417, 0.1304347813129425, 0.1621621549129486, 0.05405404791235924, 0.06896550953388214, 0.05405404791235924, 0.3030303120613098, 0.16949151456356049, 0.09756097197532654, 0.14035087823867798, 0.07999999821186066, 0.22857142984867096, 0.13333332538604736, 0.2181818187236786, 0, 0.16949151456356049, 0.04081632196903229, 0.060606054961681366, 0, 0.1818181723356247, 0.13793103396892548, 0.1764705777168274, 0.14999999105930328, 0.1463414579629898, 0.1249999925494194, 0.1621621549129486, 0.1428571343421936, 0.2222222238779068, 0.10526315122842789, 0.09090908616781235, 0.17241378128528595, 0.25641024112701416, 0.14814814925193787, 0, 0, 0.052631575614213943, 0, 0, 0, 0.04878048226237297, 0.07999999821186066 ]
HJgzt2VKPB
true
[ "We propose a new video understanding benchmark, with tasks that by-design require temporal reasoning to be solved, unlike most existing video datasets." ]
[ "We address the efficiency issues caused by the straggler effect in the recently emerged federated learning, which collaboratively trains a model on decentralized non-i.i.d. (non-independent and identically distributed) data across massive worker devices without exchanging training data in the unreliable and heterogeneous networks.", "We propose a novel two-stage analysis on the error bounds of general federated learning, which provides practical insights into optimization.", "As a result, we propose a novel easy-to-implement federated learning algorithm that uses asynchronous settings and strategies to control discrepancies between the global model and delayed models and adjust the number of local epochs with the estimation of staleness to accelerate convergence and resist performance deterioration caused by stragglers.", "Experiment results show that our algorithm converges fast and robust on the existence of massive stragglers.", "Distributed machine learning has received increasing attention in recent years, e.g., distributed stochastic gradient descent (DSGD) approaches (Gemulla et al., 2011; Lan et al., 2017) and the well-known parameter server paradigm (Agarwal & Duchi, 2011; Li et al., 2013; 2014) .", "However, these approaches always suffer from communication overhead and privacy risk (McMahan et al., 2017) .", "Federated learning (FL) (Konečnỳ et al., 2016 ) is proposed to alleviate the above issues, where a subset of devices are randomly selected, and training data in devices are locally kept when training a global model, thus reducing communication and protecting user privacy.", "Furthermore, FL approaches are dedicated to a more complex context with", "1) non-i.i.d. (Non-independent and identically distributed), unbalanced and heterogeneous data in devices,", "2) constrained computing resources with unreliable connections and unstable environments (McMahan et al., 2017; Konečnỳ et al., 2016) .", "Typically, FL approaches apply weight averaging methods for model aggregation, e.g., FedAvg (McMahan et al., 2017) and its variants (Sahu et al., 2018; Wang et al., 2018; Kamp et al., 2018; Leroy et al., 2019; Nishio & Yonetani, 2019) .", "Such methods are similar to the synchronous distributed optimization domain.", "However, synchronous optimization methods are costly in synchronization (Chen et al., 2018) , and they are potentially inefficient due to the synchrony even when collecting model updates from a much smaller subset of devices (Xie et al., 2019b) .", "Besides, waiting time for slow devices (i.e., stragglers or stale workers) is inevitable due to the heterogeneity and unreliability as mentioned above.", "The existence of such devices is proved to affect the convergence of FL (Chen et al., 2018) .", "To address this problem, scholars propose asynchronous federated learning (AFL) methods (Xie et al., 2019a; Mohammad & Sorour, 2019; Samarakoon et al., 2018) that allow model aggregation without waiting for slow devices.", "However, asynchrony magnifies the straggler effect because", "1) when the server node receives models uploaded by the slow workers, it probably has already updated the global model for many times, and", "2) real-world data are usually heavy-tailed in distributed heterogeneous devices, where the rich get richer, i.e., the straggler effect accumulates when no adjustment operations in stale workers, and eventually it affects the convergence of the global model.", "Furthermore, dynamics in AFL brings more challenges in parameter tuning and speed-accuracy 
trade-off, and the guidelines for designing efficient and stale-robust algorithms in this context are still missing.", "Contributions Our main contributions are summarized as follows.", "We first establish a new twostage analysis on federated learning, namely training error decomposition and convergence analysis.", "To the best of our knowledge, it is the first analysis based on the above two stages that address the optimization roadmap for the general federated learning entirely.", "Such analysis provides insight into designing efficient and stale-robust federated learning algorithms.", "By following the guidelines of the above two stages, we propose a novel FL algorithm with asynchronous settings and a set of easy-to-implement training strategies.", "Specifically, the algorithm controls model training by estimating the model consistency and dynamically adjusting the number of local epochs on straggle workers to reduce the impact of staleness on the convergence of the global model.", "We conduct experiments to evaluate the efficiency and robustness of our algorithm on imbalanced and balanced data partitions with different proportions of straggle worker nodes.", "Results show that our approach converges fast and robust on the existence of straggle worker nodes compared to the state-of-the-art solutions.", "Related Work Our work is targeting the AFL and staleness resilience approaches in this context.", "Straggler effect (also called staleness) is one of the main problems in the similar asynchronous gradient descent (Async-SGD) approaches, which has been discussed by various studies and its remedies have been proposed (Hakimi et al., 2019; Lian et al., 2015; Chen et al., 2016; Cui et al., 2016; Chai et al., 2019; Zheng et al., 2017; Dai et al., 2018; Hakimi et al., 2019) .", "However, these works are mainly targeting the distributed Async-SGD scenarios, which is different from FL as discussed in the previous section.", "Existing FL solutions that address the straggler effect are mainly consensus-based.", "Consensus mechanisms are introduced where a threshold metric (i.e., control variable) is computed, and only the workers who satisfy this threshold are permitted to upload their model (Chen et al., 2018; Smith et al., 2017; Nishio & Yonetani, 2019) .", "Thus it significantly reduces the number of communications and updates model without waiting for straggle workers.", "However, current approaches are mainly focusing on synchronized FL.", "Xie et al. 
(2019a) propose an AFL algorithm which uses a mixing hyperparameter to adaptively control the trade-off between the convergence speed and error reduction on staleness.", "However, this work and above mentioned FL solutions only consider the staleness caused by network delay instead of imbalanced data size in each worker and only evaluate on equal size of local data, which is inconsistent with the real-world cases.", "Our approach is similar to (Xie et al., 2019a) , but instead we adaptively control the number of local epochs combined with the approximation of staleness and model discrepancy, and prove the performance guarantee on imbalanced data partitions.", "We illustrate our approach in the rest of this paper.", "In this paper, we propose a new two-stage analysis on federated learning, and inspired by such analysis, we propose a novel AFL algorithm that accelerates convergence and resists performance deterioration caused by stragglers simultaneously.", "Experimental results show that our approach converges two times faster than baselines, and it can resist the straggler effect without sacrificing accuracy and communication.", "As a byproduct, our approach improves the generalization ability of neural network models.", "We will theoretically analyze it in future work.", "Besides, while not the focus of our work, security and privacy are essential concerns in federated learning, and as the future work, we can apply various security methods to our approach.", "Furthermore, besides the stale- We respectively test the performance with 20%, 60%, 80%, and 90% of stale workers.", "The green dotted line is FedAvg which waits all selected workers.", "resistance ability, the discrepancy estimation in our method also has the potential ability to resist malicious attacks to the worker nodes such as massive Byzantine attacks, which has been addressed in (Bagdasaryan et al., 2018; Li et al., 2019; Muñoz-González et al., 2019) .", "We will analyze and evaluate such ability in future work." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.18518517911434174, 0.34285715222358704, 0.3214285671710968, 0.5161290168762207, 0.11538460850715637, 0.06451612710952759, 0.14814814925193787, 0, 0.0714285671710968, 0.0624999962747097, 0.043478257954120636, 0.07999999821186066, 0.11999999731779099, 0.1538461446762085, 0.1875, 0.17391303181648254, 0.09090908616781235, 0.10810810327529907, 0.11999999731779099, 0.1538461446762085, 0, 0.25806450843811035, 0.25641024112701416, 0.29629629850387573, 0.3243243098258972, 0.25, 0.31578946113586426, 0.34285715222358704, 0.13333332538604736, 0.12903225421905518, 0.05714285373687744, 0.07692307233810425, 0.07692307233810425, 0.19354838132858276, 0.0833333283662796, 0.2926829159259796, 0.1599999964237213, 0.16326530277729034, 0.23999999463558197, 0.27272728085517883, 0.10526315122842789, 0.1428571343421936, 0.08695651590824127, 0.19512194395065308, 0.25, 0, 0.03999999538064003, 0.1599999964237213 ]
B1lL9grYDS
true
[ "We propose an efficient and robust asynchronous federated learning algorithm on the existence of stragglers" ]
[ "Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data.", "We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights.", "This addition shows significant increases of performance on some of the tasks from the bAbI dataset.", "In the recent years, Recurrent Neural Networks (RNNs) have been successfully used to tackle problems with data that can be represented in the shape of time series.", "Application domains include Natural Language Processing (NLP) (translation BID12 , summarisation BID9 , question answering and more), speech recogition BID5 BID3 ), text to speech systems BID0 , computer vision tasks BID13 BID16 , and differentiable programming language interpreters BID10 BID11 ).An", "intuitive explanation for the success of RNNs in fields such as natural language understanding is that they allow words at the beginning of a sentence or paragraph to be memorised. This", "can be crucial to understanding the semantic content. Thus", "in the phrase \"The cat ate the fish\" it is important to memorise the subject (cat). However", ", often later words can change the meaning of a senstence in subtle ways. For example", ", \"The cat ate the fish, didn't it\" changes a simple statement into a question. In this paper", ", we study a mechanism to enhance a standard RNN to enable it to modify its memory, with the hope that this will allow it to capture in the memory cells sequence information using a shorter and more robust representation.One of the most used RNN units is the Long Short-Term Memory (LSTM) BID7 . The core of", "the LSTM is that each unit has a cell state that is modified in a gated fashion at every time step. At a high level", ", the cell state has the role of providing the neural network with memory to hold long-term relationships between inputs. There are many", "small variations of LSTM units in the literature and most of them yield similar performance BID4 .The memory (cell", "state) is expected to encode information necessary to make the next prediction. Currently the ability", "of the LSTMs to rotate and swap memory positions is limited to what can be achieved using the available gates. In this work we introduce", "a new operation on the memory that explicitly enables rotations and swaps of pairwise memory elements. 
Our preliminary tests show", "performance improvements on some of the bAbI tasks compared with LSTM based architectures.", "A limitation of the models in our experiments is only applying pairwise 2D rotations.", "Representations of past input can be larger groups of the cell state vector, thus 2D rotations might not fully exploit the benefits of transformations.", "In the future we hope to explore rotating groups of elements and multi-dimensional rotations.", "Rotating groups of elements of the cell state could potentially also force the models to learn a more structured representation of the world, similar to how forcing a model to learn specific representations of scenes, as presented in BID6 , yields semantic representations of the scene.Rotations also need not be fully flexible.", "Introducing hard constraints on the rotations and what groups of parameters can be rotated might lead the model to learn richer memory representations.", "Future work could explore how adding such constraints impacts learning times and final performance on different datasets, but also look at what constraints can qualitatively improve the representation of long-term dependencies.In this work we presented prelimiary tests for adding rotations to simple models but we only used a toy dataset.", "The bAbI dataset has certain advantages such as being small thus easy to train many models on a single machine, not having noise as it is generated from a simulation, and having a wide range of tasks of various difficulties.", "However it is a toy dataset that has a very limited vocabulary and lacks the complexity of real world datasets (noise, inconsistencies, larger vocabularies, more complex language constructs, and so on).", "Another limitation of our evaluation is only using text, specifically question answering.", "To fully evaluate the idea of adding rotations to memory cells, in the future, we aim to look into incorporating our rotations on different domains and tasks including speech to text, translation, language generation, stock prices, and other common problems using real world datasets.Tuning the hyperparameters of the rotation models might give better insights and performance increases and is something we aim to incorporate in our training pipeline in the future.A brief exploration of the angles produced by u and the weight matrix W rot show that u does not saturate, thus rotations are in fact applied to our cell states and do not converge to 0 (or 360 degress).", "A more in-depth qualitative analysis of the rotation gate is planned for future work.", "Peeking into the activations of our rotation gates could help understand the behaviour of rotations and to what extent they help better represent long-term memory.A very successful and popular mutation of the LSTM is the Gated Recurrent Unit (GRU) unit BID1 .", "The GRU only has an output as opposed to both a cell state and an output and uses fewer gates.", "In the future we hope to explore adding rotations to GRU units and whether we can obtain similar results.", "We have introduced a novel gating mechanism for RNN units that enables applying a parametrised transformation matrix to the cell state.", "We picked pairwise 2D rotations as the transformation and shown how this can be added to the popular LSTM units to create what we call RotLSTM.", "Figure 3: Accuracy comparison on training, validation (val) and test sets over 40 epochs for LSTM and RotLSTM models.", "The models were trained 10 times and shown is the average accuracy and in faded colour is the 
standard deviation.", "Test set accuracy was computed every 10 epochs.We trained a simple model using RotLSTM units and compared them with the same model based on LSTM units.", "We show that for the LSTM-based architetures adding rotations has a positive impact on most bAbI tasks, making the training require fewer epochs to achieve similar or higher accuracy.", "On some tasks the RotLSTM model can use a lower dimensional cell state vector and maintain its performance.", "Significant accracy improvements of approximatively 20% for the RotLSTM model over the LSTM model are visible on bAbI tasks 5 (three argument relations) and 18 (reasoning about size)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.3589743673801422, 0.42424240708351135, 0.17777776718139648, 0.07017543166875839, 0.2083333283662796, 0.1428571343421936, 0.11764705181121826, 0.17142856121063232, 0.1111111044883728, 0.1846153736114502, 0.25, 0.25, 0.2702702581882477, 0.1249999925494194, 0.2380952388048172, 0.3684210479259491, 0.5, 0.12121211737394333, 0.14999999105930328, 0.1818181723356247, 0.16949151456356049, 0.24390242993831635, 0.1846153736114502, 0.2222222238779068, 0.1666666567325592, 0.06451612710952759, 0.1764705926179886, 0.12121211737394333, 0.18518517911434174, 0.1666666567325592, 0.1111111044883728, 0.25641024112701416, 0.1395348757505417, 0.10810810327529907, 0.0555555522441864, 0.22727271914482117, 0.25531914830207825, 0.3243243098258972, 0.2666666507720947 ]
ByUEelW0-
true
[ "Adding a new set of weights to the LSTM that rotate the cell memory improves performance on some bAbI tasks." ]
[ "We address the problem of marginal inference for an exponential family defined over the set of permutation matrices.", "This problem is known to quickly become intractable as the size of the permutation increases, since its involves the computation of the permanent of a matrix, a #P-hard problem.", "We introduce Sinkhorn variational marginal inference as a scalable alternative, a method whose validity is ultimately justified by the so-called Sinkhorn approximation of the permanent.", "We demonstrate the efectiveness of our method in the problem of probabilistic identification of neurons in the worm C.elegans", "Let P ∈ R n×n be a binary matrix representing a permutation of n elements (i.e. each row and column of P contains a unique 1).", "We consider the distribution over P defined as", "where A, B F is the Frobenius matrix inner product, log L is a parameter matrix and Z L is the normalizing constant.", "Here we address the problem of marginal inference, i.e. computing the matrix of expectations ρ := E(P).", "This problem is known to be intractable since it requires access to Z L , also known as the permanent of L, and whose computation is known to be a #P-hard problem Valiant (1979) To overcome this difficulty we introduce Sinkhorn variational marginal inference, which can be computed efficiently and is straightforward to implement.", "Specifically, we approximate ρ as S(L), the Sinkhorn operator applied to L (Sinkhorn, 1964) .", "S(L) is defined as the (infinite) successive row and column normalization of L (Adams and Zemel, 2011; , a limit that is known to result in a doubly stochastic matrix (Altschuler et al., 2017) .", "In section 2 we argue the Sinkhorn approximation is sensible, and in section 3 we describe the problem of probabilistic inference of neural identity in C.elegans and demonstrate the Sinkhorn approximation produces the best results.", "We have introduced the Sinkhorn approximation for marginal inference, and our it is a sensible alternative to sampling, and it may provide faster, simpler and more accurate approximate marginals than the Bethe approximation, despite typically leading to worse permanent approximations.", "We leave for future work a thorough analysis of the relation between quality of permanent approximation and corresponding marginals.", "Also, it can be verified that S(L) = diag(x)Ldiag(y", "), where", "diag(x), diag(y", ") are some", "positive vectors x, y turned into diagonal matrices (Peyré et al., 2019) . Then,", "Additionally, we obtain the (log) Sinkhorn approximation of the permanent of L, perm S (L), by evaluating S(L) in the problem it solves, (2.3).", "By simple algebra and using the fact that S(L) is a doubly stochastic matrix we see that", "By combining the last three displays we obtain", "from which the result follows." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.10256409645080566, 0.25641024112701416, 0.25, 0.04999999329447746, 0, 0, 0.12121211737394333, 0.1666666567325592, 0.19354838132858276, 0.0833333283662796, 0.1860465109348297, 0.1538461446762085, 0.11428570747375488, 0, 0, 0, 0.10256409645080566, 0, 0, 0 ]
HkxPtJh4YB
true
[ "New methodology for variational marginal inference of permutations based on Sinkhorn algorithm, applied to probabilistic identification of neurons" ]
[ "The robustness of neural networks to adversarial examples has received great attention due to security implications.", "Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness.", "In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation.", "Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.", "The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.", "Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that", "(i) CLEVER is aligned with the robustness indication measured by the $\\ell_2$ and $\\ell_\\infty$ norms of adversarial examples from powerful attacks, and", "(ii) defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores.", "To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifiers.\n\n", "Recent studies have highlighted the lack of robustness in state-of-the-art neural network models, e.g., a visually imperceptible adversarial image can be easily crafted to mislead a well-trained network BID28 BID9 BID3 .", "Even worse, researchers have identified that these adversarial examples are not only valid in the digital space but also plausible in the physical world BID17 BID8 .", "The vulnerability to adversarial examples calls into question safety-critical applications and services deployed by neural networks, including autonomous driving systems and malware detection protocols, among others.In the literature, studying adversarial examples of neural networks has twofold purposes:", "(i) security implications: devising effective attack algorithms for crafting adversarial examples, and", "(ii) robustness analysis: evaluating the intrinsic model robustness to adversarial perturbations to normal examples.", "Although in principle the means of tackling these two problems are expected to be independent, that is, the evaluation of a neural network's intrinsic robustness should be agnostic to attack methods, and vice versa, existing approaches extensively use different attack results as a measure of robustness of a target neural network.", "Specifically, given a set of normal examples, the attack success rate and distortion of the corresponding adversarial examples crafted from a particular attack algorithm are treated as robustness metrics.", "Consequently, the network robustness is entangled with the attack algorithms used for evaluation and the analysis is limited by the attack capabilities.", "More importantly, the dependency between robustness evaluation and attack approaches can cause biased analysis.", "For example, adversarial training is a commonly used technique for improving the robustness of a neural network, accomplished by generating adversarial examples and retraining the network with corrected labels.", "However, while such an adversarially trained network is made robust to attacks used to craft adversarial examples for training, it can still be vulnerable to unseen attacks.Motivated by the evaluation criterion for assessing the quality of text and image generation that is completely independent of the underlying generative processes, 
such as the BLEU score for texts BID25 and the INCEPTION score for images BID27 , we aim to propose a comprehensive and attack-agnostic robustness metric for neural networks.", "Stemming from a perturbation analysis of an arbitrary neural network classifier, we derive a universal lower bound on the minimal distortion required to craft an adversarial example from an original one, where the lower bound applies to any attack algorithm and any p norm for p ≥ 1.", "We show that this lower bound associates with the maximum norm of the local gradients with respect to the original example, and therefore robustness evaluation becomes a local Lipschitz constant estimation problem.", "To efficiently and reliably estimate the local Lipschitz constant, we propose to use extreme value theory BID6 for robustness evaluation.", "In this context, the extreme value corresponds to the local Lipschitz constant of our interest, which can be inferred by a set of independently and identically sampled local gradients.With the aid of extreme value theory, we propose a robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.", "We note that CLEVER is an attack-independent robustness metric that applies to any neural network classifier.", "In contrast, the robustness metric proposed in BID11 , albeit attack-agnostic, only applies to a neural network classifier with one hidden layer.We highlight the main contributions of this paper as follows:• We propose a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.", "To the best of our knowledge, CLEVER is the first robustness metric that is attack-independent and can be applied to any arbitrary neural network classifier and scales to large networks for ImageNet.•", "The proposed CLEVER score is well supported by our theoretical analysis on formal robustness guarantees and the use of extreme value theory. Our", "robustness analysis extends the results in BID11 from continuously differentiable functions to a special class of non-differentiable functions -neural+ networks with ReLU activations.• We", "corroborate the effectiveness of CLEVER by conducting experiments on state-of-theart models for ImageNet, including ResNet BID10 , Inception-v3 BID29 and MobileNet (Howard et al., 2017) . We also", "use CLEVER to investigate defended networks against adversarial examples, including the use of defensive distillation BID23 and bounded ReLU BID34 . Experimental", "results show that our CLEVER score well aligns with the attack-specific robustness indicated by the 2 and ∞ distortions of adversarial examples.", "In this paper, we propose the CLEVER score, a novel and generic metric to evaluate the robustness of a target neural network classifier to adversarial examples.", "Compared to the existing robustness evaluation approaches, our metric has the following advantages:", "(i) attack-agnostic;", "(ii) applicable to any neural network classifier;", "(iii) comes with strong theoretical guarantees; and", "(iv) is computationally feasible for large neural networks.", "Our extensive experiments show that the CLEVER score well matches the practical robustness indication of a wide range of natural and defended networks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1764705777168274, 0.1538461446762085, 0.21276594698429108, 0.15789473056793213, 0.0624999962747097, 0.06451612710952759, 0.10256409645080566, 0, 0.5853658318519592, 0.3199999928474426, 0.09302324801683426, 0.11320754140615463, 0, 0.19354838132858276, 0.2711864411830902, 0.13636362552642822, 0.1666666567325592, 0.1818181723356247, 0.2222222238779068, 0.25, 0.21052631735801697, 0.25531914830207825, 0.20512819290161133, 0.25806450843811035, 0.5294117331504822, 0.3125, 0.5416666865348816, 0.0952380895614624, 0.23255813121795654, 0.08695651590824127, 0.10256409645080566, 0.14999999105930328, 0.380952388048172, 0.19354838132858276, 0.307692289352417, 0, 0.07407406717538834, 0.19999998807907104 ]
BkUHlMZ0b
true
[ "We propose the first attack-independent robustness metric, a.k.a CLEVER, that can be applied to any neural network classifier." ]
[ "Multi-agent collaboration is required by numerous real-world problems.", "Although distributed setting is usually adopted by practical systems, local range communication and information aggregation still matter in fulfilling complex tasks.", "For multi-agent reinforcement learning, many previous studies have been dedicated to design an effective communication architecture.", "However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity.", "Such design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available.", "Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme.", "By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way.", "Particularly, it enables each agent to spontaneously decide when and who to send messages based on its observed states.", "In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner.", "The agents also learn how to adaptively aggregate the received messages and its own hidden states to execute actions.", "Various experiments have been conducted to demonstrate that SSoC really learns intelligent message passing among agents located far apart.", "With such agile communications, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines.", "Many real-world applications involve participation of multiple agents, for example, multi-robot control BID12 , network packet delivery BID20 and autonomous vehicles planning BID0 , etc..", "Learning such systems is ideally required to be autonomous (e.g., using reinforcement learning).", "Recently, with the rise of deep learning, deep reinforcement learning (RL) has demonstrated many exciting results in several challenging scenarios e.g. 
robotic manipulation BID3 [10], visual navigation BID22 BID10 , as well as the well-known application in game playing BID13 [17] etc..", "However, unlike its success in solving single-agent tasks, deep RL still faces many challenges in solving multi-agent learning scenarios.Modeling multiple agents has two extreme solutions: one is treating all agents as an unity to apply a single centralized framework, the other is modelling the agents as completely independent learners.", "Studies following the former design are often known as \"centralized approach\", for example BID18 BID14 etc.", "The obvious advantage of this class of approaches is a good guarantee of optimality since it is equivalent to the single agent Markov decision process (MDP) essentially.", "However, it is usually unfeasible to assume a global controller that knows everything about the environment in practice.", "The other class of methods can be marked as \"independent multi-agent reinforcement learning\".", "These approaches assumes a totally independent setting in which the agents treat all others as a part of the observed environment.", "BID2 has pointed out that such a setup will suffer from the problem of non-stationarity, which renders it hard to learn an optimal joint policy.In essence, there are three key factors that determine a communication.", "That is when, where and how the participants initiate the communication.", "Most of existing approaches, including the abovementioned Meanfield and Commnet, try to predefine each ingredient and thus lead to an inflexible communication architecture.", "Recently, VAIN BID4 and ATOC BID6 incorporate attentional communication for collaborative multi-agent reinforcement learning.", "Compared with Meanfield and Commnet, VAIN and ATOC have made one step further towards more flexible communication.", "However, the step is still limited.", "Take ATOC as an example, although it learns a dynamic attention to diversify agent messages, the message flow is only limited to the local range.", "This is unfavorable for learning complex and long range communications.", "The communication time is also specified manually (every ten steps).", "Hence it is requisite to find a new method that allows more flexible communication on both learnable time and scopes.In this regard, we propose a new solution with learnable spontaneous communication behaviours and self-organizing message flow among agents.", "The proposed architecture is named as \"Spontaneous and Self-Organizing Communication\" (SSoC) network.", "The key to such a spontaneous communication lies in the design that the communication is treated as an action to be learned in a reinforcement manner.", "The corresponding action is called \"Speak\".", "Each agent is eligible to take such an action based on its current observation.", "Once an agent decides to \"Speak\", it sends a message to partners within the communication scope.", "In the next step, agents receiving this message will decide whether to pass the message forward to more distant agents or keep silence.", "This is exactly how SSoC distinguishes itself from existing approaches.", "Instead of predestining when and who will participate in the communication, SSoC agents start communication only when necessary and stop transferring received messages if they are useless.", "A self-organizing communication policy is learned via maximizing the total collaborative reward.", "The communication process of SSoC is depicted in Fig.1 .", "It shows an example of the message flow among four 
communicating agents.", "Specifically, agent 3 sends a message to ask for help for remote partners.", "Due to agent 3's communication range, the message can be seen only by agent 1.", "Then agent 1 decides to transfer the collected message to its neighbors.", "Finally agent 2 and agent 4 read the messages from agent 3.", "These two agents are directly unreachable from agent 3.", "In this way, each agent learns to send or transfer messages spontaneously and finally form a communication route.", "Compared with the communication channels predefined in previous works, the communication here is dynamically changing according to real needs of the participating agents.", "Hence the communication manner forms a self-organizing mechanism.We instantiate SSoC with a policy network with four functional units as shown in FIG0 .", "Besides the agent's original action, an extra \"Speak\" action is output based on the current observation and hidden states.", "Here we simply design \"Speak\" as a binary {0, 1} output.", "Hence it works as a \"switch\" to control whether to send or transfer a message.", "The \"Speak\" action determines when and who to communicate in a fully spontaneous manner.", "A communication structure will naturally emerge after several steps of message propagation.", "Here in our SSoC method, the \"Speak\" policy is learned by a reward-driven reinforcement learning algorithm.", "The assumption is that a better message propagation strategy should also lead to a higher accumulated reward.We evaluate SSoC on several representative benchmarks.", "As we have observed, the learned policy does demonstrate novel clear message propagation patterns which enable complex collaborative strategies, for example, remote partners can be requested to help the current agent to get over hard times.", "We also show the high efficiency of communication by visualizing a heat map showing how often the agents \"speak\".", "The communication turns out to be much sparser than existing predefined communication channels which produce excessive messages.", "With such emerged collaborations enabled by SSoC's intelligent communication manner, it is also expected to see clear performance gains compared with existing methods on the tested tasks.", "In this paper, we propose a SSoC network for MARL tasks.", "Unlike previous methods which often assume a predestined communication structure, the SSoC agent learns when to start a communication or transfer its received message via a novel \"Speak\" action.", "Similar to the agent's original action, this \"Speak\" can also be learned in a reinforcement manner.", "With such a spontaneous communication action, SSoC is able to establish a dynamic self-organizing communication structure according to the current state.", "Experiments have been performed to demonstrate better collaborative policies and improved on communication efficiency brought by such a design.", "In future work, we will continue to enhance the learning of \"Speak\" action e.g. encoding a temporal abstraction to make the communication flow more stable or develop a specific reward for this \"Speak\" action." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1666666567325592, 0.12903225421905518, 0.16326530277729034, 0.11428570747375488, 0.47058823704719543, 0.12121211737394333, 0.060606054961681366, 0.25806450843811035, 0.060606054961681366, 0, 0, 0.10256409645080566, 0, 0.03703703358769417, 0.13793103396892548, 0.06451612710952759, 0.05128204822540283, 0.060606054961681366, 0.0714285671710968, 0.05882352590560913, 0.08163265138864517, 0.1599999964237213, 0.1111111044883728, 0.3448275923728943, 0.12903225421905518, 0, 0.052631575614213943, 0.3199999928474426, 0.07999999821186066, 0.20408162474632263, 0.14814814925193787, 0.1666666567325592, 0, 0, 0.13333332538604736, 0, 0.07999999821186066, 0.09999999403953552, 0.14814814925193787, 0.07999999821186066, 0, 0.14814814925193787, 0.06896550953388214, 0, 0.07999999821186066, 0, 0.1818181723356247, 0.05714285373687744, 0.1666666567325592, 0.060606054961681366, 0.07692307233810425, 0.0714285671710968, 0.20689654350280762, 0.07407406717538834, 0.12903225421905518, 0.052631575614213943, 0.04081632196903229, 0.12121211737394333, 0.06451612710952759, 0.0952380895614624, 0.23076923191547394, 0.09756097197532654, 0.06451612710952759, 0.24242423474788666, 0.1764705777168274, 0.17777776718139648 ]
rJ4vlh0qtm
true
[ "This paper proposes a spontaneous and self-organizing communication (SSoC) learning scheme for multi-agent RL tasks." ]
[ "We study the BERT language representation model and the sequence generation model with BERT encoder for multi-label text classification task.", "We experiment with both models and explore their special qualities for this setting.", "We also introduce and examine experimentally a mixed model, which is an ensemble of multi-label BERT and sequence generating BERT models.", "Our experiments demonstrated that BERT-based models and the mixed model, in particular, outperform current baselines in several metrics achieving state-of-the-art results on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts.", "Multi-label text classification (MLTC) is an important natural language processing task with many applications, such as document categorization, automatic text annotation, protein function prediction (Wehrmann et al., 2018) , intent detection in dialogue systems, and tickets tagging in client support systems (Molino et al., 2018) .", "In this task, text samples are assigned to multiple labels from a finite label set.", "In recent years, it became clear that deep learning approaches can go a long way toward solving text classification tasks.", "However, most of the widely used approaches in MLTC tend to neglect correlation between labels.", "One of the promising yet fairly less studied methods to tackle this problem is using sequence-to-sequence modeling.", "In this approach, a model treats an input text as a sequence of tokens and predict labels in a sequential way taking into account previously predicted labels.", "Nam et al. (2017) used Seq2Seq architecture with GRU encoder and attention-based GRU decoder, achieving an improvement over a standard GRU model on several datasets and metrics.", "Yang et al. (2018b) continued this idea by introducing Sequence Generation Model (SGM) consisting of BiLSTM-based encoder and LSTM decoder coupled with additive attention mechanism .", "In this paper, we argue that the encoder part of SGM can be successfully replaced with a heavy language representation model such as BERT (Devlin et al., 2018) .", "We propose Sequence Generating BERT model (BERT+SGM) and a mixed model which is an ensemble of vanilla BERT and BERT+SGM models.", "We show that BERT+SGM model achieves decent results after less than a half of an epoch of training, while the standard BERT model needs to be trained for 5-6 epochs just to achieve the same accuracy and several dozens epochs more to converge.", "On public datasets, we obtain 0.4%, 0.8%, and 1.6% average improvement in miF 1 , maF 1 , and accuracy respectively in comparison with BERT.", "On datasets with hierarchically structured classes, we achieve 2.8% and 1.5% average improvement in maF 1 and accuracy.", "Our main contributions are as follows:", "1. We present the results of BERT as an encoder in the sequence-to-sequence framework for MLTC datasets with and without a given hierarchical tree structure over classes.", "2. We introduce and examine experimentally a novel mixed model for MLTC.", "3. We fine-tune the vanilla BERT model to perform multi-label text classification.", "To the best of our knowledge, this is the first work to experiment with BERT and explore its particular properties for the multi-label setting and hierarchical text classification.", "4. 
We demonstrate state-of-the-art results on three well-studied MLTC datasets with English texts and two private Yandex Taxi datasets with Russian texts.", "We present the results of the suggested models and baselines on the five considered datasets in Table 2 .", "First, we can see that both BERT and BERT+SGM show favorable results on multi-label classification datasets mostly outperforming other baselines by a significant margin.", "On RCV1-v2 dataset, it is clear that the BERT-based models perform the best in micro-F 1 metrics.", "The methods dealing with the class structure (tree hierarchy in HMCN and HiLAP, label frequency in BERT+SGM) also have the highest macro-F 1 score.", "In some cases, BERT performs better than the sequence-to-sequence version, which is especially evident on the Reuters-21578 dataset.", "Since BERT+SGM has more learnable parameters, a possible reason might be a fewer number of samples provided on the dataset.", "However, sometimes BERT+SGM might be a more preferable option: on RCV1-v2 dataset the macro-F 1 metrics of BERT + SGM is much larger while other metrics are still comparable with the BERT's results.", "Also, for both Yandex Taxi datasets on the Russian language, we can see that the hamming accuracy and the set accuracy of the BERT+SGM model is higher compared to other models.", "On Y.Taxi Riders there is also an improvement in terms of macro-F 1 metrics.", "In most cases, better performance can be achieved after mixing BERT and BERT+SGM.", "On public datasets, we see 0.4%, 0.8%, and 1.6% average improvement in miF 1 , maF 1 , and accuracy respectively in comparison with BERT.", "On datasets with tree hierarchy over classes, we observe 2.8% and 1.5% average improvement in maF 1 and accuracy.", "Metrics of interest for the mixed model depending on α on RCV1-v2 validation set are shown in Figure 4 .", "Visualization of feature importance for BERT and sequence generating BERT models is provided in Appendix A.", "In our experiments, we also found that BERT for multi-label text classification tasks takes far more epochs to converge compared to 3-4 epochs needed for multi-class datasets (Devlin et al., 2018) .", "For AAPD, we performed 20 epochs of training; for RCV1-v2 and Reuters-21578 -around 30 epochs; for Russian datasets -45-50 epochs.", "BERT + SGM achieves decent accuracy much faster than multi-label BERT and converges after 8-12 epochs.", "The behavior of performance of both models on the validation set of Reuters-21578 during the training process is shown in Figure 3 .", "Another finding of our experiments is that the beam size in the inference stage does not appear to influence much on the performance.", "We obtained optimal results with the beam size in the range from 5 to 9.", "However, a greedy approach with the beam size 1 still gives similar results with less than 1.5% difference in the metrics.", "A possible explanation for this might be that, while in neural machine translation (NMT) the word ordering in the output sequence matters a lot and there might be confusing options, label set generation task is much simpler and we do not have any problems with ordering.", "Also, due to a quite limited 'vocabulary' size |L|, we may not have as many options here to perform a beam search as in NMT or another natural sequence generation task.", "In this research work, we examine BERT and sequence generating BERT on the multi-label setting.", "We experiment with both models and explore their particular properties for this task.", "We also introduce and 
examine experimentally a mixed model which is an ensemble of vanilla BERT and sequence-to-sequence BERT models.", "Our experimental studies showed that BERT-based models and the mixed model, in particular, outperform current baselines by several metrics achieving state-of-the-art results on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts.", "We established that multi-label BERT typically needs several dozens of epochs to converge, unlike to BERT+SGM model which demonstrates decent results just after a few hundreds of iterations (less than a half of an epoch)." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.42424240708351135, 0.06896550953388214, 0.22857142984867096, 0.11999999731779099, 0.25, 0.12903225421905518, 0.1111111044883728, 0.19354838132858276, 0.12121211737394333, 0.3499999940395355, 0.09999999403953552, 0.09756097197532654, 0.17777776718139648, 0.1764705777168274, 0.15094339847564697, 0.15789473056793213, 0.11764705181121826, 0.09090908616781235, 0.3333333432674408, 0.0714285671710968, 0.2857142686843872, 0.2926829159259796, 0, 0.1249999925494194, 0.14999999105930328, 0.1249999925494194, 0.052631575614213943, 0.060606054961681366, 0.05714285373687744, 0.08510638028383255, 0.09302324801683426, 0.25806450843811035, 0.06896550953388214, 0.15789473056793213, 0.11428570747375488, 0.1764705777168274, 0.25806450843811035, 0.2222222238779068, 0.11764705181121826, 0.12903225421905518, 0.11428570747375488, 0.10810810327529907, 0.06666666269302368, 0.05714285373687744, 0.1071428507566452, 0.13636362552642822, 0.13333332538604736, 0.13793103396892548, 0.1764705777168274, 0.11538460850715637, 0.1702127605676651 ]
BJeHFlBYvB
true
[ "On using BERT as an encoder for sequential prediction of labels in multi-label text classification task" ]
[ "Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications.", "It is challenging to find a proper way to automatically discover the effective cross features in CTR tasks.", "We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM).", "Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with the clues of other fields.", "The embeddings generated from encoder are beneficial for the further feature interactions.", "Particularly, DeepEnFM utilizes a bilinear approach to generate different similarity functions with respect to different field pairs.", "Furthermore, the max-pooling method makes DeepEnFM feasible to capture both the supplementary and suppressing information among different attention heads.", "Our model is validated on the Criteo and Avazu datasets, and achieves state-of-art performance.", "This paper studies the problem of predicting the Click Through Rate (CTR), which is an essential task in industrial applications, such as online advertising, and e-commerce.", "To be exact, the advertisements of cost-per-click (CPC) advertising system are normally ranked by the eCPM (effective cost per mille), which is computed as the prodcut of bid price and CTR (click-through rate).", "To predict CTR precisely, feature representation is an important step in extracting the good, interpretable patterns from training data.", "For example, the co-occurrence of \"Valentine's Day\", \"chocolate\" and \"male\" can be viewed as one meaningful indicator/feature for the recommendation.", "Such handcrafted feature type is predominant in CTR prediction (Lee et al., 2012) , until the renaissance of Deep Neural Networks (DNNs).", "Recently, a more effective manner, i.e., representation learning has been investigated in CTR prediction with some works (Guo et al., 2017; Qu et al., 2016; Wang et al., 2017; Lian et al., 2018; Song et al., 2018) , which implicitly or explicitly learn the embeddings of high-order feature extractions among neurons or input elements by the expressive power of DNNs or FM.", "Despite their noticeable performance improvement, DNNs and explicit high order feature-based methods (Wang et al., 2017; Guo et al., 2017; Lian et al., 2018) seek better feature interactions merely based on the naive feature embeddings.", "Few efforts have been made in addressing the task of holistically understanding and learning representations of inputs.", "This leads to many practical problems, such as \"polysemy\" in the learned feature embeddings existed in previous works.", "For example, the input feature 'chocolate' is much closer to the 'snack' than 'gift' in normal cases, while we believe 'chocolate' should be better paired with 'gift' if given the occurrence input as \"Valentine's Day\".", "This is one common polysemy problem in CTR prediction.", "Towards fully understanding the inputs, we re-introduce to CTR, the idea of Transformer encoder (Vaswani et al., 2017) , which is oriented in Natural Language Processing (NLP).", "Such an encoder can efficiently accumulate and extract patterns from contextual word embeddings in NLP, and thus potentially would be very useful in holistically representation learning in CTR.", "Critically, the Transformer encoder has seldom been applied to CTR prediction with the only one exception arxiv paper AutoInt (Song et al., 2018) , which, 
however, simply implements the multi-head selfattention (MHSA) mechanism of encoders, to directly extract high-order feature interactions.", "We argue that the output of MHSA/encoder should be still considered as first-order embedding influenced by the other fields, rather than a high-order interaction feature.", "To this end, our main idea is to apply the encoder to learn a context-aware feature embedding, which contains the clues from the content of other features.", "Thus the \"polysemy\" problem can be solved naturally, and the second-order interaction of such features can represent more meaning.", "Contrast to AutoInt (Song et al., 2018) , which feeds the output of encoder directly to the prediction layer or a DNN, our work not only improves the encoder to be more suitable for CTR task, but also feeds the encoder output to FM, since both our encoder and FM are based on vector-wise learning mechanism.", "And we adopt DNN to learn the bit-wise high-order feature interactions in a parallel way, which avoids interweaving the vector-wise and bit-wise interactions in a stacked way.", "Formally, we propose a novel framework -Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM).", "DeepEnFM focuses on generating better contextual aligned vectors for FM and uses DNN as a bit-wise information supplement.", "The architecture adopting both Deep and FM part is inspired by DeepFM (Guo et al., 2017) .", "The encoder is endowed with bilinear attention and max-pooling power.", "First, we observed that unlike the random order of words in a sentence, the features in a transaction are in a fixed order of fields.", "For example, the fields of features are arranged in an order of {Gender, Age, Price ...}.", "When the features are embedded in dense vectors, the first and second vectors in a transaction always represent the field \"Gender\" and \"Age\".", "To make use of this advantage, we add a bilinear mechanism to the Transformer encoder.", "We use bilinear functions to replace the simple dot product in attention.", "In this way, feature similarity of different field pairs is modeled with different functions.", "The embedding size in CTR tasks is usually around 10, which allows the application of bilinear functions without unbearable computing complexity.", "Second, the original multi-head outputs are merged by concatenation, which considers the outputs are complementary to each other.", "We argue that there are also suppressing information between different heads.", "We apply a max-pooling merge mechanism to extract both complementary and suppressing information from the multi-head outputs.", "Experimental results on Criteo and Avazu datasets have demonstrated the efficacy of our proposed model.", "In this paper, we propose a novel framework named Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM), which aims to learn a better aligned vector embedding through the encoder.", "The encoder combines the bilinear attention and max-pooling method to gather both the complementary and suppressing information from the content of other fields.", "The extensive experiments demonstrate that our approach achieves state-of-art performance on Criteo and Avazu dataset." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.0714285671710968, 0.3448275923728943, 0.060606054961681366, 0.08695651590824127, 0.1538461446762085, 0.20689654350280762, 0.0833333283662796, 0.0555555522441864, 0.09756097197532654, 0.06666666269302368, 0.13333332538604736, 0.060606054961681366, 0.09836065024137497, 0.04878048226237297, 0.07407406717538834, 0, 0.04878048226237297, 0.09999999403953552, 0, 0.1111111044883728, 0.08163265138864517, 0, 0, 0.0714285671710968, 0.145454540848732, 0.12121211737394333, 0.23076923191547394, 0.27586206793785095, 0.1428571343421936, 0.4761904776096344, 0, 0, 0.06666666269302368, 0.07692307233810425, 0.17391303181648254, 0.0833333283662796, 0.1249999925494194, 0, 0, 0.1428571343421936, 0.07692307233810425, 0.14999999105930328, 0.25806450843811035, 0.07692307233810425 ]
SJlyta4YPS
true
[ "DNN and Encoder enhanced FM with bilinear attention and max-pooling for CTR" ]
[ "For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence.", "In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons.", "Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations -- are well calibrated.", "However, it turns out that such approaches fall short to capture complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches.", "This is because the used log-likelihood estimate discourages diversity.", "In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states.", "We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset.", "Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.", "The ability to anticipate future scene states which involves mapping one scene state to likely future states under uncertainty is key for autonomous agents to successfully operate in the real world e.g., to anticipate the movements of pedestrians and vehicles for autonomous vehicles.", "The future states of street scenes are inherently uncertain and the distribution of outcomes is often multi-modal.", "This is especially true for important classes like pedestrians.", "Recent works on anticipating street scenes BID13 BID9 BID23 do not systematically consider uncertainty.Bayesian inference provides a theoretically well founded approach to capture both model and observation uncertainty but with considerable computational overhead.", "A recently proposed approach BID6 BID10 uses dropout to represent the posterior distribution of models and capture model uncertainty.", "This approach has enabled Bayesian inference with deep neural networks without additional computational overhead.", "Moreover, it allows the use of any existing deep neural network architecture with minor changes.However, when the underlying data distribution is multimodal and the model set under consideration do not have explicit latent state/variables (as most popular deep deep neural network architectures), the approach of BID6 ; BID10 is unable to recover the true model uncertainty (see FIG0 and BID19 ).", "This is because this approach is known to conflate risk and uncertainty BID19 .", "This limits the accuracy of the models over a plain deterministic (non-Bayesian) approach.", "The main cause is the data log-likelihood maximization step during optimization -for every data point the average likelihood assigned by all models is maximized.", "This forces every model to explain every data point well, pushing every model in the distribution to the mean.", "We address this problem through an objective leveraging synthetic likelihoods BID26 BID21 which relaxes the constraint on every model to explain every data point, thus encouraging diversity in the learned models to deal with multi-modality.In this work:", "1. 
We develop the first Bayesian approach to anticipate the multi-modal future of street scenes and demonstrate state-of-the-art accuracy on the diverse Cityscapes dataset without compromising on calibrated probabilities,", "2. We propose a novel optimization scheme for dropout based Bayesian inference using synthetic likelihoods to encourage diversity and accurately capture model uncertainty,", "3. Finally, we show that our approach is not limited to street scenes and generalizes across diverse tasks such as digit generation and precipitation forecasting.", "We propose a novel approach for predicting real-world semantic segmentations into the future that casts a convolutional deep learning approach into a Bayesian formulation.", "One of the key contributions is a novel optimization scheme that uses synthetic likelihoods to encourage diversity and deal with multi-modal futures.", "Our proposed method shows state of the art performance in challenging street scenes.", "More importantly, we show that the probabilistic output of our deep learning architecture captures uncertainty and multi-modality inherent to this task.", "Furthermore, we show that the developed methodology goes beyond just street scene anticipation and creates new opportunities to enhance high performance deep learning architectures with principled formulations of Bayesian inference." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.12903225421905518, 0.35555556416511536, 0.04999999701976776, 0.07999999821186066, 0.1304347813129425, 0.21621620655059814, 0.12121211737394333, 0.1599999964237213, 0.1249999925494194, 0.07999999821186066, 0.2448979616165161, 0.11428570747375488, 0.19999998807907104, 0.12121211737394333, 0.2142857164144516, 0, 0.05405404791235924, 0.06666666269302368, 0.20000000298023224, 0.1904761791229248, 0.25641024112701416, 0.19999998807907104, 0.0555555522441864, 0.2631579041481018, 0, 0.1621621549129486, 0.30434781312942505 ]
rkgK3oC5Fm
true
[ "Dropout based Bayesian inference is extended to deal with multi-modality and is evaluated on scene anticipation tasks." ]
[ "Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision.", "The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise.", "The regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications.", "In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue.", "Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise.", "We prove that RoCGAN share similar theoretical properties as GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces.", "Image-to-image translation and more generally conditional image generation lie at the heart of computer vision.", "Conditional Generative Adversarial Networks (cGAN) (Mirza & Osindero, 2014) have become a dominant approach in the field, e.g. in dense 1 regression (Isola et al., 2017; Pathak et al., 2016; Ledig et al., 2017; BID1 Liu et al., 2017; Miyato & Koyama, 2018; Yu et al., 2018; Tulyakov et al., 2018) .", "They accept a source signal as input, e.g. prior information in the form of an image or text, and map it to the target signal (image).", "The mapping of cGAN does not constrain the output to the target manifold, thus the output can be arbitrarily off the target manifold (Vidal et al., 2017) .", "This is a critical problem both for academic and commercial applications.", "To utilize cGAN or similar methods as a production technology, we need to study their generalization even in the face of intense noise.Similarly to regression, classification also suffers from sensitivity to noise and lack of output constraints.", "One notable line of research consists in complementing supervision with unsupervised learning modules.", "The unsupervised module forms a new pathway that is trained with the same, or different data samples.", "The unsupervised pathway enables the network to explore the structure that is not present in the labelled training set, while implicitly constraining the output.", "The addition of the unsupervised module is only required during the training stage and results in no additional computational cost during inference.", "Rasmus et al. (2015) and Zhang et al. (2016) modified the original bottom-up (encoder) network to include top-down (decoder) modules during training.", "However, in dense regression both bottom-up and top-down modules exist by default, and such methods are thus not trivial to extend to regression tasks.Motivated by the combination of supervised and unsupervised pathways, we propose a novel conditional GAN which includes implicit constraints in the latent subspaces.", "We coin this new model 'Robust Conditional GAN' (RoCGAN).", "In the original cGAN the generator accepts a source signal and maps it to the target domain.", "In our work, we (implicitly) constrain the decoder to generate samples that span only the target manifold.", "We replace the original generator, i.e. 
encoder-decoder, with a two pathway module (see FIG0 ).", "The first pathway, similarly to the cGAN generator, performs regression while the second is an autoencoder in the target domain (unsupervised pathway).", "The two pathways share a similar network structure, i.e. each one includes an encoder-decoder network.", "The weights of the two decoders are shared which promotes the latent representations of the two pathways to be semantically similar.", "Intuitively, this can be thought of as constraining the output of our dense regression to span the target subspace.", "The unsupervised pathway enables the utilization of all the samples in the target domain even in the absence of a corresponding input sample.", "During inference, the unsupervised pathway is no longer required, therefore the testing complexity remains the same as in cGAN.", "(a) The source signal is embedded into a low-dimensional, latent subspace, which is then mapped to the target subspace.", "The lack of constraints might result in outcomes that are arbitrarily off the target manifold.", "(b) On the other hand, in RoCGAN, steps 1b and 2b learn an autoencoder in the target manifold and by sharing the weights of the decoder, we restrict the output of the regression (step 2a).", "All figures in this work are best viewed in color.In the following sections, we introduce our novel RoCGAN and study their (theoretical) properties.", "We prove that RoCGAN share similar theoretical properties with the original GAN, i.e. convergence and optimal discriminator.", "An experiment with synthetic data is designed to visualize the target subspaces and assess our intuition.", "We experimentally scrutinize the sensitivity of the hyper-parameters and evaluate our model in the face of intense noise.", "Moreover, thorough experimentation with both images from natural scenes and human faces is conducted in two different tasks.", "We compare our model with both the state-of-the-art cGAN and the recent method of Rick Chang et al. (2017) .", "The experimental results demonstrate that RoCGAN outperform the baseline by a large margin in all cases.Our contributions can be summarized as following:• We introduce RoCGAN that leverages structure in the target space.", "The goal is to promote robustness in dense regression tasks.•", "We scrutinize the model performance under (extreme) noise and adversarial perturbations.To the authors' knowledge, this robustness analysis has not been studied previously for dense regression.•", "We conduct a thorough experimental analysis for two different tasks. We", "outline how RoCGAN can be used in a semi-supervised learning task, how it performs with lateral connections from encoder to decoder.Notation: Given a set of N samples, s (n) denotes the n th conditional label, e.g. a prior image; y (n) denotes the respective target image. Unless", "explicitly mentioned otherwise || · || will declare an 1 norm. 
The symbols", "L * define loss terms, while λ * denote regularization hyper-parameters optimized on the validation set.", "We introduce the Robust Conditional GAN (RoCGAN) model, a new conditional GAN capable of leveraging unsupervised data to learn better latent representations, even in the face of large amount of noise.", "RoCGAN's generator is composed of two pathways.", "The first pathway (reg pathway), performs the regression from the source to the target domain.", "The new, added pathway (AE pathway) is an autoencoder in the target domain.", "By adding weight sharing between the two decoders, we implicitly constrain the reg pathway to output images that span the target manifold.", "In this following sections (of the appendix) we include additional insights, a theoretical analysis along with additional experiments.", "The sections are organized as following:• In sec. B we validate our intuition for the RoCGAN constraints through the linear equivalent.•", "A theoretical analysis is provided in sec. C.•", "We implement different networks in sec. D to assess whether the performance gain can be attributed to a single architecture.•", "An ablation study is conducted in sec. E comparing the hyper-parameter sensitivity and the robustness in the face of extreme noise.The FIG3 , 7, 8 include all the outputs of the synthetic experiment of the main paper. As", "a reminder, the output vector is [x + 2y + 4, e x + 1, x + y + 3, x + 2] with x, y ∈ [−1, 1]." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25, 0.08695651590824127, 0.17777776718139648, 0.44897958636283875, 0.3829787075519562, 0.1355932205915451, 0.14999999105930328, 0.0952380895614624, 0.23999999463558197, 0.1666666567325592, 0.0555555522441864, 0.16949151456356049, 0.21052631735801697, 0.2857142686843872, 0.260869562625885, 0.17777776718139648, 0.08888888359069824, 0.2461538463830948, 0.11764705181121826, 0.25, 0.1463414579629898, 0.24390242993831635, 0.17777776718139648, 0.04999999701976776, 0.1904761791229248, 0.1904761791229248, 0.3255814015865326, 0.1904761791229248, 0.23255813121795654, 0.19999998807907104, 0.19230768084526062, 0.1249999925494194, 0.1860465109348297, 0.19512194395065308, 0.19999998807907104, 0.09302324801683426, 0.1860465109348297, 0.29629629850387573, 0.1111111044883728, 0.07843136787414551, 0.11428570747375488, 0.23529411852359772, 0, 0.04999999701976776, 0.42307692766189575, 0.125, 0.21052631735801697, 0.2631579041481018, 0.17777776718139648, 0.1428571343421936, 0.04347825422883034, 0.05882352590560913, 0.2222222238779068, 0.1071428507566452, 0.1304347813129425 ]
Byg0DsCqYQ
true
[ "We introduce a new type of conditional GAN, which aims to leverage structure in the target space of the generator. We augment the generator with a new, unsupervised pathway to learn the target structure. " ]
[ "Though deep neural networks have achieved the state of the art performance in visual classification, recent studies have shown that they are all vulnerable to the attack of adversarial examples.", "To solve the problem, some regularization adversarial training methods, constraining the output label or logit, have been studied.", "In this paper, we propose a novel regularized adversarial training framework ATLPA,namely Adversarial Tolerant Logit Pairing with Attention.", "Instead of constraining a hard distribution (e.g., one-hot vectors or logit) in adversarial training, ATLPA uses Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.", "Specifically, in addition to minimizing the empirical loss, ATLPA encourages attention map for pairs of examples to be similar.", "When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.", "We evaluate ATLPA with the state of the art algorithms, the experiment results show that our method outperforms these baselines with higher accuracy.", "Compared with previous work, our work is evaluated under highly challenging PGD attack: the maximum perturbation $\\epsilon$ is 64 and 128 with 10 to 200 attack iterations.", "In recent years, deep neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms reported to achieve or even surpass the human performance (Krizhevsky et al., 2012; He et al., 2015; Li et al., 2019a) .", "Success of deep neural networks has led to an explosion in demand.", "Recent studies (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2016; Bose & Aarabi, 2018) have shown that they are all vulnerable to the attack of adversarial examples.", "Small and often imperceptible perturbations to the input images are sufficient to fool the most powerful deep neural networks.", "In order to solve this problem, many defence methods have been proposed, among which adversarial training is considered to be the most effective one .Adversarial", "training (Goodfellow et al., 2014; Madry et al., 2017; Kannan et al., 2018; Tramèr et al., 2017; Pang et al., 2019) defends against adversarial perturbations by training networks on adversarial images that are generated on-the-fly during training. Although aforementioned", "methods demonstrated the power of adversarial training in defence, we argue that we need to perform research on at least the following two aspects in order to further improve current defence methods.", "Strictness vs. Tolerant.", "Most existing defence methods only fit the outputs of adversarial examples to the one-hot vectors of clean examples counterparts.", "Kannan et al. 
(2018) also fit confidence distribution on the all logits of clean examples counterparts, they call it as Logits Pair.", "Despite its effectiveness, this is not necessarily the optimal target to fit, because except for maximizing the confidence score of the primary class (i.e., the ground-truth), allowing for some secondary classes (i.e., those visually similar ones to the ground-truth) to be preserved may help to alleviate the risk of over-fitting (Yang et al., 2018) .", "We fit Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.", "We believe that limited attention should be devoted to top-k classes of the confidence score, rather than strictly fitting the confidence distribution of all classes.", "A More Tolerant Teacher Educates Better Students.", "Process vs. Result.", "In Fig. 1 , we visualize the spatial attention map of a flower and its corresponding adversarial image on ResNet-101 (He et al., 2015) pretrained on ImageNet (Russakovsky et al., 2015) .", "The figure suggests that adversarial perturbations, while small in the pixel space, lead to very substantial noise in the attention map of the network.", "Whereas the features for the clean image appear to focus primarily on semantically informative content in the image, the attention map for the adversarial image are activated across semantically irrelevant regions as well.", "The state of the art adversarial training methods only encourage hard distribution of deep neural networks output (e.g., one-hot vectors (Madry et al., 2017; Tramèr et al., 2017) or logit (Kannan et al., 2018) ) for pairs of clean examples and adversarial counterparts to be similar.", "In our opinion, it is not enough to align the difference between the clean examples and adversarial counterparts only at the output layer of the network, and we need to align the attention maps of middle layers of the whole network, e.g.,o uter layer outputs of conv2.x, conv3.x, conv4.x, conv5.x in ResNet-101.", "We can't just focus on the result, but also on the process.", "(Russakovsky et al., 2015) .", "(a) is original image and", "(b) is corresponding adversarial image.For ResNet-101, which we use exclusively in this paper, we grouped filters into stages as described in (He et al., 2015) .", "These stages are conv2.x, conv3.x, conv4.x, conv5.x.", "The contributions of this paper are the following:", "• We propose a novel regularized adversarial training framework ATLPA : a method that uses Tolerant Logit and encourages attention map for pairs of examples to be similar.", "When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.", "Instead of constraining a hard distribution in adversarial training, Tolerant Logit consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.", "• We explain the reason why our ATLPA can improve the robustness of the model from three dimensions: average activations on discriminate parts, the diversity among learned features of different classes and trends of loss landscapes.", "• We show that our ATLPA achieves the state of the art defense on a wide range of datasets against strong PGD gray-box and black-box attacks.", "Compared with previous work, our work is evaluated under highly challenging PGD attack: the maximum perturbation ∈ {0.25, 0.5} i.e. 
L ∞ ∈ {0.25, 0.5} with 10 to 200 attack iterations.", "To our knowledge, such a strong attack has not been previously explored on a wide range of datasets.", "The rest of the paper is organized as follows: in Section 2 related works are summarized, in Section 3 definitions and threat models are introduced, in Section 4 our ATLPA is introduced, in Section 5 experimental results are presented and discussed, and finally in Section 6 the paper is concluded.", "2 RELATED WORK evaluate the robustness of nine papers (Buckman et al., 2018; Ma et al., 2018; Guo et al., 2017; Dhillon et al., 2018; Xie et al., 2017; Song et al., 2017; Samangouei et al., 2018; Madry et al., 2017; Na et al., 2017) accepted to ICLR 2018 as non-certified white-box-secure defenses to adversarial examples.", "They find that seven of the nine defenses use obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.Obfuscated gradients provide a limited increase in robustness and can be broken by improved attack techniques they develop.", "The only defense they observe that significantly increases robustness to adversarial examples within the threat model proposed is adversarial training (Madry et al., 2017) .", "Adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Kannan et al., 2018; Tramèr et al., 2017; Pang et al., 2019) defends against adversarial perturbations by training networks on adversarial images that are generated on-the-fly during training.", "For adversarial training, the most relevant work to our study is (Kannan et al., 2018) , which introduce a technique they call Adversarial Logit Pairing(ALP), a method that encourages logits for pairs of examples to be similar.", "(Engstrom et al., 2018; Mosbach et al., 2018 ) also put forward different opinions on the robustness of ALP.", "Our ATLPA encourages attention map for pairs of examples to be similar.", "When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.", "(Araujo et al., 2019) adds random noise at training and inference time, adds denoising blocks to the model to increase adversarial robustness, neither of the above approaches focuses on the attention map.", "Following (Pang et al., 2018; Yang et al., 2018; Pang et al., 2019) , we propose Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.", "In terms of methodologies, our work is also related to deep transfer learning and knowledge distillation problems, the most relevant work to our study are (Zagoruyko & Komodakis, 2016; Li et al., 2019b) , which constrain the L 2 -norm of the difference between their behaviors (i.e., the feature maps of outer layer outputs in the source/target networks).", "Our ATLPA constrains attention map for pairs of clean examples and their adversarial counterparts to be similar.", "In this paper, we propose a novel regularized adversarial training framework ATLPA a method that uses Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level, and encourages attention map for pairs of examples to be similar.", "We show that our ATLPA achieves the state of the art defense on a wide range of datasets against strong PGD gray-box and black-box attacks.", "We explain the reason why our ATLPA can improve the robustness of the model from three dimensions: 
average activations on discriminate parts, the diversity among learned features of different classes and trends of loss landscapes.", "The results of visualization and quantitative calculation show that our method is helpful to improve the robustness of the model." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.045454539358615875, 0.11428570747375488, 1, 0.1538461446762085, 0, 0.12121211737394333, 0.052631575614213943, 0.04651162400841713, 0.035087715834379196, 0, 0.0416666604578495, 0, 0.2380952388048172, 0.08695651590824127, 0.13333332538604736, 0.0952380895614624, 0.05882352590560913, 0, 0.0317460261285305, 0.10526315122842789, 0, 0.07999999821186066, 0, 0.17777776718139648, 0.051282044500112534, 0.045454539358615875, 0.06779660284519196, 0.09836065024137497, 0, 0, 0, 0.1860465109348297, 0, 0.07692307233810425, 0.4000000059604645, 0.12121211737394333, 0.1904761791229248, 0, 0.0476190410554409, 0.0416666604578495, 0.05714285373687744, 0, 0.038461532443761826, 0.06666666269302368, 0.0952380895614624, 0.13333332538604736, 0.15094339847564697, 0, 0, 0.12121211737394333, 0.08510638028383255, 0.1702127605676651, 0.029411761090159416, 0.05714285373687744, 0.4333333373069763, 0.04878048226237297, 0, 0 ]
HJx0Yn4FPB
true
[ "In this paper, we propose a novel regularized adversarial training framework ATLPA,namely Adversarial Tolerant Logit Pairing with Attention." ]
[ "A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances.", "In this work, we address one such setting which requires solving a task with a novel set of actions.", "Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks.", "Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning.", "Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action.", "We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy.", "We illustrate the generalizability of the representation learning method and policy, to enable zero-shot generalization to previously unseen actions on challenging sequential decision-making environments.", "Our results and videos can be found at sites.google.com/view/action-generalization/" ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.2222222238779068, 0.13333332538604736, 0.17142856121063232, 0.2222222238779068, 0.23529411852359772, 0.29411762952804565, 0.4117647111415863, 0 ]
rkx35lHKwB
false
[ "We address the problem of generalization of reinforcement learning to unseen action spaces." ]
[ "Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals.", "The standard way of learning in such models is by estimating the conditional intensity function. ", "However, parameterizing the intensity function usually incurs several trade-offs.", "We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. ", "We draw on the literature on normalizing flows to design models that are flexible and efficient.", "We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form. ", "The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data.", "Visits to hospitals, purchases in e-commerce systems, financial transactions, posts in social media -various forms of human activity can be represented as discrete events happening at irregular intervals.", "The framework of temporal point processes is a natural choice for modeling such data.", "By combining temporal point process models with deep learning, we can design algorithms able to learn complex behavior from real-world data.", "Designing such models, however, usually involves trade-offs along the following dimensions: flexibility (can the model approximate any distribution?), efficiency (can the likelihood function be evaluated in closed form?), and ease of use (is sampling and computing summary statistics easy?).", "Existing methods (Du et al., 2016; Mei & Eisner, 2017; Omi et al., 2019) that are defined in terms of the conditional intensity function typically fall short in at least one of these categories.", "Instead of modeling the intensity function, we suggest treating the problem of learning in temporal point processes as an instance of conditional density estimation.", "By using tools from neural density estimation (Bishop, 1994; Rezende & Mohamed, 2015) , we can develop methods that have all of the above properties.", "To summarize, our contributions are the following:", "• We connect the fields of temporal point processes and neural density estimation.", "We show how normalizing flows can be used to define flexible and theoretically sound models for learning in temporal point processes.", "• We propose a simple mixture model that performs on par with the state-of-the-art methods.", "Thanks to its simplicity, the model permits closed-form sampling and moment computation.", "• We show through a wide range of experiments how the proposed models can be used for prediction, conditional generation, sequence embedding and training with missing data.", "as a sequence of strictly positive inter-event times τ i = t i − t i−1 ∈ R + .", "Representations in terms of t i and τ i are isomorphic -we will use them interchangeably throughout the paper.", "The traditional way of specifying the dependency of the next arrival time t on the history H t = {t j ∈ T : t j < t} is using the conditional intensity function λ *", "(t) := λ(t|H t ).", "Here, the * symbol reminds us of dependence on H t .", "Given the conditional intensity function, we can obtain the conditional probability density function (PDF) of the time τ i until the next event by integration (Rasmussen, 2011) as p * (τ i ) := p(τ i |H ti ) = λ * (t i−1 + τ i ) exp − τi 0 λ * (t i−1 +", "s)ds .", "Learning temporal point 
processes.", "Conditional intensity functions provide a convenient way to specify point processes with a simple predefined behavior, such as self-exciting (Hawkes, 1971 ) and self-correcting (Isham & Westcott, 1979) processes.", "Intensity parametrization is also commonly used when learning a model from the data: Given a parametric intensity function λ * θ (t) and a sequence of observations T , the parameters θ can be estimated by maximizing the log-likelihood: θ * = arg max θ i log p * θ (τ i ) = arg max θ i log λ *", "The main challenge of such intensity-based approaches lies in choosing a good parametric form for λ", "Universal approximation.", "The SOSFlow and DSFlow models can approximate any probability density on R arbitrarily well (Jaini et al., 2019, Theorem 3), (Krueger et al., 2018, Theorem 4) .", "It turns out, a mixture model has the same universal approximation (UA) property.", "Theorem 1 (DasGupta, 2008, Theorem 33.2) .", "Let p(x) be a continuous density on R. If q(x) is any density on R and is also continuous, then, given ε > 0 and a compact set S ⊂ R, there exist number of components K ∈ N, mixture coefficients w ∈ ∆ K−1 , locations µ ∈ R K , and scales", "This results shows that, in principle, the mixture distribution is as expressive as the flow-based models.", "Since we are modeling the conditional density, we additionally need to assume for all of the above models that the RNN can encode all the relevant information into the history embedding h i .", "This can be accomplished by invoking the universal approximation theorems for RNNs (Siegelmann & Sontag, 1992; Schäfer & Zimmermann, 2006) .", "Note that this result, like other UA theorems of this kind (Cybenko, 1989; Daniels & Velikova, 2010) , does not provide any practical guarantees on the obtained approximation quality, and doesn't say how to learn the model parameters.", "Still, UA intuitively seems like a desirable property of a distribution.", "This intuition is supported by experimental results.", "In Section 5.1, we show that models with the UA property consistently outperform the less flexible ones.", "Interestingly, Theorem 1 does not make any assumptions about the form of the base density q(x).", "This means we could as well use a mixture of distribution other than log-normal.", "However, other popular distributions on R + have drawbacks: log-logistic does not always have defined moments and gamma distribution doesn't permit straightforward sampling with reparametrization.", "Intensity function.", "For both flow-based and mixture models, the conditional cumulative distribution function (CDF) F * (τ ) and the PDF p * (τ ) are readily available.", "This means we can easily compute the respective intensity functions (see Appendix A).", "However, we should still ask whether we lose anything by modeling p * (τ ) instead of λ * (t).", "The main arguments in favor of modeling the intensity function in traditional models (e.g. self-exciting process) are that it's intuitive, easy to specify and reusable (Upadhyay & Rodriguez, 2019) .", "\"Intensity function is intuitive, while the conditional density is not.\" -While it's true that in simple models (e.g. in self-exciting or self-correcting processes) the dependence of λ * (t) on the history is intuitive and interpretable, modern RNN-based intensity functions (as in Du et al. (2016) ; Mei & Eisner (2017); Omi et al. 
(2019) ) cannot be easily understood by humans.", "In this sense, our proposed models are as intuitive and interpretable as other existing intensity-based neural network models.", "\"λ * (t) is easy to specify, since it only has to be positive. On the other hand, p * (τ ) must integrate to one.\" -As we saw, by using either normalizing flows or a mixture distribution, we automatically enforce that the PDF integrates to one, without sacrificing the flexibility of our model.", "\"Reusability: If we merge two independent point processes with intensitites λ * 1 (t) and λ * 2 (t), the merged process has intensity λ * (t) = λ * 1 (t) + λ * 2 (t).\" -An equivalent result exists for the CDFs F * 1 (τ ) and F * 2 (τ ) of the two independent processes.", "The CDF of the merged process is obtained as", "2 (τ ) (derivation in Appendix A).", "As we just showed, modeling p * (τ ) instead of λ * (t) does not impose any limitation on our approach.", "Moreover, a mixture distribution is flexible, easy to sample from and has well-defined moments, which favorably compares it to other intensity-based deep learning models.", "We use tools from neural density estimation to design new models for learning in TPPs.", "We show that a simple mixture model is competitive with state-of-the-art normalizing flows methods, as well as convincingly outperforms other existing approaches.", "By looking at learning in TPPs from a different perspective, we were able to address the shortcomings of existing intensity-based approaches, such as insufficient flexibility, lack of closed-form likelihoods and inability to generate samples analytically.", "We hope this alternative viewpoint will inspire new developments in the field of TPPs.", "Constant intensity model as exponential distribution.", "The conditional intensity function of the constant intensity model (Upadhyay et al., 2018 ) is defined as λ", "H is the history embedding produced by an RNN, and b ∈ R is a learnable parameter.", "By setting c = exp(v T h i + b), it's easy to see that the PDF of the constant intensity model p * (τ ) = c exp(−c) corresponds to an exponential distribution.", "Summary The main idea of the approach by Omi et al. (2019) is to model the integrated conditional intensity function", "using a feedforward neural network with non-negative weights", "are non-negative weight matrices, and", "(3) ∈ R are the remaining model parameters.", "FullyNN as a normalizing flow Let z ∼ Exponential(1), that is", "We can view f : R + → R + as a transformation that maps τ to z", "We can now use the change of variables formula to obtain the conditional CDF and PDF of τ .", "Alternatively, we can obtain the conditional intensity as", "and use the fact that p", "Both approaches lead to the same conclusion", "However, the first approach also provides intuition on how to draw samplesτ from the resulting distribution p * (τ ) -an approach known as the inverse method (Rasmussen, 2011)", "1. Samplez ∼ Exponential(1)", "2. Obtainτ by solving f (τ ) −z = 0 for τ (using e.g. bisection method)", "Similarly to other flow-based models, sampling from the FullyNN model cannot be done exactly and requires a numerical approximation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.3571428656578064, 0.1904761791229248, 0.2666666507720947, 0.07407406717538834, 0.10810810327529907, 0.0555555522441864, 0.05128204822540283, 0.307692289352417, 0.12121211737394333, 0.08510638028383255, 0.1860465109348297, 0.4848484694957733, 0.05405404791235924, 0.10526315122842789, 0.3199999928474426, 0.24242423474788666, 0.07407406717538834, 0.0833333283662796, 0.10256409645080566, 0, 0.13333332538604736, 0.1463414579629898, 0, 0.08695651590824127, 0.15094339847564697, 0.375, 0.1538461446762085, 0.1111111044883728, 0.0714285671710968, 0, 0.07999999821186066, 0, 0, 0.1538461446762085, 0.20512820780277252, 0.12903225421905518, 0.0833333283662796, 0, 0.10526315122842789, 0.06896550953388214, 0.14814814925193787, 0, 0.0555555522441864, 0.12121211737394333, 0.1599999964237213, 0.13333332538604736, 0.19512194395065308, 0.1764705926179886, 0, 0.06666666269302368, 0.1702127605676651, 0.0952380895614624, 0.10526315122842789, 0.12121211737394333, 0, 0.07407406717538834, 0, 0.08888888359069824, 0.1538461446762085, 0.1111111044883728, 0.19999998807907104, 0.1428571343421936, 0.0952380895614624, 0.25806450843811035, 0, 0, 0.09999999403953552, 0, 0, 0.1428571343421936, 0.29999998211860657, 0.1111111044883728, 0.10526315122842789, 0.052631575614213943, 0, 0.06896550953388214, 0.06451612710952759 ]
HygOjhEYDH
true
[ "Learn in temporal point processes by modeling the conditional density, not the conditional intensity." ]
[ "We propose a novel yet simple neural network architecture for topic modelling.", "The method is based on training an autoencoder structure where the bottleneck represents the space of the topics distribution and the decoder outputs represent the space of the words distributions over the topics.", "We exploit an auxiliary decoder to prevent mode collapsing in our model. ", "A key feature for an effective topic modelling method is having sparse topics and words distributions, where there is a trade-off between the sparsity level of topics and words.", "This feature is implemented in our model by L-2 regularization and the model hyperparameters take care of the trade-off. ", "We show in our experiments that our model achieves competitive results compared to the state-of-the-art deep models for topic modelling, despite its simple architecture and training procedure.", "The “New York Times” and “20 Newsgroups” datasets are used in the experiments.\n\n" ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3333333432674408, 0, 0.10526315122842789, 0.25806450843811035, 0.0833333283662796, 0.25, 0 ]
BJluV5tjiQ
false
[ "A deep model for topic modelling" ]
[ "Voice Conversion (VC) is a task of converting perceived speaker identity from a source speaker to a particular target speaker.", "Earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs.", "Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning remains less explored areas in VC.", "Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices.", "In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training (i.e., case of zero-shot learning).", "In particular, propose Adaptive Generative Adversarial Network (AdaGAN), new architectural training procedure help in learning normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC.", "We compare our results with the state-of-the-art StarGAN-VC architecture.", "In particular, the AdaGAN achieves 31.73%, and 10.37% relative improvement compared to the StarGAN in MOS tests for speech quality and speaker similarity, respectively.", "The key strength of the proposed architectures is that it yields these results with less computational complexity.", "AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating Operation Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters. ", "Language is the core of civilization, and speech is the most powerful and natural form of communication.", "Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mechanism (Eriksson & Wretling (1997) ) and challenging concepts of prosodic transfer (Gomathi et al. (2012) ).", "In the literature, this is achieved using Voice Conversion (VC) technique (Stylianou (2009) ).", "Recently, VC has gained more attention due to its fascinating real-world applications in privacy and identity protection, military operations, generating new voices for animated and fictional movies, voice repair in medical-domain, voice assistants, etc.", "Voice Conversion (VC) technique converts source speaker's voice in such a way as if it were spoken by the target speaker.", "This is primarily achieved by modifying spectral and prosodic features while retaining the linguistic information in the given speech signal (Stylianou et al. (1998) ).", "In addition, Voice cloning is one of the closely related task to VC (Arik et al. 
(2018) ).", "However, in this research work we only focus to advance the Voice Conversion.", "With the emergence of deep learning techniques, VC has become more efficient.", "Deep learningbased techniques have made remarkable progress in parallel VC.", "However, it is difficult to get parallel data, and such data needs alignment (which is a arduous process) to get better results.", "Building a VC system from non-parallel data is highly challenging, at the same time valuable for practical application scenarios.", "Recently, many deep learning-based style transfer algorithms have been applied for non-parallel VC task.", "Hence, this problem can be formulated as a style transfer problem, where one speaker's style is converted into another while preserving the linguistic content as it is.", "In particular, Conditional Variational AutoEncoders (CVAEs), Generative Adversarial Networks (GANs) (proposed by Goodfellow et al. (2014) ), and its variants have gained significant attention in non-parallel VC.", "However, it is known that the training task for GAN is hard, and the convergence property of GAN is fragile (Salimans et al. (2016) ).", "There is no substantial evidence that the gen-erated speech is perceptually good.", "Moreover, CVAEs alone do not guarantee distribution matching and suffers from the issue of over smoothing of the converted features.", "Although, there are few GAN-based systems that produced state-of-the-art results for non-parallel VC.", "Among these algorithms, even fewer can be applied for many-to-many VC tasks.", "At last, there is the only system available for zero-shot VC proposed by Qian et al. (2019) .", "Zero-shot conversion is a technique to convert source speaker's voice into an unseen target speaker's speaker via looking at a few utterances of that speaker.", "As known, solutions to a challenging problem comes with trade-offs.", "Despite the results, architectures have become more complex, which is not desirable in real-world scenarios because the quality of algorithms or architectures is also measured by the training time and computational complexity of learning trainable parameters ).", "Motivated by this, we propose computationally less expensive Adaptive GAN (AdaGAN), a new style transfer framework, and a new architectural training procedure that we apply to the GAN-based framework.", "In AdaGAN, the generator encapsulates Adaptive Instance Normalization (AdaIN) for style transfer, and the discriminator is responsible for adversarial training.", "Recently, StarGAN-VC (proposed by Kameoka et al. (2018) ) is a state-of-the-art method among all the GAN-based frameworks for non-parallel many-to-many VC.", "AdaGAN is also GAN-based framework.", "Therefore, we compare AdaGAN with StarGAN-VC for non-parallel many-to-many VC in terms of naturalness, speaker similarity, and computational complexity.", "We observe that AdaGAN yields state-of-the-art results for this with almost 88.6% less computational complexity.", "Recently proposed AutoVC (by Qian et al. 
(2019) ) is the only framework for zero-shot VC.", "Inspired by this, we propose AdaGAN for zero-shot VC as an independent study, which is the first GAN-based framework to perform zeroshot VC.", "We reported initial results for zero-shot VC using AdaGAN.The main contributions of this work are as follows:", "• We introduce the concept of latent representation based many-to-many VC using GAN for the first time in literature.", "• We show that in the latent space content of the speech can be represented as the distribution and the properties of this distribution will represent the speaking style of the speaker.", "• Although AdaGAN has much lesser computation complexity, AdaGAN shows much better results in terms of naturalness and speaker similarity compared to the baseline.", "In this paper, we proposed novel AdaGAN primarily for non-parallel many-to-many VC task.", "Moreover, we analyzed our proposed architecture w.r.t. current GAN-based state-of-the-art StarGAN-VC method for the same task.", "We know that the main aim of VC is to convert the source speaker's voice into the target speaker's voice while preserving linguistic content.", "To achieve this, we have used the style transfer algorithm along with the adversarial training.", "AdaGAN transfers the style of the target speaker into the voice of a source speaker without using any feature-based mapping between the linguistic content of the source speaker's speech.", "For this task, AdaGAN uses only one generator and one discriminator, which leads to less complexity.", "AdaGAN is almost 88.6% computationally less complex than the StarGAN-VC.", "We have performed subjective analysis on the VCTK corpus to show the efficiency of the proposed method.", "We can clearly see that AdaGAN gives superior results in the subjective evaluations compared to StarGAN-VC.", "Motivated by the work of AutoVC, we also extended the concept of AdaGAN for the zero-shot conversion as an independent study and reported results.", "AdaGAN is the first GAN-based framework for zero-shot VC.", "In the future, we plan to explore high-quality vocoders, namely, WaveNet, for further improvement in voice quality.", "The perceptual difference observed between the estimated and the ground truth indicates the need for exploring better objective function that can perceptually optimize the network parameters of GAN-based architectures, which also forms our immediate future work.", "At τ → ∞, the assumptions that made in Section 5.1 are true.", "Hence, from eq. 
(18), we can conclude that there exists a latent space where normalized latent representation of input features will be the same irrespective of speaking style.", "Theorem 2: By optimization of min En,De L C X→Y + L sty X→Y , the assumptions made in Theorem 1 can be satisfied.", "Proof: Our objective function is the following:", "Iterate step by step to calculate the term (t 2 ) used in loss function L sty X→Y .", "Consider, we have the latent representations S x1 and S y1 corresponding to the source and target speech, respectively.", "Step 1: S x1 (τ ) − µ 1 (τ ) σ 1 (τ ) σ 2 (τ ) + µ 2 (τ ) (Representation of t 1 ),", "Step 2&3: En De S x1 (τ ) − µ 1 (τ ) σ 1 (τ ) σ 2 (τ ) + µ 2 (τ ) .", "After applying decoder and encoder sequentially on latent representation, we will again get back to the same representation.", "This is ensured by the loss function L C X→Y .", "Formally, we want to make L C X→Y → 0.", "Therefore, we can write step 4 as:", "Step 4: S x1 (τ ) − µ 1 (τ ) σ 1 (τ ) σ 2 (τ ) + µ 2 (τ ) (i.e., reconstructed t 1 ),", "Step 5: 1σ 2 (τ ) S x1 (τ ) − µ 1 (τ ) σ 1 (τ ) ¨σ 2 (τ ) +¨μ 2 (τ ) −¨μ 2 (τ ) (Normalization with its own (i.e., latent representation in Step 4) µ and σ during AdaIN ),", "Step 6: S x1 (τ ) − µ 1 (τ ) σ 1 (τ ) (Final output of Step 5),", "Step 7: S x1 (τ ) − µ 1 (τ ) σ 1 (τ ) σ 1 (τ ) + µ 1 (τ ) (Output after de-normalization in AdaIN . Representation of t 2 ), where µ 1 and σ 1 are the mean and standard deviations of the another input source speech, x 2 .", "Now, using the mathematical representation of t 2 , we can write loss function L sty X→Y as:", "According to eq. (19), we want to minimize the loss function L sty X→Y .", "Formally, L sty X→Y → 0.", "Therefore, we will get µ 1 = µ 1 , and σ 1 = σ 1 to achieve our goal.", "Hence, mean and standard deviation of the same speaker are constant, and different for different speakers irrespective of the linguistic content.", "We come to the conclusion that our loss function satisfies the necessary constraints (assumptions) required in proof of Theorem 1." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.25, 0.17142856121063232, 0.07692307233810425, 0.04081632196903229, 0, 0.10256409645080566, 0, 0.10810810327529907, 0.0714285671710968, 0.038461536169052124, 0, 0.1304347813129425, 0, 0.05128204822540283, 0.060606054961681366, 0, 0.07407406717538834, 0.1599999964237213, 0.11764705181121826, 0.11764705181121826, 0.13793103396892548, 0, 0.0952380895614624, 0.1666666567325592, 0, 0.060606054961681366, 0.1428571343421936, 0.2222222238779068, 0.1875, 0, 0, 0.04255318641662598, 0.1463414579629898, 0.12121211737394333, 0.1621621549129486, 0.09999999403953552, 0.23529411852359772, 0.06451612710952759, 0.25806450843811035, 0.21621620655059814, 0.1818181723356247, 0.3030303120613098, 0.05128204822540283, 0.05405404791235924, 0.2142857164144516, 0.060606054961681366, 0.05714285373687744, 0, 0, 0.06666666269302368, 0, 0, 0, 0.1666666567325592, 0.3333333432674408, 0.0624999962747097, 0.0833333283662796, 0, 0, 0, 0, 0, 0.06451612710952759, 0, 0, 0.060606054961681366, 0, 0, 0, 0, 0.04444444179534912, 0, 0.0416666641831398, 0, 0, 0, 0.06896550953388214, 0.1249999925494194, 0 ]
HJlk-eHFwH
true
[ "Novel adaptive instance normalization based GAN framework for non parallel many-to-many and zero-shot VC. " ]
[ "Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks.", "Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context.", "To tackle the problem, we propose a novel model called Sparse Transformer.", "Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments.", "Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance. \n ", "Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation.", "In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance.", "Understanding natural language requires the ability to pay attention to the most relevant information.", "For example, people tend to focus on the most relevant segments to search for the answers to their questions in mind during reading.", "However, retrieving problems may occur if irrelevant segments impose negative impacts on reading comprehension.", "Such distraction hinders the understanding process, which calls for an effective attention.", "This principle is also applicable to the computation systems for natural language.", "Attention has been a vital component of the models for natural language understanding and natural language generation.", "Recently, Vaswani et al. (2017) proposed Transformer, a model based on the attention mechanism for Neural Machine Translation(NMT).", "Transformer has shown outstanding performance in natural language generation tasks.", "More recently, the success of BERT (Devlin et al., 2018) in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer.", "However, the attention in vanilla Transformer has a obvious drawback, as the Transformer assigns credits to all components of the context.", "This causes a lack of focus.", "As illustrated in Figure 1 , the attention in vanilla Transformer assigns high credits to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant k words.", "For the word \"tim\", the most related words should be \"heart\" and the immediate words.", "Yet the attention in vanilla Transformer does not focus on them but gives credits to some irrelevant words such as \"him\".", "Recent works have studied applying sparse attention in Transformer model.", "However, they either add local attention constraints (Child et al., 2019) which break long term dependency or hurt the time efficiency (Martins & Astudillo, 2016) .", "Inspired by Ke et al. 
(2018) which introduce sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer which is equipped with our sparse attention mechanism.", "We implement an explicit selection method based on top-k selection.", "Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the k most contributive states.", "Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer.", "Figure 1 : Illustration of self-attention in the models.", "The orange bar denotes the attention score of our proposed model while the blue bar denotes the attention scores of the vanilla Transformer.", "The orange line denotes the attention between the target word \"tim\" and the selected top-k positions in the sequence.", "In the attention of vanilla Transformer, \"tim\" assigns too many non-zero attention scores to the irrelevant words.", "But for the proposal, the top-k largest attention scores removes the distraction from irrelevant words and the attention becomes concentrated.", "We first validate our methods on three tasks.", "For further investigation, we compare our methods with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses.", "We are surprised to find that the proposed sparse attention method can also help with training as a regularization method.", "Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing a high-quality alignment.", "The contributions of this paper are presented below:", "• We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer's attention through explicit selection.", "• We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling.", "Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performances in the above three tasks.", "Specifically, our model reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation.", "• Compared to previous sparse attention methods for transformers, our methods are much faster in training and testing, and achieves better results.", "In this section, we performed several analyses for further discussion of Explicit Sparse Transformer.", "First, we compare the proposed method of topk selection before softmax with previous sparse attention method including various variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019) .", "Second, we discuss about the selection of the value of k.", "Third, we demonstrate that the top-k sparse attention method helps training.", "In the end, we conducted a series of qualitative analyses to visualize proposed sparse attention in Transformer." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14999999105930328, 0.2222222238779068, 0.21621620655059814, 0.800000011920929, 0.2222222238779068, 0.1538461446762085, 0.15789473056793213, 0.2702702581882477, 0.31111109256744385, 0.10256409645080566, 0.21621620655059814, 0.21621620655059814, 0.14999999105930328, 0.1860465109348297, 0.05714285373687744, 0.16326530277729034, 0.2790697515010834, 0.12903225421905518, 0.30188679695129395, 0.10810810327529907, 0.21739129722118378, 0.11428570747375488, 0.07843136787414551, 0.2181818187236786, 0.23529411852359772, 0.29999998211860657, 0.1666666567325592, 0.11764705181121826, 0.19512194395065308, 0.1463414579629898, 0.19999998807907104, 0.1463414579629898, 0.060606058686971664, 0.12244897335767746, 0.13636362552642822, 0.09999999403953552, 0.060606058686971664, 0.4000000059604645, 0.04444443807005882, 0.14999999105930328, 0.05405404791235924, 0.13333332538604736, 0.20512820780277252, 0.145454540848732, 0.1764705777168274, 0.1111111044883728, 0.2380952388048172 ]
Hye87grYDH
true
[ "This work propose Sparse Transformer to improve the concentration of attention on the global context through an explicit selection of the most relevant segments for sequence to sequence learning. " ]
[ "Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge.", "We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence.", "We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data.", "When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%.", "We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset.", "ResNet trained on CPC ResNet trained on pixels With decreasing amounts of labeled data, supervised networks trained on pixels fail to generalize (red).", "When trained on unsupervised representations learned with CPC, these networks retain a much higher accuracy in this low-data regime (blue).", "Equivalently, the accuracy of supervised networks can be matched with significantly fewer labels.", "Deep neural networks excel at perceptual tasks when labeled data are abundant, yet their performance degrades substantially when provided with limited supervision (Fig. 1, red ).", "In contrast, humans and animals can quickly learn about new classes of objects from few examples (Landau et al., 1988; Markman, 1989) .", "What accounts for this monumental difference in data-efficiency between biological and machine vision?", "While highly-structured representations (e.g. as proposed by Lake et al., 2015) may improve data-efficiency, it remains unclear how to program explicit structures that capture the enormous complexity of real visual scenes like those in ImageNet (Russakovsky et al., 2015) .", "An alternative hypothesis has proposed that intelligent systems need not be structured a priori, but can instead learn about the structure of the world in an unsupervised manner (Barlow, 1989; Hinton et al., 1999; LeCun et al., 2015) .", "Choosing an appropriate training objective is an open problem, but a promising guiding principle has emerged recently: good representations should make the spatio-temporal variability in natural signals more predictable.", "Indeed, human perceptual representations have been shown to linearize (or 'straighten') the temporal transformations found in natural videos, a property lacking from current supervised image recognition models (Hénaff et al., 2019) , and theories of both spatial and temporal predictability have succeeded in describing properties of early visual areas (Rao & Ballard, 1999; Palmer et al., 2015) .", "In this work, we hypothesize that spatially predictable representations may allow artificial systems to benefit from human-like data-efficiency.", "Contrastive Predictive Coding (CPC, van den Oord et al., 2018) is an unsupervised objective which learns such predictable representations.", "CPC is a general technique that only requires in its definition that observations be ordered along e.g. 
temporal or spatial dimensions, and as such has been applied to a variety of different modalities including speech, natural language and images.", "This generality, combined with the strong performance of its representations in downstream linear classification tasks, makes CPC a promising candidate for investigating the efficacy of predictable representations for data-efficient image recognition.", "Our work makes the following contributions:", "• We revisit CPC in terms of its architecture and training methodology, and arrive at a new implementation of CPC with dramatically-improved ability to linearly separate image classes (+17% Top-1 ImageNet classification accuracy).", "• We then train deep networks on top of the resulting CPC representations using very few labeled images (e.g. 1% of the ImageNet dataset), and demonstrate test-time classification accuracy far above networks trained on raw pixels (73% Top-5 accuracy, a 28% absolute improvement), outperforming all other unsupervised representation learning methods (+15% Top-5 accuracy over the previous state-of-the-art ).", "Surprisingly, this representation also surpasses supervised methods when given the entire ImageNet dataset (+1% Top-5 accuracy).", "• We isolate the contributions of different components of the final model to such downstream tasks.", "Interestingly, we find that linear classification accuracy is not always predictive of low-data classification accuracy, emphasizing the importance of this metric as a stand-alone benchmark for unsupervised learning.", "• Finally, we assess the generality of CPC representations by transferring them to a new task and dataset: object detection on PASCAL-VOC 2007.", "Consistent with the results from the previous section, we find CPC to give state-of-the-art performance in this setting.", "We asked whether CPC could enable data-efficient image recognition, and found that it indeed greatly improves the accuracy of classifiers and object detectors when given small amounts of labeled data.", "Surprisingly, CPC even improves results given ImageNet-scale labels.", "Our results show that there is still room for improvement using relatively straightforward changes such as augmentation, optimization, and network architecture.", "Furthermore, we found that the standard method for evaluating unsupervised representations-linear classification-is only partially predictive of efficient recognition performance, suggesting that further research should focus on efficient recognition as a standalone benchmark.", "Overall, these results open the door toward research on problems where data is naturally limited, e.g. medical imaging or robotics.", "image detection accuracy to other transfer methods.", "The supervised baseline learns from the entire labeled ImageNet dataset and fine-tunes for PASCAL detection.", "The second class of methods learns from the same unlabeled images before transferring.", "All of these methods pre-train on the ImageNet dataset, except for DeeperCluster which learns from the larger, but uncurated, YFCC100M dataset (Thomee et al., 2015) .", "All results are reported in terms of mean average precision (mAP).", "† denotes methods implemented in this work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05714285373687744, 0.11764705181121826, 0.1111111044883728, 0.0833333283662796, 0.09999999403953552, 0, 0.19354838132858276, 0.0833333283662796, 0.0555555522441864, 0, 0, 0.04081632196903229, 0, 0.05128204822540283, 0.06557376682758331, 0.06896550953388214, 0.25806450843811035, 0, 0.2631579041481018, 0, 0.1463414579629898, 0.0634920597076416, 0, 0, 0.054054051637649536, 0.05882352590560913, 0.0714285671710968, 0.1538461446762085, 0, 0, 0, 0, 0.1111111044883728, 0, 0, 0, 0, 0 ]
rJerHlrYwH
true
[ "Unsupervised representations learned with Contrastive Predictive Coding enable data-efficient image classification." ]
[ "We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels.", "We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently.", "Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer.", "Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights.", "We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms.", "Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training.", "In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network.", "Current state-of-the-art neural networks need extensive computational resources to be trained and can have capacities of close to one billion connections between neurons (Vaswani et al., 2017; Devlin et al., 2018; Child et al., 2019) .", "One solution that nature found to improve neural network scaling is to use sparsity: the more neurons a brain has, the fewer connections neurons make with each other (Herculano-Houzel et al., 2010) .", "Similarly, for deep neural networks, it has been shown that sparse weight configurations exist which train faster and achieve the same errors as dense networks .", "However, currently, these sparse configurations are found by starting from a dense network, which is pruned and re-trained repeatedly -an expensive procedure.", "In this work, we demonstrate the possibility of training sparse networks that rival the performance of their dense counterparts with a single training run -no re-training is required.", "We start with random initializations and maintain sparse weights throughout training while also speeding up the overall training time.", "We achieve this by developing sparse momentum, an algorithm which uses the exponentially smoothed gradient of network weights (momentum) as a measure of persistent errors to identify which layers are most efficient at reducing the error and which missing connections between neurons would reduce the error the most.", "Sparse momentum follows a cycle of (1) pruning weights with small magnitude, (2) redistributing weights across layers according to the mean momentum magnitude of existing weights, and (3) growing new weights to fill in missing connections which have the highest momentum magnitude.", "We compare the performance of sparse momentum to compression algorithms and recent methods that maintain sparse weights throughout training.", "We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet-1k.", "For CIFAR-10, we determine the percentage of weights needed to reach dense performance levels and find that AlexNet, VGG16, and Wide Residual Networks need between 35-50%, 5-10%, and 20-30% weights to reach dense performance levels.", "We also estimate the overall speedups of training our sparse convolutional networks to dense performance levels on CIFAR-10 for optimal sparse convolution algorithms and naive dense convolution algorithms compared to dense baselines.", "For sparse convolution, we estimate speedups between 2.74x and 5.61x 
and for dense convolution speedups between 1.07x and 1.36x.", "In our analysis, ablations demonstrate that the momentum redistribution and growth components are increasingly important as networks get deeper and larger in size - both are critical for good ImageNet performance." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3928571343421936, 0.2142857164144516, 0.2978723347187042, 0.35555556416511536, 0.2142857164144516, 0.4313725531101227, 0.23999999463558197, 0.158730149269104, 0.12903225421905518, 0.21052631735801697, 0.18518517911434174, 0.31578946113586426, 0.4000000059604645, 0.1944444328546524, 0.3030303120613098, 0.4000000059604645, 0.1428571343421936, 0.33898305892944336, 0.3448275923728943, 0.1599999964237213, 0.19999998807907104 ]
ByeSYa4KPS
true
[ "Redistributing and growing weights according to the momentum magnitude enables the training of sparse networks from random initializations that can reach dense performance levels with 5% to 50% weights while accelerating training by up to 5.6x." ]
[ "To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions.", "We introduce interesting aspects for understanding the local minima and overall structure of the loss surface.", "The parameter domain of the loss surface can be decomposed into regions in which activation values (zero or one for rectified linear units) are consistent.", "We found that, in each region, the loss surface have properties similar to that of linear neural networks where every local minimum is a global minimum.", "This means that every differentiable local minimum is the global minimum of the corresponding region.", "We prove that for a neural network with one hidden layer using rectified linear units under realistic assumptions.", "There are poor regions that lead to poor local minima, and we explain why such regions exist even in the overparameterized DNNs.", "Deep Neural Networks (DNNs) have achieved state-of-the-art performances in computer vision, natural language processing, and other areas of machine learning .", "One of the most promising features of DNNs is its significant expressive power.", "The expressiveness of DNNs even surpass shallow networks as a network with few layers need exponential number of nodes to have similar expressive power (Telgarsky, 2016) .", "The DNNs are getting even deeper after the vanishing gradient problem has been solved by using rectified linear units (ReLUs) BID12 .", "Nowadays, RELU has become the most popular activation function for hidden layers.", "Leveraging this kind of activation functions, depth of DNNs has increased to more than 100 layers BID7 .Another", "problem of training DNNs is that parameters can encounter pathological curvatures of the loss surfaces prolonging training time. Some of", "the pathological curvatures such as narrow valleys would cause unnecessary vibrations. To avoid", "these obstacles, various optimization methods were introduced (Tieleman & Hinton, 2012; BID9 . These methods", "utilize the first and second order moments of the gradients to preserve the historical trends. The gradient", "descent methods also have a problem of getting stuck in a poor local minimum. The poor local", "minima do exist (Swirszcz et al., 2016) in DNNs, but recent works showed that errors at the local minima are as low as that of global minima with high probability BID4 BID2 BID8 BID14 Soudry & Hoffer, 2017) .In case of linear", "DNNs in which activation function does not exist, every local minimum is a global minimum and other critical points are saddle points BID8 . Although these beneficial", "properties do not hold in general DNNs, we conjecture that it holds in each region of parameters where the activation values for each data point are the same as shown in FIG0 . We prove this for a simple", "network. The activation values of a", "node can be different between data points as shown in FIG0 , so it is hard to apply proof techniques used for linear DNNs. The whole parameter space", "is a disjoint union of these regions, so we call it loss surface decomposition.Using the concepts of loss surface decomposition, we explain why poor local minima do exist even in large networks. There are poor local minima", "where gradient flow disappears when using the ReLU (Swirszcz et al., 2016) . We introduce another kind of", "poor local minima where the loss is same as that of linear regression. 
To be more general, we prove", "that for each local minimum in a network, there exists a local minimum of the same loss in the larger network that is constructed by adding a node to that network. DISPLAYFORM0 T . In each region", ", activation values", "are the same. There are six nonempty regions. The", "parameters on the boundaries hit", "the non-differentiable point of the rectified linear unit.", "We conjecture that the loss surface is a disjoint union of activation regions where every local minimum is a subglobal minimum.", "Using the concept of loss surface decomposition, we studied the existence of poor local minima and experimentally investigated losses of subglobal minima.", "However, the structure of non-differentiable local minima is not yet well understood yet.", "These non-differentiable points exist within the boundaries of the activation regions which can be obstacles when using gradient descent methods.", "Further work is needed to extend knowledge about the local minima, activation regions, their boundaries.", "Let θ ∈ R A be a differentiable point, so it is not in the boundaries of the activation regions.", "This implies that w T j x i + b j = 0 for all parameters.", "Without loss of generality, we assume w T j x i + b j < 0.", "Then there exist > 0 such that w T j x i + b j + < 0.", "This implies that small changes in the parameters for any direction does not change the activation region.", "Since L f (θ) and L g A (θ) are equivalent in the region R A , the local curvatures of these two function around the θ are also the same.", "Thus, the θ is a local minimum (saddle point) in L f (θ) if and only if it is a local minimum (saddle point) in L g A (θ).", "DISPLAYFORM0 is a linear transformation of p j , q j , and c, the DISPLAYFORM1 2 is convex in terms of p j , q j , and c.", "Summation of convex functions is convex, so the lemma holds.A.3", "PROOF OF THEOREM 2.5(1) Assume that activation values are not all zeros, and then consider the following Hessian matrix evaluated from v j and b j for some non-zero activation values a ij > 0: DISPLAYFORM2 Let v j = 0 and b j = 0, then two eigenvalues of the Hessian matrix are as follows: DISPLAYFORM3 There exist c > 0 such that g A (x i , θ) > y i for all", "i. If we choose such c, then DISPLAYFORM4 ∂vj ∂bj > 0 which implies that two eigenvalues are positive and negative.", "Since the Hessian matrix is not positive semidefinite nor negative semidefinite, the function L g A (θ) is non-convex and non-concave.(2", ", 3) We", "organize some of the gradients as follows: DISPLAYFORM5 We select a critical point θ * where ∇ wj L g A (θ * ) = 0, ∇ vj L g A (θ * ) = 0, ∇ bj L g A (θ * ) = 0, and ∇ c L g A (θ * ) = 0 for all j.", "Case 1) Assume that ∇ pj L g A (θ * ) = 0 and ∇ qj L g A (θ * ) = 0 for all j.", "These points are global minima, since ∇ c L g A (θ * ) = 0 and L g A (θ) is convex in terms of p j , q j , and c.Case 2) Assume that there exist j such that ∇ pj L g A (θ DISPLAYFORM6 There exist an element w * in w j such that ∇ vj ∇ w * L g A (θ * ) = 0. Consider", "a Hessian matrix evaluated from w * and v j . Analogous", "to the proof of (1), this matrix is not positive semidefinite nor negative semidefinite. Thus θ *", "is a saddle point.Case 3) Assume that there exist j such that ∇ qj L g A (θ * ) = 0. Since ∇", "bj L g A (θ * ) = v j ∇ qj L g A (θ * ) = 0, the v j is zero. Analogous", "to the Case 2, a Hessian matrix evaluated from b j and v j is not positive semidefinite nor negative semidefinite. 
Thus θ *", "is a saddle point.As a result, every critical point is a global minimum or a saddle point. Since L", "g A (θ) is a differentiable function, every local minimum is a critical point. Thus every", "local minimum is a global minimum." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23255813121795654, 0.29411762952804565, 0.27272728085517883, 0.5909090638160706, 0.5625, 0.10810810327529907, 0.1538461446762085, 0.051282044500112534, 0.19354838132858276, 0.1818181723356247, 0.09999999403953552, 0.06451612710952759, 0.0555555522441864, 0.2222222238779068, 0.0624999962747097, 0, 0.1764705777168274, 0.3030303120613098, 0.14035087823867798, 0.2790697515010834, 0.19230768084526062, 0.23999999463558197, 0.08510638028383255, 0.4000000059604645, 0.1621621549129486, 0.31578946113586426, 0.35555556416511536, 0, 0.2222222238779068, 0.0833333283662796, 0.1538461446762085, 0.7027027010917664, 0.2702702581882477, 0.25806450843811035, 0.15789473056793213, 0.1764705777168274, 0.2631579041481018, 0, 0.11764705181121826, 0, 0.11428570747375488, 0.1860465109348297, 0.2631579041481018, 0.21052631735801697, 0.19354838132858276, 0.08219178020954132, 0, 0.10256409645080566, 0, 0.1538461446762085, 0, 0.09836065024137497, 0.06666666269302368, 0.17142856121063232, 0.0952380895614624, 0.1111111044883728, 0.1463414579629898, 0.3125, 0.3125, 0.4166666567325592 ]
SJDYgPgCZ
true
[ "The loss surface of neural networks is a disjoint union of regions where every local minimum is a global minimum of the corresponding region." ]
[ "Contextualized word representations, such as ELMo and BERT, were shown to perform well on a various of semantic and structural (syntactic) task.", "In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information.", "To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors.", "We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.", "Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in few-shot parsing setting.", "Human language 1 is a complex system, involving an intricate interplay between meaning (semantics) and structural rules between words and phrases (syntax).", "Self-supervised neural sequence models for text trained with a language modeling objective, such as ELMo (Peters et al., 2018) , BERT (Devlin et al., 2019) , and RoBERTA (Liu et al., 2019b) , were shown to produce representations that excel in recovering both structure-related information (Gulordava et al., 2018; van Schijndel & Linzen; Wilcox et al., 2018; Goldberg, 2019) as well as in semantic information (Yang et al., 2019; Joshi et al., 2019) .", "In this work, we study the problem of disentangling structure from semantics in neural language representations: we aim to extract representations that capture the structural function of words and sentences, but which are not sensitive to their content.", "For example, consider the sentences:", "We aim to learn a function from contextualized word representations to a space that exposes these similarities.", "Crucially, we aim to do this in an unsupervised manner: we do not want to inform the process of the kind of structural information we want to obtain.", "We do this by learning a transformation that attempts to remove the lexical-semantic information in a sentence, while trying to preserve structural properties.", "Disentangling syntax from lexical semantics in word representations is a desired property for several reasons.", "From a purely scientific perspective, once disentanglement is achieved, one can better control for confounding factors and analyze the knowledge the model acquires, e.g. 
attributing the predictions of the model to one factor of variation while controlling for the other.", "In addition to explaining model predictions, such disentanglement can be useful for the comparison of the representations the model acquires to linguistic knowledge.", "From a more practical perspective, disentanglement can be a first step toward controlled generation/paraphrasing that considers only aspects of the structure, akin to the style-transfer works in computer vision, i.e., rewriting a sentence while preserving its structural properties while ignoring its meaning, or vice-versa.", "It can also inform search-based applications in which one can search for \"similar\" texts while controlling various aspects of the desired similarity.", "To achieve this goal, we begin with the intuition that the structural component in the representation (capturing the form) should remain the same regardless of the lexical semantics of the sentence (the meaning).", "Rather than beginning with a parsed corpus, we automatically generate a large number of structurally-similar sentences, without presupposing their formal structure ( §3.1).", "This allows us to pose the disentanglement problem as a metric-learning problem: we aim to learn a transformation of the contextualized representation, which is invariant to changes in the lexical semantics within each group of structurally-similar sentences ( §3.3).", "We demonstrate the structural properties captured by the resulting representations in several experiments ( §4), among them automatic identification of structurally-similar words and few-shot parsing.", "In this work, we propose an unsupervised method for the distillation of structural information from neural contextualized word representations.", "We used a process of sequential BERT-based substitution to create a large number of sentences which are structurally similar, but semantically different.", "By controlling for one aspect - structure - while changing the other - lexical choice, we learn a metric (via triplet loss) under which pairs of words that come from structurally-similar sentences are close in space.", "We demonstrated that the representations acquired by this method share structural properties with their neighbors in space, and show that with minimal supervision, those representations outperform ELMo in the task of few-shot parsing.", "The method presented here is a first step towards a better disentanglement between various kinds of information that is represented in neural sequence models.", "The method used to create the structurally equivalent sentences can be useful on its own for other goals, such as augmenting parse-tree banks (which are often scarce and require large resources to annotate).", "In future work, we aim to extend this method to allow for a softer alignment between structurally-equivalent sentences.", "Table 4: Results in the closest-word queries, before and after the application of the syntactic transformation.", "\"Baseline\" refers to unmodified vectors derived from BERT, and \"Transformed\" refers to the vectors after the learned syntactic transformation f.", "\"Difficult\" refers to evaluation on the subset of POS tags which are most structurally diverse." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.09090908616781235, 0, 0.14814814925193787, 0.12903225421905518, 0.06451612710952759, 0.1269841194152832, 0.08888888359069824, 0, 0.1538461446762085, 0.06451612710952759, 0.1875, 0.23076923191547394, 0.045454543083906174, 0.13333332538604736, 0, 0.0624999962747097, 0, 0, 0, 0.17142856121063232, 0.19999998807907104, 0.04878048226237297, 0.09090908616781235, 0.14999999105930328, 0.060606054961681366, 0.09302325546741486, 0.06896550953388214, 0, 0, 0 ]
HJlRFlHFPS
true
[ "We distill language models representations for syntax by unsupervised metric learning" ]
[ "We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.", "The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations.", "The graph stores no metric information, only connectivity of locations corresponding to the nodes.", "We use SPTM as a planning module in a navigation system.", "Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals.", "The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three.", "Deep learning (DL) has recently been used as an efficient approach to learning navigation in complex three-dimensional environments.", "DL-based approaches to navigation can be broadly divided into three classes: purely reactive BID49 , based on unstructured general-purpose memory such as LSTM BID33 BID31 , and employing a navigation-specific memory structure based on a metric map BID36 .However", ", extensive evidence from psychology suggests that when traversing environments, animals do not rely strongly on metric representations BID16 BID47 BID13 . Rather", ", animals employ a range of specialized navigation strategies of increasing complexity. According", "to BID13 , one such strategy is landmark navigation -\"the ability to orient with respect to a known object\". Another is", "route-based navigation that \"involves remembering specific sequences of positions\". Finally, map-based", "navigation assumes a \"survey knowledge of the environmental layout\", but the map need not be metric and in fact it is typically not: \"[. . .] humans do not integrate experience on specific routes into a metric cognitive map for navigation [. . .] Rather, they primarily depend on a landmark-based navigation strategy, which can be supported by qualitative topological knowledge of the environment.\"In this paper, we propose semi-parametric topological memory (SPTM) -a deep-learning-based memory architecture for navigation, inspired by landmark-based navigation in animals. SPTM consists of two", "components: a non-parametric memory graph G where each node corresponds to a location in the environment, and a parametric deep network R capable of retrieving nodes from the graph based on observations. The graph contains no", "metric relations between the nodes, only connectivity information. While exploring the environment", ", the agent builds the graph by appending observations to it and adding shortcut connections based on detected visual similarities. The network R is trained to retrieve", "nodes from the graph based on an observation of the environment. This allows the agent to localize itself", "in the graph. Finally, we build a complete SPTM-based", "navigation agent by complementing the memory with a locomotion network L, which allows the agent to move between nodes in the graph. The R and L networks are trained in self-supervised", "fashion, without any manual labeling or reward signal.We evaluate the proposed system and relevant baselines on the task of goal-directed maze navigation in simulated three-dimensional environments. 
The agent is instantiated in a previously unseen maze", "and given a recording of a walk through the maze (images only, no information about actions taken or ego-motion). Then the agent is initialized at a new location in the", "maze and has to reach a goal location in the maze, given an image of that goal. To be successful at this task, the agent must represent", "the maze based on the footage it has seen, and effectively utilize this representation for navigation.The proposed system outperforms baseline approaches by a large margin. Given 5 minutes of maze walkthrough footage, the system", "is able to build an internal representation of the environment and use it to confidently navigate to various goals within the maze. The average success rate of the SPTM agent in goal-directed", "navigation across test environments is higher than the best-performing baseline by a factor of three. Qualitative results and an implementation of the method are", "available at https://sites.google.com/view/SPTM.", "We have proposed semi-parametric topological memory (SPTM), a memory architecture that consists of a non-parametric component -a topological graph, and a parametric component -a deep network capable of retrieving nodes from the graph given observations from the environment.", "We have shown that SPTM can act as a planning module in a navigation system.", "This navigation agent can efficiently reach goals in a previously unseen environment after being presented with only 5 minutes of footage.", "We see several avenues for future work.", "First, improving the performance of the networks R and L will directly improve the overall quality of the system.", "Second, while the current system explicitly avoids using ego-motion information, findings from experimental psychology suggest that noisy ego-motion estimation and path integration are useful for navigation.", "Incorporating these into our model can further improve robustness.", "Third, in our current system the size of the memory grows linearly with the duration of the exploration period.", "This may become problematic when navigating in very large environments, or in lifelong learning scenarios.", "A possible solution is adaptive subsampling, by only retaining the most informative or discriminative observations in memory.", "Finally, it would be interesting to integrate SPTM into a system that is trainable end-to-end.SUPPLEMENTARY MATERIAL S1 METHOD DETAILS S1.1 NETWORK ARCHITECTURESThe retrieval network R and the locomotion network L are both based on ResNet-18 BID19 .", "Both take 160×120 pixel images as inputs.", "The networks are initialized as proposed by BID19 .", "We used an open ResNet implementation: https://github.com/raghakot/ keras-resnet/blob/master/resnet.py.The network R admits two observations as input.", "Each of these is processed by a convolutional ResNet-18 encoder.", "Each of the encoders produces a 512-dimensional embedding vector.", "These are concatenated and fed through a fully-connected network with 4 hidden layers with 512 units each and ReLU nonlinearities.The network L also admits two observations, but in contrast with the network R it processes them jointly, after concatenating them together.", "A convolutional ResNet-18 encoder is followed by a single fully-connected layer with 7 outputs and a softmax.", "The 7 outputs correspond to all available actions: do nothing, move forward, move backward, move left, move right, turn left, and turn right." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.13333332538604736, 0, 0.307692289352417, 0.1818181723356247, 0.20512819290161133, 0.12121211737394333, 0.11999999731779099, 0.10526315122842789, 0.2142857164144516, 0.11764705181121826, 0.07407406717538834, 0.24390242993831635, 0.12765957415103912, 0, 0.04878048226237297, 0, 0.1599999964237213, 0.23255813121795654, 0.2448979616165161, 0.1428571343421936, 0.09756097197532654, 0.1702127605676651, 0.04651162400841713, 0.15789473056793213, 0, 0.17777776718139648, 0.2666666507720947, 0.2702702581882477, 0.17391303181648254, 0, 0.09756097197532654, 0, 0.12903225421905518, 0.13333332538604736, 0.1818181723356247, 0.038461532443761826, 0, 0.0833333283662796, 0.05882352590560913, 0.1538461446762085, 0.07999999821186066, 0.07692307233810425, 0.1249999925494194, 0 ]
SygwwGbRW
true
[ "We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals." ]
[ "The available resolution in our visual world is extremely high, if not infinite.", "Existing CNNs can be applied in a fully convolutional way to images of arbitrary resolution, but as the size of the input increases, they can not capture contextual information.", "In addition, computational requirements scale linearly to the number of input pixels, and resources are allocated uniformly across the input, no matter how informative different image regions are.", "We attempt to address these problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it uses a hard attention mechanism to selectively process only the most informative image parts.", "We conduct experiments on MNIST and ImageNet datasets, and we show that our models can significantly outperform fully convolutional counterparts, when the resolution of the input is that big that the receptive field of the baselines can not adequately cover the objects of interest.", "Gains in performance come for less FLOPs, because of the selective processing that we follow.", "Furthermore, our attention mechanism makes our predictions more interpretable, and creates a trade-off between accuracy and complexity that can be tuned both during training and testing time.", "Our visual world is very rich, and there is information of interest in an almost infinite number of different scales.", "As a result, we would like our models to be able to process images of arbitrary resolution, in order to capture visual information with arbitrary level of detail.", "This is possible with existing CNN architectures, since we can use fully convolutional processing (Long et al. (2015) ), coupled with global pooling.", "However, global pooling ignores the spatial configuration of feature maps, and the output essentially becomes a bag of features 1 .", "To demonstrate why this an important problem, in Figure 1", "(a) and", "(b) we provide an example of a simple CNN that is processing an image in two different resolutions.", "In", "(a) we see that the receptive field of neurons from the second layer suffices to cover half of the kid's body, while in", "(b) the receptive field of the same neurons cover area that corresponds to the size of a foot.", "This shows that as the input size increases, the final representation becomes a bag of increasingly more local features, leading to the absence of coarselevel information, and potentially harming performance.", "We call this phenomenon the receptive field problem of fully convolutional processing.", "An additional problem is that computational resources are allocated uniformly to all image regions, no matter how important they are for the task at hand.", "For example, in Figure 1 (b), the same amount of computation is dedicated to process both the left half of the image that contains the kid, and the right half that is merely background.", "We also have to consider that computational complexity scales linearly with the number of input pixels, and as a result, the bigger the size of the input, the more resources are wasted on processing uninformative regions.", "We attempt to resolve the aforementioned problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way.", "The receptive field problem of fully convolutional processing.", "A simple CNN consisted of 2 convolutional layers (colored green), followed by a global pooling layer (colored red), processes an image in two different 
resolutions.", "The shaded regions indicate the receptive fields of neurons from different layers.", "As the resolution of the input increases, the final latent representation becomes a bag of increasingly more local features, lacking coarse information.", "(c) A sketch of our proposed architecture.", "The arrows on the left side of the image demonstrate how we focus on image sub-regions in our top-down traversal, while the arrows on the right show how we combine the extracted features in a bottom-up fashion.", "In Figure 1 (c) we provide a simplified sketch of our approach.", "We start at level 1, where we process the input image in low resolution, to get a coarse description of its content.", "The extracted features (red cube) are used to select out of a predefined grid, the image regions that are worth processing in higher resolution.", "This process constitutes a hard attention mechanism, and the arrows on the left side of the image show how we extend processing to 2 additional levels.", "All extracted features are combined together as denoted by the arrows on the right, to create the final image representation that is used for classification (blue cube).", "We evaluate our model on synthetic variations of MNIST (LeCun et al., 1998 ) and on ImageNet (Deng et al., 2009 ), while we compare it against fully convolutional baselines.", "We show that when the resolution of the input is that big, that the receptive field of the baseline 2 covers a relatively small portion of the object of interest, our network performs significantly better.", "We attribute this behavior to the ability of our model to capture both contextual and local information by extracting features from different pyramid levels, while the baselines suffer from the receptive field problem.", "Gains in accuracy are achieved for less floating point operations (FLOPs) compared to the baselines, due to the attention mechanism that we use.", "If we increase the number of attended image locations, computational requirements increase, but the probability of making a correct prediction is expected to increase as well.", "This is a trade-off between accuracy and computational complexity, that can be tuned during training through regularization, and during testing by stopping processing on early levels.", "Finally, by inspecting attended regions, we are able to get insights about the image parts that our networks value the most, and to interpret the causes of missclassifications.", "We proposed a novel architecture that is able to process images of arbitrary resolution without sacrificing spatial information, as it typically happens with fully convolutional processing.", "This is achieved by approaching feature extraction as a top-down image pyramid traversal, that combines information from multiple different scales.", "The employed attention mechanism allows us to adjust the computational requirements of our models, by changing the number of locations they attend.", "This way we can exploit the existing trade-off between computational complexity and accuracy.", "Furthermore, by inspecting the image regions that our models attend, we are able to get important insights about the causes of their decisions.", "Finally, there are multiple future research directions that we would like to explore.", "These include the improvement of the localization capabilities of our attention mechanism, and the application of our model to the problem of budgeted batch classification.", "In addition, we would like our feature extraction process to become 
more adaptive, by allowing already extracted features to affect the processing of image regions that are attended later on.", "In Figure 8, we provide the parsing tree that our model implicitly creates." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0555555522441864, 0.16326530277729034, 0.16326530277729034, 0.6545454263687134, 0.10526315122842789, 0.15789473056793213, 0.08510638028383255, 0.09756097197532654, 0.08510638028383255, 0, 0.09756097197532654, 0.12121211737394333, 0.25, 0.1860465109348297, 0.15789473056793213, 0.11999999731779099, 0.11428570747375488, 0.12765957415103912, 0.16326530277729034, 0.18518517911434174, 0.8461538553237915, 0, 0.1702127605676651, 0.11428570747375488, 0.0952380895614624, 0.06666666269302368, 0.2448979616165161, 0.05714285373687744, 0.2222222238779068, 0.260869562625885, 0.12765957415103912, 0.1249999925494194, 0.11764705181121826, 0.16326530277729034, 0.1538461446762085, 0.13636362552642822, 0.1304347813129425, 0.08510638028383255, 0.1249999925494194, 0.2448979616165161, 0.23255813121795654, 0.04651162400841713, 0.1111111044883728, 0.17777776718139648, 0.0555555522441864, 0.04878048226237297, 0.1538461446762085, 0.11428570747375488 ]
rkeH6AEYvr
true
[ "We propose a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way." ]
[ "Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters.", "We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases.", "We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer.", "We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism.", "We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function.", "Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities.", "Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values.", "Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.", "We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs).", "Regularization hyperparameters such as weight decay, data augmentation, and dropout (Srivastava et al., 2014) are crucial to the generalization of neural networks, but are difficult to tune.", "Popular approaches to hyperparameter optimization include grid search, random search BID3 , and Bayesian optimization (Snoek et al., 2012) .", "These approaches work well with low-dimensional hyperparameter spaces and ample computational resources; however, they pose hyperparameter optimization as a black-box optimization problem, ignoring structure which can be exploited for faster convergence, and require many training runs.We can formulate hyperparameter optimization as a bilevel optimization problem.", "Let w denote parameters (e.g. weights and biases) and λ denote hyperparameters (e.g. dropout probability).", "Let L T and L V be functions mapping parameters and hyperparameters to training and validation losses, respectively.", "We aim to solve 1 : DISPLAYFORM0 Substituting the best-response function w * (λ) = arg min w L T (λ, w) gives a single-level problem: DISPLAYFORM1 If the best-response w * is known, the validation loss can be minimized directly by gradient descent using Equation 2, offering dramatic speed-ups over black-box methods.", "However, as the solution to a high-dimensional optimization problem, it is difficult to compute w * even approximately.Following Lorraine & Duvenaud (2018) , we propose to approximate the best-response w * directly with a parametric functionŵ φ .", "We jointly optimize φ and λ, first updating φ so thatŵ φ ≈ w * in a neighborhood around the current hyperparameters, then updating λ by usingŵ φ as a proxy for w * in Eq. 
2: DISPLAYFORM2 Finding a scalable approximationŵ φ when w represents the weights of a neural network is a significant challenge, as even simple implementations entail significant memory overhead.", "We show how to construct a compact approximation by modelling the best-response of each row in a layer's weight matrix/bias as a rank-one affine transformation of the hyperparameters.", "We show that this can be interpreted as computing the activations of a base network in the usual fashion, plus a correction term dependent on the hyperparameters.", "We justify this approximation by showing the exact best-response for a shallow linear network with L 2 -regularized Jacobian follows a similar structure.", "We call our proposed networks Self-Tuning Networks (STNs) since they update their own hyperparameters online during training.STNs enjoy many advantages over other hyperparameter optimization methods.", "First, they are easy to implement by replacing existing modules in deep learning libraries with \"hyper\" counterparts which accept an additional vector of hyperparameters as input 2 .", "Second, because the hyperparameters are adapted online, we ensure that computational effort expended to fit φ around previous hyperparameters is not wasted.", "In addition, this online adaption yields hyperparameter schedules which we find empirically to outperform fixed hyperparameter settings.", "Finally, the STN training algorithm does not require differentiating the training loss with respect to the hyperparameters, unlike other gradient-based approaches (Maclaurin et al., 2015; Larsen et al., 1996) , allowing us to tune discrete hyperparameters, such as the number of holes to cut out of an image BID12 , data-augmentation hyperparameters, and discrete-noise dropout parameters.", "Empirically, we evaluate the performance of STNs on large-scale deep-learning problems with the Penn Treebank (Marcus et al., 1993) and CIFAR-10 datasets (Krizhevsky & Hinton, 2009) , and find that they substantially outperform baseline methods.", "We introduced Self-Tuning Networks (STNs), which efficiently approximate the best-response of parameters to hyperparameters by scaling and shifting their hidden units.", "This allowed us to use gradient-based optimization to tune various regularization hyperparameters, including discrete hyperparameters.", "We showed that STNs discover hyperparameter schedules that can outperform fixed hyperparameters.", "We validated the approach on large-scale problems and showed that STNs achieve better generalization performance than competing approaches, in less time.", "We believe STNs offer a compelling path towards large-scale, automated hyperparameter tuning for neural networks." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.2702702581882477, 0.1463414579629898, 0.10526315122842789, 0.15789473056793213, 0.14999999105930328, 0, 0, 0.06666666269302368, 0.09756097197532654, 0.12121211737394333, 0.11538460850715637, 0.1428571343421936, 0.13333332538604736, 0.09677419066429138, 0.08510638028383255, 0.190476194024086, 0.1538461446762085, 0.10256409645080566, 0.10810810327529907, 0.04878048226237297, 0.0476190447807312, 0.0555555522441864, 0.06451612710952759, 0.10169491171836853, 0.0416666641831398, 0.1666666567325592, 0.20689654350280762, 0.07692307233810425, 0.1111111044883728, 0.13333332538604736 ]
r1eEG20qKQ
true
[ "We use a hypernetwork to predict optimal weights given hyperparameters, and jointly train everything together." ]
[ "Conditional Generative Adversarial Networks (cGANs) are finding increasingly widespread use in many application domains.", "Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity.", "In this setting, model benchmarking becomes a challenge, as each metric may indicate a different \"best\" model.", "In this paper, we propose the Frechet Joint Distance (FJD), which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric.", "We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics.", "Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities (e.g. class labels, object masks, bounding boxes, images, and text captions).", "We show that FJD can be used as a promising single metric for model benchmarking.", "The use of generative models is growing across many domains (van den Oord et al., 2016c; Vondrick et al., 2016; Serban et al., 2017; Karras et al., 2018; Brock et al., 2019) .", "Among the most promising approaches, Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) , auto-regressive models (van den Oord et al., 2016a; b) , and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been driving significant progress, with the latter at the forefront of a wide-range of applications (Mirza & Osindero, 2014; Reed et al., 2016; Zhang et al., 2018a; Vondrick et al., 2016; Almahairi et al., 2018; Subramanian et al., 2018; Salvador et al., 2019) .", "In particular, significant research has emerged from practical applications, which require generation to be based on existing context.", "For example, tasks such as image inpainting, super-resolution, or text-to-image synthesis have been successfully addressed within the framework of conditional generation, with conditional GANs (cGANs) among the most competitive approaches.", "Despite these outstanding advances, quantitative evaluation of GANs remains a challenge (Theis et al., 2016; Borji, 2018) .", "In the last few years, a significant number of evaluation metrics for GANs have been introduced in the literature (Salimans et al., 2016; Heusel et al., 2017; Bińkowski et al., 2018; Shmelkov et al., 2018; Zhou et al., 2019; Kynkäänniemi et al., 2019; Ravuri & Vinyals, 2019) .", "Although there is no clear consensus on which quantitative metric is most appropriate to benchmark GAN-based models, Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017) have been extensively used.", "However, both IS and FID were introduced in the context of unconditional image generation and, hence, focus on capturing certain desirable properties such as visual quality and sample diversity, which do not fully encapsulate all the different phenomena that arise during conditional image generation.", "In conditional generation, we care about visual quality, conditional consistency -i.e., verifying that the generation respects its conditioning, and intra-conditioning diversity -i.e., sample diversity per conditioning.", "Although visual quality is captured by both metrics, IS is agnostic to intra-conditioning diversity and FID only captures 
it indirectly.", "1 Moreover, neither of them can capture conditional con-sistency.", "In order to overcome these shortcomings, researchers have resorted to reporting conditional consistency and diversity metrics in conjunction with FID Park et al., 2019) .", "Consistency metrics often use some form of concept detector to ensure that the requested conditioning appears in the generated image as expected.", "Although intuitive to use, these metrics require pretrained models that cover the same target concepts in the same format as the conditioning (i.e., classifiers for image-level class conditioning, semantic segmentation for mask conditioning, etc.), which may or may not be available off-the-shelf.", "Moreover, using different metrics to evaluate different desirable properties may hinder the process of model selection, as there may not be a single model that surpasses the rest in all measures.", "In fact, it has recently been demonstrated that there is a natural trade-off between image quality and sample diversity (Yang et al., 2019) , which calls into question how we might select the correct balance of these properties.", "In this paper we introduce a new metric called Fréchet Joint Distance (FJD), which is able to implicitly assess image quality, conditional consistency, and intra-conditioning diversity.", "FJD computes the Fréchet distance on an embedding of the joint image-conditioning distribution, and introduces only small computational overhead over FID compared to alternative methods.", "We evaluate the properties of FJD on a variant of the synthetic dSprite dataset (Matthey et al., 2017) and verify that it successfully captures the desired properties.", "We provide an analysis on the behavior of both FID and FJD under different types of conditioning such as class labels, bounding boxes, and object masks, and evaluate a variety of existing cGAN models for real-world datasets with the newly introduced metric.", "Our experiments show that (1) FJD captures the three highlighted properties of conditional generation; (2) it can be applied to any kind of conditioning (e.g., class, bounding box, mask, image, text, etc.); and (3) when applied to existing cGAN-based models, FJD demonstrates its potential to be used as a promising unified metric for hyper-parameter selection and cGAN benchmarking.", "To our knowledge, there are no existing metrics for conditional generation that capture all of these key properties.", "In this paper we introduce Fréchet Joint Distance (FJD), which is able to assess image quality, conditional consistency, and intra-conditioning diversity within a single metric.", "We compare FJD to FID on the synthetic dSprite-textures dataset, validating its ability to capture the three properties of interest across different types of conditioning, and highlighting its potential to be adopted as a unified cGAN benchmarking metric.", "We also demonstrate how FJD can be used to address the potentially ambiguous trade-off between image quality and sample diversity when performing model selection.", "Looking forward, FJD could serve as valuable metric to ground future research, as it has the potential to help elucidate the most promising contributions within the scope of conditional generation.", "In this section, we illustrate the claim made in Section 1 that FID cannot capture intra-conditioning diversity when the joint distribution of two variables changes but the marginal distribution of one of them is not altered.", "Consider two multivariate Gaussian distributions, (X 1 , Y 
1 ) ∼ N (0, Σ 1 ) and (X 2 , Y 2 ) ∼ N (0, Σ 2 ), where the two covariance matrices differ but induce the same marginal distribution for Y i . If we let X i take the role of the embedding of the conditioning variables (e.g., position) and Y i take the role of the embedding of the generated variables (i.e., images), then computing FID in this example would correspond to computing the FD between f Y1 and f Y2 , which is zero.", "On the other hand, computing FJD would correspond to the FD between f X1,Y1 and f X2,Y2 , which equals 0.678.", "But note that Dist1 and Dist2 have different degrees of intra-conditioning diversity, as illustrated by Figure 5 (right), where two histograms of f Yi|Xi∈(0.9,1.1) are displayed, showing marked differences to each other (similar plots can be constructed for other values of X i ).", "Therefore, this example illustrates a situation in which FID is unable to capture changes in intra-conditioning diversity, while FJD is able to do so." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.30434781312942505, 0.11428570747375488, 0.22641508281230927, 0.0952380895614624, 0.1599999964237213, 0.34285715222358704, 0, 0.054054051637649536, 0, 0.1249999925494194, 0.10526315122842789, 0.145454540848732, 0.07407406717538834, 0.1666666567325592, 0.2666666507720947, 0.20512819290161133, 0.06896550953388214, 0.1818181723356247, 0.1463414579629898, 0.10169491171836853, 0.1702127605676651, 0.17241378128528595, 0.43478259444236755, 0.045454539358615875, 0.22727271914482117, 0.17543859779834747, 0.18918918073177338, 0.15789473056793213, 0.4444444477558136, 0.15094339847564697, 0.1818181723356247, 0.08695651590824127, 0.15686273574829102, 0.054054051637649536, 0.04999999329447746, 0.1269841194152832, 0.1463414579629898 ]
rylxpA4YwH
true
[ "We propose a new metric for evaluating conditional GANs that captures image quality, conditional consistency, and intra-conditioning diversity in a single measure." ]
[ "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL.", "On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori.", "However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning.", "To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions.", "TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values.", "We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network.", "Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree.", "We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games.", "Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.", "A promising approach to improving model-free deep reinforcement learning (RL) is to combine it with on-line planning.", "The model-free value function can be viewed as a rough global estimate which is then locally refined on the fly for the current state by the on-line planner.", "Crucially, this does not require new samples from the environment but only additional computation, which is often available.One strategy for on-line planning is to use look-ahead tree search BID12 BID2 .", "Traditionally, such methods have been limited to domains where perfect environment simulators are available, such as board or card games BID4 BID24 .", "However, in general, models for complex environments with high dimensional observation spaces and complex dynamics must be learned from agent experience.", "Unfortunately, to date, it has proven difficult to learn models for such domains with sufficient fidelity to realise the benefits of look-ahead planning BID17 BID29 .A", "simple approach to learning environment models is to maximise a similarity metric between model predictions and ground truth in the observation space. This", "approach has been applied with some success in cases where model fidelity is less important, e.g., for improving exploration BID3 BID17 . However", ", this objective causes significant model capacity to be devoted to predicting irrelevant aspects of the environment dynamics, such as noisy backgrounds, at the expense of value-critical features that may occupy only a small part of the observation space (Pathak et al., Since the transition model is only weakly grounded in the actual environment, our approach can alternatively be viewed as a model-free method in which the fully connected layers of DQN are replaced by a recursive network that applies transition functions with shared parameters at each tree node expansion.The resulting architecture, which we call TreeQN, encodes an inductive bias based on the prior knowledge that the environment is a stationary Markov process, which facilitates faster learning of better policies. 
We also", "present an actor-critic variant, ATreeC, in which the tree is augmented with a softmax layer and used as a policy network.We show that TreeQN and ATreeC outperform their DQN-based counterparts in a box-pushing domain and a suite of Atari games, with deeper trees often outperforming shallower trees, and TreeQN outperforming VPN BID18 on most Atari games. We also", "present ablation studies investigating various auxiliary losses for grounding the transition model more strongly in the environment, which could improve performance as well as lead to interpretable internal plans. While we", "show that grounding the reward function is valuable, we conclude that how to learn strongly grounded transition models and generate reliably interpretable plans without compromising performance remains an open research question.", "In this section, we present our experimental results for TreeQN and ATreeC.7.1 GROUNDING FIG4 shows the result of a hyperparameter search on η r and η s , the coefficients of the auxiliary losses on the predicted rewards and latent states.", "An intermediate value of η r helps performance but there is no benefit to using the latent space loss.", "Subsequent experiments use η r = 1 and η s = 0.The predicted rewards that the reward-grounding objective encourages the model to learn appear both in its own Q-value prediction and in the target for n-step Q-learning.", "Consequently, we expect this auxiliary loss to be well aligned with the true objective.", "By contrast, the state-grounding loss (and other potential auxiliary losses) might help representation learning but would not explicitly learn any part of the desired target.", "It is possible that this mismatch between the auxiliary and primary objective leads to degraded performance when using this form of state grounding.", "One potential route to overcoming this obstacle to joint training would be pre-training a model, as done by BID34 .", "Inside TreeQN this model could then be fine-tuned to perform well inside the planner.", "We leave this possiblity to future work.", "FIG3 shows the results of TreeQN with tree depths 1, 2, and 3, compared to a DQN baseline.", "In this domain, there is a clear advantage for the TreeQN architecture over DQN.", "TreeQN learns policies that are substantially better at avoiding obstacles and lining boxes up with goals so they can be easily pushed in later.", "TreeQN also substantially speeds up learning.", "We believe that the greater structure brought by our architecture regularises the model, encouraging appropriate state representations to be learned quickly.", "Even a depth-1 tree improves performance significantly, as disentangling the estimation of rewards and next-state values makes them easier to learn.", "This is further facilitated by the sharing of value-function parameters across branches.", "We presented TreeQN and ATreeC, new architectures for deep reinforcement learning in discreteaction domains that integrate differentiable on-line tree planning into the action-value function or policy.", "Experiments on a box-pushing domain and a set of Atari games show the benefit of these architectures over their counterparts, as well as over VPN.", "In future work, we intend to investigate enabling more efficient optimisation of deeper trees, encouraging the transition functions to produce interpretable plans, and integrating smart exploration." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.2666666507720947, 0.09302324801683426, 0.1666666567325592, 0.18518517911434174, 0.1538461446762085, 0.2222222238779068, 0.22727271914482117, 0.15094339847564697, 0.1860465109348297, 0.2380952388048172, 0.1538461446762085, 0.2142857164144516, 0.08510638028383255, 0.1304347813129425, 0.1599999964237213, 0.1666666567325592, 0.07999999821186066, 0.09917355328798294, 0.27397260069847107, 0.145454540848732, 0.1428571343421936, 0.16393442451953888, 0.04444443807005882, 0.17241378128528595, 0.04999999701976776, 0.07999999821186066, 0.1249999925494194, 0, 0.09999999403953552, 0.060606058686971664, 0.1818181723356247, 0.14999999105930328, 0.1599999964237213, 0.125, 0.1304347813129425, 0.12765957415103912, 0.052631575614213943, 0.9230769276618958, 0.12765957415103912, 0.07843136787414551 ]
H1dh6Ax0Z
true
[ "We present TreeQN and ATreeC, new architectures for deep reinforcement learning in discrete-action domains that integrate differentiable on-line tree planning into the action-value function or policy." ]
[ "Multi-label classification (MLC) is the task of assigning a set of target labels for a given sample.", "Modeling the combinatorial label interactions in MLC has been a long-haul challenge.", "Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC.", "However, the sequential nature of modeling label dependencies through an RNN limits its ability in parallel computation, predicting dense labels, and providing interpretable results.", "In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC.", "MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. ", "The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (can be used on known or unknown structured data).", "Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time.", "Multi-label classification (MLC) is receiving increasing attention in tasks such as text categorization and image classification.", "Accurate and scalable MLC methods are in urgent need for applications like assigning topics to web articles, classifying objects in an image, or identifying binding proteins on DNA.", "The most common and straightforward MLC method is the binary relevance (BR) approach that considers multiple target labels independently BID0 .", "However, in many MLC tasks there is a clear dependency structure among labels, which BR methods ignore.", "Accordingly, probabilistic classifier chain (PCC) models were proposed to model label dependencies and formulate MLC in an autoregressive sequential prediction manner BID1 .", "One notable work in the PCC category was from which implemented a classifier chain using a recurrent neural network (RNN) based sequence to sequence (Seq2Seq) architecture, Seq2Seq MLC.", "This model uses an encoder RNN encoding elements of an input sequence, a decoder RNN predicting output labels one after another, and beam search that computes the probability of the next T predictions of labels and then chooses the proposal with the max combined probability.However, the main drawback of classifier chain models is that their inherently sequential nature precludes parallelization during training and inference.", "This can be detrimental when there are a large number of positive labels as the classifier chain has to sequentially predict each label, and often requires beam search to obtain the optimal set.", "Aside from time-cost disadvantages, PCC methods have several other drawbacks.", "First, PCC methods require a defined ordering of labels for the sequential prediction, but MLC output labels are an unordered set, and the chosen order can lead to prediction instability .", "Secondly, even if the optimal ordering is known, PCC methods struggle to accurately capture long-range dependencies among labels in cases where the number of positive labels is large (i.e., dense labels).", "For example, the Delicious dataset has a median of 19 positive labels per sample, so it can be difficult to correctly predict the labels at the end of the prediction chain.", "Lastly, many real-world applications prefer interpretable predictors.", "For instance, in the task of predicting which proteins (labels) will bind to a 
DNA sequence based binding site, users care about how a prediction is made and how the interactions among labels influence the predictions 1 .Message", "Passing Neural Networks (MPNNs) BID3 introduce a class of methods that model joint dependencies of variables using neural message passing rather than an explicit representation such as a probabilistic classifier chain. Message", "passing allows for efficient inference by modelling conditional independence where the same local update procedure is applied iteratively to propagate information across variables. MPNNs provide", "a flexible method for modeling multiple variables jointly which have no explicit ordering (and can be modified to incorporate an order, as explained in section 3). To handle the", "drawbacks of BR and PCC methods, we propose a modified version of MPNNs for MLC by modeling interactions between labels using neural message passing.We introduce Message Passing Encoder-Decoder (MPED) Networks aiming to provide fast, accurate, and interpretable multi-label predictions. The key idea", "is to replace RNNs and to rely on neural message passing entirely to draw global dependencies between input components, between labels and input components, and between labels. The proposed", "MPED networks allow for significantly more parallelization in training and testing. The main contributions", "of this paper are:• Novel approach for MLC. To the authors' best knowledge", ", MPED is the first work using neural message passing for MLC.• Accurate MLC. Our model achieves", "similar, or better", "performance compared to the previous state of the art across five different MLC metrics. We validate our model on seven MLC datasets", "which cover a wide spectrum of input data structure: sequences (English text, DNA), tabular (bag-of-words), and graph (drug molecules), as well as output label structure: unknown and graph.• Fast. Empirically our model achieves an average", "1.7x speedup", "over the autoregressive seq2seq MLC at training time and an average 5x speedup over its testing time.• Interpretable. Although deep-learning based systems have", "widely been viewed", "as \"black boxes\" due to their complexity, our attention based MPED models provide a straightforward way to explain label to label, input to label, and feature to feature dependencies.", "In this work we present Message Passing Encoder-Decoder (MPED) Networks which achieve a significant speedup at close to the same performance as autoregressive models for MLC.", "We open a new avenue of using neural message passing to model label dependencies in MLC tasks.", "In addition, we show that our method is able to handle various input data types (sequence, tabular, graph), as well various output label structures (known vs unknown).", "One of our future extensions is to adapt the current model to predict more dynamic outputs.", "BID1 BID24" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.13333332538604736, 0.0624999962747097, 0.1904761791229248, 0.2222222238779068, 0.1395348757505417, 0.05405404791235924, 0.09090908616781235, 0.12121211737394333, 0.08888888359069824, 0.052631575614213943, 0.05714285373687744, 0.14999999105930328, 0.045454539358615875, 0.08695651590824127, 0.12244897335767746, 0, 0.17391303181648254, 0.0833333283662796, 0.09090908616781235, 0, 0.1538461446762085, 0.2083333283662796, 0.09302324801683426, 0.08510638028383255, 0.3050847351551056, 0.10526315122842789, 0.1875, 0.12903225421905518, 0.05714285373687744, 0, 0.10526315122842789, 0.1599999964237213, 0.04999999329447746, 0, 0.24390242993831635, 0.1818181723356247, 0.2857142686843872, 0.045454539358615875, 0.060606054961681366 ]
r1xYr3C5t7
true
[ "We propose Message Passing Encoder-Decode networks for a fast and accurate way of modelling label dependencies for multi-label classification." ]
[ "Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples.", "Previous few-shot learning works have mainly focused on classification and reinforcement learning.", "In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.", "Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions.", "This enables a few labeled samples to approximate the function.", "We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task.", "We show that our model outperforms the current state of the art meta-learning methods in various regression tasks.", "Regression deals with the problem of learning a model relating a set of inputs to a set of outputs.", "The learned model can be thought as function y = F (x) that gives a prediction y ∈ R dy given input x ∈ R dx where d y and d x are dimensions of the output and input respectively.", "Typically, a regression model is trained on a large number of data points to be able to provide accurate predictions for new inputs.", "Recently, there have been a surge in popularity on few-shot learning methods (Vinyals et al., 2016; Koch et al., 2015; Gidaris & Komodakis, 2018) .", "Few-shot learning methods require only a few examples from each task to be able to quickly adapt and perform well on a new task.", "These few-shot learning methods in essence are learning to learn i.e. the model learns to quickly adapt itself to new tasks rather than just learning to give the correct prediction for a particular input sample.", "In this work, we propose a few shot learning model that targets few-shot regression tasks.", "Our model takes inspiration from the idea that the degree of freedom of F (x) can be significantly reduced when it is modeled a linear combination of sparsifying basis functions.", "Thus, with a few samples, we can estimate F (x).", "The two primary components of our model are", "(i) the Basis Function Learner network which encodes the basis functions for the distribution of tasks, and", "(ii) the Weights Generator network which produces the appropriate weights given a few labelled samples.", "We evaluate our model on the sinusoidal regression tasks and compare the performance to several meta-learning algorithms.", "We also evaluate our model on other regression tasks, namely the 1D heat equation tasks modeled by partial differential equations and the 2D Gaussian distribution tasks.", "Furthermore, we evaluate our model on image completion as a 2D regression problem on the MNIST and CelebA data-sets, using only a small subset of known pixel values.", "To summarize, our contributions for this paper are:", "• We propose to address few shot regression by linear combination of a set of sparsifying basis functions.", "• We propose to learn these (continuous) sparsifying basis functions from data.", "Traditionally, basis functions are hand-crafted (e.g. 
Fourier basis).", "• We perform experiments to evaluate our approach using sinusoidal, heat equation, 2D Gaussian tasks and MNIST/CelebA image completion tasks.", "An overview of our model as in meta-training.", "Our system learns the basis functions Φ that can result in sparse representation for any task drawn from a certain task distribution.", "The basis functions are encoded in the Basis Function Learner network.", "The system produces predictions for a regression task by generating a weight vector, w for a novel task, using the Weights Generator network.", "The prediction is obtained by taking a dot-product between the weight vector and learned basis functions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0.13793103396892548, 0.24242423474788666, 0.30434781312942505, 0.2857142686843872, 0.29999998211860657, 0.22857142984867096, 0.375, 0.1599999964237213, 0.20512819290161133, 0.1463414579629898, 0.1538461446762085, 0.2083333283662796, 0.3030303120613098, 0.2222222238779068, 0.0714285671710968, 0.07692307233810425, 0.3030303120613098, 0.1249999925494194, 0.23529411852359772, 0.2380952388048172, 0.1818181723356247, 0, 0.5714285373687744, 0.3333333432674408, 0.14814814925193787, 0.10810810327529907, 0.07692307233810425, 0.25641024112701416, 0.20689654350280762, 0.21052631735801697, 0.29411762952804565 ]
BJxDNxSFDH
true
[ "We propose a method of doing few-shot regression by learning a set of basis functions to represent the function distribution." ]
[ "Large-scale pre-trained language model, such as BERT, has recently achieved great success in a wide range of language understanding tasks.", "However, it remains an open question how to utilize BERT for text generation tasks.", "In this paper, we present a novel approach to addressing this challenge in a generic sequence-to-sequence (Seq2Seq) setting.", "We first propose a new task, Conditional Masked Language Modeling (C-MLM), to enable fine-tuning of BERT on target text-generation dataset.", "The fine-tuned BERT (i.e., teacher) is then exploited as extra supervision to improve conventional Seq2Seq models (i.e., student) for text generation.", "By leveraging BERT's idiosyncratic bidirectional nature, distilling the knowledge learned from BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation.", "Experiments show that the proposed approach significantly outperforms strong baselines of Transformer on multiple text generation tasks, including machine translation (MT) and text summarization.", "Our proposed model also achieves new state-of-the-art results on the IWSLT German-English and English-Vietnamese MT datasets.", "Large-scale pre-trained language model, such as ELMo (Peters et al., 2018) , GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) , has become the de facto first encoding step for many natural language processing (NLP) tasks.", "For example, BERT, pre-trained with deep bidirectional Transformer (Vaswani et al., 2017) via masked language modeling and next sentence prediction, has revolutionized the state of the art in many language understanding tasks, such as natural language inference (Bowman et al., 2015) and question answering (Rajpurkar et al., 2016) .", "However, beyond common practice of fine-tuning BERT for language understanding , applying BERT to language generation still remains an open question.", "Text generation aims to generate natural language sentences conditioned on certain input, with applications ranging from machine translation (Cho et al., 2014; Bahdanau et al., 2015) , text summarization (Nallapati et al., 2016; Gehring et al., 2017; Chen & Bansal, 2018) ), to image captioning Xu et al., 2015; Gan et al., 2017) .", "In this paper, we study how to use BERT for better text generation, which to the best of our knowledge is still a relatively unexplored territory.", "Intuitively, as BERT is learned with a generative objective via Masked Language Modeling (MLM) during the pre-training stage, a natural assumption is that this training objective should have learned essential, bidirectional, contextual knowledge that can help enhance text generation.", "Unfortunately, this MLM objective is not auto-regressive, which encumbers its direct application to auto-regressive text generation in practice.", "In this paper, we tackle this challenge by proposing a novel and generalizable approach to distilling knowledge learned in BERT for text generation tasks.", "We first propose a new Conditional Masked Language Modeling (C-MLM) task, inspired by MLM but requiring additional conditional input, which enables fine-tuning pre-trained BERT on a target dataset.", "In order to extract knowledge from the fine-tuned BERT and apply it to a text generation model, we leverage the fine-tuned BERT as a teacher model that generates sequences of word probability logits for the training samples, and treat the text generation model as a student network, which can effectively learn from the teacher's 
outputs for imitation.", "The proposed approach improves text generation by providing a good estimation on the word probability distribution for each token in a sentence, consuming both the left and the right context, the exploitation of which encourages conventional text generation models to plan ahead.", "Text generation models are usually trained via Maximum Likelihood Estimation (MLE), or teacher forcing : at each time step, it maximizes the likelihood of the next word conditioned on its previous ground-truth words.", "This corresponds to optimizing one-step-ahead prediction.", "As there is no explicit signal towards global planning in the training objective, the generation model may incline to focusing on local structure rather than global coherence.", "With our proposed approach, BERT's looking into the future ability can act as an effective regularization method, capturing subtle long-term dependencies that ensure global coherence and in consequence boost model performance on text generation.", "An alternative way to leverage BERT for text generation is to initialize the parameters of the encoder or decoder of Seq2Seq with pre-trained BERT, and then fine-tuning on the target dataset.", "However, this approach requires the encoder/decoder to have the same size as BERT, inevitably making the final text generation model too large.", "Our approach, on the other hand, is modular and compatible to any text-generation model, and has no restriction on the model size (e.g., large or small) or model architecture (e.g., LSTM or Transformer).", "The main contributions of this work are three-fold.", "(i) We present a novel approach to utilizing BERT for text generation.", "The proposed method induces sequence-level knowledge into the conventional one-step-ahead and teacher-forcing training paradigm, by introducing an effective regularization term to MLE training loss.", "(ii) We conduct comprehensive evaluation on multiple text generation tasks, including machine translation, text summarization and image captioning.", "Experiments show that our proposed approach significantly outperforms strong Transformer baselines and is generalizable to different tasks.", "(iii) The proposed model achieves new state-of-the-art on both IWSLT14 German-English and IWSLT15 English-Vietnamese datasets.", "In this work, we propose a novel and generic approach to utilizing pre-trained language models to improve text generation without explicit parameter sharing, feature extraction, or augmenting with auxiliary tasks.", "Our proposed Conditional MLM mechanism leverages unsupervised language models pre-trained on large corpus, and then adapts to supervised sequence-to-sequence tasks.", "Our distillation approach indirectly influences the text generation model by providing soft-label distributions only, hence is model-agnostic.", "Experiments show that our model improves over strong Transformer baselines on multiple text generation tasks such as machine translation and abstractive summarization, and achieves new state-of-the-art on some of the translation tasks.", "For future work, we will explore the extension of Conditional MLM to multimodal input such as image captioning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.34285715222358704, 0.10810810327529907, 0.2926829159259796, 0.23255813121795654, 0.20408162474632263, 0.22727271914482117, 0.1621621549129486, 0.15094339847564697, 0.06451612710952759, 0.19999998807907104, 0.1269841194152832, 0.21739129722118378, 0.145454540848732, 0.1538461446762085, 0.3636363446712494, 0.2083333283662796, 0.25806450843811035, 0.24561403691768646, 0.07547169178724289, 0.07407406717538834, 0.1304347813129425, 0.145454540848732, 0.375, 0.1463414579629898, 0.12244897335767746, 0, 0.42424240708351135, 0.09090908616781235, 0.2631579041481018, 0.21052631735801697, 0.1666666567325592, 0.2800000011920929, 0.19512194395065308, 0.15789473056793213, 0.2857142686843872, 0.051282044500112534 ]
Bkgz_krKPB
true
[ "We propose a model-agnostic way to leverage BERT for text generation and achieve improvements over Transformer on 2 tasks over 4 datasets." ]
[ "Humans have the remarkable ability to correctly classify images despite possible degradation.", "Many studies have suggested that this hallmark of human vision results from the interaction between feedforward signals from bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways.", "Motivated by such interaction, we propose a new neuro-inspired model, namely Convolutional Neural Networks with Feedback (CNN-F).", "CNN-F extends CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation. ", "We show that CNN-F's iterative inference allows for disentanglement of latent variables across layers.", "We validate the advantages of CNN-F over the baseline CNN.", "Our experimental results suggest that the CNN-F is more robust to image degradation such as pixel noise, occlusion, and blur. ", "Furthermore, we show that the CNN-F is capable of restoring original images from the degraded ones with high reconstruction accuracy while introducing negligible artifacts.", "Convolutional neural networks (CNNs) have been widely adopted for image classification and achieved impressive prediction accuracy.", "While state-of-the-art CNNs can achieve near-or super-human classification performance [1] , these networks are susceptible to accuracy drops in the presence of image degradation such as blur and noise, or adversarial attacks, to which human vision is much more robust [2] .", "This weakness suggests that CNNs are not able to fully capture the complexity of human vision.", "Unlike the CNN, the human's visual cortex contains not only feedforward but also feedback connections which propagate the information from higher to lower order visual cortical areas as suggested by the predictive coding model [3] .", "Additionally, recent studies suggest that recurrent circuits are crucial for core object recognition [4] .", "A recently proposed model extends CNN with a feedback generative network [5] , moving a step forward towards more brain-like CNNs.", "The inference of the model is carried out by the feedforward only CNN.", "We term convolutional neural networks with feedback whose inference uses no iterations as CNN-F (0 iterations).", "The generative feedback models the joint distribution of the data and latent variables.", "This methodology is similar to how human brain works: building an internal model of the world [6] [7] .", "Despite the success of CNN-F (0 iterations) in semi-supervised learning [5] and out-of-distribution detection [8] , the feedforward only CNN can be a noisy inference in practice and the power of the rendering top-down path is not fully utilized.", "A neuro-inspired model that carries out more accurate inference is therefore desired for robust vision.", "Our work is motivated by the interaction of feedforward and feedback signals in the brain, and our contributions are:", "We propose the Convolutional Neural Network with Feedback (CNN-F) with more accurate inference.", "We perform approximated loopy belief propagation to infer latent variables.", "We introduce recurrent structure into our network by feeding the generated image from the feedback process back into the feedforward process.", "We term the model with k-iteration inference as CNN-F (k iterations).", "In the context without confusion, we will use the name CNN-F for short in the rest of the paper.", "We demonstrate that the CNN-F is more robust to image degradation including noise, blur, and occlusion than the CNN.", "In 
particular, our experiments show that CNN-F experiences smaller accuracy drop compared to the corresponding CNN on degraded images.", "We verify that CNN-F is capable of restoring degraded images.", "When trained on clean data, the CNN-F can recover the original image from the degraded images at test time with high reconstruction accuracy.", "We propose the Convolutional Neural Networks with Feedback (CNN-F) which consists of both a classification pathway and a generation pathway similar to the feedforward and feedback connections in human vision.", "Our model uses approximate loopy belief propagation for inferring latent variables, allowing for messages to be propagated along both directions of the model.", "We also introduce recurrency by passing the reconstructed image and predicted label back into the network.", "We show that CNN-F is more robust than CNN on corrupted images such as noisy, blurry, and occluded images and is able to restore degraded images when trained only on clean images." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.10810810327529907, 0.1428571343421936, 0.4516128897666931, 0.07999999821186066, 0.19999998807907104, 0.1249999925494194, 0.11764705181121826, 0.07407406717538834, 0.07843136787414551, 0.07407406717538834, 0.0476190447807312, 0.07999999821186066, 0.4516128897666931, 0.08695651590824127, 0.2222222238779068, 0.17391303181648254, 0, 0.13636362552642822, 0.23076923191547394, 0.0714285671710968, 0.08695651590824127, 0, 0.1428571343421936, 0.1818181723356247, 0.14814814925193787, 0.20689654350280762, 0.13333332538604736, 0.0952380895614624, 0.1249999925494194, 0.21621620655059814, 0.0624999962747097, 0.07692307233810425, 0.1621621549129486 ]
rylU4mtUIS
true
[ "CNN-F extends CNN with a feedback generative network for robust vision." ]
[ "We develop new approximation and statistical learning theories of convolutional neural networks (CNNs) via the ResNet-type structure where the channel size, filter size, and width are fixed.", "It is shown that a ResNet-type CNN is a universal approximator and its expression ability is no worse than fully-connected neural networks (FNNs) with a \\textit{block-sparse} structure even if the size of each layer in the CNN is fixed.", "Our result is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into that by CNNs.", "Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation of several important function classes.\n\n", "As applications, we consider two types of function classes to be estimated: the Barron class and H\\\"older class.", "We prove the clipped empirical risk minimization (ERM) estimator can achieve the same rate as FNNs even the channel size, filter size, and width of CNNs are constant with respect to the sample size.", "This is minimax optimal (up to logarithmic factors) for the H\\\"older class.", "Our proof is based on sophisticated evaluations of the covering number of CNNs and the non-trivial parameter rescaling technique to control the Lipschitz constant of CNNs to be constructed.", "Convolutional Neural Network (CNN) is one of the most popular architectures in deep learning research, with various applications such as computer vision (Krizhevsky et al. (2012) ), natural language processing (Wu et al. (2016) ), and sequence analysis in bioinformatics (Alipanahi et al. (2015) , Zhou & Troyanskaya (2015) ).", "Despite practical popularity, theoretical justification for the power of CNNs is still scarce from the viewpoint of statistical learning theory.For fully-connected neural networks (FNNs), there is a lot of existing work, dating back to the 80's, for theoretical explanation regarding their approximation ability (Cybenko (1989) , Barron (1993) , Lu et al. (2017) , Yarotsky (2017), and Petersen & Voigtlaender (2017) ) and generalization power (Barron (1994) , Arora et al. (2018), and Suzuki (2018) ).", "See also Pinkus (2005) and Kainen et al. (2013) for surveys of earlier works.", "Although less common compared to FNNs, recently, statistical learning theory for CNNs has been studied, both about approximation ability (Zhou (2018) , Yarotsky (2018) , Petersen & Voigtlaender (2018) ) and about generalization power (Zhou & Feng (2018) ).", "One of the standard approaches is to relate the approximation ability of CNNs with that of FNNs, either deep or shallow.", "For example, Zhou (2018) proved that CNNs are a universal approximator of the Barron class (Barron (1993) , Klusowski & Barron (2016) ), which is a historically important function class in the approximation theory.", "Their approach is to approximate the function using a 2-layered FNN (i.e., an FNN with a single hidden layer) with the ReLU activation function (Krizhevsky et al. 
(2012) ) and transform the FNN into a CNN.", "Very recently independent of ours, Petersen & Voigtlaender (2018) showed any function realizable with an FNN can extend to an equivariant function realizable by a CNN that has the same order of parameters.", "However, to the best of our knowledge, no CNNs that achieves the minimax optimal rate (Tsybakov (2008) , Giné & Nickl (2015) ) in important function classes, including the Hölder class, can keep the number of units in each layer constant with respect to the sample size.", "Architectures that have extremely large depth, while moderate channel size and width have become feasible, thanks to recent methods such as identity mappings (He et al. (2016) , Huang et al. (2018) ), sophisticated initialization schemes (He et al. (2015) , Chen et al. (2018) ), and normalization techniques (Ioffe & Szegedy (2015) , Miyato et al. (2018) ).", "Therefore, we would argue that there are growing demands for theories which can accommodate such constant-size architectures.In this paper, we analyze the learning ability of ResNet-type ReLU CNNs which have identity mappings and constant-width residual blocks with fixed-size filters.", "There are mainly two reasons that motivate us to study this type of CNNs.", "First, although ResNet is the de facto architecture in various practical applications, the approximation theory for ResNet has not been explored extensively, especially from the viewpoint of the relationship between FNNs and CNNs.", "Second, constant-width CNNs are critical building blocks not only in ResNet but also in various modern CNNs such as Inception (Szegedy et al. (2015) ), DenseNet (Huang et al. (2017) ), and U-Net (Ronneberger et al. (2015) ), to name a few.", "Our strategy is to replicate the learning ability of FNNs by constructing tailored ResNet-type CNNs.", "To do so, we pay attention to the block-sparse structure of an FNN, which roughly means that it consists of a linear combination of multiple (possibly dense) FNNs (we define it rigorously in the subsequent sections).", "Block-sparseness decreases the model complexity coming from the combinatorial sparsity patterns and promotes better bounds.", "Therefore, it is often utilized, both implicitly or explicitly, in the approximation and learning theory of FNNs (e.g., Bölcskei et al. 
(2017) , Yarotsky (2018) ).", "We first prove that if an FNN is block-sparse with M blocks (M -way block-sparse FNN), we can realize the FNN with a ResNet-type CNN with O(M ) additional parameters, which are often negligible since the original FNN already has Ω(M ) parameters.", "Using this approximation, we give the upper bound of the estimation error of CNNs in terms of the approximation errors of block sparse FNNs and the model complexity of CNNs.", "Our result is general in the sense that it is not restricted to a specific function class, as long as we can approximate it using block-sparse FNNs.To demonstrate the wide applicability of our methods, we derive the approximation and estimation errors for two types of function classes with the same strategy: the Barron class (of parameter s = 2) and Hölder class.", "We prove, as corollaries, that our CNNs can achieve the approximation error of orderÕ(M ) for the β-Hölder class, where M is the number of parameters (we used M here, same as the number of blocks because it will turn out that CNNs have O(M ) blocks for these cases), N is the sample size, and D is the input dimension.", "These rates are same as the ones for FNNs ever known in the existing literature.", "An important consequence of our theory is that the ResNet-type CNN can achieve the minimax optimal estimation error (up to logarithmic factors) for β-Hölder class even if its filter size, channel size and width are constant with respect to the sample size, as opposed to existing works such as Yarotsky (2017) and Petersen & Voigtlaender (2018) , where optimal FNNs or CNNs could have a width or a channel size goes to infinity as N → ∞.In", "summary, the contributions of our work are as follows:• We develop the approximation theory for CNNs via ResNet-type architectures with constant-width residual blocks. We", "prove any M -way block-sparse FNN is realizable such a CNN with O(M ) additional parameters. That", "means if FNNs can approximate a function with O(M ) parameters, we can approximate the function with CNNs at the same rate (Theorem 1).• We", "derive the upper bound of the estimation error in terms of the approximation error of FNNs and the model complexity of CNNs (Theorem 2). This", "result gives the sufficient conditions to derive the same estimation error as that of FNNs (Corollary 1).• We", "apply our general theory to the Barron class and Hölder class and derive the approximation (Corollary 2 and 4) and", "estimation (Corollary 3 and 5) error", "rates, which are identical to those for FNNs, even if the CNNs have constant channel and filter size with respect to the sample size. 
In particular", ", this is minimax optimal for the Hölder case.", "In this paper, we established new approximation and statistical learning theories for CNNs by utilizing the ResNet-type architecture of CNNs and the block-sparse structure of FNNs.", "We proved that any M -way block-sparse FNN is realizable using CNNs with O(M ) additional parameters, when the width of the FNN is fixed.", "Using this result, we derived the approximation and estimation errors for CNNs from those for block-sparse FNNs.", "Our theory is general because it does not depend on a specific function class, as long as we can approximate it with block-sparse FNNs.", "To demonstrate the wide applicability of our results, we derived the approximation and error rates for the Barron class and Hölder class in almost same manner and showed that the estimation error of CNNs is same as that of FNNs, even if the CNNs have a constant channel size, filter size, and width with respect to the sample size.", "The key techniques were careful evaluations of the Lipschitz constant of CNNs and non-trivial weight parameter rescaling of FNNs.One of the interesting open questions is the role of the weight rescaling.", "We critically use the homogeneous property of the ReLU activation function to change the relative scale between the block-sparse part and the fully-connected part, if it were not for this property, the estimation error rate would be worse.", "The general theory for rescaling, not restricted to the Barron nor Hölder class would be beneficial for deeper understanding of the relationship between the approximation and estimation capabilities of FNNs and CNNs.Another question is when the approximation and estimation error rates of CNNs can exceed that of FNNs.", "We can derive the same rates as FNNs essentially because we can realize block-sparse FNNs using CNNs that have the same order of parameters (see Theorem 1).", "Therefore, if we dig into the internal structure of FNNs, like repetition, more carefully, the CNNs might need fewer parameters and can achieve better estimation error rate.", "Note that there is no hope to enhance this rate for the Hölder case (up to logarithmic factors) because the estimation rate using FNNs is already minimax optimal.", "It is left for future research which function classes and constraints of FNNs, like block-sparseness, we should choose." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3050847351551056, 0.8955223560333252, 0.178571417927742, 0.2711864411830902, 0.11538460850715637, 0.2461538463830948, 0.08510638028383255, 0.17241378128528595, 0.1538461446762085, 0.18947367370128632, 0.08163265138864517, 0.09090908616781235, 0.2641509473323822, 0.307692289352417, 0.1846153736114502, 0.1875, 0.23999999463558197, 0.07792207598686218, 0.24657534062862396, 0.16326530277729034, 0.21875, 0.1764705777168274, 0.23999999463558197, 0.17910447716712952, 0.08163265138864517, 0.16129031777381897, 0.2535211145877838, 0.17543859779834747, 0.20930232107639313, 0.15189872682094574, 0.12244897335767746, 0.30612245202064514, 0.21052631735801697, 0.1538461446762085, 0.178571417927742, 0.18867924809455872, 0.11320754140615463, 0.07999999821186066, 0.04878048598766327, 0.27586206793785095, 0.09090908616781235, 0.21052631735801697, 0.24561403691768646, 0.11764705181121826, 0.14035087823867798, 0.307692289352417, 0.17241378128528595, 0.1764705777168274, 0.19718308746814728, 0.13793103396892548, 0.19672130048274994, 0.10169491171836853, 0.15094339847564697 ]
HklnzhR9YQ
true
[ "It is shown that ResNet-type CNNs are a universal approximator and its expression ability is not worse than fully connected neural networks (FNNs) with a \\textit{block-sparse} structure even if the size of each layer in the CNN is fixed." ]
[ "Few shot image classification aims at learning a classifier from limited labeled data.", "Generating the classification weights has been applied in many meta-learning approaches for few shot image classification due to its simplicity and effectiveness.", "However, we argue that it is difficult to generate the exact and universal classification weights for all the diverse query samples from very few training samples.", "In this work, we introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM), which addresses current issues by two novel contributions.", "i) AWGIM generates different classification weights for different query samples by letting each of query samples attends to the whole support set.", "ii) To guarantee the generated weights adaptive to different query sample, we re-formulate the problem to maximize the lower bound of mutual information between generated weights and query as well as support data.", "As far as we can see, this is the first attempt to unify information maximization into few shot learning.", "Both two contributions are proved to be effective in the extensive experiments and we show that AWGIM is able to achieve state-of-the-art performance on benchmark datasets.", "While deep learning methods achieve great success in domains such as computer vision (He et al., 2016) , natural language processing (Devlin et al., 2018) , reinforcement learning (Silver et al., 2018) , their hunger for large amount of labeled data limits the application scenarios where only a few data are available for training.", "Humans, in contrast, are able to learn from limited data, which is desirable for deep learning methods.", "Few shot learning is thus proposed to enable deep models to learn from very few samples (Fei-Fei et al., 2006) .", "Meta learning is by far the most popular and promising approach for few shot problems (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Ravi & Larochelle, 2016; Rusu et al., 2019) .", "In meta learning approaches, the model extracts high level knowledge across different tasks so that it can adapt itself quickly to a new-coming task (Schmidhuber, 1987; Andrychowicz et al., 2016) .", "There are several kinds of meta learning methods for few shot learning, such as gradient-based (Finn et al., 2017; Ravi & Larochelle, 2016) and metric-based (Snell et al., 2017; Sung et al., 2018) .", "Weights generation, among these different methods, has shown effectiveness with simple formulation (Qi et al., 2018; Qiao et al., 2018; Gidaris & Komodakis, 2018; .", "In general, weights generation methods learn to generate the classification weights for different tasks conditioned on the limited labeled data.", "However, fixed classification weights for different query samples within one task might be sub-optimal, due to the few shot challenge.", "We introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM) in this work to address these limitations.", "In AWGIM, the classification weights are generated for each query sample specifically.", "This is done by two encoding paths where the query sample attends to the task context.", "However, we show in experiments that simple cross attention between query samples and support set fails to guarantee classification weights fitted to diverse query data since the query-specific information is lost during weights generation.", "Therefore, we propose to maximize the lower bound of mutual information between generated weights and query, 
support data.", "As far as we know, AWGIM is the first work introducing Variational Information Maximization in few shot learning.", "The induced computational overhead is minimal due to the nature of few shot problems.", "Furthermore, by maximizing the lower bound of mutual information, AWGIM gets rid of inner update without compromising performance.", "AWGIM is evaluated on two benchmark datasets and shows state-of-the-art performance.", "We also conducted detailed analysis to validate the contribution of each component in AWGIM.", "2 RELATED WORKS 2.1 FEW SHOT LEARNING Learning from few labeled training data has received growing attentions recently.", "Most successful existing methods apply meta learning to solve this problem and can be divided into several categories.", "In the gradient-based approaches, an optimal initialization for all tasks is learned (Finn et al., 2017) .", "Ravi & Larochelle (2016) learned a meta-learner LSTM directly to optimize the given fewshot classification task.", "Sun et al. (2019) learned the transformation for activations of each layer by gradients to better suit the current task.", "In the metric-based methods, a similarity metric between query and support samples is learned.", "(Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Li et al., 2019a) .", "Spatial information or local image descriptors are also considered in some works to compute richer similarities (Lifchitz et al., 2019; Li et al., 2019b; Wertheimer & Hariharan, 2019) .", "Generating the classification weights directly has been explored by some works.", "Gidaris & Komodakis (2018) generated classification weights as linear combinations of weights for base and novel classes.", "Similarly, Qiao et al. (2018) and Qi et al. (2018) both generated the classification weights from activations of a trained feature extractor.", "Graph neural network denoising autoencoders are used in (Gidaris & Komodakis, 2019) .", "Munkhdalai & Yu (2017) proposed to generate \"fast weights\" from the loss gradient for each task.", "All these methods do not consider generating different weights for different query examples, nor maximizing the mutual information.", "There are some other methods for few-shot classification.", "Generative models are used to generate or hallucinate more data in Wang et al., 2018; Chen et al., 2019) .", "Bertinetto et al. (2019) and used the closed-form solutions directly for few shot classification.", "integrated label propagation on a transductive graph to predict the query class label.", "In this work, we introduce Attentive Weights Generation via Information Maximization (AWGIM) for few shot image classification.", "AWGIM learns to generate optimal classification weights for each query sample within the task by two encoding paths.", "To guarantee this, the lower bound of mutual information between generated weights and query, support data is maximized.", "As far as we know, AWGIM is the first work utilizing mutual information techniques for few shot learning.", "The effectiveness of AWGIM is demonstrated by state-of-the-art performance on two benchmark datasets and extensive analysis." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.2857142686843872, 0.2631579041481018, 0.2631579041481018, 0.1818181723356247, 0.14999999105930328, 0.3636363446712494, 0.05128204822540283, 0.06896551698446274, 0.12903225421905518, 0.23529411852359772, 0.1428571343421936, 0.08888888359069824, 0.1395348757505417, 0, 0.25, 0.29411762952804565, 0.29411762952804565, 0.1538461446762085, 0.06896550953388214, 0.2222222238779068, 0.1875, 0.1875, 0.2142857164144516, 0, 0, 0.0714285671710968, 0.0624999962747097, 0.1249999925494194, 0, 0.13333332538604736, 0.060606054961681366, 0, 0, 0.09756097197532654, 0.1599999964237213, 0.19999998807907104, 0.12121211737394333, 0, 0.13333332538604736, 0.12903225421905518, 0.09090908616781235, 0.1249999925494194, 0.2142857164144516, 0.07692307233810425, 0.25806450843811035, 0.25, 0.1249999925494194, 0.25, 0 ]
BJxpIJHKwB
true
[ "A novel few shot learning method to generate query-specific classification weights via information maximization." ]
[ "Conversational question answering (CQA) is a novel QA task that requires the understanding of dialogue context.", "Different from traditional single-turn machine reading comprehension (MRC), CQA is a comprehensive task comprised of passage reading, coreference resolution, and contextual understanding.", "In this paper, we propose an innovative contextualized attention-based deep neural network, SDNet, to fuse context into traditional MRC models.", "Our model leverages both inter-attention and self-attention to comprehend the conversation and passage.", "Furthermore, we demonstrate a novel method to integrate the BERT contextual model as a sub-module in our network.", "Empirical results show the effectiveness of SDNet.", "On the CoQA leaderboard, it outperforms the previous best model's F1 score by 1.6%.", "Our ensemble model further improves the F1 score by 2.7%.", "Machine reading comprehension (MRC) is a core NLP task in which a machine reads a passage and then answers related questions.", "It requires a deep understanding of both the article and the question, as well as the ability to reason about the passage and make inferences.", "These capabilities are essential in applications like search engines and conversational agents.", "In recent years, there have been numerous studies in this field (Huang et al., 2017; Seo et al., 2016; Liu et al., 2017) , with various innovations in text encoding, attention mechanisms and answer verification.", "However, traditional MRC tasks often take the form of single-turn question answering.", "In other words, there is no connection between different questions and answers to the same passage.", "This oversimplifies the conversational manner humans naturally take when probing a passage, where question turns are assumed to be remembered as context to subsequent queries.", "Figure 1 demonstrates an example of conversational question answering in which one needs to correctly refer \"she\" in the last two rounds of questions to its antecedent in the first question, \"Cotton.\"", "To accomplish this kind of task, the machine must comprehend both the current round's question and previous rounds of utterances in order to perform coreference resolution, pragmatic reasoning and semantic implication.", "To facilitate research in conversation question answering (CQA), several public datasets have been published that evaluate a model's efficacy in this field, such as CoQA (Reddy et al., 2018) , QuAC and QBLink (Elgohary et al., 2018) .", "In these datasets, to generate correct responses, models need to fully understand the given passage as well as the dialogue context.", "Thus, traditional MRC models are not suitable to be directly applied to this scenario.", "Therefore, a number of models have been proposed to tackle the conversational QA task.", "DrQA+PGNet (Reddy et al., 2018) combines evidence finding and answer generation to produce answers.", "BiDAF++ (Yatskar, 2018) achieves better results by employing answer marking and contextualized word embeddings on the MRC model BiDAF (Seo et al., 2016) .", "FlowQA (Huang et al., 2018 ) leverages a recurrent neural network over previous rounds of questions and answers to absorb information from its history context.", "Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton.", "Cotton lived high up in a nice warm place above the barn where all of the farmer's horses slept.", "But Cotton wasn't alone in her little home above the barn, oh no. 
She shared her hay bed with her mommy and 5 other sisters...", "In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle the conversational question answering task.", "By leveraging inter-attention and self-attention on passage and conversation history, the model is able to comprehend dialogue flow and the passage.", "Furthermore, we leverage the latest breakthrough in NLP, BERT, as a contextual embedder.", "We design the alignment of tokenizers, linear combination and weight-locking techniques to adapt BERT into our model in a computation-efficient way.", "SDNet achieves superior results over previous approaches.", "On the public dataset CoQA, SDNet outperforms previous state-of-the-art model by 1.6% in overall F 1 score and the ensemble model further improves the F 1 by 2.7%.", "Our future work is to apply this model to open-domain multiturn QA problem with large corpus or knowledge base, where the target passage may not be directly available.", "This will be a more realistic setting to human question answering." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.19512194395065308, 0.051282044500112534, 0.06451612710952759, 0.3333333134651184, 0.07692307233810425, 0, 0, 0.10526315122842789, 0.20512819290161133, 0.12903225421905518, 0.12244897335767746, 0.19354838132858276, 0.05714285373687744, 0.1860465109348297, 0.1702127605676651, 0.12765957415103912, 0.19230768084526062, 0.05405404791235924, 0, 0.1818181723356247, 0.05882352590560913, 0.04651162400841713, 0.17777776718139648, 0.05714285373687744, 0.10810810327529907, 0.0952380895614624, 0.3589743673801422, 0.0555555522441864, 0.25, 0.19999998807907104, 0, 0.0476190410554409, 0.04347825422883034, 0.19999998807907104 ]
SJx0F2VtwB
true
[ "A neural method for conversational question answering with attention mechanism and a novel usage of BERT as contextual embedder" ]
[ "Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality.", "However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques.", "Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations.", "Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.", "In recent years, Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been becoming the state-of-the-art in several generative modeling tasks, ranging from image generation (Karras et al., 2018) to imitation learning (Ho and Ermon, 2016) .", "They are based on an idea of a two-player game, in which a discriminator tries to distinguish between real and generated data samples, while a generator tries to fool the discriminator, learning to produce realistic samples on the long run.", "Wasserstein GAN (WGAN) was proposed as a solution to the issues present in the original GAN formulation.", "Replacing the discriminator, WGAN trains a critic to approximate the Wasserstein distance between the real and generated distributions.", "This introduced a new challenge, since Wasserstein distance estimation requires the function space of the critic to only consist of 1-Lipschitz functions.", "To enforce the Lipschitz constraint on the WGAN critic, originally used weight clipping, which was soon replaced by the much more effective method of Gradient Penalty (GP) (Gulrajani et al., 2017) , which consists of penalizing the deviation of the critic's gradient norm from 1 at certain input points.", "Since then, several variants of gradient norm penalization have been introduced (Petzka et al., 2018; Wei et al., 2018; Adler and Lunz, 2018; Zhou et al., 2019b) .", "Virtual Adversarial Training (VAT) (Miyato et al., 2019 ) is a semi-supervised learning method for improving robustness against local perturbations of the input.", "Using an iterative method based on power iteration, it approximates the adversarial direction corresponding to certain input points.", "Perturbing an input towards its adversarial direction changes the network's output the most.", "Inspired by VAT, we propose a method called Adversarial Lipschitz Regularization (ALR), enabling the training of neural networks with regularization terms penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient.", "It provides means to generate a pair for each input point, for which the Lipschitz constraint is likely to be violated with high probability.", "In general, enforcing Lipschitz continuity of complex models can be useful for a lot of applications.", "In this work, we focus on applying ALR to Wasserstein GANs, as regularizing or constraining Lipschitz continuity has proven to have a high impact on training stability and reducing mode collapse.", 
"Source code to reproduce the presented experiments is available at https://github.com/dterjek/adversarial_lipschitz_regularization.", "Inspired by VAT, we proposed ALR and shown that it is an efficient and powerful method for learning Lipschitz constrained mappings implemented by neural networks.", "Resulting in competitive performance when applied to the training of WGANs, ALR is a generally applicable regularization method.", "It draws an important parallel between Lipschitz regularization and adversarial training, which we believe can prove to be a fruitful line of future research." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.05128204822540283, 0.060606058686971664, 0.06896551698446274, 0.09756097197532654, 0.05128204822540283, 0.054054051637649536, 0.10526315122842789, 0.09999999403953552, 0.0833333283662796, 0.043478257954120636, 0.07692307233810425, 0, 0.09090908616781235, 0, 0.060606058686971664, 0.07692307233810425, 0, 0.060606058686971664, 0.1249999925494194, 0, 0.09090908616781235, 0.0714285671710968 ]
Bke_DertPB
true
[ "alternative to gradient penalty" ]
[ "Multi-task learning promises to use less data, parameters, and time than training separate single-task models.", "But realizing these benefits in practice is challenging.", "In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task.", "There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks.", "To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints.", "We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures.", "We also present a method for quick evaluation of such architectures with feature distillation.", "Together these contributions allow us to quickly optimize for parameter-efficient multi-task models.", "We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance.", "Multi-task learning allows models to leverage similarities across tasks and avoid overfitting to the particular features of any one task (Caruana, 1997; Zamir et al., 2018) .", "This can result in better generalization and more robust feature representations.", "While this makes multi-task learning appealing for its potential performance improvements, there are also benefits in terms of resource efficiency.", "Training a multi-task model should require less data, fewer training iterations, and fewer total parameters than training an equivalent set of task-specific models.", "In this work we investigate how to automatically search over high performing multi-task architectures while taking such resource constraints into account.", "Finding architectures that offer the best accuracy possible given particular resource constraints is nontrivial.", "There are subtle trade-offs in performance when increasing or reducing use of parameters and operations.", "Furthermore, with multiple tasks, one must take into account the impact of shared operations.", "There is a large space of options for tweaking such architectures, in fact so large that it is difficult to tune an optimal configuration manually.", "Neural architecture search (NAS) allows researchers to automatically search for models that offer the best performance trade-offs relative to some metric of efficiency.", "Here we define a multi-task architecture as a single network that supports separate outputs for multiple tasks.", "These outputs are produced by unique execution paths through the model.", "In a neural network, such a path is made up of a subset of the total nodes and operations in the model.", "This subset may or may not overlap with those of other tasks.", "During inference, unused parts of the network can be ignored by either pruning out nodes or zeroing out their activations (Figure 1 ).", "Such architectures mean improved parameter efficiency because redundant operations and features can be consolidated and shared across a set of tasks.", "We seek to optimize for the computational efficiency of multi-task architectures by finding models that perform as well as possible while reducing average node use per task.", "Different tasks will require different capacities to do well, so reducing average use requires effectively identifying which tasks will ask more of the model and which tasks can 
perform well with less.", "In addition, performance is affected by how nodes are shared across tasks.", "It is unclear when allocating resources whether sets of tasks would benefit from sharing parameters or would instead interfere.", "Figure 1: Feature partitioning can be used to control how much network capacity is used by tasks, and how much sharing is done across tasks.", "In this work we identify effective partitioning strategies to maximize performance while reducing average computation per task.", "When searching over architectures, differences in resource use can be compared at different levels of granularity.", "Most existing work in NAS and multi-task learning searches over the allocation and use of entire layers (Zoph & Le, 2016; Fernando et al., 2017; Rosenbaum et al., 2017) , we instead partition out individual feature channels within a layer.", "This offers a greater degree of control over both the computation required by each task and the sharing that takes place between tasks.", "The main obstacle to address in searching for effective multi-task architectures is the vast number of possibilities for performing feature partitioning as well as the significant amount of computation required to evaluate and compare arrangements.", "A naive brute search over different partitioning strategies is prohibitively expensive.", "We leverage our knowledge of the search space to explore it more effectively.", "We propose a parameterization of partitioning strategies to reduce the size of the search space by eliminating unnecessary redundancies and more compactly expressing the key features that distinguish different architectures.", "In addition, the main source of overhead in NAS is evaluation of sampled architectures.", "It is common to define a surrogate operation that can be used in place of training a full model to convergence.", "Often a smaller model will be trained for a much shorter number of iterations with the hope that the differences in accuracy that emerge early on correlate with the final performance of the full model.", "We propose a strategy for evaluating multi-task architectures using feature distillation which provides much faster feedback on the effectiveness of a proposed partitioning strategy while correlating well with final validation accuracy.", "In this work we provide:", "• a parameterization that aids automatic architecture search by providing a direct and compact representation of the space of sharing strategies in multi-task architectures.", "• an efficient method for evaluating proposed parameterizations using feature distillation to further accelerate the search process.", "• results on Visual Decathlon (Rebuffi et al., 2017) to demonstrate that our search strategy allows us to effectively identify trade-offs between parameter use and performance on diverse and challenging image classification datasets.", "In this work we investigate efficient multi-task architecture search to quickly find models that achieve high performance under a limited per-task budget.", "We propose a novel strategy for searching over feature partitioning that automatically determines how much network capacity should be used by each task and how many parameters should be shared between tasks.", "We design a compact representation to serve as a search space, and show that we can quickly estimate the performance of different partitioning schemes by using feature distillation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.07999999821186066, 0, 0.1111111044883728, 0, 0.29629629850387573, 0.25, 0.25, 0.1818181723356247, 0.20000000298023224, 0, 0.0952380895614624, 0.13333332538604736, 0.06451612710952759, 0.19354838132858276, 0.1666666567325592, 0.07999999821186066, 0, 0.12121211737394333, 0.19354838132858276, 0.23076923191547394, 0, 0, 0, 0, 0.06666666269302368, 0.277777761220932, 0.052631575614213943, 0, 0, 0, 0, 0.07692307233810425, 0.12765957415103912, 0.0624999962747097, 0.20000000298023224, 0.0952380895614624, 0.08695651590824127, 0.21621620655059814, 0.08695651590824127, 0.06896550953388214, 0.10810810327529907, 0.20512820780277252, 0, 0.3125, 0.2222222238779068, 0.1463414579629898, 0.25, 0.1538461446762085, 0.1621621549129486 ]
B1eoyAVFwH
true
[ "automatic search for multi-task architectures that reduce per-task feature use" ]
[ "As distributed approaches to natural language semantics have developed and diversified, embedders for linguistic units larger than words (e.g., sentences) have come to play an increasingly important role. ", "To date, such embedders have been evaluated using benchmark tasks (e.g., GLUE) and linguistic probes. ", "We propose a comparative approach, nearest neighbor overlap (N2O), that quantifies similarity between embedders in a task-agnostic manner. ", "N2O requires only a collection of examples and is simple to understand: two embedders are more similar if, for the same set of inputs, there is greater overlap between the inputs' nearest neighbors. ", "We use N2O to compare 21 sentence embedders and show the effects of different design choices and architectures.", "Continuous embeddings-of words and of larger linguistic units-are now ubiquitious in NLP.", "The success of self-supervised pretraining methods that deliver embeddings from raw corpora has led to a proliferation of embedding methods, with an eye toward \"universality\" across NLP tasks.", "Our focus here is on sentence embedders, and specifically their evaluation.", "As with most NLP components, intrinsic (e.g., and extrinsic (e.g., GLUE; Wang et al., 2019) evaluations have emerged for sentence embedders.", "Our approach, nearest neighbor overlap (N2O), is different: it compares a pair of embedders in a linguistics-and task-agnostic manner, using only a large unannotated corpus.", "The central idea is that two embedders are more similar if, for a fixed query sentence, they tend to find nearest neighbor sets that overlap to a large degree.", "By drawing a random sample of queries from the corpus itself, N2O can be computed on in-domain data without additional annotation, and therefore can help inform embedder choices in applications such as text clustering (Cutting et al., 1992) , information retrieval (Salton & Buckley, 1988) , and open-domain question answering (Seo et al., 2018) , among others.", "After motivating and explaining the N2O method ( §2), we apply it to 21 sentence embedders ( §3-4).", "Our findings ( §5) reveal relatively high functional similarity among averaged static (noncontextual) word type embeddings, a strong effect of the use of subword information, and that BERT and GPT are distant outliers.", "In §6, we demonstrate the robustness of N2O across different query samples and probe sizes.", "We also illustrate additional analyses made possible by N2O: identifying embeddingspace neighbors of a query sentence that are stable across embedders, and those that are not ( §7); and probing the abilities of embedders to find a known paraphrase ( §8).", "The latter reveals considerable variance across embedders' ability to identify semantically similar sentences from a broader corpus.", "In this paper, we introduce nearest neighbor overlap (N2O), a comparative approach to quantifying similarity between sentence embedders.", "Using N2O, we draw comparisons across 21 embedders.", "We also provide additional analyses made possible with N2O, from which we find high variation in embedders' treatment of semantic similarity.", "GloVe.", "We use three sets of standard pretrained GloVe embeddings: 100D and 300D embeddings trained on Wikipedia and Gigaword (6B tokens), and 300D embeddings trained on Common Crawl (840B tokens).", "13 We handle tokenization and embedding lookup identically to word2vec; for the Wikipedia/Gigaword embeddings, which are uncased, we lower case all tokens as well.", "FastText.", "We use four sets of 
pretrained FastText embeddings: two trained on Wikipedia and other news corpora, and two trained on Common Crawl (each with an original version and one trained on subword information).", "We use the Python port of the FastText implementation to handle tokenization, embedding lookup, and OOV embedding computation." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.09999999403953552, 0.550000011920929, 0.22641508281230927, 0.41025641560554504, 0.11764705181121826, 0.08163265138864517, 0.12121211737394333, 0.13636362552642822, 0.35555556416511536, 0.2083333283662796, 0.08219178020954132, 0.307692289352417, 0.15094339847564697, 0.05405404791235924, 0.21052631735801697, 0.10256409645080566, 0.4000000059604645, 0.13333332538604736, 0.1860465109348297, 0.13333332538604736, 0.17391303181648254, 0.1249999925494194, 0.20512819290161133 ]
ByePEC4KDS
true
[ "We propose nearest neighbor overlap, a procedure which quantifies similarity between embedders in a task-agnostic manner, and use it to compare 21 sentence embedders." ]
[ "Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem.", "Variational Autoencoders (VAE) on the other hand explicitly maximize a reconstruction-based data log-likelihood forcing it to cover all modes, but suffer from poorer sample quality.", "Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood to the VAE objective to address both the mode collapse and sample quality issues, with limited success.", "This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior.", "The synthetic likelihood ratio term also shows instability during training.", "We propose a novel objective with a ``\"Best-of-Many-Samples\" reconstruction cost and a stable direct estimate of the synthetic likelihood.", "This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANS and plain GANs in mode coverage and quality.", "Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have achieved state-of-the-art sample quality in generative modeling tasks.", "However, GANs do not explicitly estimate the data likelihood.", "Instead, it aims to \"fool\" an adversary, so that the adversary is unable to distinguish between samples from the true distribution and the generated samples.", "This leads to the generation of high quality samples (Adler & Lunz, 2018; Brock et al., 2019) .", "However, there is no incentive to cover the whole data distribution.", "Entire modes of the true data distribution can be missedcommonly referred to as the mode collapse problem.", "In contrast, the Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) explicitly maximize data likelihood and can be forced to cover all modes (Bozkurt et al., 2018; Shu et al., 2018) .", "VAEs enable sampling by constraining the latent space to a unit Gaussian and sampling through the latent space.", "However, VAEs maximize a data likelihood estimate based on the L 1 /L 2 reconstruction cost which leads to lower overall sample quality -blurriness in case of image distributions.", "Therefore, there has been a spur of recent work (Donahue et al., 2017; Larsen et al., 2016; Rosca et al., 2019) which aims integrate GANs in a VAE framework to improve VAE generation quality while covering all the modes.", "Notably in Rosca et al. (2019) , GANs are integrated in a VAE framework by augmenting the L 1 /L 2 data likelihood term in the VAE objective with a GAN discriminator based synthetic likelihood ratio term.", "However, Rosca et al. (2019) reports that in case of hybrid VAE-GANs, the latent space does not usually match the Gaussian prior.", "This is because, the reconstruction log-likelihood in the VAE objective is at odds with the divergence to the latent prior (Tabor et al., 2018) (also in case of alternatives proposed by Makhzani et al. 
(2016) ; ).", "This problem is further exacerbated with the addition of the synthetic likelihood term in the hybrid VAE-GAN objective -it is necessary for sample quality but it introduces additional constraints on the encoder/decoder.", "This leads to the degradation in the quality and diversity of samples.", "Moreover, the synthetic likelihood ratio term is unstable during training -as it is the ratio of outputs of a classifier, any instability in the output of the classifier is magnified.", "We directly estimate the ratio using a network with a controlled Lipschitz constant, which leads to significantly improved stability.", "Our contributions in detail are,", "1. We propose a novel objective for training hybrid VAE-GAN frameworks, which relaxes the constraints on the encoder by giving the encoder multiple chances to draw samples with high likelihood enabling it to generate realistic images while covering all modes of the data distribution,", "2. Our novel objective directly estimates the synthetic likelihood term with a controlled Lipschitz constant for stability,", "3. Finally, we demonstrate significant improvement over prior hybrid VAE-GANs and plain GANs on highly muti-modal synthetic data, CIFAR-10 and CelebA.", "We further compare our BMS-VAE-GAN to state-of-the-art GANs using the Standard CNN architecture in Table 6 with 100k generator iterations.", "Our α-GAN + SN ablation significantly outperforms the state-of-the-art plain GANs (Adler & Lunz, 2018; Miyato et al., 2018) , showing the effectiveness of hybrid VAE-GANs with a stable direct estimate of the synthetic likelihood on this highly diverse dataset.", "Furthermore, our BMS-VAE-GAN model trained using the best of T = 30 samples significantly improves over the α-GAN + SN baseline (23.4 vs 24.6 FID), showing the effectiveness of our \"Best-of-Many-Samples\".", "We also compare to Tran et al. (2018) using 300k generator iterations, again outperforming by a significant margin (21.8 vs 22.9 FID).", "The IoVM metric of Srivastava et al. (2017) (Tables 4 and 5 ), illustrates that we are also able to better reconstruct the image distribution.", "The improvement in both sample quality as measured by the FID metric and data reconstruction as measured by the IoVM metric shows that our novel \"Best-of-Many-Samples\" objective helps us both match the prior in the latent space and achieve high data log-likelihood at the same time.", "We train all models for 200k iterations and report the FID scores (Heusel et al., 2017) for all models using 10k/10k real/generated samples in Table 7 .", "The pure auto-encoding based WAE (Tolstikhin et al., 2018) has the weakest performance due to blurriness.", "Our pure autoencoding BMS-VAE (without synthetic likelihoods) improves upon the WAE (39.8 vs 41.2 FID), already demonstrating the effectiveness of using \"Best-of-Many-Samples\".", "We see that the base DCGAN has the weakest performance among the GANs.", "BEGAN suffers from partial mode collapse.", "The SN-GAN improves upon WGAN-GP, showing the effectiveness of Spectral Normalization.", "However, there exists considerable artifacts in its generations.", "The α-GAN of Rosca et al. 
(2019), which integrates the base DCGAN in its framework, performs significantly better (31.1 vs 19.2 FID).", "This shows the effectiveness of VAE-GAN frameworks in increasing the quality and diversity of generations.", "Our enhanced α-GAN + SN regularized with Spectral Normalization performs significantly better (15.1 vs 19.2 FID).", "This shows the effectiveness of a regularized direct estimate of the synthetic likelihood.", "Using the gradient penalty regularizer of Gulrajani et al. (2017) led to a drop of 0.4 FID.", "Our BMS-VAE-GAN improves significantly over the α-GAN + SN baseline using the \"Best-of-Many-Samples\" (13.6 vs 15.1 FID).", "The results at 128×128 resolution mirror the results at 64×64.", "We additionally evaluate using the IoVM metric in Appendix C. We see that by using the \"Best-of-Many-Samples\" we obtain sharper (Figure 4d) results that cover more of the data distribution, as shown by both the FID and IoVM.", "We propose a new objective for training hybrid VAE-GAN frameworks, which overcomes key limitations of current hybrid VAE-GANs.", "We integrate:", "1. A \"Best-of-Many-Samples\" reconstruction likelihood, which helps in covering all the modes of the data distribution while maintaining a latent space as close to Gaussian as possible,", "2. A stable estimate of the synthetic likelihood ratio.", "Our hybrid VAE-GAN framework outperforms state-of-the-art hybrid VAE-GANs and plain GANs in generative modelling on CelebA and CIFAR-10, demonstrating the effectiveness of our approach." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0.13636362552642822, 0.3404255211353302, 0.2222222238779068, 0.06896550953388214, 0.277777761220932, 0.3529411852359772, 0.10810810327529907, 0, 0.09999999403953552, 0.10810810327529907, 0.06666666269302368, 0.11428570747375488, 0.0833333283662796, 0.1818181723356247, 0.2083333283662796, 0.18867924809455872, 0.12244897335767746, 0.09999999403953552, 0.11999999731779099, 0.21276594698429108, 0.2666666507720947, 0.1463414579629898, 0.21621620655059814, 0.0833333283662796, 0.3103448152542114, 0.1666666567325592, 0.25641024112701416, 0.1538461446762085, 0.1071428507566452, 0, 0.1860465109348297, 0.09090908616781235, 0.18867924809455872, 0.1860465109348297, 0.0555555522441864, 0, 0.06666666269302368, 0.07999999821186066, 0, 0.07407406717538834, 0.09090908616781235, 0.1875, 0, 0.06666666269302368, 0.11428570747375488, 0, 0, 0.11764705181121826, 0.555555522441864, 0.1818181723356247, 0, 0.19512194395065308 ]
S1lk61BtvB
true
[ "We propose a new objective for training hybrid VAE-GANs which lead to significant improvement in mode coverage and quality." ]