source | source_labels | paper_id | target
---|---|---|---|
We propose a new approach, iterative regularized dual averaging (iRDA), to improve the efficiency of convolutional neural networks (CNNs) by significantly reducing the redundancy of the model without reducing its accuracy. The method has been tested on various datasets and shown to be significantly more efficient than most existing compression techniques in the deep learning literature. For many popular datasets such as MNIST and CIFAR-10, more than 95% of the weights can be zeroed out without losing accuracy. In particular, we are able to make ResNet18 with 95% sparsity reach an accuracy comparable to that of the much larger ResNet50 with the best 60% sparsity reported in the literature. In recent decades, deep neural network models have achieved unprecedented success and state-of-the-art performance in various machine learning and artificial intelligence tasks, such as computer vision, natural language processing and reinforcement learning BID11. Deep learning models usually involve a huge number of parameters to fit various kinds of datasets, and the number of data points may be much smaller than the number of parameters BID9. This suggests that deep learning models contain a great deal of redundancy, which is supported by the literature ranging from general pruning methods BID18 to model compression BID6. While compressed sensing techniques have been successfully applied to many other problems, few reports of their application in deep learning can be found in the literature. The idea of sparsifying machine learning models has attracted much attention over the last ten years BID2; BID22. When memory and computational cost matter, for instance in mobile applications, parameter sparsity plays a very important role in model compression BID6; BID0. Computing sparse neural networks is part of the broader topic of neural network compression, which usually also involves speeding up inference with the compressed models. There are many sparsity-inducing methods in machine learning, such as the FOBOS method BID3, also known as proximal stochastic gradient descent (prox-SGD) BID16, proposed for general regularized convex optimization problems, where ℓ1 is a common regularization term. One drawback of prox-SGD is that the thresholding parameter decays during training, which results in unsatisfactory sparsity BID22. Apart from that, the regularized dual averaging (RDA) method BID22, proposed to obtain better sparsity, has been proven convergent with specific parameter choices for convex optimization problems, but has not been applied to deep learning. In this paper, we analyze the relation between the simple dual averaging (SDA) method BID17 and the stochastic gradient descent (SGD) method BID19, as well as the relation between SDA and RDA. It is well known that SGD and its variants work quite well on deep learning problems. However, there is little work on applying pure training algorithms to deep CNNs for model sparsification. We propose an iterative RDA (iRDA) method for training sparse CNN models, and prove its convergence under convexity assumptions. Numerically, we compare prox-SGD with iRDA, where the latter achieves better sparsity while keeping satisfactory accuracy on MNIST, CIFAR-10 and CIFAR-100. We also show that iRDA works for different CNN models such as VGG BID21 and ResNet BID9. Finally, we compare the performance of iRDA with some other state-of-the-art compression methods.
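Throughout the paper, sparsity is produced by the ℓ1 soft-thresholding (shrinkage) operator that underlies both prox-SGD and RDA. As a point of reference, here is a minimal NumPy sketch of that operator; the function name and interface are ours, not the paper's.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1,
    i.e. argmin_w 0.5 * (w - v)^2 + tau * |w| applied coordinate-wise.

    Coordinates with |v| <= tau are set exactly to zero, which is the
    mechanism that produces sparse weight vectors in prox-SGD and RDA.
    """
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

How the threshold tau evolves with the iteration counter is precisely what distinguishes prox-SGD from RDA in the analysis below.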
BID0 reviews the work on compressing neural network models, and categorizes the related methods into four schemes: parameter pruning and sharing, low-rank factorization, transfered/compact convolutional filters and knowledge distillation. Among them, BID14 uses sparse decomposition on the convolutional filters to get sparse neural networks, which could be classified to the second scheme. Apart from that, BID7 prunes redundant connections by learning only the important parts. BID15 starts from a Bayesian point of view, and removes large parts of the network through sparsity inducing priors. BID23 BID10 combines reinforcement learning methods to compression. BID13 considers deep learning as a discrete-time optimal control problem, and obtains sparse weights on ternary networks. Recently, BID4 applies RDA to fully-connected neural network models on MNIST. Let z = (x, y) be an input-output pair of data, such as a picture and its corresponding label in a classification problem, and f (w, z) be the loss function of neural networks, i.e. a scalar function that is differentiable w.r.t. weights w. We are interested in the expected risk minimization problem DISPLAYFORM0 The empirical risk minimization DISPLAYFORM1 is an approximation of based on some finite given samples {z 1, z 2, . . ., z T}, where T is the size of the sample set. Regularization is a useful technique in deep learning. In general, the regularized expected risk minimization has the form DISPLAYFORM2 where Ψ(w) is a regularization term with certain effect. For example, Ψ(w) = w 2 2 may improve the generalization ability, and an 1 -norm of w can give sparse solutions. The corresponding regularized empirical risk minimization we concern takes the form DISPLAYFORM3 SDA method is a special case of primal-dual subgradient method first proposed in BID17. BID22 proposes RDA for online convex and stochastic optimization. RDA not only keeps the same convergence rate as Prox-SGD, but also achieves more sparsity in practice. In next sections, we will discuss the connections between SDA and SGD, as well as RDA and Prox-SGD. We then propose iRDA for 1 regularized problem of deep neural networks. As a solution of, SDA takes the form DISPLAYFORM0 The first term t τ =1 g τ, w is a linear function obtained by averaging all previous stochastic gradient. g t is the subgradient of f t. The second term h(w) is a strongly convex function, and {β t} is a nonnegative and nondecreasing sequence which determines the convergence rate. As g τ (w τ), τ = 1,..., t − 1 is constant in current iteration, we use g τ instead for simplicity in the following. Since subproblem equation 5 is strongly convex, it has a unique optimal solution w t+1.Let w 0 be the initial point, and h(w) = 1 2 w − w 0 2 2, the iteration scheme of SDA can be written as DISPLAYFORM1 DISPLAYFORM2. Let β t = γt α, SDA can be rewritten recursively as DISPLAYFORM3 where 1 − 1 − For the regularized problem, we recall the well-known Prox-SGD and RDA method first. At each iteration, Prox-SGD solves the subproblem DISPLAYFORM0 Specifically, α t = 1 γ √ t obtains the best convergence rate. The first two terms are an approximation of the original objective function. Note that without the regularization term Ψ, equation 8 is equivalent to SGD. 
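The DISPLAYFORM placeholders above stand for the SDA subproblem (equation 5) and the prox-SGD subproblem (equation 8). The following is a reconstruction based on the surrounding definitions and the cited methods (Nesterov's dual averaging and FOBOS/prox-SGD), not a verbatim copy of the paper's equations; constant factors may differ.

```latex
% SDA subproblem (equation 5): averaged linear model of the loss plus a strongly convex prox-term
w_{t+1} \;=\; \arg\min_{w} \Bigl\{ \sum_{\tau=1}^{t} \langle g_\tau, w\rangle \;+\; \beta_t\, h(w) \Bigr\},
\qquad h(w) = \tfrac{1}{2}\,\lVert w - w_0 \rVert_2^2 .

% Prox-SGD subproblem (equation 8): local linearization of f plus a proximity term and the regularizer
w_{t+1} \;=\; \arg\min_{w} \Bigl\{ \langle g_t, w\rangle \;+\; \frac{1}{2\alpha_t}\,\lVert w - w_t \rVert_2^2 \;+\; \Psi(w) \Bigr\},
\qquad \alpha_t = \frac{1}{\gamma\sqrt{t}} .
```

Without Ψ the prox-SGD subproblem reduces to a plain SGD step, as noted above; its forward-backward (FOBOS) splitting is recalled next.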
It can be written in forward-backward splitting (FOBOS) scheme DISPLAYFORM1 DISPLAYFORM2 where the forward step is equivalent to SGD, and the backward step is a soft-thresholding operator working on w t+ DISPLAYFORM3 with the soft-thresholding parameter α t.Different from Prox-SGD, each iteration of RDA takes the form DISPLAYFORM4 Similarly, taking h(w) = 1 2 w − w 0 2 2, RDA can be written as DISPLAYFORM5 = arg min DISPLAYFORM6 or equivalently, DISPLAYFORM7 w t+1 = arg min DISPLAYFORM8 where β t = γ √ t to obtain the best convergence rate. From equation 14, one can see that the forward step is actually SDA and the backward step is the soft-thresholding operator, with the parameter t/β t.3.3 1 REGULARIZATION AND THE SPARSITY Set Ψ(w) = λ w 1. The problem then becomes DISPLAYFORM9 where λ is a hyper-parameter that determines sparsity. In this case, from Xiao's analysis of , the expected cost Eφ(w t) − φ associated with the random variablew t converges with rate O(DISPLAYFORM10 This convergence rate is consistent with . However, both assume f to be a convex function, which can not be guaranteed in deep learning. Nevertheless, we can still verify that RDA is a powerful sparse optimization method for deep neural networks. We conclude the closed form solutions of Prox-SGD and RDA for equation 16 as follows. has the closed form solution DISPLAYFORM0 2. The subproblem of RDA DISPLAYFORM1 has the closed form solution DISPLAYFORM2 3. The √ t-proximal stochastic gradient method has the form DISPLAYFORM3 The difference between √ t-Prox-SGD and Prox-SGD is the soft-thresholding parameter chosen to be √ t. It has the closed form solution DISPLAYFORM4 It is equivalent to DISPLAYFORM5 where the objective function is actually an approximation of DISPLAYFORM6 We can easily conclude that this iteration will converge to w = 0 if DISPLAYFORM7 Now compare the threshold λ P G = α t λ of PG and the threshold λ RDA = t βt λ of RDA. With DISPLAYFORM8 and β t = γ √ t, we have λ P G → 0 and λ RDA → ∞ as t → 0. It is clear that RDA uses a much more aggressive threshold, which guarantees to generate significantly more sparse solutions. Note that when Ψ = λ w 1, RDA requires w 1 = w 0 = 0. However, this will make deep neural network a constant function, with which the parameters can be very hard to update. Thus, in Algorithm 1, we modify the RDA method as Step 1, where w 1 can be chosen not equal to 0, and add an extra Step 2 to improve the performance. We also prove the convergence rate of Step 1 for convex problem is O( DISPLAYFORM0 Theorem 3.1 Assume there exists an optimal solution w to the problem with Ψ(w) = λ w 1 that satisfies h(w) ≤ D 2 for some D > 0, and let φ = φ(w). Let the sequences {w t} t≥1 be generated by Step 1 in iRDA, and assume g t * ≤ G for some constant G. Then the expected cost Eφ(w t) converges to φ with rate O(DISPLAYFORM1 See Appendix A for the proof. To apply iRDA, the weights of a neural network should be initialized differently from that in a normal optimization method such as SGD or its variants. Our initialization is based on BID12, BID5 and BID8, with an additional re-scaling. Let s be a scalar, the mean and the standard deviation of the uniform distribution for iRDA is zero and DISPLAYFORM0 respectively, where c is the number of channels, and k is the spatial filter size of the layer (see BID8).Choosing a suitable s is important when applying iRDA. 
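The closed-form solutions discussed above can be written compactly with the soft-thresholding operator. The sketch below assumes h(w) = ½‖w‖² with w0 = 0 and β_t = γ√t, as in the text; variable names and the flat-vector treatment of the weights are ours.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_step(w, g, t, lam, gamma):
    """One prox-SGD step for the l1-regularized problem.

    Forward step: plain SGD with step size alpha_t = 1 / (gamma * sqrt(t)).
    Backward step: soft-thresholding with the decaying threshold alpha_t * lam,
    which is why prox-SGD tends to produce little sparsity.
    """
    alpha_t = 1.0 / (gamma * np.sqrt(t))
    return soft_threshold(w - alpha_t * g, alpha_t * lam)

def rda_l1_step(g_bar, t, lam, gamma):
    """Closed-form l1-RDA update with h(w) = 0.5 * ||w||^2 and beta_t = gamma * sqrt(t).

    g_bar is the running average of all stochastic (sub)gradients seen so far.
    Soft-thresholding is applied to g_bar and the result is scaled by
    t / beta_t = sqrt(t) / gamma, so the effective threshold grows with t --
    the aggressive thresholding that yields much sparser solutions than prox-SGD.
    """
    return -(np.sqrt(t) / gamma) * soft_threshold(g_bar, lam)
```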
As shown in TAB2 and TAB3 in Appendix B, if s is too small or too large, the training process could be slowed down and the generalization ability may be affected. Moreover, a small s usually requires much better initial weights, which in too many samplings in initialization process. In our experiments, a good s for iRDA is usually much larger than √ 2, and unsuitable for SGD algorithms. Iterative retraining is a method that only updates the non-zero parameters at each iteration. A trained model can be further updated with retraining, thus both the accuracies and sparsity can be improved. See Table 4 for comparisons on CIFAR-10. The iterative RDA method for 1 regularized DNNs Input:• A strongly convex function h(w) = w 2 2.• A nonnegative and nondescreasing sequence β t = γ √ t. Step 1: RDA with proper initialization Initialize: set w 0 = 0,ḡ 0 = 0 and randomly choose w 1 with methods explained in section 3.5. for t=1,2,..., T do Given the sample z it and corresponding loss function f it.Compute the stochastic gradient g t = ∇f it (w t).Update the average gradient:ḡ DISPLAYFORM0 Compute the next weight vector: DISPLAYFORM1 Step 2: iterative retraining for t=T+1,T+2,T+3,... do Given the sample z it and corresponding loss function f it.Compute the stochastic gradient DISPLAYFORM2 Set (g t) j = 0 if (w t) j = 0 for every j. Update the average gradient:ḡ DISPLAYFORM3 Compute the next weight vector: DISPLAYFORM4 In this section, σ denotes the sparsity of a model, i.e. σ = quantity of zero parameters quantity of all parameters.All neural networks are trained with mini-batch size 128. We provide a test on different hyper-parameters, so as to give an overview of their effects on performance, as shown in TAB4. We also show that the sparsity and the accuracy can be balanced with iRDA by adjusting the parameters λ and γ, as shown in TAB6. Both tables are put in Appendix C. We compare iRDA with several methods including prox-SGD, √ t−SGD and normal SGD, on different datasets including MNIST, CIFAR-10, CIFAR-100 and ImageNet(ILSVRC2012). The main are shown in TAB0. Table 2 shows the performance of iRDA on different architectures including ResNet18, VGG16 and VGG19. TAB1 shows the performance of iRDA on different Figure 1: The first 120 epochs of loss curves corresponding to TAB0, and the sparsity curve for another , where the top-1 validation accuracy is 91.34%, and σ = 0.87. datasets including MNIST, CIFAR-10, CIFAR-100 and ImageNet(ILSVRC2012). In all tables, SGD denotes stochastic gradient methods with momentum. Currently, many compression methods include human experts involvement. Some methods try to combine other structures in training process to automatize the compression process. For example, BID10 combines reinforcement learning. iRDA, as an algorithm, requires no extra structure. As shown above, iRDA can achieve good sparsity while keeping accuracy automatically, with carefully chosen parameters. For CIFAR-10, we compare the performance of iRDA with some other state-of-art compression methods in Table 4. Due to different standards, σ is referred to directly or computed from the original papers approximately. In comparison with many existing rule-based heuristic approaches, the new approach is based on a careful and iterative combination of 1 regularization and some specialized training algorithms. We find that the commonly used training algorithms such as SGD methods are not effective. We thus develop iRDA method that can be used to achieve much better sparsity. 
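A schematic NumPy version of Algorithm 1 is given below. grad_fn, sample_batch and the flat weight vector are placeholders we introduce for illustration (the paper applies the updates per layer to CNN weights), and the Step 2 update simply reuses the Step 1 formula with pruned coordinates frozen, since the exact retraining formula is hidden behind a DISPLAYFORM placeholder in the extracted text.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def irda_train(grad_fn, sample_batch, w1, lam, gamma, T_step1, T_step2):
    """Sketch of iRDA for an l1-regularized model (Algorithm 1).

    grad_fn(w, batch) returns a stochastic gradient of the loss at w;
    sample_batch() yields a mini-batch z_{i_t}.
    """
    w = w1.copy()                      # Step 1 starts from a random, non-zero w1
    g_bar = np.zeros_like(w1)

    # Step 1: RDA with proper initialization.
    for t in range(1, T_step1 + 1):
        g = grad_fn(w, sample_batch())
        g_bar = ((t - 1) * g_bar + g) / t                     # running average of gradients
        w = -(np.sqrt(t) / gamma) * soft_threshold(g_bar, lam)

    # Step 2: iterative retraining -- only non-zero parameters keep being updated.
    for t in range(T_step1 + 1, T_step1 + T_step2 + 1):
        mask = (w != 0)
        g = grad_fn(w, sample_batch())
        g[~mask] = 0.0                                        # drop gradients of pruned weights
        g_bar = ((t - 1) * g_bar + g) / t
        w = -(np.sqrt(t) / gamma) * soft_threshold(g_bar, lam)
        w[~mask] = 0.0                                        # keep pruned weights at zero (our reading)
    return w
```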
iRDA is a variant of RDA methods that have been used for some special types of online convex optimization problems in the literature. New elements in the iRDA mainly consist of judicious initialization and iterative retraining. In addition, iRDA method is carefully analyzed on its convergence for convex objective functions. Many deep neural networks trained by iRDA can achieve good sparsity while keeping the same validation accuracy as those trained by SGD with momentum on many popular datasets. This shows iRDA is a powerful sparse optimization method for image classification problems in deep learning fields. One of the differences between and iRDA is that the former one takes w 1 = arg min w h(w) whereas the latter one chooses w 1 randomly. In the following, we will prove the convergence of iRDA Step 1 for convex problem. The proofs use Lemma 9, Lemma 10, Lemma 11 directly and modify Theorem 1 and Theorem 2 in BID22. For clarity, we have some general assumptions:• The regularization term Ψ(w) is a closed convex function with convexity parameter σ and domΨ is closed.• For each t ≥ 1, f t (w) is convex and subdifferentiable on domΨ.• h(w) is strongly convex on domΨ and subdifferentiable on rint(domΨ) and also satisfies DISPLAYFORM0 Without loss of generality, assume h(w) has convexity parameter 1 and min w h(w) = 0.• There exist a constant G such that DISPLAYFORM1 • Require {β} t≥1 be a nonnegative and nondecreasing sequence and DISPLAYFORM2 Moreover, we could always choose β 1 ≥ σ such that β 0 = β 1.• For a random choosing w 1, we assume DISPLAYFORM3 First of all, we define two functions: DISPLAYFORM4 DISPLAYFORM5 The maximum in is always achieved because F D = {w ∈ domΨ|h(w) ≤ D 2 } is a nonempty compact set. Because of, we have σt+β t ≥ β 0 > 0 for all t ≥ 0, which means tΨ(w)+β t h(w) are all strongly convex, therefore the maximum in is always achieved and unique. As a , we have domU t = domV t = E * for all t ≥ 0. Moreover, by the assumption, both of the functions are nonnegative. Let s t denote the sum of the subgradients obtained up to time t in iRDA Step 1, that is DISPLAYFORM6 and π t (s) denotes the unique maximizer in the definition of V t (s) DISPLAYFORM7 which then gives DISPLAYFORM8 Lemma A.1 For any s ∈ E * and t ≥ 0, we have DISPLAYFORM9 For a proof, see Lemma 9 in.Lemma A.2 The function V t is convex and differentiable. Its gradient is given by DISPLAYFORM10 and the gradient Lipschitz continuous with constant 1/(σt + β t), that is DISPLAYFORM11 Moreover, the following inequality holds: DISPLAYFORM12 The are from Lemma 10 in BID22.Lemma A.3 For each t ≥ 1, we have DISPLAYFORM13 Since h(w t+1) ≥ 0 and the sequence {β t} t≥1 is nondecreasing, we have DISPLAYFORM14 DISPLAYFORM15 To prove this lemma, we refer to the Lemma 11 in. What's more, from the assumption 35, we could always choose β 1 ≥ σ such that β 1 = β 0 and DISPLAYFORM16 The learner's regret of online learning is the difference between his cumulative loss and the cumulative loss of the optimal fixed hypothesis, which is defined by DISPLAYFORM17 and bounded by DISPLAYFORM18 Lemma A.4 Let the sequence {w t} t≥1 and {g t} t≥1 be generated by iRDA Step 1, and assume FORMULA2 and FORMULA2 hold. 
Then for any t ≥ 1 and any DISPLAYFORM19 Proof First, we define the following gap sequence which measures the quality of the solutions w 1,.., w t: DISPLAYFORM20 and δ t is an upper bound on the regret R t (w) for all w ∈ F D, to see this, we use the convexity of f t (w) in the following: DISPLAYFORM21 Then, We are going to derive an upper bound on δ t. For this purpose, we subtract t τ =1 g τ, w 0 in, which leads to DISPLAYFORM22 the maximization term in is in fact U t (−s t), therefore, by applying Lemma A.1, we have DISPLAYFORM23 Next, we show that ∆ t is an upper bound for the right-hand side of inequality. We consider τ ≥ 2 and τ = 1 respectively. For any τ ≥ 2, we have DISPLAYFORM24 where FORMULA3, FORMULA2, FORMULA3 and FORMULA2 are used. Therefore, we have DISPLAYFORM25, ∀τ ≥ 2.For τ = 1, we have a similar inequality by using DISPLAYFORM26 Summing the above inequalities for τ = 1,..., t and noting that V 0 (−s 0) = V 0 = 0, we arrive at DISPLAYFORM27 Since Ψ(w t+1) ≥ 0, we subtract it from the left hand side and add Ψ(w 1) to both sides of the above inequality yields DISPLAYFORM28 Combing FORMULA3, FORMULA4, and using assumption andwe conclude DISPLAYFORM29 Lemma A.5 Assume there exists an optimal solution w to the problem that satisfies h(w) ≤ D 2 for some D > 0, and let φ = φ(w). Let the sequences {w t} t≥1 be generated by iRDA Step 1, and assume g t * ≤ G for some constant G. Then for any t ≥ 1, the expected cost associated with the random variablew t is bounded as DISPLAYFORM30 Proof First, from the definition, we have the regret at w DISPLAYFORM31 Let z[t] denote the collection of i.i.d. random variables (z, ..., z t). We note that the random variable w τ, where 1 ≤ w ≥ t, is a function of (z 1, ..., z τ −1) and is independent of (z τ, ..., z t). Therefore DISPLAYFORM32 and DISPLAYFORM33 Since φ = φ(w) = min w φ(w), we have the expected regret DISPLAYFORM34 Then, by convexity of φ, we have DISPLAYFORM35 Finally, from FORMULA4 and FORMULA4, we have DISPLAYFORM36 Then the desired follows from that of Lemma A.4. Proof of Theorem 3.1 From Lemma A.5, the expected cost associated with the random variablew t is bounded as DISPLAYFORM37 Here, we consider 1 regularization function Ψ(w) = λ w 1 and it is a convex but not strongly convex function, which means σ = 0. Now, we consider how to choose β t for t ≥ 1 and β 0 = β 1. First if β t = γt, we have 1 t · γtD 2 = γD 2, which means the expected cost does not converge. Then assume β t = γt α, α > 0 and α = 1, the right hand side of the inequality becomes DISPLAYFORM38 From above, we see that if 0 < α < 1, the expected cost converges and the optimal convergence rate O(t We have shown why prox-SGD will give poor sparsity, and although √ t-prox-SGD may introduce greater sparsity, it is not convergent. Finally, iRDA gives the best , on both the top-1 accuracy and the sparsity. iRDA ( | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJMXus0ct7 | A sparse optimization algorithm for deep CNN models. |
Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves results competitive with GAIL on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo. This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards. Many sequential decision-making problems can be tackled by imitation learning: an expert demonstrates near-optimal behavior to an agent, and the agent attempts to replicate that behavior in novel situations. This paper considers the problem of training an agent to imitate an expert, given expert action demonstrations and the ability to interact with the environment. The agent does not observe a reward signal or query the expert, and does not know the state transition dynamics. Standard approaches based on behavioral cloning (BC) use supervised learning to greedily imitate demonstrated actions, without reasoning about the consequences of actions. As a result, compounding errors cause the agent to drift away from the demonstrated states. The problem with BC is that, when the agent drifts and encounters out-of-distribution states, the agent does not know how to return to the demonstrated states. Recent methods based on inverse reinforcement learning (IRL) overcome this issue by training an RL agent not only to imitate demonstrated actions, but also to visit demonstrated states. This is also the core idea behind generative adversarial imitation learning (GAIL), which implements IRL using generative adversarial networks. Since the true reward function for the task is unknown, these methods construct a reward signal from the demonstrations through adversarial training, making them difficult to implement and use in practice.
The main idea in this paper is that the effectiveness of adversarial imitation methods can be achieved by a much simpler approach that does not require adversarial training, or indeed learning a reward function at all. Intuitively, adversarial methods encourage long-horizon imitation by providing the agent with an incentive to imitate the demonstrated actions in demonstrated states, and an incentive to take actions that lead it back to demonstrated states when it encounters new, out-ofdistribution states. One of the reasons why adversarial methods outperform greedy methods, such as BC, is that greedy methods only do, while adversarial methods do both and. Our approach is intended to do both and without adversarial training, by using constant rewards instead of learned rewards. The key idea is that, instead of using a learned reward function to provide a reward signal to the agent, we can simply give the agent a constant reward of r = +1 for matching the demonstrated action in a demonstrated state, and a constant reward of r = 0 for all other behavior. We motivate this approach theoretically, by showing that it implements a regularized variant of BC that learns long-horizon imitation by (a) imposing a sparsity prior on the reward function implied by the imitation policy, and (b) incorporating information about the state transition dynamics into the imitation policy. Intuitively, our method accomplishes (a) by training the agent using an extremely sparse reward function -+1 for demonstrations, 0 everywhere else -and accomplishes (b) by training the agent with RL instead of supervised learning. We instantiate our approach with soft Q-learning by initializing the agent's experience replay buffer with expert demonstrations, setting the rewards to a constant r = +1 in the demonstration experiences, and setting rewards to a constant r = 0 in all of the new experiences the agent collects while interacting with the environment. Since soft Q-learning is an off-policy algorithm, the agent does not necessarily have to visit the demonstrated states in order to experience positive rewards. Instead, the agent replays the demonstrations that were initially added to its buffer. Thus, our method can be applied in environments with stochastic dynamics and continuous states, where the demonstrated states are not necessarily reachable by the agent. We call this method soft Q imitation learning (SQIL). The main contribution of this paper is SQIL: a simple and general imitation learning algorithm that is effective in MDPs with high-dimensional, continuous observations and unknown dynamics. We run experiments in four image-based environments -Car Racing, Pong, Breakout, and Space Invadersand three low-dimensional environments -Humanoid, HalfCheetah, and Lunar Lander -to compare SQIL to two prior methods: BC and GAIL. We find that SQIL outperforms BC and achieves competitive compared to GAIL. Our experiments illustrate two key benefits of SQIL: that it can overcome the state distribution shift problem of BC without adversarial training or learning a reward function, which makes it easier to use, e.g., with images, and that it is simple to implement using existing Q-learning or off-policy actor-critic algorithms. 
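Concretely, no reward is learned: transitions only need to be relabeled before they enter the replay buffer. A minimal sketch, assuming transitions are (state, action, next_state) tuples; the tuple layout and function name are ours.

```python
def build_sqil_buffers(demonstrations, agent_experience):
    """SQIL's constant-reward relabeling: demonstrations get r = +1,
    everything the agent collects on its own gets r = 0."""
    demo_buffer = [(s, a, +1.0, s_next) for (s, a, s_next) in demonstrations]
    agent_buffer = [(s, a, 0.0, s_next) for (s, a, s_next) in agent_experience]
    return demo_buffer, agent_buffer
```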
SQIL performs soft Q-learning with three small, but important, modifications: it initially fills the agent's experience replay buffer with demonstrations, where the rewards are set to a constant r = +1; as the agent interacts with the world and accumulates new experiences, it adds them to the replay buffer, and sets the rewards for these new experiences to a constant r = 0; and it balances the number of demonstration experiences and new experiences (50% each) in each sample from the replay buffer. 1 These three modifications are motivated theoretically in Section 3, via an equivalence to a regularized variant of BC. Intuitively, these modifications create a simple reward structure that gives the agent an incentive to imitate the expert in demonstrated states, and to take actions that lead it back to demonstrated states when it strays from the demonstrations. Algorithm 1 Soft Q Imitation Learning (SQIL) 1: Require λsamp ∈ R ≥0, Ddemo 2: Initialize Dsamp ← ∅ 3: while Q θ not converged do 4: Sample transition (s, a, s) with imitation policy π(a|s) ∝ exp (Q θ (s, a)) 6: Dsamp ← Dsamp ∪ {(s, a, s)} 7: end while Crucially, since soft Q-learning is an off-policy algorithm, the agent does not necessarily have to visit the demonstrated states in order to experience positive rewards. Instead, the agent replays the demonstrations that were initially added to its buffer. Thus, SQIL can be used in stochastic environments with high-dimensional, continuous states, where the demonstrated states may never actually be encountered by the agent. SQIL is summarized in Algorithm 1, where Q θ is the soft Q function, D demo are demonstrations, δ 2 is the squared soft Bellman error, and r ∈ {0, 1} is a constant reward. 2 The experiments in Section 4 use a convolutional neural network or multi-layer perceptron to model Q θ, where θ are the weights of the neural network. Section A.3 in the appendix contains additional implementation details, including values for the hyperparameter λ samp; note that the simple default value of λ samp = 1 works well across a variety of environments. As the imitation policy in line 5 of Algorithm 1 learns to behave more like the expert, a growing number of expert-like transitions get added to the buffer D samp with an assigned reward of zero. This causes the effective reward for mimicking the expert to decay over time. Balancing the number of demonstration experiences and new experiences (50% each) sampled for the gradient step in line 4 ensures that this effective reward remains at least 1/(1 + λ samp), instead of decaying to zero. In practice, we find that this reward decay does not degrade performance if SQIL is halted once the squared soft Bellman error loss converges to a minimum (e.g., see Figure 8 in the appendix). Note that prior methods also require similar techniques: both GAIL and adversarial IRL (AIRL) balance the number of positive and negative examples in the training set of the discriminator, and AIRL tends to require early stopping to avoid overfitting. To understand why SQIL works, we sketch a surprising theoretical : SQIL is equivalent to a variant of behavioral cloning (BC) that uses regularization to overcome state distribution shift. BC is a simple approach that seeks to imitate the expert's actions using supervised learning -in particular, greedily maximizing the conditional likelihood of the demonstrated actions given the demonstrated states, without reasoning about the consequences of actions. 
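The gradient step in line 4 of Algorithm 1 above can be condensed into the following PyTorch-style sketch for discrete actions. The batch layout, the absence of a target network, and the handling of terminal flags are simplifying assumptions on our part; the constant rewards and the balanced demonstration/agent sampling are the essential ingredients.

```python
import torch

def sqil_loss(q_net, demo_batch, samp_batch, gamma=0.99, lambda_samp=1.0):
    """Squared soft Bellman error with SQIL's constant rewards.

    q_net(s) returns Q-values for all discrete actions.  demo_batch uses
    r = +1 and samp_batch uses r = 0; drawing the two batches with equal
    size implements the balanced (50% / 50%) sampling of Algorithm 1.
    """
    def soft_bellman_error(batch, reward):
        s, a, s_next, done = batch                              # tensors; done in {0, 1}
        q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            v_next = torch.logsumexp(q_net(s_next), dim=1)      # soft value of next state
        target = reward + gamma * (1.0 - done) * v_next
        return ((q_sa - target) ** 2).mean()

    return soft_bellman_error(demo_batch, 1.0) + lambda_samp * soft_bellman_error(samp_batch, 0.0)
```

The contrast with plain BC, which has no mechanism for recovering from its own mistakes, is discussed next.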
Thus, when the agent makes small mistakes and enters states that are slightly different from those in the demonstrations, the distribution mismatch between the states in the demonstrations and those actually encountered by the agent leads to compounding errors . We show that SQIL is equivalent to augmenting BC with a regularization term that incorporates information about the state transition dynamics into the imitation policy, and thus enables long-horizon imitation. Maximum entropy model of expert behavior. SQIL is built on soft Q-learning, which assumes that expert behavior follows the maximum entropy model . In an infinite-horizon Markov Decision Process (MDP) with a continuous state space S and discrete action space A, 3 the expert is assumed to follow a policy π that maximizes reward R(s, a). The policy π forms a Boltzmann distribution over actions, where Q is the soft Q function. The soft Q values are a function of the rewards and dynamics, given by the soft Bellman equation, In our imitation setting, the rewards and dynamics are unknown. The expert generates a fixed set of demonstrations D demo, by rolling out their policy π in the environment and generating state transitions (s, a, s) ∈ D demo. Training an imitation policy with standard BC corresponds to fitting a parametric model π θ that minimizes the negative log-likelihood loss, In our setting, instead of explicitly modeling the policy π θ, we can represent the policy π in terms of a soft Q function Q θ via Equation 2: Using this representation of the policy, we can train Q θ via the maximum-likelihood objective in Equation 4: However, optimizing the BC loss in Equation 6 does not in general yield a valid soft Q function Q θ -i.e., a soft Q function that satisfies the soft Bellman equation (Equation 3) with respect to the dynamics and some reward function. The problem is that the BC loss does not incorporate any information about the dynamics into the learning objective, so Q θ learns to greedily assign high values to demonstrated actions, without considering the state transitions that occur as a consequence of actions. As a , Q θ may output arbitrary values in states that are off-distribution from the demonstrations D demo. In Section 3.2, we describe a regularized BC algorithm that adds constraints to ensure that Q θ is a valid soft Q function with respect to some implicitly-represented reward function, and further regularizes the implicit rewards with a sparsity prior. In Section 3.3, we show that this approach recovers an algorithm similar to SQIL. Under the maximum entropy model described in Section 3.1, expert behavior is driven by a reward function, a soft Q function that computes expected future returns, and a policy that takes actions with high soft Q values. In the previous section, we used these assumptions to represent the imitation policy in terms of a model of the soft Q function Q θ (Equation 5). In this section, we represent the reward function implicitly in terms of Q θ, as shown in Equation 7. This allows us to derive SQIL as a variant of BC that imposes a sparsity prior on the implicitly-represented rewards. Sparsity regularization. The issue with BC is that, when the agent encounters states that are outof-distribution with respect to D demo, Q θ may output arbitrary values. 
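For discrete actions, the parametrization in Equation 5 makes the BC loss of Equation 6 a standard softmax cross-entropy on the Q-values. A short sketch; tensor names are ours.

```python
import torch
import torch.nn.functional as F

def bc_loss_via_soft_q(q_net, demo_states, demo_actions):
    """Negative log-likelihood of demonstrated actions under
    pi_theta(a|s) = exp(Q_theta(s,a)) / sum_a' exp(Q_theta(s,a'))."""
    logits = q_net(demo_states)                   # [batch, num_actions]; Q-values act as logits
    return F.cross_entropy(logits, demo_actions)  # = -mean log pi_theta(a_demo | s_demo)
```

This objective alone says nothing about the dynamics, which is exactly the gap the regularizer below is meant to close.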
One solution from prior work is to regularize Q θ with a sparsity prior on the implied rewards -in particular, a penalty on the magnitude of the rewards s∈S,a∈A |R q (s, a)| implied by Q θ via the soft Bellman equation (Equation 3), where Note that the reward function R q is not explicitly modeled in this method. Instead, we directly minimize the magnitude of the right-hand side of Equation 7, which is equivalent to minimizing |R q (s, a)|. The purpose of the penalty on |R q (s, a)| is two-fold: it imposes a sparsity prior motivated by prior work , and it incorporates information about the state transition dynamics into the imitation learning objective, since R q (s, a) is a function of an expectation over next state s. is critical for learning long-horizon behavior that imitates the demonstrations, instead of greedy maximization of the action likelihoods in standard BC. For details, see. Approximations for continuous states. Unlike the discrete environments tested in , we assume the continuous state space S cannot be enumerated. Hence, we approximate the penalty s∈S,a∈A |R q (s, a)| by estimating it from samples: transitions (s, a, s) observed in the demonstrations D demo, as well as additional rollouts D samp periodically sampled during training using the latest imitation policy. This approximation, which follows the standard approach to constraint sampling , ensures that the penalty covers the state distribution actually encountered by the agent, instead of only the demonstrations. To make the penalty continuously differentiable, we introduce an additional approximation: instead of penalizing the absolute value |R q (s, a)|, we penalize the squared value (R q (s, a)) 2. Note that since the reward function R q is not explicitly modeled, but instead defined via Q θ in Equation 7, the squared penalty (R q (s, a)) 2 is equivalent to the squared soft Bellman error Regularized BC algorithm. Formally, we define the regularized BC loss function adapted from as where λ ∈ R ≥0 is a constant hyperparameter, and δ 2 denotes the squared soft Bellman error defined in Equation 1. The BC loss encourages Q θ to output high values for demonstrated actions at demonstrated states, and the penalty term propagates those high values to nearby states. In other words, Q θ outputs high values for actions that lead to states from which the demonstrated states are reachable. Hence, when the agent finds itself far from the demonstrated states, it takes actions that lead it back to the demonstrated states. The RBC algorithm follows the same procedure as Algorithm 1, except that in line 4, RBC takes a gradient step on the RBC loss from Equation 8 instead of the SQIL loss. The gradient of the RBC loss in Equation 8 is proportional to the gradient of the SQIL loss in line 4 of Algorithm 1, plus an additional term that penalizes the soft value of the initial state s 0 (full derivation in Section A.1 of the appendix): In other words, SQIL solves a similar optimization problem to RBC. The reward function in SQIL also has a clear connection to the sparsity prior in RBC: SQIL imposes the sparsity prior from RBC, by training the agent with an extremely sparse reward function -r = +1 at the demonstrations, and r = 0 everywhere else. Thus, SQIL can be motivated as a practical way to implement the ideas for regularizing BC proposed in. The main benefit of using SQIL instead of RBC is that SQIL is trivial to implement, since it only requires a few small changes to any standard Q-learning implementation (see Section 2). 
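For reference, the implied reward of Equation 7 and one plausible form of the RBC loss referred to as Equation 8 are written out below. This is a reconstruction from the surrounding prose (BC negative log-likelihood plus a λ-weighted squared implied-reward penalty over demonstrations and sampled rollouts), not a verbatim copy of the paper's equations; in practice the expectation over s' is replaced by the sampled next state.

```latex
R_q(s,a) \;=\; Q_\theta(s,a)\;-\;\gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}
  \Bigl[\log\!\sum_{a'\in\mathcal{A}}\exp Q_\theta(s',a')\Bigr],
\qquad
\mathcal{L}_{\mathrm{RBC}}(\theta)\;\approx\;
  -\!\!\sum_{(s,a)\in\mathcal{D}_{\mathrm{demo}}}\!\!\log\pi_\theta(a\mid s)
  \;+\;\lambda\!\!\sum_{(s,a,s')\in\mathcal{D}_{\mathrm{demo}}\cup\,\mathcal{D}_{\mathrm{samp}}}\!\!\bigl(R_q(s,a)\bigr)^{2}.
```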
Extending SQIL to MDPs with a continuous action space is also easy, since we can simply replace Q-learning with an off-policy actor-critic method (see Section 4.3). Given the difficulty of implementing deep RL algorithms correctly , this flexibility makes SQIL more practical to use, since it can be built on top of existing implementations of deep RL algorithms. Furthermore, the ablation study in Section 4.4 suggests that SQIL actually performs better than RBC. Our experiments aim to compare SQIL to existing imitation learning methods on a variety of tasks with high-dimensional, continuous observations, such as images, and unknown dynamics. We benchmark SQIL against BC and GAIL 4 on four image-based games -Car Racing, Pong, Breakout, and Space Invaders -and three state-based tasks -Humanoid, HalfCheetah, and Lunar Lander (; ;). We also investigate which components of SQIL contribute most to its performance via an ablation study on the Lunar Lander game. Section A.3 in the appendix contains additional experimental details. The goal of this experiment is to study not only how well each method can mimic the expert demonstrations, but also how well they can acquire policies that generalize to new states that are not seen in the demonstrations. To do so, we train the imitation agents in an environment with a different initial state distribution S train 0 than that of the expert demonstrations S demo 0, allowing us to systematically control the mismatch between the distribution of states in the demonstrations and the states actually encountered by the agent. We run experiments on the Car Racing game from OpenAI Gym. To create S train 0, the car is rotated 90 degrees so that it begins perpendicular to the track, instead of parallel to the track as in S demo 0. This intervention presents a significant generalization challenge to the imitation learner, since the expert demonstrations do not contain any examples of states where the car is perpendicular to the road, or even significantly off the road axis. The agent must learn to make a tight turn to get back on the road, then stabilize its orientation so that it is parallel to the road, and only then proceed forward to mimic the expert demonstrations. The in Figure 1 show that SQIL and BC perform equally well when there is no variation in the initial state. The task is easy enough that even BC achieves a high reward. Note that, in the unperturbed condition (right column), BC substantially outperforms GAIL, despite the wellknown shortcomings of BC. This indicates that the adversarial optimization in GAIL can substantially hinder learning, even in settings where standard BC is sufficient. SQIL performs much better than BC when starting from S train 0, showing that SQIL is capable of generalizing to a new initial state distribution, while BC is not. SQIL learns to make a tight turn that takes the car through the grass and back onto the road, then stabilizes the car's orientation so that it is parallel to the track, and then proceeds forward like the expert does in the demonstrations. BC tends to drive straight ahead into the grass instead of turning back onto the road. 
4 For all the image-based tasks, we implement a version of GAIL that uses deep Q-learning (GAIL-DQL) instead of TRPO as in the original GAIL paper , since Q-learning performs better than TRPO in these environments, and because this allows for a head-to-head comparison of SQIL and GAIL: both algorithms use the same underlying RL algorithm, but provide the agent with different rewards -SQIL provides constant rewards, while GAIL provides learned rewards. We use the standard GAIL-TRPO method as a baseline for all the low-dimensional tasks, since TRPO performs better than Q-learning in these environments. The original GAIL method implicitly encodes prior knowledge -namely, that terminating an episode is either always desirable or always undesirable. As pointed out in , this makes comparisons to alternative methods unfair. We implement the unbiased version of GAIL proposed by , and use this in all of the experiments. Comparisons to the biased version with implicit termination knowledge are included in Section A.2 in the appendix. SQIL outperforms GAIL in both conditions. Since SQIL and GAIL both use deep Q-learning for RL in this experiment, the gap between them may be attributed to the difference in the reward functions they use to train the agent. SQIL benefits from providing a constant reward that does not require fitting a discriminator, while GAIL struggles to train a discriminator to provide learned rewards directly from images. The in Figure 2 show that SQIL outperforms BC on Pong, Breakout, and Space Invaders -additional evidence that BC suffers from compounding errors, while SQIL does not. SQIL also outperforms GAIL on all three games, illustrating the difficulty of using GAIL to train an imagebased discriminator, as in Section 4.1. The experiments in the previous sections evaluate SQIL on MDPs with a discrete action space. This section illustrates how SQIL can be adapted to continuous actions. We instantiate SQIL using soft actor-critic (SAC) -an off-policy RL algorithm that can solve continuous control tasks . In particular, SAC is modified in the following ways: the agent's experience replay buffer is initially filled with expert demonstrations, where rewards are set to r = +1, when taking gradient steps to fit the agent's soft Q function, a balanced number of demonstration experiences and new experiences (50% each) are sampled from the replay buffer, and the agent observes rewards of r = 0 during its interactions with the environment, instead of an extrinsic reward signal that specifies the desired task. This instantiation of SQIL is compared to GAIL on the Humanoid (17 DoF) and HalfCheetah (6 DoF) tasks from MuJoCo. The show that SQIL outperforms BC and performs comparably to GAIL on both tasks, demonstrating that SQIL can be successfully deployed on problems with continuous actions, and that SQIL can perform well even with a small number of demonstrations. This experiment also illustrates how SQIL can be run on top of SAC or any other off-policy value-based RL algorithm. We hypothesize that SQIL works well because it combines information about the expert's policy from demonstrations with information about the environment dynamics from rollouts of the imitation policy periodically sampled during training. We also expect RBC to perform comparably to SQIL, since their objectives are similar. To test these hypotheses, we conduct an ablation study using the Lunar Lander game from OpenAI Gym. 
As in Section 4.1, we control the mismatch between the distribution of states in the demonstrations and the states encountered by the agent by manipulating the initial state distribution. To create S train 0, the agent is placed in a starting position never visited in the demonstrations. In the first variant of SQIL, λ samp is set to zero, to prevent SQIL from using additional samples drawn from the environment (see line 4 of Algorithm 1). This comparison tests if SQIL really needs to interact with the environment, or if it can rely solely on the demonstrations. In the second condition, γ is set to zero to prevent SQIL from accessing information about state transitions (see Equation 1 and line 4 of Algorithm 1). This comparison tests if SQIL is actually extracting information about the dynamics from the samples, or if it can perform just as well with a naïve regularizer (setting γ to zero effectively imposes a penalty on the L2-norm of the soft Q values instead of the squared soft Bellman error). In the third condition, a uniform random policy is used to sample additional rollouts, instead of the imitation policy π θ (see line 6 of Algorithm 1). This comparison tests how important it is that the samples cover the states encountered by the agent during training. In the fourth condition, we use RBC to optimize the loss in Equation 8 The in Figure 4 show that all methods perform well when there is no variation in the initial state. When the initial state is varied, SQIL performs significantly better than BC, GAIL, and the ablated variants of SQIL. This confirms our hypothesis that SQIL needs to sample from the environment using the imitation policy, and relies on information about the dynamics encoded in the samples. Surprisingly, SQIL outperforms RBC by a large margin, suggesting that the penalty on the soft value of the initial state V (s 0), which is present in RBC but not in SQIL (see Equation 9), degrades performance. Related work. Concurrently with SQIL, two other imitation learning algorithms that use constant rewards instead of a learned reward function were developed . We see our paper as contributing additional evidence to support this core idea, rather than proposing a competing method. First, SQIL is derived from sparsity-regularized BC, while the prior methods are derived from an alternative formulation of the IRL objective and from support estimation methods , showing that different theoretical approaches independently lead to using RL with constant rewards as an alternative to adversarial training -a sign that this idea may be a promising direction for future work. Second, SQIL is shown to outperform BC and GAIL in domains that were not evaluated in or -in particular, tasks with image observations and significant shift in the state distribution between the demonstrations and the training environment. Summary. We contribute the SQIL algorithm: a general method for learning to imitate an expert given action demonstrations and access to the environment. Simulation experiments on tasks with high-dimensional, continuous observations and unknown dynamics show that our method outperforms BC and achieves competitive compared to GAIL, while being simple to implement on top of existing off-policy RL algorithms. Limitations and future work. We have not yet proven that SQIL matches the expert's state occupancy measure in the limit of infinite demonstrations. One direction for future work would be to rigorously show whether or not SQIL has this property. 
Another direction would be to extend SQIL to recover not just the expert's policy, but also their reward function; e.g., by using a parameterized reward function to model rewards in the soft Bellman error terms, instead of using constant rewards. This could provide a simpler alternative to existing adversarial IRL algorithms. (s, a) ). Splitting up the squared soft Bellman error terms for D demo and D samp in Equation 8, Setting γ 1 turns the inner sum in the first term into a telescoping sum: Since s T is assumed to be absorbing, V (s T) is zero. Thus, In our experiments, we have that all the demonstration rollouts start at the same initial state s 0. Thus, where λ samp ∈ R ≥0 is a constant hyperparameter. As discussed in Section 4, to correct the original GAIL method's biased handling of rewards at absorbing states, we implement the suggested changes to GAIL in Section 4.2 of: adding a transition to an absorbing state and a self-loop at the absorbing state to the end of each rollout sampled from the environment, and adding a binary feature to the observations indicating whether or not a state is absorbing. This enables GAIL to learn a non-zero reward for absorbing states. We refer to the original, biased GAIL method as GAIL-DQL-B and GAIL-TRPO-B, and the unbiased version as GAIL-DQL-U and GAIL-TRPO-U. The mechanism for learning terminal rewards proposed in does not apply to SQIL, since SQIL does not learn a reward function. SQIL implicitly assumes a reward of zero at absorbing states in demonstrations. This is the case in all our experiments, which include some environments where terminating the episode is always undesirable (e.g., walking without falling down) and other environments where success requires terminating the episode (e.g., landing at a target), suggesting that SQIL is not sensitive to the choice of termination reward, and neither significantly benefits nor is significantly harmed by setting the termination reward to zero. Car Racing. The in Figure 5 show that both the biased (GAIL-DQL-B) and unbiased (GAIL-DQL-U) versions of GAIL perform equally poorly. The problem of training an image-based discriminator for this task may be difficult enough that even with an unfair bias toward avoiding crashes that terminate the episode, GAIL-DQL-B does not perform better than GAIL-DQL-U. Figure 6 show that SQIL outperforms both variants of GAIL on Pong and the unbiased version of GAIL (GAIL-DQL-U) on Breakout and Space Invaders, but performs comparably to the biased version of GAIL (GAIL-DQL-B) on Space Invaders and worse than it on Breakout. This may be due to the fact that in Breakout and Space Invaders, the agent has multiple livesfive in Breakout, and three in Space Invaders -and receives a termination signal that the episode has ended after losing each life. Thus, the agent experiences many more episode terminations than in Pong, exacerbating the bias in the way the original GAIL method handles rewards at absorbing states. Our implementation of GAIL-DQL-B in this experiment provides a learned reward of r(s, a) = − log (1 − D(s, a) ), where D is the discriminator (see Section A.3 in the appendix for details). The learned reward is always positive, while the implicit reward at an absorbing state is zero. Thus, the agent is inadvertently encouraged to avoid terminating the episode. For Breakout and Space Invaders, this just happens to be the right incentive, since the objective is to stay alive as long as possible. 
GAIL-DQL-B outperforms SQIL in Breakout and performs comparably to SQIL in Space Invaders because GAIL-DQL-B is accidentally biased in the right way. Lunar Lander. The in Figure 7 show that when the initial state is varied, SQIL outperforms the unbiased variant of GAIL (GAIL-TRPO-U), but underperforms against the biased version of GAIL (GAIL-TRPO-B). The latter is likely due to the fact that the implementation of GAIL-TRPO-B we used in this experiment provides a learned reward of r(s, a) = log (D (s, a) ), where D is the discriminator (see Section A.3 in the appendix for details). The learned reward is always negative, while the implicit reward at an absorbing state is zero. Thus, the agent is inadvertently encouraged to terminate the episode quickly. For the Lunar Lander game, this just happens to be the right incentive, since the objective is to land on the ground and thereby terminate the episode. As in the Atari experiments, GAIL-TRPO-B performs better than SQIL in this experiment because GAIL-TRPO-B is accidentally biased in the right way. To ensure fair comparisons, the same network architectures were used to evaluate SQIL, GAIL, and BC. For Lunar Lander, we used a network architecture with two fully-connected layers containing 128 hidden units each to represent the Q network in SQIL, the policy and discriminator networks in GAIL, and the policy network in BC. For Car Racing, we used four convolutional layers (following ) and two fully-connected layers containing 256 hidden units each. For Humanoid and HalfCheetah, we used two fully-connected layers containing 256 hidden units each. For Atari, we used the convolutional neural network described in to represent the Q network in SQIL, as well as the Q network and discriminator network in GAIL. To ensure fair comparisons, the same demonstration data were used to train SQIL, GAIL, and BC. For Lunar Lander, we collected 100 demonstration rollouts. For Car Racing, Pong, Breakout, and Space Invaders, we collected 20 demonstration rollouts. Expert demonstrations were generated from scratch for Lunar Lander using DQN , and collected from open-source pretrained policies for Car Racing as well as Humanoid and HalfCheetah . The Humanoid demonstrations were generated by a stochastic expert policy, while the HalfCheetah demonstrations were generated by a deterministic expert policy; both experts were trained using TRPO. 6 We used two open-source implementations of GAIL: for Lunar Lander, and for MuJoCo. We adapted the OpenAI Baselines implementation of GAIL to use soft Q-learning for Car Racing and Atari. Expert demonstrations were generated from scratch for Atari using DQN. For Lunar Lander, we set λ samp = 10 −6. For Car Racing, we set λ samp = 0.01. For all other environments, we set λ samp = 1. SQIL was not pre-trained in any of the experiments. GAIL was pre-trained using BC for HalfCheetah, but was not pre-trained in any other experiments. In standard implementations of soft Q-learning and SAC, the agent's experience replay buffer typically has a fixed size, and once the buffer is full, old experiences are deleted to make room for new experiences. In SQIL, we never delete demonstration experiences from the replay buffer, but otherwise follow the standard implementation. We use Adam to take the gradient step in line 4 of Algorithm 1. The BC and GAIL performance metrics in Section 4.3 are taken from . 7 The GAIL and SQIL policies in Section 4.3 are set to be deterministic during the evaluation rollouts used to measure performance. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1xKd24twB | A simple and effective alternative to adversarial imitation learning: initialize experience replay buffer with demonstrations, set their reward to +1, set reward for all other data to 0, run Q-learning or soft actor-critic to train. |
Generating visualizations and interpretations from high-dimensional data is a common problem in many fields. Two key approaches for tackling this problem are clustering and representation learning. There are very performant deep clustering models on the one hand and interpretable representation learning techniques, often relying on latent topological structures such as self-organizing maps, on the other hand. However, current methods do not yet successfully combine these two approaches. We present a new deep architecture for probabilistic clustering, VarPSOM, and its extension to time series data, VarTPSOM, composed of VarPSOM modules connected by LSTM cells. We show that they achieve superior clustering performance compared to current deep clustering methods on static MNIST/Fashion-MNIST data as well as medical time series, while inducing an interpretable representation. Moreover, on the medical time series, VarTPSOM successfully predicts future trajectories in the original data space. Given a set of data samples {x_i}_{i=1,...,n}, where x_i ∈ R^d, the goal is to partition the data into a set of clusters {S_i}_{i=1,...,K}, while retaining a topological structure over the cluster centroids. The proposed architecture for static data is presented in Figure 1a. The input vector x_i is embedded into a latent representation z_i using a VAE. This latent vector is then clustered using PSOM. [Figure 1: (a) VarPSOM architecture for clustering of static data. Data points x_i are mapped to a continuous embedding z_i using a VAE (parameterized by Φ). The loss function is the sum of a SOM-based clustering loss and the ELBO. (b) VarTPSOM architecture, composed of VarPSOM modules connected by LSTMs across the time axis, which predict the continuous embedding z_{t+1} of the next time step. This architecture allows unrolling future trajectories in the latent space as well as the original data space by reconstructing the x_t using the VAE.] A Self-Organizing Map is comprised of K nodes connected to form a grid M ⊆ N², where the node m_{i,j} at position (i, j) of the grid corresponds to a centroid vector µ_{i,j} in the input space. The centroids are tied by a neighborhood relation N(µ_{i,j}) = {µ_{i−1,j}, µ_{i+1,j}, µ_{i,j−1}, µ_{i,j+1}}. Given a random initialization of the centroids, the SOM algorithm randomly selects an input x_i and updates both its closest centroid µ_{i,j} and its neighbors N(µ_{i,j}) to move them closer to x_i. For a complete description of the SOM algorithm, we refer to the appendix (A). The Clustering Assignment Hardening method has recently been introduced by the DEC model and was shown to perform well in the latent space of AEs. Given an embedding function z_i = f(x_i), it uses a Student's t-distribution (S) as a kernel to measure the similarity between an embedded data point z_i and a centroid µ_j. It improves the cluster purity by enforcing the distribution S to approach a target distribution T. By taking the original distribution to the power of γ and normalizing it, the target distribution puts more emphasis on data points that are assigned with high confidence. We follow prior work in choosing γ = 2, which leads to larger gradient contributions of points close to cluster centers, as shown empirically there. The resulting clustering loss is defined as the KL divergence between T and S. Our proposed clustering method, called PSOM, expands Clustering Assignment Hardening to include a SOM neighborhood structure over the centroids, adding a further loss term to achieve an interpretable representation; a sketch of the soft assignments and target distribution used by these losses is given below.
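The exact similarity kernel and target distribution are not recoverable from the extracted text, so the sketch below follows the standard DEC formulation (Student's t soft assignments with α = 1, targets obtained by raising S to the power γ and renormalizing); it should be read as our reconstruction rather than the paper's exact equations.

```python
import torch

def soft_assignments(z, centroids, alpha=1.0):
    """Student's t similarity s_ij between embeddings z [N, d] and centroids [K, d];
    each row is normalized to sum to one."""
    dist_sq = torch.cdist(z, centroids) ** 2
    q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(s, gamma=2.0):
    """Sharpened target T: raise S to the power gamma (gamma = 2 as in the text),
    normalize per cluster, then renormalize per data point."""
    weight = s ** gamma / s.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_loss(s, t):
    """Clustering Assignment Hardening objective KL(T || S), averaged over data points."""
    return (t * (torch.log(t + 1e-10) - torch.log(s + 1e-10))).sum(dim=1).mean()
```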
This loss term maximizes the similarity between each data point and the neighbors of its closest centroids. For each embedded data point z_i and each centroid µ_j, the loss is defined as the negative sum, over all neighbors of µ_j, {e : µ_e ∈ N(µ_j)}, of the probability s_ie that z_i is assigned to e; this sum is weighted by the similarity s_ij between z_i and the centroid µ_j. The complete PSOM clustering loss is then L_PSOM = KL(T ‖ S) + β L_SOM. We note that for β = 0 it becomes equivalent to Clustering Assignment Hardening. In our method, the nonlinear mapping between the input x_i and the embedding z_i is realized by a VAE. Instead of directly embedding the input x_i into a latent embedding z_i, the VAE learns a probability distribution q_φ(z | x_i), parametrized as a multivariate normal distribution whose mean and variance are (µ_φ, Σ_φ) = f_φ(x_i). Similarly, it also learns the probability distribution of the reconstructed output given a sampled latent embedding, p_θ(x_i | z), where (µ_θ, Σ_θ) = f_θ(z_i). Both f_φ and f_θ are neural networks, respectively called encoder and decoder. The ELBO loss is L_ELBO = Σ_i [ −E_{q_φ(z|x_i)} log p_θ(x_i | z) + KL(q_φ(z | x_i) ‖ p(z)) ], where p(z) is an isotropic Gaussian prior over the latent embeddings. The second term can be interpreted as a form of regularization, which encourages the latent space to be compact. For each data point x_i, the latent embedding z_i is sampled from q_φ(z | x_i). Adding the ELBO loss to the PSOM loss from the previous subsection yields the overall loss function of VarPSOM. To the best of our knowledge, no previous SOM method has attempted to use a VAE to embed the inputs into a latent space. There are many advantages of a VAE over an AE for realizing our goals; its prior on the latent space encourages structured and disentangled factors. To extend our proposed model to time series data, we add a temporal component to the architecture. Given a set of N time series of length T, {x_{t,i}}_{t=1,...,T; i=1,...,N}, the goal is to learn interpretable trajectories on the SOM grid. To do so, VarPSOM could be used directly, but it would treat each time step t of the time series independently, which is undesirable. To exploit temporal information and enforce smoothness in the trajectories, we add an additional loss that maximizes u_{i_t,i_{t+1}} = g(z_{i,t}, z_{i,t+1}), the similarity between z_{i,t} and z_{i,t+1} under a Student's t-distribution kernel, so that abrupt jumps between time points are discouraged. One of the main goals in time series modeling is to predict future data points or, alternatively, future embeddings. This can be achieved by adding a long short-term memory network (LSTM) across the latent embeddings of the time series, as shown in Fig. 1b. Each cell of the LSTM takes as input the latent embedding z_t at time step t and predicts a probability distribution over the next latent embedding, p_ω(z_{t+1} | z_t). We parametrize this distribution as a multivariate normal distribution whose mean and variance are learnt by the LSTM. The prediction loss is the negative log-likelihood of a sample of the next embedding z_{t+1} under the learned distribution. The final loss of VarTPSOM, which is trainable in a fully end-to-end fashion, combines the VarPSOM loss with the smoothness and prediction terms; for configuration details we refer to the appendix (B.3). Implementation: In implementing our models we focused on retaining a fair comparison with the baselines. Hence we decided to use a standard network structure, with fully connected layers of dimensions d − 500 − 500 − 2000 − l, to implement both the VAE of our models and the AE of the baselines.
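As an illustration of the temporal terms, here is a minimal PyTorch sketch of a smoothness penalty over consecutive latent embeddings and an LSTM-based Gaussian prediction loss. It follows the prose description above rather than the authors' released code; the diagonal-Gaussian likelihood, the hidden size, and all other hyperparameters are our assumptions.

```python
import torch
import torch.nn as nn

def smoothness_loss(z):
    # z: (batch, T, latent_dim) latent trajectories.
    # Encourage consecutive embeddings to stay similar under a Student's-t kernel.
    diff_sq = ((z[:, 1:] - z[:, :-1]) ** 2).sum(dim=-1)   # (batch, T-1)
    sim = 1.0 / (1.0 + diff_sq)
    return -sim.mean()

class LatentPredictor(nn.Module):
    # LSTM that predicts a diagonal Gaussian over the next latent embedding.
    def __init__(self, latent_dim, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z):
        h, _ = self.lstm(z[:, :-1])                        # predict step t+1 from step t
        mu, logvar = self.mu(h), self.logvar(h)
        target = z[:, 1:].detach()
        nll = 0.5 * (logvar + (target - mu) ** 2 / logvar.exp())
        return nll.mean()                                  # prediction loss, up to constants
```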
The latent dimension, l, is set to 100 for the VAE, and to 10 for the AEs. Since the prior in the VAE enforces the latent embeddings to be compact, it also requires more dimensions to reach good performance. We suspect this is due to the regularization effect of the SOM's topological structure. Overall, VarPSOM outperforms both DEC and IDEC. Improvement over training: after obtaining the initial configuration of the SOM structure, both clustering and feature extraction using the VAE are trained jointly. To illustrate that our architecture improves clustering performance over the initial configuration, we plotted NMI and Purity against the number of training iterations in Figure 2. We observe that the performance is stable when increasing the number of epochs and no overfitting is visible. [...] as this is the only method among the baselines that is suited for temporal data. We presented two novel methods for interpretable unsupervised clustering, VarPSOM and VarTPSOM. Both models make use of a VAE and a novel clustering method, PSOM, that extends the classical SOM algorithm to include a centroid-based probability distribution. Our models achieve [...] (Appendix A) The centroids are tied by a neighborhood relation, here defined as N(µ_{i,j}) = {µ_{i−1,j}, µ_{i+1,j}, µ_{i,j−1}, µ_{i,j+1}}. Given a random initialization of the centroids, the SOM algorithm randomly selects an input x_i and updates both its closest centroid µ_{i,j} and its neighbors N(µ_{i,j}) to move them closer to x_i. The algorithm then iterates these steps until convergence. Algorithm 1 (Self-Organizing Maps): repeat — at each time t, present an input x(t) and select the winner; update the weights of the winner and its neighbours — until the map converges. [...] score in the next 6 and 12 hours (APACHE-6/12), and the mortality in the next 24 hours. Only those variables from the APACHE score definition which are recorded in the eICU database were taken into account. Each dataset is divided into training, validation, and test sets for both our models and the baselines. We evaluate the DEC model for different latent space dimensions. Table S1 shows that the AE used in the DEC model performs better when a lower-dimensional latent space is used. [Figure S3: Randomly sampled VarTPSOM trajectories, from patients who expired at the end of the ICU stay as well as patients who were healthily dispatched. Superimposed is a heatmap which displays the cluster enrichment in the current APACHE score, from this model run. We observe that trajectories of dying patients are often in different locations of the map than those of healthy patients, in particular in those regions enriched for high APACHE scores, which corresponds with clinical intuition.] [...] assignments of data points to clusters, which results in a better ability to quantify uncertainty in the data. For visualizing health states in the ICU, this property is very important. In Fig. S4 we plot an example patient trajectory, where 6 different time-steps (in temporal order) of the trajectory were chosen. Our model yields a soft centroid-based probability distribution which evolves with time and which allows estimation of likely discrete health states at a given point in time. For each time-step the distribution of probabilities is plotted using a heat-map, whereas the overall trajectory is plotted using a black line. The circle and cross indicate ICU admission and dispatch, respectively. [Figure S4: Probabilities over discrete patient health states for 6 different time-steps of the selected time series.] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJxJdp4YvS | We present a new deep architecture, VarPSOM, and its extension to time series data, VarTPSOM, which achieve superior clustering performance compared to current deep clustering methods on static and temporal data. |
Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using'multi-task learning'. This saves computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We systematically study task cooperation and competition and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks. Many applications, especially robotics and autonomous vehicles, are chiefly interested in using multi-task learning to reduce the inference time and computational complexity required to estimate many characteristics of visual input. For example, an autonomous vehicle may need to detect the location of pedestrians, determine a per-pixel depth, and predict objects' trajectories, all within tens of milliseconds. In multi-task learning, multiple tasks are solved at the same time, typically with a single neural network. In addition to reduced inference time, solving a set of tasks jointly rather than independently can, in theory, have other benefits such as improved prediction accuracy, increased data efficiency, and reduced training time. Unfortunately, the quality of predictions are often observed to suffer when a network is tasked with making multiple predictions. This is because learning objectives can have complex and unknown dynamics and may compete. In fact, multi-task performance can suffer so much that smaller independent networks are often superior (as we will see in the experiments section). We refer to any situation in which the competing priorities of the network cause poor task performance as crosstalk. On the other hand, when task objectives do not interfere much with each other, performance on both tasks can be maintained or even improved when jointly trained. Intuitively, this loss or gain of quality seems to depend on the relationship between the jointly trained tasks. Prior work has studied the relationship between tasks for transfer learning . However, we find that transfer relationships are not highly predictive of multi-task relationships. In addition to studying multi-task relationships, we attempt to determine how to produce good prediction accuracy under a limited inference time budget by assigning competing tasks to separate networks and cooperating tasks to the same network. More concretely, this leads to the following problem: Given a set of tasks, T, and a computational budget b (e.g., maximum allowable inference time), what is the optimal way to assign tasks to networks with combined cost ≤ b such that a combined measure of task performances is maximized? To this end, we develop a computational framework for choosing the best tasks to group together in order to have a small number of separate deep neural networks that completely cover the task set and that maximize task performance under a given computational budget. 
We make the intriguing observation that the inclusion of an additional task in a network can potentially improve the accuracy of the other tasks, even though the performance of the added task itself might be poor. This can be viewed as regularizing or guiding the loss of one task by adding an additional loss, as often employed in curriculum learning or network regularization. Achieving this, of course, depends on picking the proper regularizing task; our system can take advantage of this phenomenon, as schematically shown in Figure 1. [Figure 1: Given five tasks to solve, there are many ways that they can be split into task groups for multi-task learning. How do we find the best one? We propose a computational framework that, for instance, suggests the following grouping to achieve the lowest total loss using a computational budget of 2.5 units: train network A to solve Semantic Segmentation, Depth Estimation, and Surface Normal Prediction; train network B to solve Keypoint Detection, Edge Detection, and Surface Normal Prediction; and train network C, with a less computationally expensive encoder, to solve Surface Normal Prediction alone. Including Surface Normals as an output in the first two networks was found advantageous for improving the other outputs, while the best Normals were predicted by the third network. This task grouping outperforms all other feasible ones, including learning all five tasks in one large network or using five dedicated smaller networks.] This paper has two main contributions. In Section 3, we outline a framework for systematically assigning tasks to networks in order to achieve the best total prediction accuracy with a limited inference-time budget. We then analyze the resulting accuracy and show that selecting the best assignment of tasks to groups is critical for good performance. Secondly, in Section 6, we analyze situations in which multi-task learning helps and when it doesn't, quantify the compatibilities of various task combinations for multi-task learning, compare them to the transfer learning task affinities, and discuss the implications. Moreover, we analyze the factors that influence multi-task affinities. Multi-Task Learning: We refer to the literature for a good overview of multi-task learning, which identifies two clusters of contemporary techniques that we believe cover the space well: hard parameter sharing and soft parameter sharing. In brief, the primary difference between the majority of the existing work and our study is that we wish to understand the relationships between tasks and find compatible groupings of tasks for any given set of tasks, rather than designing a neural network architecture to solve a particular fixed set of tasks well. A well-known contemporary example of hard parameter sharing in computer vision is UberNet. The authors tackle 7 computer vision problems using hard parameter sharing. They focus on reducing the computational cost of training for hard parameter sharing, but experience a rapid degradation in performance as more tasks are added to the network. Hard parameter sharing is also used in many other works. Other works, such as and (Chen et al. (2018b)), aim to dynamically reweight each task's loss during training. The former finds weights that provably lead to a Pareto-optimal solution, while the latter attempts to find weights that balance the influence of each task on the network weights. Finally, (Bingel & Søgaard) studies task interaction for NLP.
In soft or partial parameter sharing, either there is a separate set of parameters per task, or a significant fraction of the parameters are unshared. The models are tied together either by information sharing or by requiring parameters to be similar. Examples include (; ; ; ; ;). The canonical example of soft parameter sharing can be seen in . The authors are interested in designing a deep dependency parser for languages such as Irish that do not have much treebank data available. They tie the weights of two networks together by adding an L2 distance penalty between corresponding weights and show substantial improvement. Another example of soft parameter sharing is Cross-stitch Networks . Starting with separate networks for two tasks, the authors add'cross-stitch units' between them, which allow each network to peek at the other network's hidden layers. This approach reduces but does not eliminate task interfearence, and the overall performance is less sensitive to the relative loss weights. Unlike our method, none of the aforementioned works attempt to discover good groups of tasks to train together. Also, soft parameter sharing does not reduce inference time, a major goal of ours. Transfer Learning: Transfer learning is similar to multi-task learning in that solutions are learned for multiple tasks. Unlike multi-task learning, however, transfer learning methods often assume that a model for a source task is given and then adapt that model to a target task. Transfer learning methods generally neither seek any benefit for source tasks nor a reduction in inference time as their main objective. Neural Architecture Search (NAS): Many recent works search the space of deep learning architectures to find ones that perform well (; ; ; ; ; ;). This is related to our work as we search the space of task groupings. Just as with NAS, the found task groupings often perform better than human-engineered ones. Task Relationships: Our work is most related to Taskonomy , where the authors studied the relationships between visual tasks for transfer learning and introduced a dataset with over 4 million images and corresponding labels for 26 tasks. This was followed by a number of recent works, which further analyzed task relationships (; Dwivedi & Roig.;; ) for transfer learning. While they extract relationships between these tasks for transfer learning, we are interested in the multi-task learning setting. Interestingly, we find notable differences between transfer task affinity and multi-task affinity. Their method also differs in that they are interested in labeled-data efficiency and not inference-time efficiency. Finally, the transfer quantification approach taken by Taskonomy (readout functions) is only capable of finding relationships between the high-level bottleneck representations developed for each task, whereas structural similarities between tasks at all levels are potentially relevant for multi-task learning. Our goal is to find an assignment of tasks to networks that in the best overall loss. Our strategy is to select from a large set of candidate networks to include in our final solution. We define the problem as follows: We want to minimize the overall loss on a set of tasks T = {t 1, t 2, ..., t k} given a limited inference time budget, b, which is the total amount of time we have to complete all tasks. Each neural network that solves some subset of T and that could potentially be a part of the final solution is denoted by n. 
It has an associated inference time cost, c_n, and a loss for each task, L(n, t_i) (which is ∞ for each task the network does not attempt to solve). A solution S is a set of networks that together solve all tasks. The computational cost of a solution is cost(S) = Σ_{n∈S} c_n. The loss of a solution on a task, L(S, t_i), is the lowest loss on that task among the solution's networks, L(S, t_i) = min_{n∈S} L(n, t_i). We want to find the solution with the lowest overall loss and a cost that is under our budget, S_b = argmin_{S: cost(S) ≤ b} L(S). For a given task set T, we wish to determine not just how well each pair of tasks performs when trained together, but also how well each combination of tasks performs together, so that we can capture higher-order task relationships. To that end, our candidate set of networks contains all 2^|T| − 1 possible groupings: (|T| choose 1) networks with one task, (|T| choose 2) networks with two tasks, (|T| choose 3) networks with three tasks, etc. For the five tasks we use in our experiments, this is 31 networks, of which five are single-task networks. The size of the networks is another design choice, and to somewhat explore its effects we also include 5 single-task networks, each with half of the computational cost of a standard network. This brings our total up to 36 networks. Consider the situation in which we have an initial candidate set C_0 = {n_1, n_2, ..., n_m} of fully-trained networks that each solve some subset of our task set T. Our goal is to choose a subset of C_0 that solves all the tasks with total inference time under budget b and the lowest overall loss; more formally, we want to find S_b as defined above, restricted to subsets of C_0. It can be shown that solving this problem is NP-hard in general (by reduction from SET-COVER). However, many techniques exist that can optimally solve most reasonably-sized instances of problems like these in acceptable amounts of time, and all of them produce the same solutions. We chose to use a branch-and-bound-like algorithm for finding our optimal solutions (shown as Algorithm 1 in the Appendix), but in principle the exact same solutions could be achieved by other optimization methods, such as encoding the problem as a binary integer program (BIP) and solving it in a way similar to Taskonomy. Most contemporary MTL works use fewer than 4 unique task types, but in principle, the NP-hard nature of the optimization problem does limit the number of candidate solutions that can be considered. However, using synthetic inputs, we found that, for fewer than ten tasks, our branch-and-bound-like approach requires less time than training the 2^|T| − 1 + |T| candidate networks. Scaling beyond that would require approximations or stronger optimization techniques. This section describes two techniques for reducing the training time required to obtain a collection of networks as input to the network selection algorithm. Our goal is to produce task groupings with results similar to the ones produced by the complete search, but with a smaller training time burden. Both techniques involve predicting the performance of a network without actually training it to convergence. The first technique involves training each of the networks for a short amount of time, and the second involves inferring how networks trained on more than two tasks will perform based on how networks trained on two tasks perform. We found a moderately high correlation (Pearson's r = 0.49) between the validation loss of our neural networks after a pass through just 20% of our data and the final test loss of the fully trained networks.
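The solution cost and loss definitions above translate directly into code. The following Python sketch (ours, with a hypothetical dictionary representation of a trained network) mirrors cost(S), L(S, t_i), and the overall solution loss that the selection algorithm minimizes.

```python
def solution_cost(S):
    # S: list of networks, each a dict {"cost": float, "loss": {task: loss}}
    return sum(n["cost"] for n in S)

def solution_task_loss(S, task):
    # The solution's loss on a task is the best (lowest) loss among its networks.
    return min(n["loss"].get(task, float("inf")) for n in S)

def solution_loss(S, tasks):
    # Overall loss: sum of per-task losses.
    return sum(solution_task_loss(S, t) for t in tasks)
```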
This implies that the task relationship trends stabilize early. We find that we can get decent results by running network selection on the lightly trained networks, and then simply training the chosen networks to convergence. For our setup, this technique reduces the training time burden by about 20x over fully training all candidate networks and would require fewer than 150 GPU hours to execute. This is only 35% training-time overhead. Obviously, this technique does come with a prediction accuracy penalty. Because the correlation between early network performance and final network performance is not perfect, the decisions made by network selection are no longer guaranteed to be optimal once the networks are trained to convergence. We call this approximation the Early Stopping Approximation (ESA) and present the results of using this technique in Section 5. Do the performances of a network trained with tasks A and B, another trained with tasks A and C, and a third trained with tasks B and C tell us anything about the performance of a network trained on tasks A, B, and C? As it turns out, the answer is yes. Although this ignores complex task interactions and nonlinearities, a simple average of the first-order networks' accuracies was a good indicator of the accuracy of a higher-order network. Experimentally, this prediction strategy has an average max ratio error of only 5.2% on our candidate networks. Using this strategy, we can predict the performance of all networks with three or more tasks from the performance of the fully trained two-task networks. First, simply train all networks with two or fewer tasks to convergence. Then predict the performance of the higher-order networks. Finally, run network selection on both groups. With our setup (see Section 4), this strategy saves training time by only about 50%, compared with 95% for the early stopping approximation, and it still comes with a prediction quality penalty. However, this technique requires only a quadratic number of networks to be trained rather than an exponential number, and would therefore win out when the number of tasks is large. We call this strategy the Higher Order Approximation (HOA), and present its results in Section 5. We perform our evaluation using the Taskonomy dataset, which is currently the largest multi-task dataset in vision with diverse tasks. The data was obtained from 3D scans of about 600 buildings. There are 4,076,375 examples, which we divided into 3,974,199 training instances, 52,000 validation instances, and 50,176 test instances. There was no overlap between the buildings that appeared in the training and test sets. All data labels were normalized (mean 0, σ = 1). Our framework is agnostic to the particular set of tasks. We have chosen to perform the study using five tasks in Taskonomy: Semantic Segmentation, Depth Estimation, Surface Normal Prediction, Keypoint Detection, and Edge Detection, so that one semantic task, two 3D tasks, and two 2D tasks are included. These tasks were chosen to be representative of major task categories, but also to have enough overlap in order to test the hypothesis that similar tasks will train well together. Cross-entropy loss was used for Semantic Segmentation, while an L1 loss was used for all other tasks. Network Architecture: The proposed framework can work with any network architecture. In our experiments, all of the networks used a standard encoder-decoder architecture with a modified Xception encoder.
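Returning to the Higher Order Approximation described above: the rule is simply to estimate a higher-order group's per-task loss as the average of the corresponding losses from its fully trained pairwise networks. A minimal sketch (our own, with hypothetical data structures) is:

```python
from itertools import combinations

def predict_group_loss(group, pair_loss):
    """group: tuple of task names, e.g. ("segmentation", "depth", "normals").
    pair_loss: dict mapping frozenset({task_a, task_b}) to the {task: loss}
    dict of the fully trained two-task network for that pair.
    Returns the predicted per-task loss of the network trained on all tasks in `group`."""
    predicted = {}
    for task in group:
        # Average the task's loss over every trained pair that contains it.
        losses = [pair_loss[frozenset(pair)][task]
                  for pair in combinations(group, 2) if task in pair]
        predicted[task] = sum(losses) / len(losses)
    return predicted
```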
Our choice of architecture is not critical and was chosen for reasonably fast inference time performance. The Xception network encoder was simplified to have 17 layers and the middle flow layers were reduced to having 512 rather than 728 channels. All maxpooling layers were replaced by 2 × 2 convolution layers with a stride of 2 (similar to Chen et al. (2018a) ). The full-size encoder had about 4 million parameters. All networks had an input image size of 256x256. We measure inference time in units of the time taken to do inference for one of our full-size encoders. We call this a Standard Network Time (SNT). This corresponds to 2.28 billion multiply-adds and about 4 ms/image on a single Nvidia RTX 2080 Ti. Our decoders were designed to be lightweight and have four transposed convolutional layers and four separable convolutional layers . Every decoder has about 116,000 parameters. All training was done using PyTorch with Apex for fp16 acceleration . Trained Networks: As described in Section 3.1, we trained 31 networks with full sized encoders and standard decoders. 26 were multi-task networks and 5 were single task networks. Another five single-task networks were trained, each having a half-size encoder and a standard decoder. These 36 networks were included in network optimization as C 0. 20 smaller, single-task networks of various sizes were also trained to be used in the baselines and the analysis of Section 6, but not used for network selection. In order to produce our smaller models, we shrunk the number of channels in every layer of the encoder such that it had the appropriate number of parameters and flops. The training loss we used was the unweighted mean of the losses for the included tasks. Networks were trained with an initial learning rate of 0.2, which was reduced by half every time the training loss stopped decreasing. Networks were trained until their validation loss stopped improving, typically requiring only 4-8 passes through the dataset. The network with the highest validation loss (checked after each epoch of 20% of our data) was saved. The performance scores used for network selection were calculated on the validation set. We computed solutions for inference time budgets from 1 to 5 at increments of 0.5. Each solution chosen was evaluated on the test set. We compare our with conventional methods, such as five single-task networks and a single network with all tasks trained jointly. We also compare with two multi-task methods in the literature. The first one is. We found that their algorithm under-weighted the Semantic Segmentation task too aggressively, leading to poor performance on the task and poor performance overall compared to a simple sum of task losses. We speculate that this is because semantic segmentation's loss behaves differently from the other losses. Next we compared to GradNorm (Chen et al. (2018b) ). The here were also slightly worse than classical MTL with uniform task weights. In any event, these techniques are orthogonal to ours and can be used in conjunction for situations in which they lead to better solutions than simply summing losses. Finally, we compare our to two control baselines illustrative of the importance of making good choices about which tasks to train together,'Random' and'Pessimal.''Random' is a solution consisting of valid random task groupings that solve our five tasks. The reported values are the average of a thousand random trials.' 
'Pessimal' is a solution in which we choose the networks that lead to the worst overall performance, though the solution's performance on each task is still the best among its networks. Each baseline was evaluated with multiple encoder sizes so that all models could be compared at many inference time budgets. Figure 2 shows the task groups that were chosen for each technique, and Figure 3 shows the performance of these groups along with those of our baselines. We can see that each of our methods outperforms the traditional baselines for every computational budget. When the computational budget is only 1 SNT, all of our methods must select the same model: a traditional multi-task network with a 1 SNT encoder and five decoders. This strategy outperforms the loss-reweighting baselines and individual training. However, solutions that utilize multiple networks outperform this traditional strategy for every budget > 1.5: better performance can always be achieved by grouping tasks according to their compatibility. When the computational budget is effectively unlimited (5 SNT), our optimal method picks five networks, each of which is used to make predictions for a separate task. However, three of the networks are trained with three tasks each, while only two are trained with one task each. This shows that the representations learned through multi-task learning were found to be best for three of our tasks (s, d, and e), whereas two of our tasks (n and k) are best solved individually. We also see that our optimal technique using 2.5 SNT and our Higher Order Approximation using 3.5 SNT can both outperform five individual networks (which use 5 SNT). Table 1 (total loss): All-in-one (triple-size resnet18) 0.50925; Five Individual (resnet18s, 0.6-size each) 0.53484; {nKE, SDn, N} (3 standard resnet18s) 0.50658. In order to determine how these task groupings generalize to other architectures, we retrained our best solution for 3 SNT using resnet18. The results in Table 1 suggest that good task groupings for one architecture are likely to be good in another, though to a lesser extent. Task affinities seem to be somewhat architecture-dependent, so for the very best results, task selection must be run for each architecture choice. Figure 4 allows a qualitative comparison between our methods and our baselines. We can see clear visual issues with each of our baselines that are not present in our methods. Both of our approximate methods produce predictions similar to those of the optimal task grouping. The data generated by the above evaluation presents an opportunity to analyze how tasks interact in a multi-task setting, and allows us to compare with some of the vast body of research in transfer learning, such as Taskonomy. [Table 4: The transfer learning affinities between pairs of tasks according to the authors of Taskonomy; forward and backward transfer affinities are averaged.] In order to determine the between-task affinity for multi-task learning, we took the average of our first-order relationships matrix (Table 2) and its transpose. The result is shown in Table 3. The pair with the highest affinity by this metric is Surface Normal Prediction and 2D Edge Detection. Our two 3D tasks, Depth Estimation and Surface Normal Prediction, do not score highly on this similarity metric. This contrasts with the findings for transfer learning in Taskonomy (Table 4), in which they have the highest affinity. Our two 2D tasks also do not score highly.
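The multi-task affinity computation described here is just a symmetrization of the first-order relationship matrix; a short NumPy sketch (ours) is:

```python
import numpy as np

def multitask_affinity(first_order):
    # first_order[i, j]: performance of task i when trained together with task j.
    # Averaging the matrix with its transpose gives a symmetric pairwise affinity.
    return (first_order + first_order.T) / 2.0
```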
We speculate that the Normals task naturally preserves edges, while Depth and Normals (for example) don't add much training signal to each other. See Section A.3 for more on the factors that influence multi-task affinity. Figure 5 depicts the relationship between transfer learning affinities and multi-task affinities, which surprisingly seem to be negatively correlated in our high-data scenario. This suggests that it might be better to train dissimilar tasks together. This could be because dissimilar tasks are able to provide stronger and more meaningful regularization. More research is necessary to discover when and if this correlation and explanation hold. We describe the problem of task compatibility as it pertains to multi-task learning. We provide an algorithm and computational framework for determining which tasks should be trained jointly and which tasks should be trained separately. Our solution can take advantage of situations in which joint training is beneficial to some tasks but not others in the same group. For many use cases, this framework is sufficient, but it can be costly at training time. Hence, we offer two strategies for coping with this issue and evaluate their performance. Our methods outperform single-task networks, a multi-task network with all tasks trained jointly, as well as other baselines. Finally, we use this opportunity to analyze how particular tasks interact in a multi-task setting and compare that with previous results on transfer learning task interactions. A.1 NETWORK SELECTION ALGORITHM.
Input: C_r, a running set of candidate networks, each with an associated cost c ∈ R and a performance score for each task the network solves; initially C_r = C_0.
Input: S_r ⊆ C_0, a running solution, initially Ø.
Input: b_r ∈ R, the remaining time budget, initially b.
 function GETBESTNETWORKS(C_r, S_r, b_r)
  FILTER(C_r, S_r, b_r); sort C_r      // most promising networks first
 4:  Best ← S_r
     for n ∈ C_r do
 6:    C_r ← C_r \ n                   // \ is set subtraction
       S_i ← S_r ∪ {n};  b_i ← b_r − c_n
 8:    Child ← GETBESTNETWORKS(C_r, S_i, b_i)
       Best ← BETTER(Best, Child)
     end for
     return Best
12: function FILTER(C_r, S_r, b_r)
     remove networks from C_r with c_n > b_r
14:  remove networks from C_r that cannot improve S_r's performance on any task
Algorithm 1 chooses the best subset of networks in our collection, subject to the inference-time budget constraint. The algorithm recursively explores the space of solutions and prunes branches that cannot lead to optimal solutions. The recursion terminates when the budget is exhausted, at which point C_r becomes empty and the loop body does not execute. The sorting step on line 3 requires a heuristic upon which to sort. We found that ranking models based on how much they improve the current solution, S, works well. It should be noted that this algorithm always produces an optimal solution, regardless of which sorting heuristic is used. However, better sorting heuristics reduce the running time because subsequent iterations will more readily detect and prune portions of the search space that cannot contain an optimal solution. In our setup, we tried variants of problems with 5 tasks and 36 networks, and all of them took less than a second to solve. The definition of the BETTER function is application-specific. For our experiments, we prefer solutions that have the lowest total loss across all five tasks. Other applications may have hard performance requirements for some of the tasks, where performance on one of these tasks cannot be sacrificed in order to achieve better performance on another task. Such application-specific constraints can be encoded in BETTER.
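For readers who prefer running code, here is a compact Python rendering of the same search. It is our reconstruction from the description above, not the authors' implementation: the budget bookkeeping, the pruning rule, and the "lowest total loss" version of BETTER are spelled out as assumptions.

```python
def get_best_networks(candidates, solution, budget, tasks):
    # candidates: list of {"cost": float, "loss": {task: loss}}; solution: list of the same.
    def total_loss(sol):
        if not sol:
            return float("inf")
        return sum(min(n["loss"].get(t, float("inf")) for n in sol) for t in tasks)

    def improves(n, sol):
        # Keep only networks that could lower the loss on at least one task.
        return any(n["loss"].get(t, float("inf")) <
                   min((m["loss"].get(t, float("inf")) for m in sol), default=float("inf"))
                   for t in tasks)

    # FILTER: drop candidates that are too expensive or cannot help the running solution.
    candidates = [n for n in candidates if n["cost"] <= budget and improves(n, solution)]
    # Sort: most promising (largest immediate improvement) first.
    candidates.sort(key=lambda n: total_loss(solution + [n]))

    best = solution
    remaining = list(candidates)
    for n in candidates:
        remaining.remove(n)                      # C_r <- C_r \ n
        child = get_best_networks(remaining, solution + [n], budget - n["cost"], tasks)
        if total_loss(child) < total_loss(best): # BETTER = lower total loss
            best = child
    return best

# Example call: get_best_networks(all_candidates, [], budget=2.5, tasks=["s","d","n","k","e"])
```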
In order to determine how well network selection works for different task sets, we re-ran network selection on all five 4-task subsets of our task set. The performance average of all 5 subsets is shown in Figure 6. We see that our techniques generalize at least to subsets of our studied tasks. The finding that Depth and Normals don't cooperate runs counter to much of the multi-task learning literature. However, the majority of those works use training sets with fewer than 100k instances, while we use nearly 4 million training instances. Table 5 shows the loss obtained on our setup when we limit training to only 100k instances. The fact that task affinities can change depending on the amount of available training data demonstrates the necessity of using an empirical approach like ours for finding task affinities and groupings. Table 9: The test set performance of our 31 networks on each task that they solve. | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJlTpCEKvS | We analyze what tasks are best learned together in one network, and which are best to learn separately. |
Search engine has become a fundamental component in various web and mobile applications. Retrieving relevant documents from the massive datasets is challenging for a search engine system, especially when faced with verbose or tail queries. In this paper, we explore a vector space search framework for document retrieval. Specifically, we trained a deep semantic matching model so that each query and document can be encoded as a low dimensional embedding. Our model was trained based on BERT architecture. We deployed a fast k-nearest-neighbor index service for online serving. Both offline and online metrics demonstrate that our method improved retrieval performance and search quality considerably, particularly for tail queries Search engine has been widely applied in plenty of areas on the internet, which receives a query provided by users and returns a list of relevant documents within sub-seconds, helping users obtain their desired information instantaneously. Numerous technologies have been developed and utilized in real-world search engine systems. However, the existing semantic gap between search queries and documents, makes it challenging to retrieve the most relevant documents from tens of millions of documents. Therefore, there is still a large proportion of search requests that can not be satisfied perfectly, especially for long tail queries. A search engine system is usually composed of three main modules, -query understanding module -retrieval module -ranking module The query understanding module first parses the original query string into a structured query object BID32. More specifically, the query understanding module includes several subtasks, such as word segmentation, query correction, term importance analyze, query expansion, and query rewrite, etc. After the query string was parsed, an index module accepts the parsed query, and then retrieve the candidate documents. We call this stage the retrieval stage or the first round stage. Most web-scale search engine systems use the term inverted index for document retrieval, where term is the most basic unit in the whole retrieval procedure. In the first round stage, the retrieved documents are ranked by a simple relevance model, eg TF-IDF, BM25, and the top-N documents with the highest score are submitted to the next stage for ranking. Finally, the documents scored largest by a ranking function are returned to users eventually. For a search system described above, the final retrieval performance is highly enslaved by these query understanding module. Take word segmentation as an example: this task segments raw continuous query string into a list of segmented terms. Since the word segmentation algorithm has the risk of wrong segmentation. If the error segmented term does not appear in the document space, then no document could be retrieved in the first round stage, and it will return a page without any document which damages the user's experience seriously. There is a lot of work focused on better understanding queries to retrieve more relevant documents. However, since the final performance is influenced by all parts of the query understanding module. Attempts to optimize only one part is usually hard to contribute to a significant enhancement. To avoid the problems mentioned above, we propose a novel complementary retrieval sys-tem that retrieves documents without the traditional term-based retrieval framework. 
That is, instead of parse raw query into a structured query, we directly map both queries and documents into a low dimension of embedding. Then in the online serving, the k-nearest-neighbor documents of the given query in the latent embedding space are searched for retrieval. Recently, we have witnessed tremendous successful applications of deep learning techniques in information retrieval circle, like query document relevance matching BID14 BID34 BID33, query rewriting BID13, and search ranking BID12 BID10. However, it is still hard to directly retrieve relevant documents using an end2end fashion based on knearest-neighbor search in latent space, especially for long tail queries. The latest far-reaching advancement in natural language processing with deep learning, BERT BID8, provides a turning point to make end2end retrieval realizable. In this paper, we present a document retrieval framework as a supplement to the traditional inverted index based retrieval system. We design a new architecture to retrieve documents without a traditional term-based query understanding pipeline, which avoids performance decay by each subtask of query understanding. We use BERT architecture as the general encoder of query and document strings, then we fine-tuned the pre-trained BERT model with human annotated data and negative sampling technique. Finally, we conduct both offline and online experiments to verify our proposed method. To sum up, our main contributions are described below:1. We design a novel end2end document retrieval framework ,which is a supplement to traditional term-based methods.2. Our model is trained on transformer architecture, and a series of training techniques are developed for performance enhancement.3. The proposed techniques can not only be used in document retrieval but also have a significant improvement for search ranking. The rest of the paper is organized as follows. We concisely review the related work in Section 2. Sections 3 mainly describes our proposed methods. Offline and online experiments are detailed given in Section 4 and Section 5 respectively. Finally, we conclude and discuss future work in Section 6. There is a variety of work on search query understanding BID29, including query correction BID5, query term weighting BID39, query expansion and query reformulation BID3. In general, these kinds of methods coherently rewrite the raw query into a new query, by replacing, adding, or removing terms or phrases in the raw query. The rewritten query gets better expression and therefore can retrieve more relevant documents than the original one. Besides the inverted index, vector search engines BID9 have also been widely applied in many information seeking tasks, like image search BID16 and recommendation system BID6.To retrieve documents using a vector search, we need to map a piece of text into a low-dimensional numerical vector. Various embedding techniques have been developed and proven to have the powerful capability of capturing the semantic meaning of natural language text BID23, BID27 BID22. However, these kinds of models are still not capable of complicating text encoding, especially for long tail text queries. More recently, researchers have been describing the various architecture of neural models BID24. In text relevance matching area, we can divide most models into two typical categories, namely representation BID14 based models and interaction based models BID26 BID35 BID7. 
The representation models, like DSSM, are trained to obtain high-level representations of query and document respectively, then use vector distance between the query and document embedding for text relevance score. While the interaction based models first compute the term correlation matrix between query and doc-uments and calculate semantic matching similarity based on the correlation matrix. Both representation models and interaction based models could be trained from massive click feedback data BID18 BID0 or industrial annotation. These two kinds of model architecture are broadly deployed in realworld search engine systems, especially in ranking phase. For the representational models, once we obtained the high-level representation of raw texts, we can retrieve documents through the knearest-neighbor space search. However, the performance of representation based models are usually poorer than interaction based models, which makes k-nearest-neighbor retrieval hard to deploy in the real-world systems, since too many irrelevant documents retrieved may even damage overall performance. BID38 developed an architecture to transform the text into a sparse representation, while they still retrieve documents using a term-based index like lucene 1 because the nonzero value in the sparse representations is treated as virtual terms. BID2 BID11 ) developed a uniform query and document embedding framework by generating ngram embedding using user session and click data, and then generalize it to arbitrary text by mean average pooling of ngram embedding. Since ngram is a common and effective skill in a variety of NLP tasks, training a good ngram representation requires a massive of datasets, which may be a bottleneck for many researchers and companies. Meanwhile, the model capacity of DSSM and its' variations makes it not capable to capture complex semantic meanings of natural language. Recently, ELMo, GPT-2 BID31 and BERT BID8 show the great power of unsupervised pretraining in NLP tasks. The BERT model is built on a 12 layer transformer architecture, pre-trained with large scale text data. The pre-trained models can be fine-tuned easily and outperform many state-of-art models in various NLP tasks. We used the pre-trained BERT-Base(Chinese) 2 model released by Google and fine-tuned the model for semantic representation. Our fine-tuned model outperformed many state-of-art models in deep relevance matching, and obtain a great success in se-1 https://lucene.apache.org/ 2 https://github.com/google-research/bert In this section, we first illustrate our proposed semantic retrieval framework, which is composed of both offline and online parts respectively. Then, we introduce the model structure used for encoding queries and titles, and the techniques we used to boost the performance. Figure 1 shows our proposed system architecture. The offline module includes model training, document embedding inference, and semantic index builder. While in online serving, both query's semantic embedding and traditional term base query parser are computed, and then those two are sent to semantic index service and inverted index service respectively for document retrieval. Finally, documents retrieved from both index services are merged and sent to ranking service for document scoring. The pre-trained BERT model can be leveraged for semantic ranking and matching BID30 in various ways. We developed two models here: BERT(rep) and BERT(rel). 
The BERT(rep) model uses the pre-trained BERT model to obtain embeddings of the query and the document respectively, while the BERT(rel) model concatenates query and document first and obtains a single representation for the query-document pair. The BERT(rep) score of a query-document pair is computed as score(q, d) = ⟨ (1/L) Σ_{i=1}^{L} E_q^{(i)}, (1/L) Σ_{j=1}^{L} E_d^{(j)} ⟩ (1), where E^{(i)} is the last-layer embedding of the i-th token and L represents the max sequence length; that is, we use the mean of the last layer as the encoder output for each of query and document, and compute the dot product of the two embeddings as the matching score. We also tried directly using the last layer's [CLS] embedding, but it performed worse than the average pooling described in equation 1. The BERT(rel) score is score(q, d) = w · E_{[CLS]}(q ⊕ d) (2), i.e., equation 2 takes the last layer's [CLS] embedding of the concatenated pair and reduces it to a scalar with a vector w, where w is a fully connected layer with a single output. The model capacity of this method is more powerful than that of BERT(rep), because it computes the term interactions between the query and the document in the self-attention layers. However, since the BERT(rel) model is an interaction-based model, it cannot be applied to semantic retrieval. Both models are trained in a supervised fashion, with a pairwise max-margin hinge loss to distinguish relatively positive and negative samples. The loss function for one query is L_q = Σ_{(i,j): y_i > y_j} max(0, τ − (p_i − p_j)) (3), where p_i and p_j are the model scores computed for each <query, document> pair, y_i and y_j are the labels of the respective documents, and τ is a hyperparameter, called the margin, that determines how far the model needs to push a pair away from each other. The margin parameter is tuned for the best performance here. We use an additive data sampling technique to further enhance model performance. Therefore, the data we used to train our model is comprised of two parts: human-annotated data and negatively sampled data. Negative sampling has been successfully applied in many tasks, such as neural language modelling BID23, e-commerce list embedding BID10, and graph embedding BID37. Sampling negative training instances is also useful for model training in this scenario, since, unlike a traditional term-based retrieval method, vector space search is much more likely to retrieve irrelevant documents; thus we propose to augment training with more irrelevant documents. When the negative samples are added to training, the model learns to push relevant and irrelevant documents away from each other, and it becomes more robust to noisy documents. A straightforward way of negative sample mining is to select negative samples from a uniform distribution over the whole corpus, in particular over irrelevant documents. However, this simple strategy fails to generate hard negative samples, which provide more important information for the model. Therefore, we propose another negative sampling method. First, we train a baseline model with only human-annotated data. Then we use this model to encode documents and queries. After that, we use an unsupervised clustering algorithm to assign each document and query a cluster id. Finally, we uniformly sample negative documents from the cluster to which the query was assigned. For convenience, we call this kind of negative sampling NEG_cluster, and the globally sampled data NEG_global.
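A small PyTorch/transformers-style sketch of the BERT(rep) scoring and the pairwise hinge loss follows. The mean pooling, dot-product score, and margin loss mirror equations 1 and 3 above; the model/tokenizer names, the padding mask (a common refinement of the plain mean over L tokens), and the margin value are our placeholders, not the authors' configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = encoder(**batch).last_hidden_state              # (B, L, H), last layer
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)               # mean over tokens, cf. eq. (1)

def pairwise_hinge_loss(query_emb, pos_emb, neg_emb, margin=1.0):
    pos_score = (query_emb * pos_emb).sum(-1)              # dot-product scores
    neg_score = (query_emb * neg_emb).sum(-1)
    return torch.clamp(margin - (pos_score - neg_score), min=0).mean()   # cf. eq. (3)
```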
We append NEG_cluster and NEG_global to the raw dataset for each query and fine-tune the model again to obtain our final model. We show the whole training procedure in Algorithm 1.
Algorithm 1: Training framework of our proposed model.
Require: human-annotated data D, BERT pre-trained model M
 1: M1 ← fine-tune M on D
 2: compute embeddings E for queries and docs using M1
 3: compute cluster centroids C from E
 4: for all d ∈ Docs do
 5:   compute the closest centroid C_d for d
 6: end for
 7: for all q ∈ Query do
 8:   compute the closest centroid C_q for q
 9:   uniformly sample NEG_global from the whole doc set
10:   uniformly sample NEG_cluster among docs d with C_d = C_q
11:   D1(q) = D(q) ∪ NEG_global ∪ NEG_cluster
12: end for
13: M2 ← fine-tune M on D1
Ensure: BERT model M2
The meaning of an NEG_cluster sample is much closer to the query than that of an NEG_global sample, which makes the model more robust on hard samples. Once the model is trained, we need to serve it on the fly. We first compute the embeddings of all documents and build a vector index using faiss (https://github.com/facebookresearch/faiss) BID19, which was open-sourced by Facebook and supports k-nearest-neighbor search over vector data in milliseconds. We developed a C++-based semantic index server to provide efficient concurrent online service. Our model is inferenced on a GPU server, and inference speed is accelerated to 2 times faster than tf-serving through a C++-based library developed by us. During online serving, when a query is received, the GPU server first infers the query embedding and then sends it downstream to the semantic index service for document retrieval. To balance efficiency and effectiveness, we retrieve the k most similar documents in the semantic service for next-stage ranking, where k is set to 20 here. In this section, we carry out offline experiments to illustrate the performance of our proposed semantic retrieval methods. In the experiments, we train the model for 1 epoch, using Adam with a learning rate of 10^-5, β1 = 0.9, β2 = 0.999. The data annotated by human editors is a list of triplets <query, doc, relevance>. The relevance score has three grades, 0, 1, and 2, which represent bad, fair, and excellent respectively. The dataset contains 36,159 queries and 1,181,229 query-doc pairs. Besides the dataset for training, we additionally annotated a small dataset for testing; the test dataset contains 2,703 queries and 84,244 query-doc pairs. A summary of the datasets is shown in Table 2. We evaluate our proposed model from both ranking and retrieval aspects. We compare ranking performance using Normalized Discounted Cumulative Gain (NDCG) and retrieval performance with Recall. The way these metrics are calculated is introduced in Sections 4.4 and 4.5 respectively. The baselines are: • ClickSim: a relevance matching model BID17 which uses web-scale click data to generate term representations for query and document, and uses cosine similarity to represent query-document relevance. • K-NRM: an interaction-based matching model using kernel pooling BID35. • Match Pyramid: an interaction-based matching model using convolutions on the term matching matrix BID25. • DSSM: a representation-based model proposed by Microsoft Research BID14; the version compared here uses word vectors pretrained on the document title corpus, with three fully connected layers of 300, 300, and 128 dimensions for text encoding. We use the Recall metric to evaluate the model's retrieval performance here.
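As an illustration of the serving side, the following sketch builds a faiss index over document embeddings and retrieves the top-k nearest documents for a query embedding. The index type, the embedding file, and the dimensionality are our assumptions for the example; the production system described above wraps faiss in a custom C++ service.

```python
import numpy as np
import faiss

dim = 768                                                   # BERT-Base hidden size
doc_emb = np.load("doc_embeddings.npy").astype("float32")   # (num_docs, dim), hypothetical file

index = faiss.IndexFlatIP(dim)        # inner-product search, matching the dot-product score
index.add(doc_emb)

def retrieve(query_emb, k=20):
    query_emb = query_emb.astype("float32").reshape(1, dim)
    scores, doc_ids = index.search(query_emb, k)
    return doc_ids[0], scores[0]
```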
This metric measures how many relevant documents are retrieved by a given model. For a given query q, the recall rate is calculated as Recall_q = |Ret_q ∩ Rel_q| / |Rel_q| (4), where Ret_q represents the retrieved documents for q and Rel_q stands for all the relevant documents for query q; relevant documents are defined here as documents whose annotated relevance is larger than 0. To evaluate recall performance offline, we first built a semantic index for both our model and the baseline model. We computed the representation of each document title under each model and used the representation embeddings to build the semantic index. Once the query embeddings of each model were computed, we retrieved the top k documents by k-nearest-neighbor search. Besides comparing the recall of the different models using only the semantic index, we compared the recall enhancement when the semantic index was added to the lexical inverted index. We used a commercial term-based inverted index engine developed by us to build a lexical index. Both the lexical inverted index and the semantic index were used to retrieve documents, with top 300 and top 20 respectively, and we then calculated the recall of the union set. In the experiment, since the document set of the testset is small, we need a larger document corpus to measure recall more accurately. Therefore, both the semantic index and the lexical index were built with all human-annotated data, including trainset and testset, and the recall metric was calculated using only queries in the testset. TAB3 shows the results of the different models: BERT(rep) outperforms the baseline model DSSM significantly in the recall measure, and after adding our model as a supplement to the lexical index, the recall rate is improved from 54.9% to 69.4%, while the baseline DSSM performs poorly on this task. NDCG score: since our proposed model can be applied not only in document retrieval but also in the ranking stage, we measured the model's ranking quality through Normalized Discounted Cumulative Gain (NDCG). For a ranked document list, the NDCG for a query is calculated as NDCG_n = DCG_n / IDCG_n (5), where IDCG_n represents the DCG score when the list is perfectly ranked by relevance. We compute the following variation of Discounted Cumulative Gain (DCG) BID15: DCG_n = Σ_{i=1}^{n} (2^{rel_i} − 1) / log_2(i + 1) (6). According to equation 6, higher relevance labels contribute higher weight in the computation. We calculate NDCG with rank list sizes of {1, 3, 5} respectively. Table 4 shows that our model is superior to state-of-the-art deep relevance matching models; the BERT(rep) model is slightly worse than the BERT(rel) model, since BERT(rel) uses self-attention between the query and title tokens before aggregating the final score. However, both the BERT(rep) and BERT(rel) models outperform the other baselines significantly. We feed the dot product of the query and document embeddings into a GBDT ranking model BID4 as a relevance feature, and observe the feature importance after the tree model is trained. The feature importance is computed from statistics collected during the tree-ensemble training procedure. TAB5 shows that, without adding the BERT(rel) feature, the BERT(rep) feature ranks first in the ranking function and accounts for 34% of the importance among all features in our ranking function. In Section 3.2.1, we described two negative sample generation methods, NEG_global and NEG_cluster, for training data enhancement. We tuned the negative sample sizes and obtained the best performance with 10 NEG_global and 10 NEG_cluster samples respectively.
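Both evaluation metrics follow directly from the definitions above; the sketch below is ours, and the exponential-gain DCG matches equation 6 as reconstructed here.

```python
import math

def recall(retrieved, relevant):
    # retrieved, relevant: sets of document ids; relevant = docs with label > 0
    return len(retrieved & relevant) / max(len(relevant), 1)

def ndcg(labels_in_rank_order, n):
    # labels_in_rank_order: relevance labels (0/1/2) of the ranked list, best-first
    def dcg(labels):
        return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(labels[:n]))
    ideal = dcg(sorted(labels_in_rank_order, reverse=True))
    return dcg(labels_in_rank_order) / ideal if ideal > 0 else 0.0
```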
After adding negative samples, the average negative sample size for a given query increased from 19.9 to 39.9. TAB6 shows the model performance with different kinds of negative samples. Only adding N EG global can improve NDCG@3 at about 0.5%, when adding N EG cluster, the NDCG@3 is further improved by 0.8%. Therefore, the overall measurements are enhanced by 1.4% after additive sampling. pooling layer used in BERT model In this paper, we use the reduce-mean of the last layer as BERT(rep) model's pooled output. Different layers of BERT may own different aspects of knowledge about the input sequence. To verify the effectiveness of different layers, we trained different models, with pooled output from different layers respectively. From FIG1, the red solid line shows that the layer closest to last obtains higher NDCG measure. This is reasonable since higher layers make the model contains more parameters. Besides comparing the of different layers, we also developed a method aggregate the embedding of all layers. In this method, an attention layer calculates the weight across different layers, therefore a weighted sum of each layer's embedding on each position is the final representation of each term. After that, we used reduce-mean of all terms' embedding as the final pooled output. The of aggregation is shown as the green dot line, which does not outperform simple average pooling on the last layer. Meanwhile, we also tried using [CLS] term's embedding of the last layer as pooled output, but it behaved even worse. In , using mean average pooling of the last layer as final pooled output performs best in this scenario, even though some work claims aggregating layers is useful BID21.5 Online Evaluation and Case Study After offline evaluations, we conduct an online a/b test to further verify our proposed system. In the online experiment procedure, 40 percent of online traffic were randomly distributed to four groups, 2 control groups, and 2 experimental groups. The metric we used to evaluate is the Clicked Search Rate(CSR), which is computed as: DISPLAYFORM0 After a week's observation, as shown in TAB7, the overall CSR of two experimental groups both surpass two control groups by 0.65%, which is relatively a huge improvement to our experience. We also examined the online performance for queries with different frequency. We split queries into Top, Torso, and Tail by query search times in a day. Since our proposed method mainly focuses on boosting the performance of long tail queries, we can see the CSR metric is not significant in the Top and Torso query part. But the metric increased by nearly 1.05 % in the Tail part, which contributed to the largest algorithm iteration in the first half of 2019. This section highlights some good cases after our system was deployed online. We show the final ranked at top 6 for query "送外卖不认识路" (do not know the way to deliver food) at TAB8, where SEMANTIC represents the document retrieved from the proposed semantic index, and LEXICAL for traditional termbased inverted index. In this case, three documents are retrieved from semantic index, and the relevance is also much better than the document from traditional inverted index. Notice that there are many ways to express "不 认 识 路"(do not know the way) in Chinese, while the semantic index retrieved documents indeed capture the several alternatives of expressing it: "不知道路线", "不认路", "不懂路". And the term retrieved document only contains the same term "不认识路" as query expressed. 
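The pooled output that performed best above, the reduce-mean of the last BERT layer, can be sketched as a masked mean over the final hidden states. The checkpoint name and the masking details below are illustrative assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
model = AutoModel.from_pretrained("bert-base-chinese")

def encode(texts):
    """Masked mean pooling over the last hidden layer, i.e. the reduce-mean pooled
    output; padding positions are excluded from the mean."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)                   # sum over real tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)              # number of real tokens
    return summed / counts                                # (B, H) embeddings

emb = encode(["送外卖不认识路", "不知道路线怎么送餐"])
print(emb.shape)  # torch.Size([2, 768])
```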
In this paper, we present an architecture for semantic document retrieval. In this architecture, we first train a deep representation model for query and document embedding, and then build our semantic index with a fast k-nearest-neighbor vector search engine. Both offline and online experiments show that retrieval performance is greatly enhanced by our method. For future work, we would like to explore a more general framework that incorporates additional signals for semantic retrieval, such as document quality features, recency features, and other text encoding models. | [0, 0, 1, 0, 0, …, 0] (181 binary source_labels; the single 1 is at index 2) | H1xvMrHqAE | A deep semantic framework for textual search engine document retrieval
Semi-Supervised Learning (SSL) approaches have been an influential framework for making use of unlabeled data when there is not a sufficient amount of labeled data available over the course of training. SSL methods based on Convolutional Neural Networks (CNNs) have recently provided successful results on standard benchmark tasks such as image classification. In this work, we consider the general SSL setting where the labeled and unlabeled data come from the same underlying probability distribution. We propose a new approach that adopts an Optimal Transport (OT) technique serving as a metric of similarity between discrete empirical probability measures to provide pseudo-labels for the unlabeled data, which can then be used in conjunction with the initial labeled data to train the CNN model in an SSL manner. We have evaluated and compared our proposed method with state-of-the-art SSL algorithms on standard datasets to demonstrate the superiority and effectiveness of our SSL algorithm. Recent developments in CNNs have provided promising results for many applications in machine learning and computer vision. However, the success of CNN models requires a vast amount of well-annotated training data, which is not always feasible to obtain manually. There are essentially two different solutions that are usually used to deal with this problem: 1) Transfer Learning (TL) and 2) Semi-Supervised Learning (SSL). In TL methods, the learning of a new task is improved by transferring knowledge from a related task which has already been learned. SSL methods, however, tend to learn discriminative models that can make use of the information from an input distribution that is given by a large amount of unlabeled data. To make use of unlabeled data, it is presumed that the underlying distribution of the data has some structure. SSL algorithms make use of at least one of the following structural assumptions: continuity, cluster, or manifold. In the continuity assumption, data points which are close to each other are more likely to belong to the same class. In the cluster assumption, the data tend to form discrete clusters, and points in the same cluster are more likely to share the same label. In the manifold assumption, the data lie approximately on a manifold of much lower dimension than the input space, and can be classified by using distances and densities defined on the manifold. Thus, to define a natural similarity distance or divergence between probability measures on a manifold, it is important to consider the geometrical structure of the metric space in which the manifold exists. There are two principal directions that model the geometrical structures underlying the manifold on which the discrete probability measures lie. The first direction is based on the principle of invariance, which relies on the criterion that the geometry between probability measures should be invariant under invertible transformations of random variables. This perspective is the foundation of the theory of information geometry, which serves as a basis for statistical inference. The second direction is established by the theory of Optimal Transport (OT), which exploits prior geometric knowledge of the base space in which the random variables are valued. Computing the OT or Wasserstein distance between two random variables amounts to finding a coupling between these two variables that is optimal in the sense that the expectation of the transportation cost between the first and second variables is minimal.
The Wasserstein distance between two probability measures considers the metric properties of the base space on which a structure or a pattern is defined. However, traditional information-theoretic divergences such as the Hellinger divergence and the Kullback-Leibler (KL) divergence are not able to properly capture the geometry of the base space. Thus, the Wasserstein distance is useful for the applications where the structure or geometry of the base space plays a significant role. In this work, similar to other SSL methods, we make a structural assumption about the data in which the data are represented by a CNN model. Inspired by the Wasserstein distance, which exploits properly the geometry of the base space to provide a natural notion of similarity between the discrete empirical measures, we use it to provide pseudo-labels for the unlabeled data to train a CNN model in an SSL fashion. Specifically, in our SSL method, labeled data belonging to each class is a discrete measure. Thus, all the labeled data create a measure of measures and similarly, the pool of unlabeled data is also a measure of measures constructed by data belonging to different classes. Thus, we design a measure of measures OT plan serving as a similarity metric between discrete empirical measures to map the unlabeled measures to the labeled measures based on which, the pseudo-labels for the unlabeled data are inferred. Our SSL method is based on the role of Wasserstein distances in the hierarchical modeling. It stems from the fact that the labeled and unlabeled datasets hierarchically create a measure of measures in which each measure is constructed by the data belonging to the same class. Computing the exact Wasserstein distance, however, is computationally expensive and usually is solved by a linear program (Appendix A and D). introduced an interesting method which relaxes the OT problem using the entropy of the solution as a strong convex regularizer. The entropic regularization provides two main advantageous: 1) The regularized OT problem relies on Sinkhorns algorithm that is faster by several orders of magnitude than the exact solution of the linear program. 2) In contrast to exact OT, the regularized OT is a differentiable function of their inputs, even when the OT problem is used for discrete measures. These advantages have caused that the regularized OT to receive a lot of attention in machine learning applications such as generating data; , designing loss function , domain adaptation; , clustering; and low-rank approximation. Pseudo-Labeling is a simple approach whereby a model incorporates it's own predictions on unlabeled data to obtain additional information during the training;;. The main downside of these methods is that they are unable to correct their own mistakes where predictions of the model on unlabeled data are confident but incorrect. In such a case, the erroneous data not only can not contribute to the training, but the error of the models is amplified during the training as well. This effect is aggravated where the domain of the unlabeled data is different from that of labeled data. Note that pseudo-labeling in is similar to entropy regularization , in the sense that it forces the model to provide higher confidence predictions for unlabeled data. However, it differs because it only forces these criteria on data which have a low entropy prediction due to the threshold of confidence. 
Consistency Regularization can be considered as a way of using unlabeled data to explore a smooth manifold on which all of the data points are embedded. This simple criterion has provided a set of methods that are currently considered as state of the art for the SSL challenge. Some of these methods are stochastic perturbations Sajjadi et al. (2016b), π-model , mean teacher , and Virtual Adversarial Training (VAT). The original idea behind stochastic perturbations and π-model was first introduced in and has been referred to as pseudo-ensembles. The pseudo-ensembles regularization techniques are usually designed such that the prediction of the model ideally should not change significantly if the data given to the model is perturbed; in other words, under realistic perturbations of a data point x (x → x), output of the model f θ (x) should not change significantly. This goal is achieved by adding a weighted loss term such as d(f θ (x), f θ (x)) to the total loss of the model f θ (x), where d(., .) is mean squared error or Kullback-Leibler divergence which measures a distance between outputs of the prediction function. The main problem of pseudo-ensemble methods, including π-model is that they rely on a potentially unstable target prediction, which can immediately change during the training. To address this problem, two methods, including temporal ensembling and mean teacher , were proposed to obtain a more stable target output f θ (x). Specifically, temporal ensembling uses an exponentially accumulated average of outputs, f θ (x), to make the target output smooth and consistent. Inspired by this method, mean teacher instead uses a prediction function which is parametrized by an exponentially accumulated average of θ during the training. Like the π-model, mean teacher adds a mean squared error loss d(f θ (x), f θ (x)) as a regularization term to the total loss function for training the network. It has been shown that mean teacher outperforms temporal ensembling in practice. Contrary to stochastic perturbation methods which rely on constructing f θ (x) stochastically, VAT in the first step approximates a small perturbation r to add it to x which significantly changes the prediction of the model f θ (x). In the next step, a consistency regularization technique is applied to minimize d(f θ (x), f θ (x + r)) with respect to θ which is the parameters of the model. Entropy Minimization methods use a loss term which is applied on the unlabeled data to force the model f θ (x) to produce confident predictions (i.e., low-entropy) for all of the samples, regardless of what the actual labels are. For example, by assuming the softmax layer of a CNN has c outputs, the loss term applied on unlabeled data is as follows: Ideally, this class of methods penalizes the decision boundary that passes near the data points, while they instead force the model to provide a high-confidence prediction. It has been shown that entropy minimization on its own, can not produce competitive Sajjadi et al. (2016a). However, entropy minimization can be used in conjunction with VAT (i.e., EntMin VAT) to provide state of the art in which VAT assumes a fixed virtual label prediction in the regularization d(f θ (x), f θ (x + r)). For any subset θ ⊂ R c, assume that S(θ) represents the space of Borel probability measures on θ. The Wasserstein space of order k ∈ [1, ∞) of probability measures on θ is defined as follows: where, ||.|| is the Euclidean distance in R c. 
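For reference, the entropy-minimization term mentioned above (whose equation is elided in the text) is, in its standard form, the mean prediction entropy over the c softmax outputs on unlabeled data; a minimal PyTorch sketch, assuming that standard form:

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits_unlabeled: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy -sum_j p_j log p_j over the c softmax outputs,
    averaged over the unlabeled mini-batch (the standard entropy penalty)."""
    p = F.softmax(logits_unlabeled, dim=1)
    log_p = F.log_softmax(logits_unlabeled, dim=1)
    return -(p * log_p).sum(dim=1).mean()

logits = torch.randn(8, 10)               # 8 unlabeled samples, c = 10 classes
print(entropy_minimization_loss(logits))  # scalar; small when predictions are confident
```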
Let Π(P, Q) denote the set of all probability measures on θ × θ which have marginals P and Q; then the k-th Wasserstein distance between P and Q in S k (θ), is defined as follows: where x ∼ P, x ∼ Q and k ≥ 1. Explicitly, W k (P, Q) is the optimal cost of moving mass from P to Q, where the cost of moving mass is proportional to the Euclidean distance raised to the power k. In Eq., the Wasserstein between two probability measures was defined. However, using a recursion of concepts, we can talk about measure of measures in which a cloud of measures (M) is transported to another cloud of measures (M). We define a relevant distance metric on this abstract space as follows: let the space of Borel measures on S k (θ) be represented by S k (S k (θ)); this space is also a Polish, complete and separable metric space as S k (θ) is a Polish space (cf. section. 3 in). It will be endowed with a Wasserstein metric W k of order k that is induced by a metric where, Q ∼ M, P ∼ M, and Π(M, M) is the set of all probability measures on S k (θ) × S k (θ) that have marginals M and M. Note that the existence of an optimal solution, π ∈ Π(M, M), is always guaranteed (Appendix E). In words, W k (M, M) corresponds to the optimal cost of transporting mass from M to M, where the cost of moving unit mass in its space of support, S k (θ), is proportional to the power k of the Wasserstein distance W k in S k (θ). The goal of our algorithm is to use OT to provide pseudo-labels for the unlabeled data to train a CNN model in an SSL manner. The basic premise in our algorithm is that the discrepancy between two discrete empirical measures which come from the same underlying distribution is expected to be less than the case where these measures come from two different distributions. In this work, since we make a structural assumption about the data and assume that the labeled and unlabeled data belonging to the same class come from the same distribution (i.e., general setting in SSL), we leverage OT metric to map similar measures from two measure of measures. This is because OT exploits well the structure or geometry of the underlying metric space to provide a natural notion of similarity between empirical measures in the metric space. Here, labeled data belonging to the same class is a measure. Thus, all the initially labeled data construct a measure of measures and similarly, all the unlabeled data is also a measure of measures constructed by data from different classes. Thus, we design a measure of measures OT plan to map the unlabeled measures to the similar labeled measures based on which, pseudo-labels for the unlabeled data in each measure are inferred. The mapping between the labeled and unlabeled measures based on the measure of measures OT is formulated as follows: Given an image z i ∈ R m×n from the either labeled or unlabeled dataset, the CNN acts as a function f (w, z i): R m×n → R c with the parameters w that maps z i to a c-dimensional representation, where c is number of the classes. Assume that X = {x 1, ..., x m} and X = {x 1, ..., x m} are the sets of c-dimensional outputs represented by the CNN for the labeled and unlabeled images, respectively. Let P i = 1/n i ni j=1 δ xj denote a discrete measure constructed by the labeled data belonging to the i-th class, where δ xj is a Dirac unit mass on x j and n i is number of the data within the i-th class. 
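A small numerical sketch of the Wasserstein distance between two such discrete measures (for example, a labeled class measure P_i and a cloud of unlabeled CNN outputs), using uniform weights and k = 2; the POT library calls, the optional entropic relaxation, and the toy data are assumptions for illustration only.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def wasserstein2(X: np.ndarray, Y: np.ndarray, reg: float = 0.0) -> float:
    """2-Wasserstein distance between two empirical measures with uniform weights.
    X: (n, d) support of the first measure, Y: (m, d) support of the second."""
    n, m = X.shape[0], Y.shape[0]
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform masses
    M = ot.dist(X, Y)                                # pairwise squared Euclidean costs
    if reg > 0:                                      # entropic (Sinkhorn) relaxation
        cost = ot.sinkhorn2(a, b, M, reg)
    else:                                            # exact linear-program solution
        cost = ot.emd2(a, b, M)
    return float(np.sqrt(cost))

P = np.random.randn(50, 10)          # e.g. CNN outputs of labeled class-i data
Q = np.random.randn(60, 10) + 0.5    # e.g. CNN outputs of one unlabeled cluster
print(wasserstein2(P, Q))            # exact
print(wasserstein2(P, Q, reg=0.05))  # regularized
```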
Thus, all the labeled data construct a measure of measures M = c i=1 α i δ Pi, where α i = n i /m represents amount of the mass in the measure P i and δ Pi is a Dirac unit mass on the measure P i. Similarly unlabeled data construct a measure of measures M = c j=1 β j δ Qj in that each measure Q i, is created by the unlabeled data belonging to the unknown but the same class, where β j = n j /m is amount of the mass in the measure Q j and δ Qi is a Dirac unit mass on Q j. The goal of our SSL method is to use the OT to find a coupling between the measures in M and M that is optimal in the sense that it has a minimal expected transportation cost. This is because the transportation cost between two empirical measures which come from the same distribution (data from the same class) is expected to be less than the case where these measures come from two different distributions (data from different classes). Thus, we design an OT cost function defined in Eq. to obtain an optimal coupling between measures in M and M based on which the labels of data in the unlabeled measures are inferred: where T is the optimal coupling matrix in which T (i, j) indicates amount of the mass that should be moved from Q i to P j to provide an OT plan between M and M. Thus, if highest amount of the mass from Q i is transported to P k (i.e., Q i is mapped to P k); the data belonging to the measure Q i are annotated by k which is the label of the measure P k. Variable X is the pairwise similarity matrix between measures within M and M in which X(i, j) = W k (Q i, P j) which is the Wasserstein distance between two clouds of data points Q i and P j. Note that the ground metric used for computing W k (Q i, P j) is the Euclidean distance. Moreover, T, M denotes the Frobenius dot-product between T and X matrices, and T is transportation polytope defined as follows: where 1 c is a c-dimensional vector with all elements equal to one. Finally, E(T) is entropy of the optimal coupling matrix T which is used for regularizing the OT, and λ is a hyperparameter that balances between two terms in Eq.. The optimal coupling solution for the regularized OT defined in Eq. is obtained by an iterative algorithm relied on Sinkhorn algorithm (Appendix D). In Sec. 4, we represented the pool of unlabeled data as a measure of measures M = c j=1 β j δ Qj in which each measure is constructed by data that belong to the same class. However, label of the unlabeled data is unknown to allow us to identify these unlabeled measures. Moreover, CNN as a classifier trained on a limited amount of the labeled data simply miss-classifies these unlabeled data. In such a case, there is little option other than to use unsupervised methods, such as the clustering to explore the unlabeled data belonging to the same class. This is because in structural assumption based on the clustering, it is assumed that the data within the same cluster are more likely to share the same label. Here, we leverage the Wasserstein metric to explore these unknown measures underlying the unlabeled data. Specifically, we relate the clustering algorithm to the problem of exploring Wasserstein barycenter of the unlabeled data. Wasserstein barycenter was initially introduced by. Given probability measures R 1,..., R l ∈ S 2 (θ) for l ≥ 1, their Wasserstein barycenterR l,µ is defined as follows: where µ i is the weight associated with R i. 
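Putting the matching step above together in code: build the pairwise Wasserstein cost matrix X, solve the entropy-regularized OT between the unlabeled and labeled measures of measures, and pseudo-label each unlabeled measure by the class receiving the most mass (the argmax over the corresponding row of T). The POT calls, the regularization value, and the toy data below are assumptions for illustration, not the authors' code.

```python
import numpy as np
import ot

def pseudo_label_clusters(unlabeled_clusters, labeled_classes, reg=0.1):
    """Map each unlabeled cluster Q_i to a labeled class measure P_j via a
    regularized OT plan between the two measures of measures, and return the
    inferred pseudo-label (argmax of transported mass) for every cluster.
    unlabeled_clusters: list of (n_i, d) arrays; labeled_classes: list of (m_j, d) arrays."""
    c_u, c_l = len(unlabeled_clusters), len(labeled_classes)
    # Ground cost: pairwise Wasserstein distances between cluster and class measures.
    X = np.zeros((c_u, c_l))
    for i, Q in enumerate(unlabeled_clusters):
        for j, P in enumerate(labeled_classes):
            a = np.full(len(Q), 1.0 / len(Q))
            b = np.full(len(P), 1.0 / len(P))
            X[i, j] = np.sqrt(ot.emd2(a, b, ot.dist(Q, P)))
    # Masses of the two measures of measures (beta for clusters, alpha for classes).
    beta = np.array([len(Q) for Q in unlabeled_clusters], dtype=float)
    alpha = np.array([len(P) for P in labeled_classes], dtype=float)
    beta, alpha = beta / beta.sum(), alpha / alpha.sum()
    T = ot.sinkhorn(beta, alpha, X, reg)  # entropic OT coupling between the measures
    return T.argmax(axis=1)               # label of the class receiving the most mass

# Toy example: 3 unlabeled clusters matched against 3 labeled class measures.
rng = np.random.default_rng(0)
classes = [rng.normal(loc=k, size=(40, 8)) for k in range(3)]
clusters = [rng.normal(loc=k, size=(30, 8)) for k in (2, 0, 1)]
print(pseudo_label_clusters(clusters, classes))  # typically [2 0 1] for this toy data
```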
In the case where R 1,..., R l are discrete measures with finite number of elements and the weights in µ are uniform, it is shown by that the problem of exploring Wasserstein barycenterR l,µ on the space of S 2 (θ) in is recast to search only on O r (θ) denoting as a set of probability measures with at most r support points in θ, where r = l i=1 e i − l + 1 and e i is the number of elements in R i for all 1 ≤ i ≤ l. Moreover, an efficient algorithm for exploring local solutions of the Wasserstein barycenter problem over O r (θ) for some r ≥ 1 has been studied by. Beside, the popular K-means clustering can be considered as solving an optimization problem that comes up in the quantization problem, a simple but very practical connection;. The connection is as follows: Given m unlabeled data x 1,..., x m ∈ θ. Suppose that these data are related to at most k clusters where k ≥ 1 is a given number. The K-means problem finds the set Z containing at most k atoms θ 1,..., θ k ∈ θ that minimizes: is equivalent to explore a discrete measure H including finite number of support points and minimizing the following objective:. This problem can also be thought of as a Wasserstein barycenter problem when l = 1. From this prospective, as denoted by , the algorithm for finding the Wasserstein barycenters is an alternative for the popular Loyds algorithm to find local minimum of the K-means objective. Thus, we adopt the algorithm introduced in used for computing the Wasserstein barycenters of empirical probability measures to explore the clusters underlying the unlabeled data (Appendix B). Our SSL method finally leverages the unlabeled image data annotated by pseudo-labels obtained from the OT in conjunction with the supervision signals of the initial labeled image data to train the CNN classifier. Thus, we use the generic cross entropy as our discriminative loss function to train the parameters of our CNN as follows: Let X l be all of the labeled training data annotated by true labels Y, and X u be the unlabeled training data annotated by pseudo-labels Y, then the total loss function L, used to train our CNN in an SSL fashion is as follows: where w is parameters of the CNN, and L c denotes cross entropy loss function, and α is a hyperparameter that balances between two losses obtained from the labeled and unlabeled data. For training, we initially train the CNN using the labeled data as a warm up step, and then use OT to provide pseudo-labels for the unlabeled data to train the CNN in conjunction with the initial labeled data for the next epochs. Specifically, after training the CNN using the labeled data, in each epoch, we select the same amount of initial labeled data from the pool of unlabeled data and then use OT to compute their pseudo-labels; then, we train the CNN in a mini-batch mode. Our overall SSL method is described in Algorithm 2 (Appendix C). For evaluating our SSL technique and comparing it with the other SSL algorithms, we follow the concrete suggestions and criteria which are provided in. Some of these recommendations are as follows: 1) we use a common CNN architecture and training procedure to conduct a comparative analysis, because differences in CNN architecture or even implementation details can influence the . 2) We report the performance of a fully-supervised case as a baseline because the goal of SSL is to greatly outperform the fully-supervised settings. 
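Since Lloyd's algorithm is, under the quantization view above, a local solver for the l = 1 barycenter problem, a simple stand-in for the clustering step is ordinary K-means on the CNN outputs. The sketch below shows that stand-in, not the Wasserstein-barycenter algorithm of Appendix B; the feature dimensions and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_unlabeled(features: np.ndarray, c: int, seed: int = 0):
    """Group unlabeled CNN outputs into c clusters. Under the quantization view in
    the text, Lloyd's algorithm is a local solver for the l = 1 Wasserstein
    barycenter problem, so K-means is used here as a stand-in for that step."""
    km = KMeans(n_clusters=c, n_init=10, random_state=seed).fit(features)
    clusters = [features[km.labels_ == k] for k in range(c)]
    return clusters, km.cluster_centers_

feats = np.random.randn(500, 10)           # softmax-layer outputs of unlabeled images
clusters, centers = cluster_unlabeled(feats, c=10)
print([len(q) for q in clusters])           # sizes of the unlabeled measures Q_j
```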
3) We change the amount of labeled and unlabeled data when reporting the performance of our SSL algorithm because an ideal SSL method should remain efficient even with the small amount of labeled and additional unlabeled data. 4) We also perform an analysis on realistic small validation sets. This is because, in real-world applications, the large validation set is instead used as the training, therefore, an SSL algorithm which needs heavy tuning on a per-task or per-model basis to perform well would not be applicable if the validation sets are realistically small (This analysis is done in Appendix F). For the first criterion, we have used the'WRN-28-2' model (i.e., ResNet with depth 28 and width 2) , including batch normalization and leaky ReLU nonlinearities. We conducted our experiments on the widely used CIFAR-10 , and datasets. Note that in our experiments, we tackle the general SSL challenge where the labeled and unlabeled data come from the same underlying distribution, and a given unlabeled data belongs to one of the classes in the labeled set and therefor, there is no class distribution mismatch. Moreover, for each of these datasets, we split the training set into two different sets of labeled and unlabeled data. For training, we use the well-known Adam optimizer with the default hyperparameters values and a learning rate of 3 × 10 −3 in our experiments, and all the experiments have been done on a NVIDIA TITAN X GPU. The batch size in our experiments is set to 100. We have not used any form of early stopping; however, we have consistently monitored the performance of the validation set and reported test error at the point of lowest validation error. The stopping criteria for the Sinkhorn algorithm is either maxIter = 10,000 or tolerance = 10 −8, where maxIter is the maximum number of iterations and tolerance is a threshold for the integrated stopping criterion based on the marginal differences. In experiments, we followed the data augmentation and standard data normalization used in. Specifically, for SVHN, we converted pixel intensity values of the images to floating point values in the range of [-1, 1]. For the data augmentation, we only applied random translation by up to 2 pixels. We used the standard training and validation split, with 65,932 images for the training set and 7,325 for the validation set. For CIFAR-10, we applied global contrast normalization. The data augmentation on CIFAR-10 are random translation by up to 2 pixels, random horizontal flipping, and Gaussian input noise with standard deviation 0.15. We used the standard training and validation split, with 45,000 images for the training set and 5,000 images for the validation set. Here, we consider the second criterion for evaluation of our SSL method. The purpose of SSL is mainly to achieve a better performance when it uses the unlabeled data than the case where using the labeled data alone. To ensure that our SSL model benefits from the unlabeled data during the training, we report the error rate of the WRN model for both cases where we only use the labeled data (i.e., Supervised in Table. 1), and the case where we leverage the unlabeled data by using the OT technique during the training (i.e., ROT in Table. 1). Moreover, we have reported the performance of other SSL algorithms in Table. 1 which also leverage the unlabeled data during the training. 
All of the compared SSL methods use the common CNN model (i.e., 'WRN-28-2') and training procedure as suggested in the first criterion for the realistic evaluation of SSL models. The of all SSL methods reported in Table. 1 is the test error at the point of lowest validation error for tuning their hyperparameters. For a fair evaluation with other SSL algorithms, we selected 4,000 samples of the training set as the labeled data and the remaining as the unlabeled data for the CIFAR-10 dataset, and we chose 1,000 samples of the training set as the labeled data and the rest as the unlabeled data for the SVHN dataset. We ran our SSL algorithm over five times with different random splits of labeled and unlabeled sets for each dataset, and we reported the mean and standard deviation of the test error rate in Table. 1. The in Table. 1 indicates that on both CIFAR-10 and SVHN, the gap between the fully-supervised baseline and ROT is bigger than this gap for the other SSL methods. This indicates the potential of our model for leveraging the unlabeled data in comparison to other methods that also use the unlabeled data to improve the classification performance of a CNN model in SSL fashion. Moreover, we trained our baseline WRN on the entire training set of CIFAR-10 and SVHN and the test error over five runs are 4.23(±0.18) and 2.56(±0.04), respectively. Besides the particular manner in which we choose the one particular pseudo-label, we also use "soft pseudo-labels". Essentially, instead of having the one-hot target in the usual classification loss (i.e., cross-entropy), we can have the row of the transport plan corresponding to the unlabeled data points as the target. We used the soft pseudo-labels produced by OT to train the CNN. The comparison of in Table. 1 show that one-hot targets used in ROT outperforms the soft pseudo-labels used in ROT. Why this is happening can be supported by SSL methods based on the entropy minimization criterion. This set of methods force the model to produce confident predictions (i.e., low entropy for output of the model). Similarly here, once we use one-hot targets, we encourage the network to produce more confident predictions than when using soft-pseudo labels. In this section, we compare ROT which is based on the measure of measure OT with two other baselines. Both the baselines assign pseudo-labels for the unlabeled samples based on the greedy nearest neighbor (GNN) search. The first baseline is sample to sample (S-S-GNN) case, where pseudolabels for the unlabeled data are obtained by GNN on the outputs of softmax layer. Specifically, for each of the unlabeled sample, we annotate it with the label of the closest labeled sample in the training set. The second baseline is sample to measure (S-M-GNN) case where, pseudo-labels of the unlabeled samples are obtained based on the GNN between the unlabeled samples and the probability measures constructed by initial labeled data in the training set. When transporting from a Dirac to a probability measure, the OT problem (regularized or not) has a closed form. Essentially, there is only one admissible coupling. Thus, in such a case, the Wasserstein distance between a sample to a probability measure is simply computed as follows: Given an unlabeled Dirac δ x i and a labeled measure The comparison of between ROT, and these baselines on the SVHN and CIFAR-10 in Table. 2 shows the benefit of measure of measure OT for training a CNN in an SSL manner. 
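The training objective behind these results, the cross-entropy on labeled data plus an α-weighted cross-entropy on OT pseudo-labeled data with one-hot targets, can be sketched as follows. The stand-in linear model, the α value, and the toy tensors are assumptions (the paper uses WRN-28-2 and Adam with a learning rate of 3 × 10⁻³).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssl_loss(model, x_l, y_true, x_u, y_pseudo, alpha=1.0):
    """L = CE(labeled, true labels) + alpha * CE(unlabeled, OT pseudo-labels)."""
    return F.cross_entropy(model(x_l), y_true) + alpha * F.cross_entropy(model(x_u), y_pseudo)

# Toy optimization step with a stand-in linear head; in the paper this is WRN-28-2.
model = nn.Linear(32, 10)
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
x_l, y_l = torch.randn(16, 32), torch.randint(0, 10, (16,))
x_u, y_p = torch.randn(16, 32), torch.randint(0, 10, (16,))  # y_p comes from the OT step
loss = ssl_loss(model, x_l, y_l, x_u, y_p, alpha=0.5)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```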
Instead of using the CNN as a classifier to produce pseudo-labels for the unlabeled data, we used the Wasserstein barycenters to cluster the unlabeled data. This allowed us to explore the unlabeled measures that we could then match them with the labeled measures for pseudo-labeling. This was because the CNN, as a classifier trained on a limited amount of the labeled data, simply miss-classifies the unlabeled data. To compare these two different strategies for producing the pseudo-labels to train the CNN classifier in an SSL fashion, we experimentally show how the clustering-based method (i.e., ROT) can have a greater positive influence on the training of our CNN classifier. We report the number of pseudo-labels which are accurately predicted by ROT. This allows us to know the level of accuracy of the pseudo-label obtained for the unlabeled data, which the CNN can then benefit from during the training. We also report these with that of predicted labels achieved by the baseline CNN classifier (i.e., WRN) on the unlabeled training data. This comparison also allows us to know whether or not the CNN classifier can benefit from our strategy for providing pseudo-labels during the training, because, otherwise, the WRN can simply use its own predicted labels on unlabeled training data over the course of training. To indicate the efficiency of our method during the training of the CNN, we changed the number of initial labeled data in the training set and reported the number of accurately predicted pseudo-labels by the baseline WRN, and ROT on the remaining unlabeled training data. Fig. 2 (c) and Fig. 1(d) show that, for both CIFAR and SVHN datasets, the labels predicted by ROT on the unlabeled training data are more accurate than the WRN, which means that the entire CNN network can better benefit from the ROT strategy than the case where it is trained solely by its own predicted labels. Moreover, we monitored the trend of transportation cost between the labeled and unlabeled measures obtained by Eq. 3 during the training. Fig. 2(a) and Fig. 2(b) show that the transportation cost is reduced as the images fed into the CNN are represented by a better feature set during the training. In Table. 2, we evaluated ROT for the case where we only use 4,000 and 1,000 initial labeled data for the CIFAR-10 and SVHN, respectively. However, here, we explore that how varying the amount of initial labeled data decreases the performance of ROT in the very limited label regime, and also at which point our SSL method can recover the performance of training when using all of the labeled data in the dataset. To do this evaluation, we gradually increase the number of labeled data during the training and report the performance of our SSL method on the testing set. In this experiment, we ran our SSL method over five times with different random splits of labeled and unlabeled sets for each dataset, and reported the mean and standard deviation of the error rate in Fig. 2(a) and Fig. 2(b). The show that the performance of ROT tends to converge as the number of labels increases. Another possibility for evaluating the performance of our SSL method is to change the number of unlabeled data during the training. However, using the CIFAR-10 and SVHN datasets in isolation puts an upper limit on the amount of available unlabeled data. 
Fortunately, in contrast to CIFAR-10, SVHN has been distributed with the SVHN-extra dataset, which includes 531,131 additional digit images and has also been previously used as unlabeled data for evaluation of different SSL methods in. These additional data come from the same distribution as SVHN does, which allows us to use them in our SSL framework. Fig. 2(c) shows the trend of test error for our SSL algorithm on SVHN with 1,000 labels and changing amounts of unlabeled images from SVHN-extra dataset. The shows that, increasing the amount of unlabeled data improves the performance of our SSL method, but this improvement is not significant when we provide 40k unlabeled data. We proposed a new SSL method based on the optimal transportation technique in which unlabeled data masses are transported to a set of labeled data masses, each of which is constructed by data belonging to the same class. In this method, we found a mapping between the labeled and unlabeled masses which was used to infer pseudo-labels for the unlabeled data so that we could use them to train our CNN model. Finally, we experimentally evaluated our SSL method to indicate its potential and effectiveness for leveraging the unlabeled data when labels are limited during the training. Discrete Optimal Transport: For any r ≥ 1, let the probability simplex be denoted by, and also assume that U = {u 1, ..., u n} and V = {v 1, ..., v m} are two sets of data points in between two discrete measures U and V is the k-th root of the optimum of a network flow problem known as the transportation problem. Note that δ ui is the Dirac unit mass located on point u i, a and b are the weighting vectors which belong to the probability simplex ∆ n and ∆ m, respectively. The transportation problem depends on the two following components: 1) matrix M ∈ R n×m + which encodes the geometry of the data points by measuring the pairwise distance between elements in U and V increased to the power k, 2) the transportation polytope P (a, b) ∈ R n×m + which acts as a feasible set, characterized as a set of n × m non-negative matrices such that their row and column marginals are a and b, respectively. This means that the transportation plan should satisfy the marginal constraints. In other words, let 1 m be an m-dimensional vector with all elements equal to one, then the transportation polytope is represented as follows: P (a, b) = {T ∈ R n×m + |T 1 n = b, T 1 m = a}. Essentially, each element T (i, j) indicates the amount of mass which is transported from i to j. Note that in the transportation problem, the matrix M is also considered as a cost parameter such that Let T, M denote the Frobenius dot-product between T and M matrices. Then the discrete Wasserstein distance W k (U, V) is formulated by an optimum of a parametric linear program g on a cost matrix M, and n × m number of variables parameterized by the marginals a and b as follows: The Wasserstein distance in is a Linear Program (LP) and a subgradient of its solution can be calculated using Lagrange duality. The dual LP of is formulated as follows: where the polyhedron C M of dual variables is as follows: Considering LP duality, the following equality is established. Computing the exact Wasserstein distance in is time consuming. To alleviate this problem, has introduced an interesting method that regularizes using the entropy of the solution matrix H(T), (i.e., min T, M + γH(T)). 
It has been shown that if T γ is the solution of the regularized version of and α γ is its dual solution in, then ∃!u ∈ R n +, v ∈ R m + such that the solution matrix is T γ = diag(u)Kdiag(v) and α γ = − log(u)/γ + (log(u) 1 n )/(γn))1 n where, K = exp(−M/γ). The vectors u and v are updated iteratively between step 1 and 2 by using the well-known Sinkhorn algorithm as follows: step 1)u = a/Kv and step 2)v = b/K u, where/ denotes element-wise division operator. Given an image x n ∈ R m×n from the either labeled or the unlabeled set, the CNN acts as a function f n: R m×n → R c with the parameters θ n that maps x n to a c-dimensional representation, where c is the number of classes. Assume that X u = {x 1, ..., x n} is the set of CNN outputs extracted from the unlabeled data. As noted in , the Wasserstein barycenter of the unlabeled set X u is equivalent to Lloyd's algorithm, where the maximization step (i.e., the assignment of the weight of each data point to its closest centroid) is equivalent to the computation of α in dual form, while the expectation step (i.e., the re-centering step) is equivalent to the update for centers Y using the optimal transport, which in this case is equivalent to the trivial transportation plan that assigns the weight (divided by n) of each unlabeled data in X u to its closest neighbor in centers Y. Algorithm 1 shows the Wasserstein barycenter of the unlabeled data for clustering. while not converged do 6:, t ← t + 1 10: end while 11: a ←â 12: Expectation Step: T ← optimal coupling of p(a, b, M XuY) 14:, balancing coefficients: α, λ, learning rate: β, batch size: b, distance matrix: X, 1: train CNN parameters initially using the labeled data, 2: repeat 3:: Softmax layer outputs on Z l and Z u, 4: {Q 1, ..., Q c} ← cluster on X u using Algorithm. 1, {P 1, ..., P c} ← labeled data grouped to c classes, 6: compute α, β based on amount of the mass in measures Q and P, for each Q i and P j do 8: end for 10: T ← optimal coupling of p(α, β, X), {y u} n u=1 ← pseudo-label data in each cluster Q i with the highest amount of mass transport toward the labeled measure (i.e., argmax T (i, :)), 12: The regular OT problem defined in can be solved by an effective linear programming method in the order of O(n 3 log(n)) time complexity, where n is number of the points in each probability measures. has introduced an interesting approach which relaxes the OT problem by adding a strong convex regularizer to the OT cost function to reduce the time complexity to O(n 2). Specifically, this approach asks for a solution T with more entropy, instead of computing the exact Wasserstein distance. In other words, the regularized OT distances can interpolate the solution, depending on the regularization strength γ, between exact OT (γ = 0), and Maximum Mean Discrepancy, MMD, (γ = ∞). In this work, we use the regularized OT not only for the matter of time complexity, but also it has been shown that the sample complexity of exact Wasserstein distance is O(1/n 1/d), while the regularized Wasserstein distance depending on γ value, is between O(1/ √ n) and O(1/n 1/d), where d is dimension of the samples;. This means that the entropic regularization reduces the chance of over-fitting for our SSL model when it computes the Wasserstein distance between output of the CNN obtained from the labeled and unlabeled data. 
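The Sinkhorn updates quoted above (K = exp(−M/γ), u = a/(Kv), v = b/(Kᵀu), T = diag(u) K diag(v)) translate directly into a few lines of NumPy. This is a minimal sketch without the log-domain stabilization a production implementation would typically use.

```python
import numpy as np

def sinkhorn(a, b, M, gamma, n_iters=1000, tol=1e-8):
    """Sinkhorn iterations for entropy-regularized OT: K = exp(-M/gamma),
    alternate u = a / (K v) and v = b / (K^T u); the coupling is diag(u) K diag(v)."""
    K = np.exp(-M / gamma)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u_prev = u
        u = a / (K @ v)
        v = b / (K.T @ u)
        if np.max(np.abs(u - u_prev)) < tol:  # simple stopping criterion
            break
    return np.diag(u) @ K @ np.diag(v)         # transport plan T_gamma

# Check the marginal constraints on a small random problem.
a = np.full(5, 1 / 5); b = np.full(7, 1 / 7)
M = np.random.rand(5, 7)
T = sinkhorn(a, b, M, gamma=0.1)
print(np.allclose(T.sum(axis=1), a, atol=1e-6), np.allclose(T.sum(axis=0), b, atol=1e-6))
```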
Hence, our OT problem in the regularized form is recast as follows: where γ is a hyperparameter that balances two terms in, and E(T) = − mn ij T ij (log(T ij − 1) is the entropy of the solution matrix T. It has been shown that if T γ is the solution of the optimization, then ∃!u ∈ R n +, v ∈ R m + such that the solution matrix for is T γ = diag(u)Kdiag(v) where, K = exp(−X/γ). The vectors u and v are updated iteratively between step 1 and 2 by using the well-known Sinkhorn algorithm as follows: step 1)u = a/Kv and step 2)v = b/K u, where/ denotes element-wise division operator. It can be simply shown that there always exists an optimal coupling, π ∈ Π(M, M), that achieves infimum of Eq. in the paper. This is because the cost function ||x − y|| in Eq. is continuous, and based on Theorem 4.1, the existence of an optimal coupling π ∈ Π(R, S) which obtains the infimum is guaranteed due to the tightness of Π(R, S). Furthermore, based on Corollary 6.11, the term W k (x, x) used in Eq. is a continuous function and Π(M, M) is tight again, so the existence of an optimal coupling in Π(M, M) is also guaranteed. Let L 1 be the Lebesgue space of exponent 1, and (X, µ) and (Y, ν) be two Polish probability spaces; let a: X → R ∪ {−∞} and b: Y → R ∪ {−∞} be two upper semi-continuous functions such that Then there is a coupling of (µ, ν) which minimizes the total cost Ec(X, Y) among all possible couplings (X, Y). Lemma 1: Let X and Y be two Polish spaces. Let R ⊂ P(X) and S ⊂ P(Y) be tight subsets of P(X) and P(Y) respectively. Then, the set Π(R, S) of all transference plans whose marginals lie in R and S respectively, is itself tight in P(X × Y). Proof of Lemma: Let µ ∈ R, ν ∈ S, and π ∈ Π(µ, ν). By assuming that, for any > 0 there is a compact set K ⊂ X, independent of the choice of µ in R, such that µ[X nK] ≤; and similarly there is a compact set L ⊂ Y, independent of the choice of ν in S, such that ν[YnL] ≤. Then, for any coupling (X, Y) of (µ, ν), The desired follows because this bound is independent of the coupling, and K × L is compact in X × Y. Lemma 2: Let X and Y be two Polish spaces, and c: X × Y → R ∪ {+∞} a lower semi-continuous cost function. Let h: X × Y → R ∪ {−∞} be an upper semi-continuous function such that c ≥ h. Let (π k) k ∈ N be a sequence of probability measures on X × Y, converging weakly to some Therefore, In particular, if c is non-negative, then F: π → cdπ is lower semi-continuous on P(X × Y), equipped with the topology of weak convergence. Replacing c by c−h, we may assume that c is a non-negative lower semi-continuous function. Then c can be written as the point-wise limit of a non-decreasing family (c) ∈ N of continuous real-valued functions. By monotone convergence,: If X is a Polish space, then a set R ⊂ P(X) is precompact for the weak topology if and only if it is tight, i.e. for any > 0 there is a compact set K such that µ[X nK] ≤ for all µ ∈ R. Proof of Theorem 4.1: Since X is Polish, {µ} is tight in P(X); similarly, {ν} is tight in P(Y). By using the Lemma 1, Π(µ, ν) is tight in P(X × Y), and by using Prokhorovs theorem, this set has a compact closure. By passing to the limit in the equation for marginals, we see that Π(µ, ν) is closed, so it is in fact compact. Then let (π k) k ∈ N be a sequence of probability measures on X × Y, such that cdπ k converges to the infimum transport cost. 
Extracting a sub-sequence if necessary, we may assume that π k converges to some π ∈ Π(µ, ν)., and c ≥ h by assumption; moreover, hdπ k = hdπ = adµ + bdν; so Lemma 2 implies: Therefore, π is minimizing. Note that further details of the proof of Theorem 4.1 are also available in Villani's book. Corollary 6.11 in Villanis book: ) is a Polish space, and p ∈ [1, ∞), then W p is continuous on P p (X). More explicitly, if µ k (resp. ν k) converges to µ (resp. ν) weakly in P p (X) as k → ∞, then One of the interesting arguments presented in for a standard evaluation of different SSL models is that it may not be feasible to perform model selection for an SSL challenge if the hyperparameters of the model are tuned on the realistically small validation sets. On the other hand, most of the SSL datasets in the literature are designed in such a way that the validation set, which is used for tuning the hyperparameters but not for parameters of the model, is much larger than the training set. For example, the standard SVHN dataset used in our work has about 7000 labeled data in the validation set. Hence, the validation set is seven times larger than the training set of the SSL methods which evaluate their performance by using only 1,000 labeled data during the training. However, this is not a practical choice for a real-world application. This is because, this large validation set will be used as the training set instead of validation set for tuning the hyperparameters. Using small validation sets, however, causes an issue in that the evaluation metric, such as the accuracy for tuning the hyperparameters will be unstable and noisy across the different runs. Although the fact that small validation sets limit the ability for model selection has been discussed in , the work presented in has used the Hoeffding inequality to directly analyze the relationship between the size of validation set and the variance in estimation of a models accuracy: P(|V − E(V)| < p) > 1 − 2 exp(−2np 2). In this inequality, V denotes the empirical estimate of the validation error, E[V] is its hypothetical true value, p is the desired maximum deviation between the estimation and the true value, and n represents the number of samples in the validation set. Based on this inequality, the number of samples in the validation set should be very large. For example, we will require about 20,000 samples in the validation set if we want to be 95% confident in estimation of validation error that differes less than 1% from the absolute true value. Note that in this analysis, validation error is computed as the average of independent binary indicator variables representing if a given sample in the validation set is classified correctly or not. This analysis may be unrealistic because of the assumption that the validation accuracy is the average of independent variables. To address this problem, measure this phenomenon empirically, and train the SSL methods using 1,000 labels in the training set from SVHN dataset and then evaluate them on the validation sets with different sizes. Note that these small synthetic validation sets are generated by different randomly sampled sets without overlapping from the full SVHN validation set. Following the same setting for evaluation of our SSL algorithm (ROT) in a real world scenario, in Fig. 3(a) and Fig. 4(a), we reported the mean and standard deviation of validation errors over five times randomly non-overlapping splitting the SVHN and CIFAR validation sets with varying sizes. The in Fig. 3(a) and Fig. 
4(a) indicate that as we increase the size of the validation set, the ROT algorithm becomes more confident and stable in selecting its hyperparameters than when a small validation set is used. For a fair comparison between our method and the other SSL methods in Table 1 of the paper, we have been consistent with the other methods in the size of the training and validation sets, as designed in the standard SVHN and CIFAR-10 datasets. Specifically, for SVHN, we used 65,932 images for the training set and 7,325 for the validation set, and for the CIFAR-10 dataset, we used 45,000 images for the training set and 5,000 images for the validation set. Fig. 3(b) and Fig. 4(b) show the error rate of the ROT algorithm on the SVHN and CIFAR validation sets for different values of λ in our transportation plan. Note that during the tuning of λ, we fixed α in the total loss. | [
0, 0, …, 0, 1, 0, …, 0] (308 binary source_labels; the single 1 is at index 74) | rkeXTaNKPS | We propose a new algorithm based on the optimal transport to train a CNN in an SSL fashion.
Batch normalization (batch norm) is often used in an attempt to stabilize and accelerate training in deep neural networks. In many cases it indeed decreases the number of parameter updates required to achieve low training error. However, it also reduces robustness to small adversarial input perturbations and noise by double-digit percentages, as we show on five standard datasets. Furthermore, substituting weight decay for batch norm is sufficient to nullify the relationship between adversarial vulnerability and the input dimension. Our work is consistent with a mean-field analysis that found that batch norm causes exploding gradients. Batch norm is a standard component of modern deep neural networks, and tends to make the training process less sensitive to the choice of hyperparameters in many cases. While ease of training is desirable for model developers, an important concern among stakeholders is that of model robustness to plausible, previously unseen inputs during deployment. The adversarial examples phenomenon has exposed unstable predictions across state-of-the-art models. This has led to a variety of methods that aim to improve robustness, but doing so effectively remains a challenge BID0. We believe that a prerequisite to developing methods that increase robustness is an understanding of factors that reduce it. [Figure 1: Two mini-batches from the "Adversarial Spheres" dataset (2D), and their representations in a deep linear network with batch norm at initialization; the two panels show Layer 2 and Layer 14. Mini-batch membership is indicated by marker fill and class membership by colour. Each layer is projected to its two principal components. Classes are mixed by Layer 14.] Existing methods typically take neural network architectures that use batch norm and patch them against specific attacks, e.g., through inclusion of adversarial examples during training (Mądry et al., 2018). An implicit assumption is that batch norm itself does not reduce robustness, an assumption that we tested empirically and found to be invalid. In the original work that introduced batch norm, it was suggested that other forms of regularization can be turned down or disabled when using it without decreasing standard test accuracy. Robustness, however, is less forgiving: it is strongly impacted by the disparate mechanisms of various regularizers. The frequently made observation that adversarial vulnerability can scale with the input dimension highlights the importance of identifying regularizers as more than merely a way to improve test accuracy. In particular, batch norm was a confounding factor in prior work, making the results of their initialization-time analysis hold after training. By adding ℓ2 regularization and removing batch norm, we show that there is no inherent relationship between adversarial vulnerability and the input dimension. We briefly review how batch norm modifies the hidden layers' pre-activations h of a neural network. We use the notation of prior work, where α is the index for a neuron, l for the layer, and i for a mini-batch of B samples from the dataset; N_l denotes the number of neurons in layer l, W^l is the matrix of weights and b^l is the vector of biases that parametrize layer l. The batch mean is defined as $\mu_\alpha = \frac{1}{B}\sum_i h_{\alpha i}$, and the variance is $\sigma_\alpha^2 = \frac{1}{B}\sum_i (h_{\alpha i} - \mu_\alpha)^2$.
In the batch norm procedure, the mean µ_α is subtracted from the pre-activation of each unit h^l_{αi}, the result is divided by the standard deviation σ_α plus a small constant c to prevent division by zero, and the outcome is then scaled and shifted by the learned parameters γ_α and β_α, respectively. This is described by the following equation, where a per-unit nonlinearity φ, e.g., ReLU, is applied after the normalization: $\tilde{h}^{l}_{\alpha i} = \phi\left(\gamma_\alpha \frac{h^{l}_{\alpha i} - \mu_\alpha}{\sigma_\alpha + c} + \beta_\alpha\right)$. Note that this procedure fixes the first and second moments of all neurons α equally at initialization. This suppresses the information contained in these moments. Because batch norm induces a non-local batch-wise nonlinearity to each unit α, this loss of information cannot be recovered by the parameters γ_α and β_α. To understand how batch normalization is harmful, consider two mini-batches that differ by only a single example: due to the induced batch-wise nonlinearity, they will have different representations for each example. This difference is further amplified by stacking batch norm layers. Conversely, batch normalization of intermediate representations for two different inputs impairs the ability to distinguish high-quality examples (as judged by an "oracle") that ought to be classified with a large prediction margin, from low-quality, i.e., more ambiguous, instances. We argue that this information loss and inability to maintain relative distances in the input space reduces adversarial as well as general robustness. Figure 1 shows a degradation of class-relevant input distances in a batch-normalized linear network on a 2D variant of the "Adversarial Spheres" dataset. (Footnote 1: We add a ReLU nonlinearity when attempting to learn the binary classification task posed by this dataset in Appendix D, but the activations in the linear case give us pause.) We first evaluate the robustness (quantified as the drop in test accuracy under input perturbations) of convolutional networks, with and without batch norm, that were trained using standard procedures. The datasets (MNIST, SVHN, CIFAR-10, and ImageNet) were normalized to zero mean and unit variance. As a white-box adversarial attack we use projected gradient descent (PGD), in ℓ∞- and ℓ2-norm variants, for its simplicity and ability to degrade performance with little perceptible change to the input (Mądry et al., 2018). We run PGD for 20 iterations, with ε∞ = 0.03 and a step size of ε∞/10 for SVHN and CIFAR-10, and ε∞ = 0.01 for ImageNet. For PGD-ℓ2 we set ε2 = ε∞√d, where d is the input dimension. We report the test accuracy for additive Gaussian noise of zero mean and variance 1/4, denoted as "Noise", as well as the full CIFAR-10-C common corruption benchmark in Appendix C. We found these methods were sufficient to demonstrate a considerable disparity in robustness due to batch norm, but this is not intended as a formal security evaluation. All uncertainties are the standard error of the mean. For the SVHN dataset, models were trained by stochastic gradient descent (SGD) with momentum 0.9 for 50 epochs, with a batch size of 128 and an initial learning rate of 0.01, which was dropped by a factor of ten at epochs 25 and 40. Trials were repeated over five random seeds. We show the results of this experiment in TAB0, finding that despite batch norm increasing clean test accuracy by 1.86 ± 0.05%, it reduced test accuracy for additive noise by 5.5 ± 0.6%, for PGD-ℓ∞ by 17 ± 1%, and for PGD-ℓ2 by 20 ± 1%.
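A minimal sketch of the PGD-ℓ∞ attack described above (20 iterations, step size ε/10, projection onto the ε-ball). Random initialization and clipping to the valid input range are omitted here, and the stand-in model and data are illustrative assumptions rather than the evaluation code used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.03, steps=20, step_size=None):
    """Projected gradient descent in the l-infinity ball: ascend the cross-entropy
    loss and clip the perturbation to [-eps, eps] after every step (20 iterations
    and a step size of eps/10, as in the text)."""
    step_size = step_size or eps / 10
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # ascent step on the gradient sign
            delta.clamp_(-eps, eps)                 # project back into the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a stand-in linear model on flattened "images".
model = torch.nn.Linear(3 * 32 * 32, 10)
x = torch.rand(4, 3 * 32 * 32)
y = torch.randint(0, 10, (4,))
x_adv = pgd_linf(model, x, y)
print((x_adv - x).abs().max())  # <= 0.03
```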
Clean Noise PGD-∞ PGD-2 92.60 ± 0.04 83.6 ± 0.2 27.1 ± 0.3 22.0 ± 0.8 94.46 ± 0.02 78.1 ± 0.6 10 ± 1 1.6 ± 0.3For the CIFAR-10 experiments we trained models with a similar procedure as for SVHN, but with random 32 × 32 crops using four-pixel padding, and horizontal flips. In the first experiment, a basic comparison with and without batch norm shown in TAB1, we evaluated the best model in terms of test accuracy after training for 150 epochs with a fixed learning rate of 0.01. In this case, inclusion of batch norm reduces the clean generalization gap (difference between training and test accuracy) by 1.1 ± 0.2%. For additive noise, test accuracy drops by 6 ± 1%, and for PGD perturbations by 17.3 ± 0.7% and 5.9 ± 0.4% for ∞ and 2 variants, respectively. Very similar , presented in TAB2, are obtained on a new test set, CIFAR-10.1 v6 : batch norm slightly improves the clean test accuracy (by 2.0 ± 0.3%), but leads to a considerable drop in test accuracy for the cases with additive noise and the two 2 Each experiment has a unique uncertainty, hence the number of decimal places varies. Clean Noise PGD-∞ PGD-2 87.9 ± 0.1 78.9 ± 0.6 52.9 ± 0.6 65.6 ± 0.3 88.7 ± 0.1 73 ± 1 35.7 ± 0.3 59.7 ± 0.3 It has been suggested that one of the benefits of batch norm is that it facilitates training with a larger learning rate . We test this from a robustness perspective in an experiment summarized in Table 4, where the initial learning rate was increased to 0.1 when batch norm was used. We prolonged training for up to 350 epochs, and dropped the learning rate by a factor of ten at epoch 150 and 250 in both cases, which increases clean test accuracy relative to Table 2. The deepest model that was trainable using standard initialization without batch norm was VGG13. 3 None of the deeper models with batch norm recover the robustness of the most shallow, or same-depth equivalents without batch norm, nor does the higher learning rate in conjunction with batch norm improve robustness compared to baselines trained for the same number of epochs. Additional for deeper models on SVHN and CIFAR-10 can be found in Appendix A.We evaluated the robustness of pre-trained ImageNet models from the torchvision.models repository.4 Results are shown in TAB3, where batch norm improves top-5 accuracy on noise in some cases, but consistently reduces it by 8.54% to 11.00% (absolute) for PGD. The trends are the same for top-1 accuracy, only the absolute values were 3 For which one of ten random seeds failed to achieve better than chance accuracy on the training set, while others performed as expected. We report the first three successful runs for consistency with the other experiments.4 https://pytorch.org/docs/stable/ torchvision/models.html, v1.1.0. We show the effect of increasing the number of epochs on these trends in Appendix A.5. They also show in experiments that independence between vulnerability and the input dimension can be approximately recovered through adversarial training by projected gradient descent (PGD) (Mądry et al., 2018), with a modest tradeoff of clean accuracy. We show that this can be achieved by simpler means and with little to no trade-off through 2 weight decay, where the regularization constant λ corrects. We confirm that without regularization the loss does scale roughly as predicted: the predicted values lie between loss ratios obtained for = 0.05 and = 0.1 attacks for most image widths (see Table 4 of Appendix B). 
Training with 2 weight decay, however, we obtain adversarial test accuracy ratios of 0.98 ± 0.01, 0.96 ± 0.04, and 1.00 ± 0.03 and clean accuracy ratios of 0.999 ± 0.002, 0.996 ± 0.003, and 0.987 ± 0.004 for Next, we repeat this experiment with a two-hidden-layer ReLU MLP, with the number of hidden units equal to the half the input dimension, and optionally use one hidden layer with batch norm. 5 To evaluate robustness, 100 iterations of BIM-∞ were used with a step size of 1e-3, and Table 6. Evaluating the robustness of a MLP with and without batch norm. We observe a 61 ± 1% reduction in test accuracy due to batch norm for 97.95 ± 0.08 93.0 ± 0.4 66.7 ± 0.9 97.88 ± 0.09 76.6 ± 0.7 22.9 ± 0.7 56 98.19 ± 0.04 93.8 ± 0.1 53.2 ± 0.7 98.22 ± 0.02 79.3 ± 0.6 8.6 ± 0.8 84 98.27 ± 0.04 94.3 ± 0.1 47.6 ± 0.8 98.28 ± 0.05 80.5 ± 0.6 6.1 ± 0.5 Table 7. Evaluating the robustness of a MLP with 2 weight decay (same λ as for linear model, see Despite a difference in clean accuracy of only 0.08 ± 0.05%, Table 6 shows that for the original image resolution, batch norm reduced accuracy for noise by 16.4 ± 0.4%, and for BIM-∞ by 43.8 ± 0.5%. Robustness keeps decreasing as the image size increases, with the batch-normalized network having ∼ 40% less robustness to BIM and 13 − 16% less to noise at all sizes. We then apply the 2 regularization constants tuned for the respective input dimensions on the linear model to the ReLU MLP with no further adjustments. Table 7 shows that by adding sufficient 2 regularization (λ = 0.01) to recover the original (DISPLAYFORM0, no BN) accuracy for BIM of ≈ 66% when using batch norm, we induce a test error increase of 1.69 ± 0.01%, which is substantial on MNIST. Furthermore, using the same regularization constant without batch norm increases clean test accuracy by 1.39 ± 0.04%, and for the BIM-∞ perturbation by 21.7 ± 0.4%.Following the guidance in the original work on batch norm to the extreme (λ = 0): to reduce weight decay when using batch norm, accuracy for the ∞ = 0.1 perturbation is degraded by 79.3 ± 0.3% for √ d = 56, and 81.2 ± 0.2% for √ d = 84. In all cases, using batch norm greatly reduced test accuracy for noisy and adversarially perturbed inputs, while weight decay increased accuracy for such inputs. We found that there is no free lunch with batch norm: the accelerated training properties and occasionally higher clean test accuracy come at the cost of robustness, both to additive noise and for adversarial perturbations. We have shown that there is no inherent relationship between the input dimension and vulnerability. Our highlight the importance of identifying the disparate mechanisms of regularization techniques, especially when concerned about robustness. Bjorck, N., Gomes, C. P., Selman, B., and Weinberger, K. Q.Understanding This section contains supplementary explanations and to those of Section 3.A.1. Why the VGG Architecture?For SVHN and CIFAR-10 experiments, we selected the VGG family of models as a simple yet contemporary convolutional architecture whose development occurred independent of batch norm. This makes it suitable for a causal intervention, given that we want to study the effect of batch norm itself, and not batch norm + other architectural innovations + hyperparameter tuning. State-of-the-art architectures such as Inception, and ResNet whose development is more intimately linked with batch norm may be less suitable for this kind of analysis. 
The superior standard test accuracy of these models is somewhat moot given a trade-off between standard test accuracy and robustness, demonstrated in this work and elsewhere (; ; ;). Aside from these reasons, and provision of pre-trained variants on ImageNet with and without batch norm in torchvision.models for ease of reproducibility, this choice of architecture is arbitrary. We used the PGD implementation from Ding et al. FORMULA4 with settings as below. The pixel range was set to {±1} for SVHN, and {±2} for CIFAR-10 and ImageNet:from advertorch.attacks import LinfPGDAttack adversary = LinfPGDAttack(net, loss_fn=nn. CrossEntropyLoss(reduction="sum"), eps=0.03, nb_iter=20, eps_iter=0.003, rand_init=False, clip_min=-1.0, clip_max=1.0, targeted=False)We compared PGD using a step size of /10 to our own BIM implemenation with a step size of /20, for the same number of iterations. This reduces test accuracy for ∞ = 0.03 perturbations from 31.3 ± 0.2% for BIM to 27.1 ± 0.3% for PGD for the unnormalized VGG8 network, and from 15 ± 1% to 10 ± 1% for the batch-normalized network. The difference due to batch norm is identical in both cases: 17 ± 1%. Results were also consistent between PGD and BIM for ImageNet. We also tried increasing the number of PGD iterations for deeper networks. For VGG16 on CIFAR-10, using 40 iterations of PGD with a step size of ∞ /20, instead of 20 iterations with ∞ /10, reduced accuracy from 28.9 ± 0.2% to 28.5 ± 0.3%, a difference of only 0.4 ± 0.5%. Our first attempt to train VGG models on SVHN with more than 8 layers failed, therefore for a fair comparison we report the robustness of the deeper models that were only trainable by using batch norm in TAB6. None of these models obtained much better robustness in terms of PGD-2, although they did better for PGD-∞. Fixup initialization was recently proposed to reduce the use of normalization layers in deep residual networks (b). As a natural test we compare a WideResNet (28 layers, width factor 10) with Fixup versus the default architecture with batch norm. Note that the Fixup variant still contains one batch norm layer before the classification layer, but the number of batch norm layers is still greatly reduced. Fixup 94.6 ± 0.1 69.1 ± 1.1 20.3 ± 0.3 9.4 ± 0.2 87.5 ± 0.3 67.8 ± 0.9 BN 95.9 ± 0.1 57.6 ± 1.5 14.9 ± 0.6 8.3 ± 0.3 89.6 ± 0.2 58.3 ± 1.2We train WideResNets (WRN) with five unique seeds and show their test accuracies in TAB7. Consistent with Recht et al. FORMULA4, higher clean test accuracy on CIFAR-10, i.e. obtained by the WRN compared to VGG, translated to higher clean accuracy on CIFAR-10.1. However, these gains were wiped out by moderate Gaussian noise. VGG8 dramatically outperforms both WideResNet variants subject to noise, achieving 78.9 ± 0.6 vs. 69.1 ± 1.1. Unlike for VGG8, the WRN showed little generalization gap between noisy CIFAR-10 and 10.1 variants: 69.1 ± 1.1 is reasonably compatible with 67.8 ± 0.9, and 57.6 ± 1.5 with 58.3 ± 1.2. The Fixup variant improves accuracy by 11.6 ± 1.9% for noisy CIFAR-10, 9.5 ± 1.5% for noisy CIFAR-10.1, 5.4 ± 0.6% for PGD-∞, and 1.1 ± 0.4% for PGD-2.We believe our work serves as a compelling motivation for Fixup and other techniques that aim to reduce usage of batch normalization. The role of skip-connections should be isolated in future work since absolute values were consistently lower for residual networks. TAB0. 
ImageNet validation accuracy for adversarial examples transfered between VGG variants of various depths, indicated by number, with and without batch norm ("", ""). All adversarial examples were crafted with BIM-∞ using 10 steps and a step size of 5e-3, which is higher than for the white-box analysis to improve transferability. The BIM objective was simply misclassification, i.e., it was not a targeted attack. For efficiency reasons, we select 2048 samples from the validation set. Values along the diagonal in first two columns for Source = Target indicate white-box accuracy. The discrepancy between the in additive noise and for white-box BIM perturbations for ImageNet in Section 3 raises a natural question: Is gradient masking a factor influencing the success of the white-box on ImageNet? No, consistent with the white-box , when the target is unnormalized but the source is, top 1 accuracy is 10.5% − 16.4% higher, while top 5 accuracy is 5.3% − 7.5% higher, than vice versa. This can be observed in TAB0 by comparing the diagonals from lower left to upper right. When targeting an unnormalized model, we reduce top 1 accuracy by 16.5% − 20.4% using a source that is also unnormalized, compared to a difference of only 2.1% − 4.9% by matching batch normalized networks. This suggests that the features used by unnormalized networks are more stable than those of batch normalized networks. Unfortunately, the pre-trained ImageNet models provided by the PyTorch developers do not include hyperparameter settings or other training details. However, we believe that this speaks to the generality of the , i.e., that they are not sensitive to hyperparameters. ) respectively. The batch norm parameters γ and β were left as default, momentum disabled, and c = 1e-3. Each coordinate is first averaged over three seeds. Diamond shaped artefacts for unnormalized case indicate one of three seeds failed to train -note that we show an equivalent version of (a) with these outliers removed and additional batch sizes from 5-20 in FIG3. Best viewed in colour. In FIG5 we show that batch norm not only limits the maximum trainable depth, but robustness decreases with the batch size for depths that maintain test accuracy, at around 25 or fewer layers (in FIG5). Both clean accuracy and robustness showed little to no relationship with depth nor batch size in unnormalized networks. A few outliers are observed for unnormalized networks at large depths and batch size, which could be due to the reduced number of parameter update steps that from a higher batch size and fixed number of epochs .Note that in FIG5 (a) the bottom row-without batch norm-appears lighter than the equivalent plot above, with batch norm, indicating that unnormalized networks obtain less absolute peak accuracy than the batch normalized network. Given that the unnormalized networks take longer to converge, we prolong training for 40 total epochs. When they do converge, we see more configurations that achieve higher clean test accuracy than batch normalized networks in FIG5. Furthermore, good robustness can be experienced simultaneously with good clean test accuracy in unnormalized networks, whereas the regimes of good clean accuracy and robustness are still mostly non-overlapping in FIG5. Consider a logistic regression model with input x ∈ R d, labels y ∈ {±1}, parameterized by weights w, and bias b. 
Predictions are defined by o = w·x + b, and the model can be optimized by stochastic gradient descent (SGD) on the sigmoid cross-entropy loss, which reduces to SGD on Eq. (2), where ζ is the softplus loss ζ(z) = log(1 + e^{−z}):

min_{w,b} (1/N) Σ_i ζ( y_i (w·x_i + b) ).    (2)

We note that w·x + b is a scaled, signed distance between x and the classification boundary defined by our model. If we define d(x) as the signed Euclidean distance between x and the boundary, then we have w·x + b = ‖w‖_2 d(x). Hence, minimizing (2) is equivalent to minimizing

min_{w,b} (1/N) Σ_i ζ( ‖w‖_2 y_i d(x_i) ).    (3)

We define the scaled softplus loss as

ζ_a(z) = (1/a) log(1 + e^{−a z}),    (4)

and note that adding an ℓ2 regularization term λ‖w‖_2^2 to (2) can be understood as a way of controlling the scaling of the softplus function, since ζ(‖w‖_2 y d(x)) = ‖w‖_2 ζ_{‖w‖_2}(y d(x)).

To test this theory empirically we study a single linear layer on variants of MNIST of increasing input dimension, where the "core idea" is exact. Clearly, this model is too simple to obtain competitive test accuracy, but this is a helpful first step that will be subsequently extended to ReLU networks. The model was trained by SGD for 50 epochs with a constant learning rate of 1e-2 and a batch size of 128. In TAB0 we show that increasing the input dimension by resizing MNIST from 28 × 28 to various resolutions with PIL.Image.NEAREST interpolation increases adversarial vulnerability in terms of accuracy and loss. Furthermore, the "adversarial damage", which is predicted to grow like √d by Theorem 4, falls in between that obtained empirically for ε = 0.05 and ε = 0.1 for all image widths except for 112, which experiences slightly more damage than anticipated.

Prior work notes that independence between vulnerability and the input dimension can be recovered through adversarial-example-augmented training by projected gradient descent (PGD), with a small trade-off in terms of standard test accuracy. We find that the same can be achieved through a much simpler approach: ℓ2 weight decay, with λ chosen to correct for the loss scaling. This way we recover input-dimension-invariant vulnerability with little degradation of test accuracy, e.g., see the ε = 0.1 accuracy ratio of 1.00 ± 0.03 with ℓ2 regularization in TAB0 compared to 0.10 ± 0.09 without.

Compared to PGD training, weight decay regularization i) does not have an arbitrary ε hyperparameter that ignores inter-sample distances, ii) does not prolong training by a multiplicative factor given by the number of steps in the inner loop, and iii) is less attack-specific. Thus, we do not use adversarially augmented training because we wish to convey a notion of robustness to unseen attacks and common corruptions. Furthermore, enforcing robustness to ε-perturbations may increase vulnerability to invariance-based examples, where semantic changes are made to the input, thus changing the oracle label but not the classifier's prediction. Our models trained with weight decay obtained 12% higher accuracy (86 vs. 74 correct) compared to batch norm on a small sample of 100 ℓ∞ invariance-based MNIST examples. We make primary use of traditional ℓp perturbations as they are well studied in the literature and straightforward to compute, but solely defending against these is not the end goal. A more detailed comparison between adversarial training and weight decay can be found in prior work.
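The single-linear-layer experiment just described can be sketched as follows (our reconstruction in PyTorch; the choice of digit pair, the regularization constant lam, and the input width are placeholders, with lam tuned per input dimension as the text describes, and the adversarial evaluation is omitted).

import torch
import torch.nn.functional as F

d = 56 * 56                                      # input dimension (image width squared), e.g. 28, 56, 84, 112
w = torch.zeros(d, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lam = 0.01                                       # placeholder l2 constant, tuned per width
opt = torch.optim.SGD([w, b], lr=1e-2)           # constant learning rate 1e-2, as in the text

def step(x, y):
    # x: (B, d) flattened resized images, y: (B,) labels in {-1, +1}
    z = y * (x @ w + b)                          # signed margin y (w.x + b)
    loss = F.softplus(-z).mean() + lam * (w ** 2).sum()   # zeta(z) = log(1 + exp(-z)) plus l2 penalty
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# run for 50 epochs with mini-batches of size 128, as described above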
The scaling of the loss function mechanism of weight decay is complementary to other mechanisms identified in the literature recently, for instance that it also increases the effective learning rate (van ; a). Our are consistent with these works in that weight decay reduces the generalization gap, even in batch-normalized networks where it is presumed to have no effect. Given that batch norm is not typically used on the last layer, the loss scaling mechanism persists in this setting although to a lesser degree. We evaluated robustness on the common corruptions and perturbations benchmarks . Common corruptions are 19 types of real-world effects that can be grouped into four categories: "noise", "blur", "weather", and "digital". Each corruption has five "severity" or intensity levels. These are applied to the test sets of CIFAR-10 and ImageNet, denoted CIFAR-10-C and ImageNet-C respectively. When reporting the mean corruption error (mCE), we average over intensity levels for each corruption, then over all corruptions. We outline the for two VGG variants and a WideResNet on CIFAR-10-C, trained from scratch independently over three and five random seeds, respectively. The most important are also summarized in TAB0.For VGG8 batch norm increased the error rate for all noise variants, at every intensity level. The mean generalization gaps 91.74 ± 0.02 64.5 ± 0.8 63.3 ± 0.3 70.9 ± 0.4 71.5 ± 0.5 65.3 ± 0.6 93.0 ± 0.1 43.6 ± 1.2 49.7 ± 0.5 56.8 ± 0.9 60.4 ± 0.7 67.7 ± 0.5 WRN-28-10 F 94.6 ± 0.1 63.3 ± 0.9 66.7 ± 0.9 71.7 ± 0.7 73.5 ± 0.6 81.2 ± 0.7 95.9 ± 0.1 51.2 ± 2.7 56.0 ± 2.7 63.0 ± 2.5 66.6 ± 2.5 86.0 ± 0.9 for noise were: Gaussian-9.2 ± 1.9%, Impulse-7.5 ± 0.8%, Shot-5.6 ± 1.6%, and Speckle-4.5 ± 1.6%. The next most impactful corruptions were: Contrast-4.4 ± 1.3%, Spatter-2.4 ± 0.7%, JPEG-2.0 ± 0.4%, and Pixelate-1.3 ± 0.5%.Results for the remaining corruptions were a coin toss as to whether batch norm improved or degraded robustness, as the random error was in the same ballpark as the difference being measured. These were: Weather-Brightness, Frost, Snow, and Saturate; Blur-Defocus, Gaussian, Glass, Zoom and Motion; and Elastic transformation. Averaging over all corruptions we get an mCE gap of 1.9 ± 0.9% due to batch norm, or a loss of accuracy from 72.9 ± 0.7% to 71.0 ± 0.6%.VGG13 were mostly consistent with VGG8: batch norm increased the error rate for all noise variants, at every intensity level. Particularly notable, the generalization gap enlarged to 26 − 28% for Gaussian noise at severity levels 3, 4, and 5; and 17%+ for Impulse noise at levels 4 and 5. Averaging over all levels, we have gaps for noise variants of: Gaussian-20.9 ± 1.4%, Impulse-13.6 ± 0.6%, Shot-14.1 ± 1.0%, and Speckle-11.1 ± 0.8%. Robustness to the other corruptions seemed to benefit from the slightly higher clean test accuracy of 1.3 ± 0.1% due to batch norm for VGG13. The remaining generalization gaps varied from (negative) 0.2 ± 1.3% for Zoom blur, to 2.9 ± 0.6% for Pixelate. Overall mCE was reduced by 2.0 ± 0.3% for the unnormalized network. For a WideResNet 28-10 (WRN) using "Fixup" initialization (b) to reduce the use of batch norm, the mCE was similarly reduced by 1.6 ± 0.4%. Unpacking each category, the mean generalization gaps for noise were: Gaussian-12.1 ± 2.8%, Impulse-10.7 ± 2.9%, Shot-8.7 ± 2.6%, and Speckle-6.9 ± 2.6%. Note that the large uncertainty for these measurements is due to high variance for the model with batch norm, on average 2.3% versus 0.7% for Fixup. 
JPEG compression was next at 4.6 ± 0.3%.Interestingly, some corruptions that led to a positive gap for VGG8 showed a negative gap for the WRN, i.e., batch norm improved accuracy to: Contrast-4.9 ± 1.1%, Snow-2.8 ± 0.4%, Spatter-2.3 ± 0.8%. These were the same corruptions for which VGG13 lost, or did not improve its robustness when batch norm was removed, hence why we believe these correlate with standard test accuracy (highest for WRN). Visually, these corruptions appear to preserve texture information. Conversely, noise is applied in a spatially global way that disproportionately degrades these textures, emphasizing shapes and edges. It is now well known that modern CNNs trained on standard datasets have a propensity to rely excessively on texture rather than shape cues . The WRN obtains ≈ 0 training error and is in our view over-fitted; CIFAR-10 is known to be difficult to learn robustly given few samples . The "Adversarial Spheres" dataset contains points sampled uniformly from the surfaces of two concentric n-dimensional spheres with radii R = 1 and R = 1.3 respectively, and the classification task is to attribute a given point to the inner or outer sphere. We consider the case n = 2, that is, datapoints from two concentric circles. This simple problem poses a challenge to the conventional wisdom regarding batch norm: not only does batch norm harm robustness, it makes training less stable. In Figure 6 we show that, using the same architecture as in , the batch-normalized network is highly sensitive to the learning rate η. We use SGD instead of Adam to avoid introducing unnecessary complexity, and especially since SGD has been shown to converge to the maximum-margin solution for linearly separable data (Soudry et al., Figure 5 . Two mini-batches from the "Adversarial Spheres" dataset (2D variant), and their representations in a deep linear network at initialization time (a) with batch norm and (b) without batch norm. Mini-batch membership is indicated by marker fill and class membership by colour. Each layer is projected to its two principal components. In (b) we scale both components by a factor of 100, as the dynamic range decreases with depth under default initialization. We observe in (a) that some samples are already overlapping at Layer 2, and classes are mixed at Layer 14.(a) (b) Figure 6. We train the same two-hidden-layer fully connected network of width 1000 units using ReLU activations and a mini-batch size of 50 on a 2D variant of the "Adversarial Spheres" binary classification problem . Dashed lines denote the model with batch norm. The batch-normalized model fails to train for a learning rate of η = 0.01, which otherwise converges quickly for the unnormalized equivalent. We repeat the experiment over five random seeds, shaded regions indicate a 95% confidence interval.2018). We use a finite dataset of 500 samples from N (0, I) projected onto the circles. The unormalized network achieves zero training error for η up to 0.1 (not shown), whereas the batch-normalized network is already untrainable at η = 0.01. To evaluate robustness, we sample 10,000 test points from the same distribution for each class (20k total), and apply noise drawn from N (0, 0.005 × I). We evaluate only the models that could be trained to 100% training accuracy with the smaller learning rate of η = 0.001. The model with batch norm classifies 94.83% of these points correctly, while the unnormalized net obtains 96.06%. 
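For concreteness, the 2D "spheres" (circles) data and the noise evaluation described above can be generated along the following lines (our reconstruction; radii, sample counts, and the noise scale follow the text, with 0.005 treated as the noise variance, and everything else is a placeholder).

import numpy as np

def sample_circles(n, radii=(1.0, 1.3), seed=0):
    # sample directions from N(0, I) in 2D and project onto one of the two concentric circles
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)                 # 0 -> inner circle (R = 1.0), 1 -> outer (R = 1.3)
    z = rng.normal(size=(n, 2))
    z /= np.linalg.norm(z, axis=1, keepdims=True)       # unit directions
    return z * np.array(radii)[labels][:, None], labels

x_train, y_train = sample_circles(500)                   # 500 training points, as in the text
x_test, y_test = sample_circles(20000, seed=1)           # 10k test points per class
x_noisy = x_test + np.random.default_rng(2).normal(scale=np.sqrt(0.005), size=x_test.shape)  # noise ~ N(0, 0.005 I)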
We show a qualitative aspect of batch norm by visualizing the activations of the penultimate hidden layer in a fully-connected network (a) without and (b) with batch norm over the course of 500 epochs. In the unnormalized network 7(a), all data points are overlapping at initialization. Over the first ≈ 20 epochs, the points spread further apart (middle plot) and begin to form clusters. In the final stage (epochs ≈ 300 − 500), the clusters become tighter. When we introduce two batch-norm layers in the network, placing them before the visualized layer, the activation patterns display notable differences, as shown in FIG9 (b): i) at initialization, all data points are spread out, allowing easier partitioning into clusters and thus facilitating faster training; ii) the clusters are more stationary, and the stages of cluster formation and tightening are not as strictly separated; iii) the inter-cluster distance and the clusters themselves are larger, indicating that the decision boundary is more sensitive to small input variations. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BkxOwVShhE | Batch normalization reduces adversarial robustness, as well as general robustness in many cases, particularly to noise corruptions. |
This paper presents preliminary ideas of our work for automated learning of Hierarchical Goal Networks in nondeterministic domains. We are currently implementing the ideas expressed in this paper.

Many domains are amenable to hierarchical problem-solving representations whereby complex problems are represented and solved at different levels of abstraction. Examples include some navigation tasks, where hierarchical A* has been shown to be a natural solution solving the navigation problem over different levels of abstraction BID29 BID66; dividing a reinforcement learning task into subtasks, where policy control is learned for subproblems and combined to form a solution for the overall problem BID9 BID10 BID12; abstraction planning, where concrete problems are transformed into abstract problem formulations, these abstract problems are solved as abstract plans, and in turn these abstract plans are refined into concrete solutions BID31 BID4; and hierarchical task network (HTN) planning, where complex tasks are recursively decomposed into simpler tasks BID8 BID68 BID15 BID43. These paradigms have in common a divide-and-conquer approach to problem solving that is amenable to a stratified representation of the subproblems.

Among the various formalisms, HTN planning has been a recurrent research focus over the years. An HTN planner formulates a plan using actions and HTN methods. The latter describe how and when to reduce complex tasks into simpler subtasks. HTN methods are used to recursively decompose tasks until so-called primitive tasks are reached, corresponding to actions that can be performed directly in the world. The HTN planners SHOP and SHOP2 have routinely demonstrated impressive gains in performance (runtime and otherwise) over standard planners. The primary reason for these performance gains is the capability of HTN planners to exploit domain-specific knowledge BID67. HTNs provide a natural knowledge-modeling representation for many domains, including military planning BID38 BID40, strategy formulation in computer games BID23 BID22, manufacturing processes BID47 BID61, project planning BID62 BID63, story-telling BID6, web service composition, and UAV planning BID20.

Despite these successes, HTN planning suffers from a representational flaw centered around the notion of task. A task is informally defined as a description of an activity to be performed (e.g., "find the location of robot r15", or the task "dislodge red team from Magan hill" in an adversarial game) and syntactically represented as a logical atom (e.g., (locate r15) or (dislodge redteam Magan)). Beyond this syntax, there is no explicit semantics of what tasks actually mean in HTN representations. HTN planners obviate this issue by requiring that a complete collection of tasks and methods is given, one that decomposes every complex task in every plausible situation. However, the knowledge engineering effort of creating a complete set of tasks and methods can be significant BID17. Furthermore, researchers have pointed out that the lack of tasks' semantics makes using HTNs problematic for execution monitoring problems BID13 BID14. Unlike goals, which are conditions that can be evaluated against the current state of the world, tasks have no explicit semantics other than decomposing them using methods.
For example, suppose that a team of robots is trying to locate r15 and, using HTN planning, generates a plan calling for the different robots to ascertain r15's location; or suppose that, given a gaming task to dislodge the red team from Magan hill, the HTN planner generates a complex plan to cut off access to Magan, surround it, weaken the defenders with artillery fire, and then proceed to assault it. If, sometime while executing the latter plan, the opponent abandons the hill, the plan would continue to be executed despite the fact that the task is already achieved. This is due to the lack of task semantics: the fulfillment of tasks cannot be checked against the current state; instead their fulfillment is only guaranteed when the execution of the generated plans is completed.

Hierarchical Goal Networks (HGNs) solve these limitations by representing goals (not tasks) at all echelons of the hierarchy BID56. Hence, goal fulfillment can be directly checked against the current state. In particular, even when a goal g is decomposed into other goals (i.e., HGN methods decompose goals into subgoals), the question of whether the goal is achieved can be answered directly by checking if it is valid in the current state. So in the previous example, when the opponent abandons the hill, an agent executing the plan knows this goal has been achieved regardless of how far it got into executing that plan. Another advantage of HGNs is that they relax the complete-domain requirement of HTN planning BID57; in HTN planning a complete set of HTN methods for each task is needed to generate plans. Even if the HGN methods are incomplete, it is still possible to generate solution plans by falling back to standard planning techniques such as heuristic planning BID24 to achieve any open goals. Nevertheless, having a collection of well-crafted HGN methods can lead to significant improvement in performance over standard planning techniques BID59. When the HGN domain is complete (i.e., there is no need to revert to standard planning techniques to solve any problem in the domain), its expressiveness is equivalent to Simple Hierarchical Ordered Planning BID59, which is the particular variant of HTN planning used by the widely used HTN planners SHOP and SHOP2 BID45. SHOP requires the user to specify a total order of the tasks; SHOP2 drops this requirement, allowing a partial order between the tasks BID44. Both have the same representation capabilities, although SHOP2 is usually preferred since it doesn't force the user to provide a total order for the method's subtasks BID44.

In this work, we propose the automated learning of HGNs for nondeterministic (ND) domains, extending our previous work on learning HTNs for deterministic domains BID21. While work exists on learning goal hierarchies BID53 BID32 BID49, these works are based on formalisms that have more limited representations than HGNs and in fact predate them. Aside from HGNs, researchers have explored other ways to address the limitation associated with the lack of tasks' semantics. For instance, TMKs (Task-Method-Knowledge models) require not only the tasks and methods to be given but also the semantics of the tasks themselves as (preconditions, effects) pairs BID41 BID42.
While this solves the issue with the lack of tasks' semantics it may exacerbate the knowledge engineering requirement of HTNs: the knowledge engineer must not only encode the methods and tasks but also must encode their semantics and ensure that the methods are consistent with the given tasks' semantics. To deal with incomplete HTN domains, researchers have proposed translating the methods into a collection of actions so that standard planning techniques can be used BID1 ). There are two limitations with this approach. First, HTN planning is strictly more expressive than standard planning BID15, hence the translation will be incomplete in many domains. Second, for domains when translating methods into actions is possible, it may in exponentiallymany actions on the number of methods. HGNs are more in line with efforts combining HTN and standard planning approaches BID30 BID17; the main difference is that HGNs eliminate the use of tasks all-together while still preserving the expressiveness of Simple Hierarchical Ordered Planning BID59.The problem of learning hierarchical planning knowledge has been a frequent research subject over the years. For example, ICARUS (Choi and Langley 2005) learns HTN methods by using skills (i.e., abstract definitions of semantics of complex actions) represented as Horn clauses. The crucial step is a teleoreactive process where planning is used to fill gaps in the HTN planning knowledge. For example, if the learned HTN knowledge is able to get a package from an starting location to a location L1 and the HTN knowledge is also able to get the package from a location L2 to its destination, but there is no HTN knowledge on how to get the package from L1 to L2, then an standard planner is used to generate a plan to get the package from L1 to L2 and skills are used to learn new HTN methods from the plan generated to fill the gap on how to get from L1 to L2.Another example is HTN-Maker (Hogg, Muñoz-Avila, and Kuter 2008). HTN-Maker uses task semantics defined as (preconditions,effects) pairs, exactly like TMKs mentioned before, to identify sequences of contiguous actions in the input plan trace where the preconditions and effects are met. Task hierarchies are learned when an action sequence is identified as achieving a task and the action sequence is a sub-sequence of another larger action sequence achieving another task. This includes the special case when the sub-sequence and the sequence achieve the same task. In such a situation recursion is learned. HTN-Maker learns incrementally after each training case is given. HTNLearn BID70 transforms the input traces into a constraint satisfaction problem. Like HTN-Maker, it also assumes (preconditions,effects) as the task semantics to be given as input. HTNLearn process the input traces converting them into constraints. For example, if a literal p is observed before an action a and a is a candidate first sub-task for a method m, then a constraint c is added indicating that p is a precondition of m. These constrains are solved by a MAXSAT solver, which returns the truth value for each constraint. For example, if c is true then p is added as a precondition of m. As a of the MAXSAT process, HTNLearn is not able to converge to a 100% correct domain (the evaluation of HTNLearn computes the error rates in the learned domain).Similar to HTN planning, hierarchical decompositions have been used in hierarchical reinforcement learning BID50 BID10. 
The hierarchical structure of the reinforcement learning problem is analogous to an instance of the decomposition tree that an HTN planner might generate. Given this hierarchical structure, hierarchical reinforcement learners perform value-function composition for a task based on the value functions learned over its subtasks recursively. However, the possible hierarchical decompositions must be provided in advance. Hierarchical goal networks (HGNs) BID56 are an alternative representation formalism to HTNs. In HGNs, goals, instead of tasks, are decomposed at every level of the hierarchy. HGN methods have the same fo.rm as HTN methods but instead of decomposing a task, they decompose a goal; analogously instead of subtasks, HGN methods have subgoals. If the domain description is incomplete, HGNs can fall back to STRIPS planners to fill gaps in the domain. On the other hand, total-order HGNs are as expressive as totalorder HTNs BID59 and its partial-order variant ) is as expressive as partial-order HTNs.Inductive learning has been used to learn rules indicating goal-subgoal relations in X-learn BID53. This is akin to learning macro-operators BID39 BID5; the learned rules and macro-operators provide search control knowledge to reach the goals more rapidly but they don't add expressibility to standard planning. SOAR learns goal-subgoal relations BID32. It uses as input annotated behavior trace structures, indicating the decisions that led to the plans; this is used to generate a goal-subgoal relations. Another work on learning goal-subgoal relations is reported in BID49 ). It uses case-based learning techniques to store goal-subgoal relations, which are then reused by using similarity metrics. These works assume some form of the input traces, unstructured in BID49 ) and structured in BID32, to be annotated with the subgoals as they are accomplished in the traces. In our proposed work, the input traces are not annotated and, more importantly, we are learning HGNs. Goal regression techniques have been used to generate a plan starting from the goals that must be achieved BID52 BID36. The of goal regression can be seen as a hierarchy recursively generated by indicating for each goal what subgoals must be achieved. The goal-subgoal relations ing from goal regression are a direct consequence of the domain's operators: the goals are effects of the operators and the preconditions are the subgoals. In contrast, in a HGN, the hierarchies of goals represent relations between the HGN methods and are not necessarily implied directly from the actions. Making an analogy with HTN methods, HGN methods capture additional domain-specific knowledge BID45 or generate plans with desirable properties (e.g., taking into account quality considerations) again not explicitly represented in the actions BID27.Work on learning hierarchical plan knowledge is related to learning of context-free grammars (CFGs), which aims at eliciting a finite set of production rules from a finite set of strings BID48 BID55 ). The precise definition of the learning problem varies constraining the ing CFG by, among others, providing a target function (e.g., obtaining a CFG with the minimum number of production rules) or assuming that negative examples (i.e., strings that must not be generated by the CFG) are given. To learn CFGs, algorithms search for production rules that generate the training set (and none of the negative examples when provided). 
Grammar learning is exploited by the Greedy Structure Hypothesizer (GSH) BID35, which uses probabilistic context-free grammars learning techniques to learn a hierarchical structure of the input plan traces. GSH doesnt learn preconditions since its goals are not to generate the grammars for planning but to reflect users preferences. The difference between learning CFG and learning hierarchical planning knowledge is twofold. First, characters that form a string have no meaning. In contrast, actions in a given plan are defined by their preconditions and effects. This means that plausible strings generated by the grammars may be invalid when viewed as plans. Second, learning HGNs requires not only learning the task decomposition but also the preconditions. This is an important difference: HTNs are strictly more expressive than CFGs BID16. Intuitively, HTNs are akin to context-sensitive grammars in that they constraint when a decomposition can take place. Context-sensitive grammars are also strictly more expressive then CFGs BID60.Finally, as we will see in the next the next section, our proposed work is related to the notion of planning landmarks BID25. Given a planning problem P, defined as a triple (s 0, g,A), indicating the initial state, the goals and the actions respectively, a planning landmark is either an action a ∈ A, or state atom p ∈ s (s is an state, represented as a collection of atoms) that occurs in any solution plan trace solving P. Given the problem description P, planning systems can identify automatically landmarks for P. Planning landmarks have been widely used for automated planning ing in planners such as LAMA BID54 and the HGN planner GoDel BID57 ). We want to learn HGNs for fully observable nondeterministic (FOND) planning BID19 Speck, Ortlieb, and Mattmüller 2015; BID69. In such domains, actions may have multiple outcomes. For example, in the Minecraft simulation, when a character swings a sword to hit a monster, there are two possible outcomes: either the sword hits the monster or the monster parries it and the sword doesn't hit anything. As discussed before, HTN learners require the tasks semantics to be given either as Horn clauses defining the tasks or as (preconditions,effects) pairs. The latter is used, for example, in the nondeterministic HTN learner ND-HTNMaker, a state-of-the-art HTN learner, to pinpoint locations in the traces where the various tasks are fulfilled. ND-HTNMaker enforces a right recursive structure: exactly one primitive task followed by none or exactly one compound task. The main objective of enforcing this right recursive structure is to deal with nondeterminism: if, for example, the character swings the sword (e.g., a primitive task), the follow-up compound task handles the nondeterminism: one method decomposing a compound task t will simply perform the action to swing at the monster followed by t, thereby ensuring that method can be triggered as many times as needed until the monster is hit (and dies). Other methods decomposing t handle the case when the monster has been dealt with (e.g., a method handling the case when "character next to a dead monster"). This ensures that methods learned by HTN-Maker are provable correct BID26. 
Correctness can be loosely defined as follows: any solution generated by a sound nondeterministic HTN planner such as ND-SHOP BID33 ) using the learned methods and the nondeterministic actions is also a solution when using the nondeterministic actions (i.e., without the methods).Like in the deterministic case, the inputs will be a collection of actions A and a collection of traces s 0 a 0 s 1 a 1... a n s n+1, where each a i ∈ A. Only this time, any action a i ∈ A may have multiple outcomes; so each occurrence of a i in the input traces will reflect one such outcome. Planning in nondeterministic domains requires to account for all possible outcomes. As such, BID7 proposed a categorization of solutions for nondeterministic domains. It distinguishes between weak, strong cyclic and strong solutions for a problem (s 0, g,A). A solution is represented as a policy π: S → A, a mapping from the possible states in the world S to actions A, indicating for any given state s ∈ S, what action π(s) to take. Given a policy π, an execution trace is any sequence s 0 π(s 0) s 1 π(s 1)... π(s n) s n+1, where s i is an state that can be reached from state s i−1 after applying action π(s i−1).A solution policy π is weak if there exists an execution trace from s 0 to a state satisfying g. Weak solutions guarantee that a goal state can be successfully reached sometimes. For example, in the Minecraft simulation, a policy that assumes a computer-controlled character will always hit any monster it encounters when swinging the sword is considered a weak solution. In particular, this solution would not account for the situation when the monster parries the character's sword attack; e.g., the monster might counter-attack and disable the agent and the agent has not planned what to do in such a situation. Under the fairness assumption, stating that "every action executed infinitely often will exhibit all its effects infinitely often" (D'Ippolito, Rodrıguez, and Sardina 2018), a solution π is either strong cyclic or strong if every terminal state entails the goals and for every state s that the agent might finds itself in after executing π from s 0, there exists an execution trace from the state s to a state satisfying g. The difference is that in strong cyclic solutions the same state might be visited more than once whereas in strong solutions this never happens. For example, a strong cyclic solution might have the character swing the sword against the monster and if the monster parries the attack, the character takes an step back to avoid the monster's counter-attack and step towards the monster while taking another swing at it; this can be repeated as many times as needed until the monster dies. Strong solutions are ideal since they never visit the same state but in some domains they might not exists. For instance, there are no strong solutions in the Minecraft simulation mentioned as the monster can repeatedly parry the character's attacks. The same occurs in the robot navigation domain BID7 ), created to model nondeterminism. In this domain a robot is navigating between offices and when it encounters a closed door for an office it wants to access, the robot will open it. There is an another agent acting in the environment that closes doors at random. So the robot might need to repeatedly execute the action to open the same door. Solving nondeterministic planning problems is difficult because of what has been dubbed the explosion of states as a of the nondeterminism BID19. 
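To make the distinction between weak and strong cyclic solutions concrete, the following toy Python sketch mimics the robot-and-doors example just described (entirely illustrative; the state and action names are made up and do not come from any planner).

import random

# nondeterministic action: opening a door may fail, since another agent may close it again
def open_door():
    return "door-open" if random.random() < 0.5 else "door-closed"

# a strong cyclic policy keeps retrying open-door until it succeeds, then enters the office;
# under the fairness assumption the goal state "in-office" is eventually reached
policy = {"door-closed": "open-door", "door-open": "enter-office"}

state, trace = "door-closed", []
while state != "in-office":
    action = policy[state]
    trace.append((state, action))
    state = open_door() if action == "open-door" else "in-office"

# a weak policy would instead assume open-door always succeeds and plan a single linear trace

Such retry loops are exactly what strong cyclic solutions must encode, and enumerating all reachable outcome states is what produces the explosion of states mentioned above.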
One demonstrated way to counter this is by adding domain-specific knowledge as described in BID33 ). While the algorithm described is generic for a variety of ways to encode the domain-specific knowledge, it showcases hierarchical planning techniques outperforming an state-of-the-art nondeterministic planner in some domains including the robot navigation domain. The show either speedups of several orders of magnitude or the ability to solve problems of sizes, measured by the number of goals to achieve, previously impossible to solve. Relation to probabilistic domains. In this work we are neither assuming a probability distribution over the possible actions' outcomes to be given nor we aim to learn such a distribution. Once an HGN domain is learned, hierarchical reinforcement learning techniques BID10 can be used to learn a probability distribution over the various possible goal decompositions and exploit the learned distribution during problem solving as done in BID27.We propose to learn bridge atoms and their hierarchical structure with the important constraint that the learned hierarchical structure must encode the domain's nondeterminism in a sound way. For instance, the nondeterministic version of the logistics transportation domain in BID26 ) extends the deterministic version as follows: when loading a package p into vehicle v in a location l there are two possible outcomes: either p is inside v or p is still at l (i.e., the load action failed). Regardless of possibly repeating the same action multiple times, traces will bring the package to the airport, transport it by air to the destination city, and deliver it. So the kinds of decompositions we are aiming to learn should also work on nondeterministic domains; on the other hand a learned hierarchy would be unsound if, for example, it assumes that the load truck action always succeeds and immediately proceeds to deliver the package to an airport. This will lead to weak solutions. To correctly handle nondeterminism, we propose forcing a right-recursive structure on lower echelons of the learned HGNs. This takes care of the nondeterminism and combine well with the higher decompositions. For instance, in the transportation domain we identified a goal g airp, for the package p reaching the airport, identified as a bridge atom, and then have all methods achieving g airp be right recursive; e.g., methods of the form (: method g airp prec (g g airp) <), where g is some intermediate goal such as loading the package into a vehicle. Our aim is the automated learning of HGN methods. This includes learning the goals, the goal-subgoal structure of the HGN methods and their applicability conditions. Specifically, the learning problem can be defined as follows: given a set of actions A and a collection of traces Π generated using actions in A, to obtain a collection of HGN methods. A collection of methods M is correct if given any (initial state, goal) pair (s 0, g), and any solution plan π generated by a sound HGN planner using M and A, π is a correct plan solving the planning problem (s 0, g,A). An HGN method m is a construct of the form (:method head(m) preconditions(m) subgoals(m) <(m)) corresponding to the goal decomposed by m (called the head of m), the preconditions for applying m and the subgoals decomposing head(m). Figure 1 shows an example of an HGN method in the logistics transportation domain BID65. (the question marks indicate variables. It recursively decomposes the goal of delivering ? pack1 into ? 
loc2 into three subgoals: delivering ?pack to the airport ?airp1 in the same city as its current location ?loc1, delivering ?pack to the airport ?airp2 in the same city as the destination location ?loc2, and recursively achieving the head goal):

Head: Package-delivery
Preconditions: (at ?pack ?loc1 ?city1) (airport ?airp1 ?city1) (airport ?airp2 ?city2) (location ?loc2 ?city2) (not (= ?city1 ?city2))
Subgoals: g1: (package-at ?pack ?airp1), g2: (package-at ?pack ?airp2), g3: (package-at ?pack ?loc2 ?city2)
Constraints: g1 < g3, g2 < g3

Figure 1: Example of an HGN method in the logistics transportation domain. The question marks indicate variables. The goal achieved by the method is the last subgoal, g3. It recursively decomposes the goal of delivering ?pack to ?loc2 into three subgoals: delivering ?pack to the airport ?airp1 in the same city as its current location ?loc1, delivering ?pack to the airport ?airp2 in the same city as the destination location ?loc2, and g3 itself, which is to be achieved after g1 and g2 are achieved.

HGN planners BID57 BID56 maintain a list G = g1, ..., gn of open goals (i.e., goals to achieve). Planning follows a recursive procedure, starting with an empty plan π and choosing the first element, g1, in G, and either applying an HGN method m decomposing g1 into m's subgoals g′1, ..., g′k, concatenating m's subgoals in front of G (i.e., G = g′1, ..., g′k, g1, ..., gn are the new open goals), or applying an action a ∈ A achieving g1, appending a to π (i.e., π ← π · a) and removing g1 from G. In either case the planner checks whether the preconditions of m (respectively, a) are satisfied in the current state. When a is applied, the current state is transformed in the usual way BID18. When G = ∅, π is returned. HGN planners extend this basic procedure to allow the use of standard planning techniques to achieve open goals and to enable a partial ordering between the methods' subgoals, in which case the planner picks the first goal in G without predecessors. For example, in Figure 1, the user may define the constraints g1 < g3 and g2 < g3, and the planner, instead of always picking the first subgoal in G, picks the first subgoal without predecessors.

We propose transforming the problem of identifying the goals and learning their hierarchical relation into the problem of finding relations between word embeddings extracted from text. Specifically, we propose viewing the collection of input traces Π as text: each plan trace π = s0 a0 s1 a1 ... an sn+1 is viewed as a sentence w1 w2 ... wm; each action ai and each atom in sj is viewed as a word wk in the sentence. The order of the plan elements in each trace is preserved (we use the term plan element to refer to both atoms and actions): the word wj = ai appears before every word wj′ = p with p ∈ si+1, and in turn every such wj′ appears before the word wj″ = ai+1.

Word embeddings are vectors representing words in a multi-dimensional vector space BID3 BID2. There are a number of algorithms to do this translation BID37 BID51. They have in common that they represent vector similarity based on the co-occurrence of words in the text. That is, words that tend to occur near one another will have similar vector representations. In our preliminary work we used Word2Vec BID37 (i.e., Word-Neighboring Word), a widely used algorithm for generating word embeddings.
Word2Vec uses a shallow neural network, consisting of a single hidden layer, to compute these vector representation; it computes a context window W consisting of k contiguous words and trains the network using each word w ∈ W (i.e., W is w's context). The window W is "moved" one word at the time through the text further training the network each time. Training is repeated with windows of size i = {1, 2, . . . k}. For this reason, Word2Vec is said to use "dynamic windows". In Word2Vec, similarity is computed with the cosine similarity, sim C, because it measures how close is the orientation of the ing vectors, which are distributed in such a way that words frequently co-occurring in the context windows have similar orientation whereas those that co-occur less frequently will have a dissimilar orientation. There are two particularities of the change of representation from plan elements to word embeddings that is particularly suitable for our purposes: first the procedure is unsupervised. This means in our case that we do not have to annotate the traces with additional information such as where the goals are been achieved in the traces. Second, vector representations are generated based on the context in which they occur (e.g., the dynamic window W in Word2Vec). In our case, the vector representations of the plan elements will be generated based on their proximity to other plan elements in the traces. These vectors can be clustered together into plan elements that are close to one another. Our working hypothesis, supported by previous work BID21, is that what we call bridge atoms, are ideal candidates for goals. Given two clusters of plan element embeddings, A and B, a bridge atom, bridge AB, is an atom in either A or B that is most similar to the plan elements in the other set. Establishing a bridge atom hierarchy is a recursive process that first requires calculating the bridge atom of a corpus, splitting each text around the bridge atom so that each text in the corpus becomes two new texts, before and after the bridge atom, and then repeating the procedure on the ing sub-corpora. The procedure for find a bridge atom for a corpora is as follows. We train a Word2Vec model on the corpus to determine the word vectors and cluster them with Hierarchical Agglomerative Clustering. Currently we limit the number of clusters to two, although later research may explore how to determine the number of clusters from the structure of the traces. We determine the cosine distance of each atom in a cluster to each atom in the other and average them together for each atom, selecting the word with the shortest average distance, DISPLAYFORM0 where dist C is the cosine distance between the vector representations of two atoms. If an action is selected as the bridge atom, we instead use in its place the atom describing one of its goals. As previously stated, by splitting each trace around the bridge atom, we can form two new sub-corpora, one from the section of each trace before the bridge atom and one from the section after the bridge atom. Then we recursively perform the procedure for bridge atom selection on each new corpora, keeping track of the hierarchical relationship of each subcorpora to the other corpora. If during the division process, a section of a trace becomes shorter than some threshold, we discard it from the sub-corpora. Progress along any branch of recursion halts once there are insufficient traces in a subcorpus for training. 
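A sketch of the bridge-atom selection step described above (assuming gensim and scikit-learn; the vector size and window are illustrative placeholders, the epoch count and learning rate follow the text, older scikit-learn versions name the metric argument "affinity", and the post-processing step that replaces a selected action with one of its goal atoms is omitted).

import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import AgglomerativeClustering
from scipy.spatial.distance import cosine

def find_bridge_atom(traces):
    # traces: list of plan traces, each a list of "words" (action and atom symbols)
    model = Word2Vec(sentences=traces, vector_size=50, window=5,
                     min_count=1, epochs=1000, alpha=0.00025)
    words = list(model.wv.index_to_key)
    vecs = np.array([model.wv[w] for w in words])
    labels = AgglomerativeClustering(n_clusters=2, linkage="average",
                                     metric="cosine").fit_predict(vecs)
    clusters = [[w for w, l in zip(words, labels) if l == c] for c in (0, 1)]
    best, best_d = None, float("inf")
    for c in (0, 1):
        other = [model.wv[w] for w in clusters[1 - c]]
        for w in clusters[c]:
            d = np.mean([cosine(model.wv[w], v) for v in other])  # average cosine distance to the other cluster
            if d < best_d:
                best, best_d = w, d
    return best

Splitting each trace around the returned bridge atom and recursing on the two resulting sub-corpora yields the bridge-atom hierarchy described next.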
We use the hierarchy of bridge atoms as a guide for building a set of hierarchical methods. At the lowest level of division are single-action or short multi-action sections of the traces. Each of these sections will become a method with a single goal (an effect of an action) or a method with multiple goals (one for each of the actions). Each of these methods have two subgoals: one for the subsection of trace before a bridge atom and another one for the trace after that bridge atom. Each action is annotated with its preconditions. The preconditions of a method can be extrapolated from the preconditions of the actions into which it decomposes by regressing over the actions of that section of the plan trace in reverse, collecting the action preconditions and removing from the preconditions any atom which is in the effects of chronologically-preceding action. We are using a variant of the Pyhop HTN planner (https: //bitbucket.org/dananau/pyhop). Our variant introduces nondeterminism in the actions and generates solution policies as described in BID33.Our experiments use a nondeterministic variant of the logistics domain BID64 ). In the domain, packages must be relocated from one location to another. Trucks transport packages within cities, and airplanes transport packages between cities via airports. Nondeterminism is introduced via the load and unload operators, which have two outcomes, success (the package is loaded onto/unload from the specified vehicle) or failure (the package does not move from its original location). We have also added rockets that transport packages between cities on different planets via launchpads. All traces demonstrate a plan for achieving the same goal, the relocation of a package from a location in one city on the starting planet to a location in a city on the destination planet. To ensure that Word2Vec can identify common bridge atoms across the corpus, the package and each location must have the same name in all traces. Although Word2Vec typically works best on a corpus of thousands of texts or more, we are able to learn reasonable bridge atoms from hundreds of texts by increasing the number of epochs and lowering learning rate. For our problem design, a reasonable first bridge atom is one that involves the package and a rocket or launchpad, as transporting the package from the start planet to the destination planet marks the halfway point in the traces. From a corpus of 700 traces, with 1000 epochs and a learning rate of 0.00025, our first bridge atom is the action unload(package, rocket).Because word embeddings are sensitive to word context, the trace structure influences the bridge atom hierarchy. Which atoms are included in the trace and where they are included is important. We are experimenting with two different variants of state expression within traces. In one variant, we list each action preceded by its deletelist and followed by its addlist. If an atom occurs in the addlist of one action and the deletelist of the subsequent action, that atom will only appear in the addlist of the first action. In another variant, we list actions preceded by their preconditions and followed by their effects. In both variants, atoms are listed alphabetically. | [
0 (×121), 1, 0 (×113) ] | S1GALrBWtN | Learning HGNs, ND domains |
Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins. The ability to learn quickly is a key characteristic that distinguishes human intelligence from its artificial counterpart. Humans effectively utilize prior knowledge and experiences to learn new skills quickly. However, artificial learners trained with traditional supervised-learning or reinforcement-learning methods generally perform poorly when only a small amount of data is available or when they need to adapt to a changing task. Meta-learning seeks to resolve this deficiency by broadening the learner's scope to a distribution of related tasks. Rather than training the learner on a single task (with the goal of generalizing to unseen samples from a similar data distribution), a meta-learner is trained on a distribution of similar tasks, with the goal of learning a strategy that generalizes to related but unseen tasks from a similar task distribution. Traditionally, a successful learner discovers a rule that generalizes across data points, while a successful meta-learner learns an algorithm that generalizes across tasks. Many recently-proposed meta-learning methods demonstrate improved performance at the expense of being hand-designed at either the architectural or algorithmic level. Some have been engineered with a particular application in mind, while others have aspects of a particular high-level strategy already built into them. However, the optimal strategy for an arbitrary range of tasks may not be obvious to the humans designing a meta-learner, in which case the meta-learner should have the flexibility to learn the best way to solve the tasks it is presented with. Such a meta-learner would need to have an expressive, versatile model architecture, in order to learn a range of strategies in a variety of domains. Meta-learning can be formalized as a sequence-to-sequence problem; in existing approaches that adopt this view, the bottleneck is in the meta-learner's ability to internalize and refer to past experience. Thus, we propose a class of model architectures that addresses this shortcoming: we combine temporal convolutions, which enable the meta-learner to aggregate contextual information from past experience, with causal attention, which allows it to pinpoint specific pieces of information within that context.
We evaluate this Simple Neural AttentIve Learner (SNAIL) on several heavily-benchmarked meta-learning tasks, including the Omniglot and mini-Imagenet datasets in supervised learning, and multi-armed bandits, tabular Markov Decision Processes (MDPs), visual navigation, and continuous control in reinforcement learning. In all domains, SNAIL achieves state-of-the-art performance by significant margins, outperforming methods that are domain-specific or rely on built-in algorithmic priors. Before we describe SNAIL in detail, we will introduce notation and formalize the meta-learning problem. As briefly discussed in Section 1, the goal of meta-learning is generalization across tasks rather than across data points. Each task T_i is episodic and defined by inputs x_t, outputs a_t, a loss function L_i(x_t, a_t), a transition distribution P_i(x_t | x_{t−1}, a_{t−1}), and an episode length H_i. A meta-learner (with parameters θ) models the distribution π(a_t | x_1, . . ., x_t; θ). Given a distribution over tasks T = P(T_i), the meta-learner's objective is to minimize its expected loss with respect to θ: min_θ E_{T_i ∼ T} [ Σ_{t=0}^{H_i} L_i(x_t, a_t) ], where x_t ∼ P_i(x_t | x_{t−1}, a_{t−1}) and a_t ∼ π(a_t | x_1, . . ., x_t; θ). A meta-learner is trained by optimizing this expected loss over tasks (or mini-batches of tasks) sampled from T. During testing, the meta-learner is evaluated on unseen tasks from a different task distribution that is similar to the training task distribution T. The key principle motivating our approach is simplicity and versatility: a meta-learner should be universally applicable to domains in both supervised and reinforcement learning. It should be generic and expressive enough to learn an optimal strategy, rather than having the strategy already built-in. BID19 considered a similar formulation of the meta-learning problem, and explored using recurrent neural networks (RNNs) to implement a meta-learner. Although simple and generic, their approach is significantly outperformed by methods that are hand-designed to exploit domain or algorithmic knowledge (methods which we survey in Section 4). We hypothesize that this is because traditional RNN architectures propagate information by keeping it in their hidden state from one timestep to the next; this temporally-linear dependency bottlenecks their capacity to perform sophisticated computation on a stream of inputs. van den BID26 introduced a class of architectures that generate sequential data (in their case, audio) by performing dilated 1D-convolutions over the temporal dimension. These temporal convolutions (TC) are causal, so that the generated values at the next timestep are only influenced by past timesteps and not future ones. Compared to traditional RNNs, they offer more direct, high-bandwidth access to past information, allowing them to perform more sophisticated computation over a temporal context of fixed size. However, to scale to long sequences, the dilation rates generally increase exponentially, so that the required number of layers scales logarithmically with the sequence length. Hence, they have coarser access to inputs that are further back in time; their bounded capacity and positional dependence can be undesirable in a meta-learner, which should be able to fully utilize increasingly large amounts of experience. In contrast, soft attention (in particular, the style used by BID28) allows a model to pinpoint a specific piece of information from a potentially infinitely-large context.
It treats the context as an unordered key-value store which it can query based on the content of each element. However, the lack of positional dependence can also be undesirable, especially in reinforcement learning, where the observations, actions, and rewards are intrinsically sequential. Despite their individual shortcomings, temporal convolutions and attention complement each other: while the former provide high-bandwidth access at the expense of finite context size, the latter provide pinpoint access over an infinitely large context. Hence, we construct SNAIL by combining the two: we use temporal convolutions to produce the context over which we use a causal attention operation. By interleaving TC layers with causal attention layers, SNAIL can have high-bandwidth access over its past experience without constraints on the amount of experience it can effectively use. By using attention at multiple stages within a model that is trained end-to-end, SNAIL can learn what pieces of information to pick out from the experience it gathers, as well as a feature representation that is amenable to doing so easily. As an additional benefit, SNAIL architectures are easier to train than traditional RNNs such as LSTM or GRUs (where the underlying optimization can be difficult because of the temporally-linear hidden state dependency) and can be efficiently implemented so that an entire sequence can be processed in a single forward pass. Figure 1 provides an illustration of SNAIL, and we discuss architectural components in Section 3.1. Figure 1: Overview of our simple neural attentive learner (SNAIL); in this example, two blocks of TC layers (orange) are interleaved with two causal attention layers (green). The same class of model architectures can be applied to both supervised and reinforcement learning. In supervised settings, SNAIL receives as input a sequence of example-label pairs (x 1, y 1),..., (x t−1, y t−1) for timesteps 1,..., t − 1, followed by an unlabeled example (x t, −). It then outputs its prediction for x t based on the previous labeled examples it has seen. In reinforcement-learning settings, it receives a sequence of observation-action-reward tuples (o 1, −, −),..., (o t, a t−1, r t−1). At each time t, it outputs a distribution over actions a t based on the current observation o t as well as previous observations, actions, and rewards. Crucially, following existing work in meta-RL BID2 BID31, we preserve the internal state of a SNAIL across episode boundaries, which allows it to have memory that spans multiple episodes. The observations also contain a binary input that indicates episode termination. We compose SNAIL architectures using a few primary building blocks. Below, we provide pseudocode for applying each block to a matrix ("inputs" in the pseudocode) of size (sequence length) × (input dimensionality). Note that, if any of the inputs are images, we employ an additional (spatial) convolutional network that converts the image into a feature vector before it is passed into the SNAIL. FIG0 illustrates the different blocks visually. Many techniques have been proposed to increase the capacity or accelerate the training of deep convolutional architectures, including batch normalization BID9 ), residual connections BID6 ), and dense connections BID8 ). 
We found that these techniques greatly improved the expressive capacity and training speed of SNAILs, but that no particular choice of residual/dense configurations was essential for good performance (we explore the robustness of SNAILs to architectural choices in Appendix B). A dense block applies a single causal 1D-convolution with dilation rate R and D filters (we used kernel size 2 in all experiments), and then concatenates the result with its input. We used the gated activation function (line 3) introduced by van den BID26 BID13. A TC block applies a series of dense blocks whose dilation rates increase exponentially until their combined receptive field covers the sequence length, and an attention block performs a single causal key-value lookup over the sequence, ending with return concat(inputs, read), where CausallyMaskedSoftmax(·) zeros out the appropriate probabilities before normalization, so that a particular timestep's query cannot have access to future keys/values. Pioneered by BID20; BID25, meta-learning is not a new idea. A key tradeoff central to many recent meta-learning approaches is between performance and generality; we discuss several notable methods and how they fit into this paradigm. BID5 investigated the use of recurrent neural networks (RNNs) to solve algorithmic tasks. They experimented with a meta-learner implemented by an LSTM, but their results suggested that LSTM architectures are ill-equipped for these kinds of tasks. They then designed a more sophisticated RNN architecture, where an LSTM controller was coupled to an external memory bank from which it can read and write, and demonstrated that these memory-augmented neural networks (MANNs) achieved substantially better performance than LSTMs. BID19 evaluated both LSTM and MANN meta-learners on few-shot image classification, and confirmed the inadequacy of the LSTM architecture. These approaches are generic, but MANNs feature a complicated memory-addressing architecture that is difficult to train; they still suffer from the same temporally-linear hidden-state dependencies as LSTMs. In response, several approaches have demonstrated good performance in few-shot classification with specialized neural network architectures. BID12 used a Siamese network that was trained to predict whether two images belong to the same class. BID30 learned an embedding function and used cosine distance in an attention kernel to judge image similarity. BID23 employed a similar approach to BID30, based on Euclidean distance metrics. All three methods work well within the context of classification, but are not readily applicable to other domains, such as reinforcement learning. They perform well because their architectures have been designed to exploit domain knowledge, but ideally we would like a meta-learner that is not constrained to a particular problem type. A number of methods consider a meta-learner that makes updates to the parameters of a traditional learner BID1 BID7. BID0 and BID14 investigated the setting of learning to optimize, where the learner is an objective function to minimize, and the meta-learner uses the gradients of the learner to perform the optimization. Their meta-learner was implemented by an LSTM, and the strategy that it learned can be interpreted as a gradient-based optimization algorithm; however, it is unclear whether the learned optimizers are substantially better than existing SGD-based methods. Subsequent work extended this idea, using a similar LSTM meta-learner in a few-shot classification setting, where the traditional learner was a convolutional-network-based classifier.
In this setting, the meta-learning algorithm is decomposed into two parts: the traditional learner's initial parameters are trained to be suitable for fast gradient-based adaptation; the LSTM meta-learner is trained to be an optimization algorithm adapted for meta-learning tasks. BID3 explored a special case where the meta-learner is constrained to use ordinary gradient descent to update the learner and showed that this simplified model (known as MAML) can achieve equivalent performance. BID15 explored a more sophisticated weight update scheme that yielded minor performance improvements on few-shot classification. All of the methods discussed in the previous paragraph have the benefit of being domain independent, but they explicitly encode a particular strategy for the meta-learner to follow (namely, adaptation via gradient descent at test time). In a particular domain, there may exist better strategies that exploit the structure of the task, but gradient-based methods will be unable to discover them. In contrast, SNAIL presents an alternative paradigm where a generic architecture has the capacity to learn an algorithm that exploits domain-specific task structure. BID2 and BID31 both investigated meta-learning in reinforcement-learning domains using traditional RNN architectures (GRUs and LSTMs). In addition, BID3 experimented with fast adaptation of policies in continuous control, where the meta-learner was trained on a distribution of closely-related locomotion tasks. In Section 5.2, we benchmark SNAIL against MAML and an LSTM-based meta-learner on the tasks considered by these works. Our experiments were designed to investigate the following questions:• How does SNAIL's generality affect its performance on a range of meta-learning tasks?• How does its performance compare to existing approaches that are specialized to a particular task domain, or have elements of a high-level strategy already built-in? • How does SNAIL scale with high-dimensional inputs and long-term temporal dependencies? In the few-shot classification setting, we wish to classify data points into N classes when we only have a small number (K) of labeled examples per class. A meta-learner is readily applicable, because it learns how to compare input points, rather than memorize a specific mapping from points to classes. The Omniglot and mini-ImageNet datasets for few-shot image classification are the standard benchmarks in supervised meta-learning. Introduced by BID13, Omniglot consists of black-and-white images of handwritten characters gathered from 50 languages, for a total of 1632 different classes with 20 instances per class. Like prior works, we downsampled the images to 28 × 28 and randomly selected 1200 classes for training and 432 for testing. We performed the same data augmentation proposed by BID19, forming new classes by rotating each member of an existing class by a multiple of 90 degrees. Mini-ImageNet is a more difficult benchmark; a subset of the well-known ImageNet dataset, it consists of 84 × 84 color images from 100 different classes with 600 instances per class. We used the split released by BID18 and used by a number of other works, with 64 classes for training, 16 for validation, and 20 for testing. To evaluate a SNAIL on the N -way, K-shot problem, we sample N classes from the overall dataset and K examples of each class. We then feed the corresponding N K example-label pairs to the SNAIL in a random order, followed by a new, unlabeled example from one of the N classes. 
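As a concrete illustration of this episode construction (our own sketch, not the authors' code; `dataset` is assumed to be a mapping from class identifiers to lists of image tensors, and labels are re-indexed per episode):

```python
# Minimal sketch of N-way, K-shot episode construction. The sequence consists of N*K
# labelled (image, label) pairs in random order; the final, (N*K + 1)-th element is the
# unlabeled query image, whose true label is used only as the prediction target.
import random
import torch

def sample_episode(dataset, n_way=5, k_shot=1):
    classes = random.sample(list(dataset.keys()), n_way)
    query_label = random.randrange(n_way)            # which of the N classes supplies the query
    support, query_image = [], None
    for episode_label, cls in enumerate(classes):
        extra = 1 if episode_label == query_label else 0
        images = random.sample(dataset[cls], k_shot + extra)
        support.extend((img, episode_label) for img in images[:k_shot])
        if extra:
            query_image = images[-1]
    random.shuffle(support)
    xs = torch.stack([img for img, _ in support] + [query_image])
    ys = torch.tensor([lbl for _, lbl in support] + [query_label])
    return xs, ys
```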
We report the average accuracy on this last, (N K + 1)-th timestep. We tested SNAIL on 5-way Omniglot, 20-way Omniglot, and 5-way mini-ImageNet. For each of these three splits, we trained the SNAIL on episodes where the number of shots K was chosen uniformly at random from 1 to 5 (note that this is unlike prior works, who train separate models for each shot). For a K-shot episode within an N -way problem, the loss was simply the average cross-entropy between the predicted and true label on the (N K + 1)-th timestep. We train both the SNAIL and the feature-extracting embedding network in an end-to-end fashion using Adam BID11 For a complete description of the specifics SNAIL and embedding architectures we used, we refer the reader to Appendix A. Table 1 displays our on 5-way and 20-way Omniglot, and Table 2 respectively for 5-way mini-ImageNet. We see that SNAIL outperforms state-of-the-art methods that are extensively handdesigned, and/or domain-specific. It significantly exceeds the performance of methods such as BID19 that are similarly simple and generic. In Appendix B, we conduct a number of ablations to analyse SNAIL's performance. Table 1: 5-way and 20-way, 1-shot and 5-shot classification accuracies on Omniglot, with 95% confidence intervals where available. For each task, the best-performing method is highlighted, along with any others whose confidence intervals overlap. Method 5-Way Omniglot 20-Way Omniglot 1-shot 5-shot 1-shot 5-shot BID19 82.8% 94.9% -- BID12 97.3% 98.4% 88.2% 97.0% BID30 98.1% 98.9% 93.8% 98.5% BID3 98.7% ± 0.4% 99.9% ± 0.3% 95.8% ± 0.3% 98.9% ± 0.2% BID23 97.4% 99.3% 96.0% 98.9% BID15 98.9% -97.0% -SNAIL, Ours 99.07% ± 0.16% 99.78% ± 0.09% 97.64% ± 0.30% 99.36% ± 0.18% Table 2: 5-way, 1-shot and 5-shot classification accuracies on mini-ImageNet, with 95% confidence intervals where available. For each task, the best-performing method is highlighted, along with any others whose confidence intervals overlap. Method 5-Way Mini-ImageNet 1-shot 5-shot BID30 43.6% 55.3% BID3 48.7% ± 1.84% 63.1% ± 0.92% BID18 43.4% ± 0.77% 60.2% ± 0.71% BID23 46.61% ± 0.78% 65.77% ± 0.70% BID15 49.21% ± 0.96% -SNAIL, Ours 55.71% ± 0.99% 68.88% ± 0.92% Reinforcement learning features a number of challenges that supervised learning does not, including long-term temporal dependencies (as the experienced states and rewards may depend on actions taken many timesteps ago) as well as balancing exploration and exploitation. To explore SNAIL's ability to learn RL algorithms, we evaluate it on four different domains from prior work in meta-RL 1: • Multi-armed bandits BID2 BID31: the agent interacts with a set of arms whose reward distributions are unknown. Although its actions do not affect its state, exploration and exploitation are both essential: an optimal agent must initially explore by sampling different arms, but later exploit its knowledge by repeatedly selecting the best arm.• Tabular MDPs BID2 BID31: we procedurally generate random MDPs and allow the agent to act within each one for multiple episodes. Since every MDP is different, a meta-learner cannot simply memorize the ones it is trained on; it must actually learn an algorithm for solving MDPs. • Visual navigation BID2 BID31: the agent must navigate randomlygenerated mazes to find a randomly-located goal, using only visual observations as input. 
It is allowed to interact with the same maze/goal configuration for two episodes, so an optimal agent should explore the maze on the first episode to find the goal, and then go directly to the goal on the second episode. This task features many of the common challenges in deep RL, including high-dimensional observations, partial observability, and sparse rewards.• Continuous control BID3: we consider a suite of simulated locomotion tasks. Although the environment dynamics are complex, the underlying task distribution is quite narrow. As a , there is significant task structure for a meta-learner to exploit; the optimal strategy is closer to task-identification than a true RL algorithm. On each of these domains, we trained a SNAIL, along with two meta-learning baselines:• An LSTM-based meta-learner, as concurrently proposed by BID2 BID31. We refer to this method as "LSTM" in the tables and figures in subsequent sections.• MAML, the method introduced by BID3. It trains the initial parameters of a policy to achieve maximal performance after one (policy) gradient update on a new task. We also conducted some ablation experiments, which are detailed in Appendix D. In all domains, we trained the meta-learners using trust region policy optimization with generalized advantage estimation (TRPO with GAE; BID21); the SNAIL architectures and TRPO/GAE hyperparameters are detailed in Appendix C. In the bandit and MDP domains, there exist a number of human-designed algorithms with various optimality guarantees (which we discuss in more depth in the subsequent sections). Although there isn't much task structure for a meta-learner to exploit, the existence of upper bounds on asymptotic performance let us evaluate the optimality of a meta-learned algorithm. However, the true utility of a meta-learner is that it can learn an algorithm specialized to the particular distribution of tasks it is trained on. We evaluate this in the visual navigation and continuous control domains, where there is significant task structure for the meta-learner to exploit, but no optimal algorithms are known to exist due to the task complexity. In our bandit experiments (styled after BID2), each of K arms gives rewards according to a Bernoulli distribution whose parameter p ∈ is chosen randomly at the start of each episode of length N. At each timestep, the meta-learner receives previous timestep's reward, along with a one-hot encoding of the corresponding arm selected. It outputs a discrete probability distribution over the K arms; the selected arm is determined by sampling from this distribution. As an oracle, we consider the Gittins index BID4, the Bayes optimal solution in the discounted, infinite horizon setting. Since it is only optimal as N → ∞, a meta-learner can outperform it for smaller N by choosing to exploit sooner. Following BID2, we tested all combinations of N = 10, 100, 500 and K = 5, 10, 50. We also tested the additional case of N = 1000, K = 50 to further evaluate the scalability of SNAIL to longer sequences. We report the mean reward per episode for each setting; the are given in Table 3 with 95% confidence intervals where available. We found that training MAML was too computationally expensive for N = 500, 1000; hence we omit those from Table 3. Table 3: Results on multi-arm bandit problems. For each, we highlighted the best performing method, and any others whose performance is not statistically-significantly different (based on a one-sided t-test with p = 0.05). 
Except for SNAIL and MAML, we report the from BID2. DISPLAYFORM0 10, 5 6.6 5.0 6.7 6.5 ± 0.1 6.6 ± 0.1 10, 10 6.6 5.0 6.7 6.6 ± 0.1 6.7 ± 0.1 10, 50 6.5 5.1 6.8 6.6 ± 0.1 6.7 ± 0. 1 100 ), each MDP had 10 states and 5 actions (both discrete); the reward for each (state, action)-pair followed a normal distribution with unit variance where the mean was sampled from N, and the transitions are sampled from a flat Dirichlet distribution (the latter is a commonly used prior in Bayesian RL) with random parameters. We allowed each meta-learner to interact with an MDP for N episodes of length 10. As input, they received one-hot encodings of the current state and previous action, the previous reward received, and a binary flag indicating termination of the current episode. In addition to a random agent, we consider the follow human-designed algorithms as baselines.• PSRL BID24: a Bayesian method that estimate the belief over the current MDP parameters. At the start of each of the N episodes, it samples an MDP from the current posterior, and acts according to the optimal policy for the rest of the episode.• OPSRL BID17: an optimistic variant of PSRL.• UCRL2 BID10: uses an extended value iteration procedure to compute an optimistic MDP under the current belief.• -greedy: with probability 1 −, act optimally against the MAP estimate according to the current posterior (which is updated once per episode). As an oracle, we run value iteration for 10 iterations (the episode length) on each MDP. Value iteration is optimal when the MDP parameters (reward function, transition probabilities) are known; thus, the ing values provide an upper bound on the performance of any algorithm, whether human-designed or meta-learned (which do not receive the MDP parameters).We tested N = 10, 25, 50, 75, 100; in TAB3, we report the performance normalized by the valueiteration upper bound. As N increases, performance should approach 1, as the algorithm learns more about the current MDP. Similarly to the bandit experiments, we could not train MAML successfully for N = 50, 75, 100. In FIG1, we show learning curves of SNAIL and LSTM. We consider the set of tasks introduced by BID3, in which two simulated robots (a planar cheetah and a 3D-quadruped ant) have to run in a particular direction or at a specified velocity (the direction or velocity are chosen randomly and not told to the agent). In the goal direction experiments, the reward is the magnitude of the robot's velocity in either the forward or backward direction, and in the goal velocity experiments, the reward is the negative absolute value between its current forward velocity and the goal. The observations are the robot's joint angles and velocities, and the actions are its joint torques. For each of these four task distributions ({ant, cheetah} × {goal velocity, goal direction}), BID3 trained a policy to maximize its performance after one policy gradient update using 20 episodes (40 for ant), of 200 timesteps each, on a newly sampled task. We trained both SNAIL and LSTM on each of these four task categories. Since they do not update their parameters at test time (instead incorporating experience through their hidden state), SNAIL and LSTM receive as input the previous action, previous reward, and an episode-termination flag in addition to the current observation. We found that two episodes of interaction was sufficient for these meta-learners to adapt to a task, and that unrolling them for longer did not improve performance. 
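To illustrate how these inputs are assembled (a minimal sketch under our own naming, assuming a classic Gym-style step() returning (obs, reward, done, info); it is not the authors' code), the observation can be augmented with the previous action, previous reward, and a termination flag, clearing this augmentation only when a new task begins so that memory spans both episodes of a trial:

```python
import numpy as np

class MetaRLInputWrapper:
    """Wraps an environment exposing reset() and step(action) -> (obs, reward, done, info)."""
    def __init__(self, env, act_dim):
        self.env = env
        self.act_dim = act_dim
        self.extra = np.zeros(act_dim + 2)   # [previous action, previous reward, done flag]

    def new_task(self):
        # Cleared only when the task changes, not between the episodes of a single trial.
        self.extra = np.zeros(self.act_dim + 2)

    def reset(self):
        return np.concatenate([self.env.reset(), self.extra])

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.extra = np.concatenate([np.atleast_1d(action), [reward], [float(done)]])
        return np.concatenate([obs, self.extra]), reward, done, info
```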
In Figure 4, we show how the different methods adapt to a new task. As an oracle, we sampled tasks from each distribution, and trained a separate policy for each task. We plot the average performance of the oracle policies for each task distribution as an upper bound on a meta-learner's performance. Qualitatively, we can think of MAML as applying a general-purpose strategy (namely, gradient descent) to a distribution of highly-structured tasks. In contrast, SNAIL and LSTM are able to specialize themselves based on the shared task structure, enabling them to identify the task within the initial timesteps of the first episode, and then act optimally thereafter. Figure 4: Test-time adaptation curves on simulated locomotion tasks for SNAIL, LSTM, and MAML (which was unrolled for three policy gradient updates). Since SNAIL incorporates experience through its hidden state, it can exploit common task structure to perform optimally within a few timesteps. and BID31 consider the task of visual navigation, where the agent must find a target in a maze using only visual inputs. The former used randomly-generated mazes and target positions, while the latter used a fixed maze and only four different target positions. Hence, we evaluated SNAIL on the former, more challenging task. The observations the agent receives are 30 × 40 first-person images, and the actions it can take are {step forward, turn slightly left, turn slightly right}. We constructed a training dataset and two test datasets (unseen mazes of the same and larger size, respectively), each with 1000 mazes. The agents were allowed to interact with each maze for 2 episodes, with episode length 250 (1000 in the larger mazes). The starting and goal locations were chosen randomly for each trial but remained fixed within each pair of episodes. The agents received rewards of +1 for reaching the target (which ed in the episode terminating), -0.01 at each timestep, to encourage it to reach the goal faster, and -0.001 for hitting the wall. FIG3 depicts an example of the observations as well as sample maze layouts. We evaluate each method using the average episode length, for both the first and second episode within a trial. The are displayed in Table 5. Since MAML scaled poorly to long sequences in the bandit and MDP domains, we did not evaluate it on this domain; the computational expense was prohibitively high. Qualitatively, we observe that the optimal strategy does indeed emerge: the SNAIL agent explores the maze during the first episode, and then, after finding the goal, goes directly there on the second episode (the LSTM agent also exhibits this behavior, but has a harder time remembering where the goal is). An illustration is depicted in FIG3. Table 5: Average time to find the goal on each episode in the small and large mazes. SNAIL solves the mazes the fastest, and improves the most from the first to second episode. Large Maze Episode 1 Episode 2 Episode 1 Episode 2Random 188.6 ± 3.5 187.7 ± 3.5 420.2 ± 1.2 420.8 ± 1.2 LSTM 52.4 ± 1.3 39.1 ± 0.9 180.1 ± 6.0 150.6 ± 5.9 SNAIL (ours) 50.3 ± 0.3 34.8 ± 0.2 140.5 ± 4.2 105.9 ± 2.4 We presented a simple and generic class of architectures for meta-learning, motivated by the need for a meta-learner to quickly incorporate and refer to past experience. Our simple neural attentive learner (SNAIL) utilizes a novel combination of temporal convolutions and causal attention, two building blocks of sequence-to-sequence models that have complementary strengths and weaknesses. 
We demonstrate that SNAIL achieves state-of-the-art performance by significant margins on all of the most-widely benchmarked meta-learning tasks in both supervised and reinforcement learning, without relying on any application-specific architectural components or algorithmic priors. Although we designed SNAIL with meta-learning in mind, it would likely excel at other sequence-tosequence tasks, such as language modeling or translation; we plan to explore this in future work. Another interesting idea would be to train an meta-learner that can attend over its entire lifetime of experience (rather than only a few recent episodes, as in this work). An agent with this lifelong memory could learn faster and generalize better; however, to keep the computational requirements practical, it would also need to learn how to decide what experiences are worth remembering. With the building blocks defined in Section 3.1, we can concisely describe SNAIL architectures. We used the same SNAIL architecture for both Omniglot and mini-Imagenet. For the N -way, K-shot problem, the sequence length is T = N K + 1, and we used the following: AttentionBlock, TCBlock(T, 128), AttentionBlock, TCBlock(T, 128), AttentionBlock, followed by a final 1 × 1 convolution with N filters. For the Omniglot dataset, we used the same embedding network architecture as all prior works, which repeat the following block four times {3x3 conv (64 channels), batch norm, ReLU, 2x2 max pool }, and then apply a single fully-connected layer to output a 64-dimensional feature vector. For mini-Imagenet, existing gradient-descent-based methods BID18 BID3, which update their model's weights during testing, used the same network structure as the Omniglot network but reduced the number of channels to 32, in spite of the significantly-increased complexity of the images. We found that this shallow embedding network did not make adequate use of SNAIL's expressive capacity, and opted to to use a deeper embedding network to prevent underfitting (in Appendix B, we conduct ablations regarding this decision). Illustrated in FIG4, our embedding was a smaller version of the ResNet BID6 architectures commonly used for the full Imagenet dataset. BID6, uses several of the residual blocks depicted in (a). To investigate the contribution of different components (TC, attention, and deeper embedding in the case of mini-Imagenet) to SNAIL's performance, we conducted a number of ablations, which are summarized in Table 6. From these ablations, we draw two :• Both TC and attention layers are essential for maximal performance. When we remove either one, the ing model is still competitive with other state-of-the-art methods, but the combination yields the best performance. Notably, compared to the full model, using only TC layers in similar 1-shot performance but worse 5-shot. In Section 3 we discussed how temporal convolutions have coarser access to inputs farther back in time; this illustrates that this effect is relevant even at sequence length 26. • SNAIL's improved performance is not purely a of the deeper embedding network. Gradient-based methods (we tested MAML; BID3) overfit significantly when they use our embedding, and domain-specific RNN-based methods BID30 don't utilize the extra capacity as well as SNAIL does. Table 6: The ablations we conducted on the few-shot classification task. 
From these, we conclude that (i) both TC and attention are essential for the best performance, and (ii) SNAIL's improved performance cannot be entirely explained by the deeper embedding. Replace SNAIL with stacked LSTM. Varied number of layers and their sizes (with similar number of parameters to SNAIL).5-way Omniglot: 78.1% and 90.8% (1-shot, 5-shot).We were unable to successfully train this method on miniImagenet. SNAIL with shallow mini-Imagenet embedding.5-way mini-Imagenet: 45.1% and 55.2% (1-shot, 5-shot).MAML BID3, a state-ofthe-art gradient-based method, with our deeper mini-Imagenet embedding. It overfits tremendously; for 1-shot, 5-way mini-ImageNet: 30.1% & 75.2% on the test and training set respectively. MAML trains separate models for 1-shot and 5-shot; we didn't train a 5-shot model because the 1-shot did so poorly. SNAIl, no TC layers (only attention). This is a generalization of the method used by BID30, as they only use a single attentive read and explicitly force the keys to be features of the image and the values to be the labels. We experimented with multiple parallel reads (often referred to as multiple heads) as well as up to three consecutive attentive blocks. On 5-way and 20-way Omniglot: equivalent performance to the full model. 5-way mini-Imagenet: 49.9% and 63.9% (1-shot, 5-shot). On 5-way Omniglot: 98.8% and 99.2% (1-shot, 5-shot).We were unable to train 20-way Omniglot using this method. On 5-way mini-Imagenet: 55.1% and 61.2%.In an attempt to analyse the learned feature representation, we tried using the features learned by the Omniglot embedding in a nearest-neighbor classifier, using both cosine and Euclidean distance. On 5-way Omniglot, this achieves 65.1% and 67.1% (1-shot and 5-shot) for Euclidean distance and 67.7% and 68.3% for cosine. Although SNAIL must be comparing images in order to successfully make few-shot predictions, this suggests that the strategy it learns is more sophisticated than either of these distance metrics. We contrast this with BID30 and BID23, who explicitly enforce such representations on the meta-learned strategy. In addition, we investigated how sensitive SNAILs are to architectural design choices by sampling random permutations of the different components introduced in Section 3.1. We chose each component uniformly at random from six options: {AttentionBlock, DenseBlock(R, 128) for R ∈ {1, 2, 4, 8, 16}}. We sampled architectures with 13 layers each (for consistency with our primary model), and trained them on 5-way Omniglot. Averaged across 3 runs, these SNAILs achieved 98.62% ± 0.13% and 99.71% ± 0.08% for 1-shot and 5-shot, essentially matching the state-of-the-art performance of our primary architecture. Finally, we explored the dependence of the classification strategy learned by SNAIL on the dataset it was trained on. If it truly learned an algorithm for few-shot classification, then a SNAIL trained on images from a particular domain should easily transfer to a new domain (such as between Omniglot and mini-Imagenet). To test this hypothesis:• First, we took a SNAIL trained on 5-way Omniglot, fixed its weights, and re-learned an embedding for mini-Imagenet. 
Despite the SNAIL weights not being trained for miniImagenet, this method was able to achieve 50.62% and 62.34% on 1-shot and 5-shot.• Then, we tried this in the reverse direction (freezing the SNAIL weights from mini-Imagenet, and re-learning an embedding for 5-way Omniglot), and this attained 98.66% and 99.56%.• Lastly, we combined an embedding trained on 5-way Omniglot with a SNAIL trained on 5-way mini-Imagenet (with a single linear layer in between, to handle the difference in feature vector dimensionality). We trained this model on 5-way Omniglot, where only the weights of the intermediate linear layer could be updated. It achieved 98.5% and 99.5%.All of these are very competitive with the state-of-the-art, suggesting a strong degree of transferability of the algorithm and feature representation learned by SNAIL. An interesting idea for future work in zero-shot learning would be to learn embeddings for multiple datasets in an unsupervised manner, but with some mild distributional constraints imposed on the output feature representation. Then, one could train a SNAIL on one dataset, and have it transfer to new datasets without a single labeled example. C REINFORCEMENT LEARNING For the N -timestep, K-arm bandit problem, the total trajectory length is T = N. For the MDP problem with N episodes per MDP, it is T = 10N (since each episode lasts for 10 timesteps).For multi-arm bandits and tabular MDPs, we used the same architecture. First, we applied a fullyconnected layer with 32 outputs that was shared between the policy and value function. Then the policy used: TCBlock(T, 32), TCBlock(T, 32), AttentionBlock. The value function used: TCBlock(T, 16), TCBlock(T, 16), AttentionBlock.We found that removing the attention blocks made no difference in performance on the bandit problems, whereas SNAILs without attention could not learn to solve MDPs. For each simulation locomotion task, the total trajectory length was T = 400 (2 episodes of 200 timesteps each). We used the same architecture (shared between policy and value function) for all tasks: two fully-connected layers of size 256 with tanh nonlinearities, AttentionBlock, TCBlock(T, 16), TCBlock(T, 16), AttentionBlock. Then the policy and value function applied separate fully-connected layers to produce the requisite output dimensionalities. Unlike the other RL tasks we considered, the observations in this domain include images. We preprocess the images using the same convolutional architecture as BID2: two layers with {kernel size 5 × 5, 16 filters, stride 2, ReLU nonlinearity}, whose output is then flattened and then passed to a fully-connected layer to produce a feature vector of size 256.The total trajectory length was T = 500 (2 episodes of 250 timesteps each). For the policy, we used: TCBlock(T, 32), AttentionBlock, TCBlock(T, 32), AttentionBlock. For the value function we used: TCBlock(T, 16), TCBlock(T, 16). As discussed in Section 5.2, we trained all policies using trust-region policy optimization with generalized advantage estimation (TRPO with GAE, BID21). The hyperparameters are listed in TAB4. For multi-armed bandits, tabular MDPs, and visual navigation, we used the same hyperparamters as BID2 to make our directly comparable; additional tuning could potentially improve SNAIL's performance. 
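As a rough PyTorch sketch of how the blocks from Section 3.1 compose into the bandit/MDP policy described above (our own reconstruction, not the released code: the constructors here also take the input channel count, unlike the shorthand TCBlock(T, D) used in the text, and the key/value sizes and the name SnailBanditPolicy are illustrative choices):

```python
# Minimal sketch; tensors are laid out as (batch, channels, time) inside the blocks.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Causal 1D conv (kernel 2, dilation R, D filters) with a gated activation, concatenated with its input."""
    def __init__(self, in_channels, dilation, filters):
        super().__init__()
        self.dilation = dilation
        self.conv_f = nn.Conv1d(in_channels, filters, kernel_size=2, dilation=dilation)
        self.conv_g = nn.Conv1d(in_channels, filters, kernel_size=2, dilation=dilation)

    def forward(self, x):
        xp = F.pad(x, (self.dilation, 0))         # left-pad so the convolution is causal
        out = torch.tanh(self.conv_f(xp)) * torch.sigmoid(self.conv_g(xp))
        return torch.cat([x, out], dim=1)

class TCBlock(nn.Module):
    """Dense blocks with exponentially increasing dilations until the receptive field covers seq_len."""
    def __init__(self, in_channels, seq_len, filters):
        super().__init__()
        layers, ch = [], in_channels
        for i in range(int(math.ceil(math.log2(seq_len)))):
            layers.append(DenseBlock(ch, dilation=2 ** i, filters=filters))
            ch += filters
        self.blocks, self.out_channels = nn.Sequential(*layers), ch

    def forward(self, x):
        return self.blocks(x)

class AttentionBlock(nn.Module):
    """Single causal key-value attention read, concatenated with the input."""
    def __init__(self, in_channels, key_size, value_size):
        super().__init__()
        self.key = nn.Linear(in_channels, key_size)
        self.query = nn.Linear(in_channels, key_size)
        self.value = nn.Linear(in_channels, value_size)
        self.scale = math.sqrt(key_size)
        self.out_channels = in_channels + value_size

    def forward(self, x):
        h = x.transpose(1, 2)                      # (batch, time, channels)
        q, k, v = self.query(h), self.key(h), self.value(h)
        logits = torch.bmm(q, k.transpose(1, 2)) / self.scale
        t = h.size(1)                              # causally masked softmax: no access to future keys
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        read = torch.bmm(F.softmax(logits.masked_fill(mask, float("-inf")), dim=-1), v)
        return torch.cat([h, read], dim=-1).transpose(1, 2)

class SnailBanditPolicy(nn.Module):
    """FC(32) -> TCBlock(T, 32) -> TCBlock(T, 32) -> AttentionBlock -> per-timestep action logits."""
    def __init__(self, input_dim, seq_len, num_actions, key_size=32, value_size=32):
        super().__init__()
        self.encoder = nn.Linear(input_dim, 32)    # shared with the value function head
        self.tc1 = TCBlock(32, seq_len, filters=32)
        self.tc2 = TCBlock(self.tc1.out_channels, seq_len, filters=32)
        self.attn = AttentionBlock(self.tc2.out_channels, key_size, value_size)
        self.head = nn.Linear(self.attn.out_channels, num_actions)

    def forward(self, x):                          # x: (batch, time, input_dim)
        h = self.encoder(x).transpose(1, 2)
        h = self.attn(self.tc2(self.tc1(h)))
        return self.head(h.transpose(1, 2))        # (batch, time, num_actions)
```

The value-function head described above would be assembled analogously from TCBlock(T, 16) modules.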
First, we consider an SNAIL agent without attention layers (only TC layers, which amounts to a variant of the WaveNet architecture introduced by van den BID26).When applied to the bandit domain, we found that this TC-only model performed just as well as a complete SNAIL. This is likely due to the simplicity of this task domain, as successful performance on bandit problems does not require maintaining a large memory of past experience. Indeed, many human designed algorithms (including the asymptotically optimal Gittins index) simply update running statistics at each timestep. However, this model struggled in the MDP domain, where a more sophisticated algorithm is required. The are in the table below (with those of a random agent, SNAIL, LSTM and MAML duplicated from TAB3 for reference). This agent's asymptotic suboptimality suggests that its ability to internalize past experience is being saturated. Next, we considered a SNAIL agent without TC layers (only attention). Due to the sequential nature of RL tasks, we employed the positional encoding proposed by BID29. This model, which is equivalent to their Transformer architecture, could not solve the bandit or MDP tasks. In both domains, its performance was no better than random. To no avail, we experimented with multiple blocks of attention and multiple heads per block. We hypothesize that this architecture's inadequacy stems from the fact that pure attentive lookups cannot easily process sequential information. Despite their infinite receptive field, they cannot directly compare two adjacent timesteps (such as a single state-action-state transition) in the same way as a single convolution can. The TC layers are essential because they allow the agent to locally analyse contiguous parts of a sequence to produce a better contextual representation over which to attend. | [
0 (×83), 1, 0 (×180) ] | B1DmUzWAW | a simple RNN-based meta-learner that achieves SOTA performance on popular benchmarks
Knowledge Graph Embedding (KGE) has attracted more attention in recent years. Most KGE models learn from time-unaware triples. However, the inclusion of temporal information alongside triples would further improve the performance of a KGE model. In this regard, we propose LiTSE, a temporal KGE model which incorporates time information into entity/relation representations by using linear time series decomposition. Moreover, considering the temporal uncertainty during the evolution of entity/relation representations over time, we map the representations of temporal KGs into the space of multi-dimensional Gaussian distributions. The mean of each entity/relation embedding at a time step shows the current expected position, whereas its covariance (which is stationary over time) represents its temporal uncertainty. Experiments show that LiTSE not only achieves state-of-the-art results on link prediction in temporal KGs, but also has the ability to predict the occurrence time of facts with missing time annotations, as well as the existence of future events. To the best of our knowledge, no other model is capable of performing all these tasks. Knowledge Graphs (KGs) are being used for gathering and organizing scattered human knowledge into structured knowledge systems. YAGO, NELL BID3, DBpedia BID0 and Freebase BID1 are among existing KGs that have been successfully used in various applications including question answering, assistant systems, information retrieval, etc. In these KGs, knowledge can be represented as RDF triples (s, p, o) in which s (subject) and o (object) are entities (nodes), and p (predicate) is the relation (edge) between them. KG embedding attempts to learn the representations of entities and relations in high-dimensional latent feature spaces while preserving certain properties of the original graph. Recently, KGE has become a very active research topic due to the wide range of downstream applications. Different KGE models have been proposed so far to efficiently learn the representations of KGs and perform KG completion as well as inference BID2 BID22 BID23 BID20 BID5. Most existing KGE models solely learn from time-unknown facts and ignore the useful temporal information in KGs. In fact, there are many time-aware facts (or events) in some temporal KGs. For instance, (Obama, wasBornIn, Hawaii) happened on August 4, 1961, and (Obama, presidentOf, USA) was true from 2009 to 2017. These temporal KGs, e.g. ICEWS BID9, YAGO3 BID11, store such temporal information either explicitly or implicitly. Traditional KGE models such as TransE learn only from time-unknown facts and consequently cannot distinguish relations with similar semantic meaning. For instance, they often confuse relations such as wasBornIn and diedIn when predicting (person, ?, location). To tackle this problem, temporal KGE models BID4 BID6 BID18 encode time information in their embeddings. Temporal KGE models outperform traditional KGE models on link prediction over temporal KGs. This justifies that the incorporation of time information can further improve the performance of a KGE model. Some existing temporal KGE models encode time information in a latent space, e.g. representing time as a vector BID4 BID10. These models cannot capture some properties of time information, such as the length of a time interval or the order of two time points.
Moreover, some existing temporal graph embedding models BID18 BID19 consider the changes of entity representations over time as a kind of temporal evolution process, while they ignore the uncertainty during the temporal evolution. We argue that the evolution of entity representations has randomness, because the features of an entity at a certain time are not completely determined by the past information. For example, (Steve Jobs, diedIn, California) happened on 2011-10-05. The semantic characteristics of this entity should have a sudden change at this time point. However, due to the incompleteness of knowledge in KGs, this change cannot be predicted only according to its past evolutionary trend. Therefore, the representation of Steve Jobs is supposed to include some random components to handle this uncertainty, e.g. a Gaussian noise component. To address the above problems, we propose a new temporal KGE model based on linear time series decomposition (LiTSE) that captures the evolution process of KG representations. LiTSE fits the evolution process of an entity or relation as a linear function of time with a Gaussian random noise. Inspired by KG2E, our approach represents each entity and relation as a multi-dimensional Gaussian distribution at each time step to introduce a random component. The mean of an entity/relation representation at a certain time step indicates its current expected position, which is obtained from its initial representation, its evolutionary direction vector (which represents the long-term trend of its evolution) and the current time. The covariance, which describes the temporal uncertainty during its evolution, is denoted as a constant diagonal matrix for computational efficiency. Our contributions are as follows. • Learning the representations for temporal KGs is a relatively unexplored problem because most existing KGE models only learn from time-unknown facts. We propose LiTSE, a new KGE model to incorporate the time information into the KG representations. • Different from the previous temporal KGE models which use time encoding to incorporate time information, LiTSE fits the evolution process of KG representations as a linear function of time. This enables us to observe and predict the time information directly from entity/relation representations. In particular, we can predict the occurrence of a fact at a future time, according to the known evolution trends of KG representations learned from the past information. • We specially consider the temporal uncertainty during the evolution process of KG representations. Thus, we model each entity as a Gaussian distribution at each time step and use the KL-divergence between two Gaussian distributions to compute the scores of facts for optimization. • Besides performing link prediction in temporal KGs, our models are proven to be capable of estimating the occurrence time of a fact with a missing time annotation, and of predicting future events. The rest of the paper is organized as follows: Section 2 reviews related work. Our model is introduced in Section 3. The proposed model is evaluated and compared with state-of-the-art models in Section 4. Finally, the paper is concluded in the last section. A large amount of research has been done in KGE. These approaches can generally be categorized into two groups, namely semantic matching models and translational distance models. RESCAL BID13 and its extensions, e.g. DistMult BID23, ComplEx BID20, ConvE BID5, are the semantic matching models.
These models measure plausibility of facts by matching latent semantics of entities and relations embodied in their vector space representations. A few examples of translational distance models include TransE BID2, TransH BID22, TransD. These models measure the plausibility of a fact as the distance between the two entities, usually after a translation carried out by the relation. Particularly, KG2E takes into account the uncertainties of KG representations and represents entities and relations as random vectors drawn from multivariate Gaussian distributions. KG2E scores a fact by measuring the distance between the distributions of the entities and the relation. The above methods achieve good on link prediction in KGs. Moreover, some re-cent researches illustrate that the performances of KGE models can be further improved by incorporating time information in temporal KGs. TAE BID17 imposes temporal order constraints on time-sensitive relation pairs, e.g. BornIn and wasDiedIn, where the prior relation is supposed to lie close to the subsequent relation after a temporal transition. TAE only uses temporal order information between relations, but not the exact time information in facts. TTransE BID10 propose scoring functions which incorporate time representations into a TransE-type score function in different ways. BID6 utilizes recurrent neural networks to learn time-aware representations of relations and uses standard scoring functions from the existing KGE model, e.g. TransE BID2 and DistMult BID23. HyTE BID4 encodes time in the entity-relation space by associating a corresponding hyperplane to each timestamp. The above three methods represent each time step as a latent feature vector or a hyperplane matrix and update the entity/relation representations at different time steps with the corresponding time representations. That means the all entity/relation representations have the same evolution trend. In contrast, Know-Evolve BID18 ) models the temporal evolution of each entity representation as an individual temporal point process, which is a non-linear function of time. They exploit recurrent neural network to capture the dynamic characteristics of entity representations. Know-Evolve is also proved to perform well on time prediction. In this paper, we fit the temporal evolution of entity/relation representations by deploying linear time series decomposition. This enables us to directly observe and predict time information from entity/relation representations. We can also predict the future events according to the evolution trends of entity/relation representations. Moreover, inspired by KG2E, we map the entity and relation representations in a space of multi-dimensional Gaussian distributions to model the uncertainty of temporal KGs. Different from KG2E, we focus on the temporal embedding and give a specific definition of the uncertainty in temporal KGs, i.e. the randomness during the temporal evolution of KG representations. In this section, we present a detailed description of our proposed method, LiTSE, which not only uses relational properties between entities in triples but also incorporate the associated temporal meta-data by using linear time series decomposition. A time series is a series of time-oriented data. Time series analysis is widely used in many fields, ranging from economics and finance to managing production operations, to the analysis of political and social policy sessions BID12. An important technique for time series analysis is time series decomposition. 
This technique decomposes a time series into four components, including a trend component, a cyclical component, a seasonal component and an irregular component (i.e. "noise").In our method, we regard the evolution of an entity/relation representation as a linear time series and assume that it only consists of two components, i.e. a linear trend component and a Gaussian noise component. The motivation of this assumption is based on the following three points.• The simplicity of the embedding model architecture. Considering a temporal KG consisting of thousands of entities and relations, we can avoid introducing too many parameters by only using a trend component and a noise component to fit the temporal evolution of each relation/entity representation.• The capability of time prediction. Based on our proposed assumption, our model is capable to estimate the occurring time of a triple with the missing time annotation.• The efficiency of model training. Commonly, a moving-average model (MA model) is used when modeling the irregular term of a time series BID12. But we have to deploy a global optimization algorithm while training a MA model. Instead, we take a Gaussian noise as the irregular component in time series decomposition. This method enables us to employ mini-batch training for efficiency purposes. To incorporate temporal information into traditional KGs, a new temporal dimension is added to fact triples, denoted as a quadruple (s, p, o, t). It represents the creation of relationship edge p between subject entity s, and object entity o at time step t. The score term x spot = f t (e s, r p, e o) can represent the conditional probability or the confidence value of this event x spot, where e s, e o ∈ R Le, r p ∈ R Lr are representations of s, o and p. In the case of a fact (s, p, o, [t s, t e]), we consider it to be a positive triple for each time step between t s and t e. t s and t e denote the start and end time during which the triple (s, p, o) is valid. At each time step, the time-specific representations of an entity e i or a relation r i should be updated as e i,t or r i,t. In order to avoid information redundancy, we only incorporate time information into entity representations or relation representations, but not both. The model where time information is incorporated into relation representations is denoted as LiTSER. Another model with evolving entity representations is called as LiT-SEE. Thus, the score of a quadruple (s, p, o, t) can be represented as x spot = f e (e s,t, r p, e o,t) or x spot = f r (e s, r p,t, e o). Due to the similarity between LiTSEE and LiTSER, we take LiTSEE as an example to describe our method in this section. In our proposed model LiTSEE, we first utilize a linear function to fit the evolution processes of entity representations as: DISPLAYFORM0 where the e i is the time-independent latent representation of the ith entity which is subjected to ||e i || 2 = 1, the coefficient α i denotes its evolutionary rate, and the vector w i represents the direction of its evolution which is restricted to ||w i || 2 = 1. For LiTSEE, we use the following translationbased scoring function to measure the plausibility of a fact (s, p, o, t) BID2. DISPLAYFORM1 where ||r p || 2 = 1. Furthermore, to model the temporal uncertainty of the latent representations of the subject and the object in this fact, we assume e s,t and e o,t have randomness and obey Gaussian probability distributions: P s,t ∼ N (e s,t, Σ s) and P o,t ∼ N (e o,t, Σ o). 
Similarly, the predicate is represented as P_r ∼ N(r_p, Σ_r). The mean vectors e and r and the covariance matrices Σ constitute the corresponding embedding representations for the Gaussian distributions. This advanced model based on LiTSEE is denoted as LiTSEE_G. Similarly to LiTSEE, we consider the transformation in LiTSEE_G from the subject to the object to be akin to the predicate in a positive fact. We express this transformation as P_s,t − P_o,t, which corresponds to the probability distribution P_e,t ∼ N(µ_e,t, Σ_e), where µ_e,t = e_s,t − e_o,t and Σ_e = Σ_s + Σ_o. As a result, combined with the probability distribution of the relation, P_r ∼ N(r_p, Σ_r), we measure the similarity between P_e,t and P_r to score the fact. KL divergence is a straightforward method of measuring the similarity of two probability distributions. We optimize the following score function, based on the KL divergence between the entity-transformed distribution and the relation distribution: D_KL(P_e,t, P_r) = 1/2 { tr(Σ_r^{-1} Σ_e) + (µ_e,t − r_p)^T Σ_r^{-1} (µ_e,t − r_p) − log[det(Σ_e)/det(Σ_r)] − L_e } (3), where tr(Σ) and Σ^{-1} indicate the trace and inverse of the covariance matrix, respectively. Considering the simplified diagonal covariance, we can compute the trace and inverse of the matrix simply and effectively for LiTSEE_G. The gradient of the log determinant is given by a standard matrix identity (et al., 2008). The gradients of Equation 3 with respect to the time-independent latent feature vectors, the evolutionary direction vectors and the covariance matrices (here acting as vectors) can then be derived in closed form. In the same way, we can extend LiTSER to LiTSER_G by adding a Gaussian noise component to the linear evolution function of each relation/entity representation. The architectures of LiTSER and LiTSER_G are similar to those of LiTSEE and LiTSEE_G, and the processes of computing the gradients of the score functions in LiTSEE_G and LiTSER_G are also alike; therefore, we do not go into details about LiTSER and LiTSER_G here. As mentioned in Section 3.1, our proposed models are translation-based. Thus, we train our models by minimizing the margin-based ranking loss L = Σ_{t∈[T]} Σ_{x⁺∈D_t⁺} Σ_{x⁻∈D_t⁻} [γ + f(x⁺) − f(x⁻)]_+, where [x]_+ = max(0, x), γ is the margin, [T] is the set of time steps in the temporal KG, D_t⁺ is the set of positive triples with time stamp t and D_t⁻ is the set of negative examples. In this paper, we not only generate negative samples by randomly corrupting the subjects or objects of the positives, e.g. (s′, p, o, t) and (s, p, o′, t), but also add extra negative samples whose corrupted triples are present in the KG but do not exist in the subgraph for a particular time BID4. We use this time-dependent negative sampling approach for time prediction. On the other hand, to compare our model with baseline models fairly, we use the uniform negative sampling method BID2 for link prediction and future event prediction. To avoid overfitting, we add some regularization while learning the Gaussian embeddings. As described in Section 3.1, the norms of the original representations of entities and relations, as well as the norms of all evolutionary direction vectors, are restricted to 1. Besides, the following constraint on the covariances is considered when we minimize the loss L: c_min I ≤ Σ_l ≤ c_max I, ∀ l ∈ E ∪ R (6), where E and R are the sets of entities and relations respectively, and c_min and c_max are two positive constants. During the training process, we use Σ_l ← max(c_min, min(c_max, Σ_l)) to enforce this regularization for diagonal covariance matrices. These constraints on the means and covariances are also considered during initialization.
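As an illustration of the Gaussian score and covariance constraint described above, a minimal sketch for the diagonal-covariance case might look as follows. The direction of the KL divergence and the representation of covariances as vectors of diagonal entries are assumptions consistent with the description, not the authors' code.

```python
import numpy as np

def gaussian_kl_score(mu_e, sigma_e, mu_r, sigma_r):
    """KL divergence D_KL(N(mu_e, diag(sigma_e)) || N(mu_r, diag(sigma_r))).

    mu_e = e_{s,t} - e_{o,t}, sigma_e = sigma_s + sigma_o (elementwise),
    mu_r = r_p, sigma_r = diagonal covariance of the relation.
    All covariances are given as vectors of their diagonal entries.
    """
    d = mu_e.shape[0]
    diff = mu_e - mu_r
    return 0.5 * (np.sum(sigma_e / sigma_r)
                  + np.sum(diff * diff / sigma_r)
                  - np.sum(np.log(sigma_e)) + np.sum(np.log(sigma_r))
                  - d)

def clip_covariance(sigma, c_min, c_max):
    """Constraint used during training: c_min <= Sigma_l <= c_max elementwise."""
    return np.clip(sigma, c_min, c_max)
```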
To show the capability of LiTSE, we compare it and its extensions with other state-of-the-art baselines on link prediction. Particularly, we also evaluate our method for two other tasks: time prediction and future event prediction, which baseline models are not capable to handle. To compare our model with baselines, we used the following three datasets, namely ICEWS14, ICEWS05-15 and YAGO11K, released by Dasgupta et al. FORMULA0 and BID6. ICEWS14 and ICEWS05-15 are subsets of Integrated Crisis Early Warning System (ICEWS) BID9. ICEWS is a repository that contains political events with specific time annotations, e.g. The statistics of the datasets are listed in Table 1. We compare our method and other baselines by performing link prediction on ICEWS14, ICEWS05-15 and YAGO11k D. Specially, we also evaluate the performance of our proposed models on time prediction and future event prediction with two event-based datasets, ICEWS14 and ICEWS05-15. In this paper, we report the experimental on three tasks: Link Prediction, Time Prediction and Future Event Prediction. We split anew the facts into training, validation and test in a proportion of 80%/10%/10%. All facts in the test set occur after facts in the training/validation set. We train the model with the training set and judge whether quadruples in the test set are positive or not. The decision process is similar to triple classification BID15: for a fact (s, p, o, t), if x spot is below a relation-specific threshold δ r, then positive; otherwise negative. The thresholds δ r are determined on the validation set. For link prediction task, we compare our method with several state-of-the-art KGE models and existing time-wise KGE models, including TransE BID2, DistMult BID23, KG2E, ComplEx BID20, ConvE BID5, TTransE BID10,TA-TransE and TA-DistMult (García-Durán et al., 2018) as well as HyTE BID4. All these baselines are not applicable to estimating time information by computation (HyTE did it by ranking, which is much more time-consuming). Therefore, we compare the among our proposed models for time prediction. Considering that the above time-wise KGE models are not capable to represent a future time step, we compare our models with the above static KGE models for future event prediction. We implemented our models and baseline models in PyTorch, except TA-TransE and TA-DistMult. Since some implementation details of these two models were unclear, we report their from the original paper BID6. We used Adagrad optimizer to train all the implemented models and selected the optimal hyperparameters by early validation stopping according to MRR on the validation set. We restricted the iterations to 5000. For all the models, the batch size b = 512 was kept on both the datasets. We tuned the embedding dimensionalities d in {50, 100}, the learning rates lr in {0.001, 0.01, 0.1} and the ratio of negatives over positive training samples η in {1, 3, 5, 10}. For translation-based models, the margins γ were varied in the range {1, 2, 3, 5, 10}. For semantic matching models, the regularizer weights λ were chosen from the set {0.001, 0.01, 0.1}. For ConvE, we select dropout parameters from the set {0, 0.2}. Similar to the setting in KG2E, we selected the pair of restriction values c min and c max for covariance among {(0.005, 0.5), (0.01, 1), (0.03, 3), (0.05, 5)} for Gaussian embedding models. The default configuration for our proposed models is as follows: lr = 0.1, η = 10, γ = 1. Below, we only list the non-default parameters. 
For LiTSEE, the optimal configuration is as follows: lr = 0.01, γ = 10 on YAGO11k D. For LiTSEE_G, the optimal configuration is as follows: γ = 2, (c_min, c_max) = (0.01, 1) on ICEWS14; (c_min, c_max) = (0.01, 1) on ICEWS05-15; lr = 0.01, γ = 10, (c_min, c_max) = (0.005, 0.5) on YAGO11k D. For LiTSER, the optimal configuration is as follows: γ = 2 on ICEWS14; lr = 0.01, γ = 10 on YAGO11k D. For LiTSER_G, the optimal configuration is as follows: (c_min, c_max) = (0.005, 0.5) on both ICEWS14 and ICEWS05-15; lr = 0.01, γ = 10, (c_min, c_max) = (0.005, 0.5) on YAGO11k D. The above configurations were used for all three tasks, and the results reported for the different tasks are based on this experimental setup. Table 2 shows the results for the link prediction task under the filtered setting; rows 1-7 correspond to basic models with no time information and rows 8-11 to models which encode time information, * indicates results taken from (García-Durán et al., 2018), dashes denote results that could not be obtained, the best results among all models are written in bold, and the red numbers are the best results obtained from our implementation. In ICEWS14 and ICEWS05-15, LiTSEE_G outperformed all embedding models considering MRR, Hits@10 and Hits@1. TransE as implemented by BID6 obtained the best MR on these two datasets. It is noteworthy that the ratio of negatives over positive samples η used in (García-Durán et al., 2018) was 500, much higher than our setting. BID20 investigated the influence of η on KGE models and discovered that increasing η can lead to better results; thus, the results obtained from BID6 would become worse if the same η as ours were used. Setting aside the results obtained from BID6, BID5 and BID20 showed that the performances of ConvE and ComplEx were remarkably better than that of KG2E on static KGs, e.g. FB15k and WN18 BID2. These results prove that modeling temporal uncertainty in temporal KGs, by mapping KG representations into the space of multi-dimensional Gaussian distributions, substantially improves the performances of KGE models on temporal KGs. As mentioned in Section 3.2, we corrupted the time information in positive facts to generate negative samples with corrupted time stamps for time prediction. TAB5 shows the results of our proposed models for time prediction on ICEWS14 and ICEWS05-15. Besides Mean Absolute Errors (MAEs), we also report the proportions of test examples whose prediction errors are under 10 days, denoted as Error@10. Although the MAEs of our models were high due to a small number of bad predictions, 57.5% of the prediction errors of LiTSEE_G on ICEWS14 were under 10 days, which demonstrates the ability of our method to predict time. We introduce LiTSE, a temporal KGE model that incorporates time information into KG representations by using linear time series decomposition. LiTSE fits the temporal evolution of KG representations over time as linear time series, which enables it to estimate the time information of a triple with a missing time annotation and to predict the occurrence of a future event. Considering the uncertainty during the temporal evolution of KG representations, LiTSE maps the representations of temporal KGs into the space of multi-dimensional Gaussian distributions, where the covariance of an entity/relation representation represents its randomness component. Experimental results demonstrate that our method significantly outperforms state-of-the-art methods on link prediction and future event prediction. Besides, our method can effectively predict the occurrence time of a fact.
Our work establishes a previously unexplored connection between relational processes and time series analysis with a potential to open a new direction of research on reasoning over time. In the future, we will explore to use other time series analysis techniques to model the temporal evolution of KG representations. Along with considering the temporal uncertainty, another benefit of using time series analysis is to enable the embedding model to encode temporal rules. For instance, given two quadruple (s, p, o, t p) and (s, q, o, t q), there exists a temporal constraint t p < t q. Since the time information is represented as a numerical variable in a time series model, it is feasible to incorporate such temporal rules into our models. We will investigate the possibility of encoding temporal rules into our proposed models. DISPLAYFORM0 regularize the covariances for each entity and relation with constraint 6. 18. end loop TAB10 shows the statistics of datasets which are anew split for future event prediction, denoted as ICEWS14-F and ICEWS05-15F. As mentioned in Section 4.2, all of the facts in test set occur after the facts in training set and validation set, and the facts of validation set occur after the facts in training set. The time spans of training sets, validation sets and test sets of ICEWS14 and ICEWS05-15 are reported in TAB10. t e represents the end time of the dataset. For instance, t e of the training set of ICEWS14 is 2014/10/20 and t e of the validation set of ICEWS14 is 2014/11/22, which means the time stamps of quadruples in the validation set of ICEWS14 are between 2014/10/21 and 2014/11/22. In TAB1, we summarize the scoring function of baselines and our models and compare their space complexities. x, y, z = i x i y i z i denotes the tri-linear dot product; * denotes the convolution operator; Seq denotes a LSTM network; P t denotes the temporal projection for embeddings; w t denotes the embedding for the time step t. As shown in TAB1, our models have the same space complexities as traditional KGE models. On the other hand, the space complexities of TTransE and HyTE will be much higher than our models if n t is larger than n e and n r. Comparison of our models with baseline models for space complexity. n e, n r and n t are numbers of entities, relations and time steps. We borrow some notations from BID5 for simplicity. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJxiIBKG64 | Submitted in EMNLP |
We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to co-operative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded by game theoretic principals, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines. Competitive games have been grand challenges for artificial intelligence research since at least the 1950s BID38 BID45 BID6. In recent years, a number of breakthroughs in AI have been made in these domains by combining deep reinforcement learning (RL) with self-play, achieving superhuman performance at ). In continuous control domains, competitive games possess a natural curriculum property, as observed in, where complex behaviors have the potential to emerge in simple environments as a of competition between agents, rather than due to increasing difficulty of manually designed tasks. Challenging collaborative-competitive multi-agent environments have only recently been addressed using end-to-end RL by BID21, which learns visually complex first-person 2v2 video games to human level. One longstanding challenge in AI has been robot soccer BID23, including simulated leagues, which has been tackled with machine learning techniques BID37 BID29 but not yet mastered by end-to-end reinforcement learning. We investigate the emergence of co-operative behaviors through multi-agent competitive games. We design a simple research environment with simulated physics in which complexity arises primarily through competition between teams of learning agents. We introduce a challenging multi-agent soccer environment, using MuJoCo BID46 which embeds soccer in a wider universe of possible environments with consistent simulated physics, already used extensively in the machine learning research community BID16 BID5 BID44. We focus here on multi-agent interaction by using relatively simple bodies with a 3-dimensional action space (though the environment is scalable to more agents and more complex bodies). 1 We use this environment to examine continuous multiagent reinforcement learning and some of its challenges including coordination, use of shaping rewards, exploitability and evaluation. We study a framework for continuous multi-agent RL based on decentralized population-based training (PBT) of independent RL learners BID20, where individual agents learn off-policy with recurrent memory and decomposed shaping reward channels. In contrast to some recent work where some degree of centralized learning was essential for multi-agent coordinated behaviors (e.g. BID28 BID9, we demonstrate that end-to-end PBT can lead to emergent cooperative behaviors in our soccer domain. While designing shaping rewards that induce desired cooperative behavior is difficult, PBT provides a mechanism for automatically evolving simple shaping rewards over time, driven directly by competitive match . 
We further suggest decomposing the reward into separate weighted channels with individual discount factors, and automatically optimizing the reward weights and the corresponding discounts online. We demonstrate that PBT is able to evolve agents' shaping rewards from myopically optimizing dense individual shaping rewards through to focusing relatively more on long-horizon game rewards, i.e. individual agents' rewards automatically align more with the team objective over time. Their behavior correspondingly evolves from random, through simple ball chasing early in the learning process, to more co-operative and strategic behaviors showing awareness of other agents. These behaviors are demonstrated visually, and we provide quantitative evidence for coordination using game statistics, analysis of value functions and a new method of analyzing agents' counterfactual policy divergence. Finally, evaluation in competitive multi-agent domains remains largely an open question. Traditionally, multi-agent research in competitive domains relies on handcrafted bots or established human baselines BID21, but these are often unavailable and difficult to design. In this paper, we highlight that diversity and exploitability of evaluators is an issue, by observing non-transitivities in the agents' pairwise rankings using tournaments between trained teams. We apply an evaluation scheme based on Nash averaging BID2 and evaluate our agents based on performance against pre-trained agents in the support set of the Nash average. We treat our soccer domain as a multi-agent reinforcement learning (MARL) problem, which models a collection of agents interacting with an environment and learning, from these interactions, to optimize individual cumulative reward. MARL can be cooperative, competitive or some mixture of the two (as is the case in soccer), depending upon the alignment of agents' rewards. MARL is typically modelled as a Markov game BID39, which comprises: a state space S; n agents with observation and action sets O_1, ..., O_n and A_1, ..., A_n; a (possibly stochastic) reward function R_i: S × A_i → R for each agent; observation functions φ_i: S → O_i; and a transition function P which defines the conditional distribution over successor states given previous state-actions, P(S_{t+1} | S_t, A_t^1, ..., A_t^n), satisfying the Markov property P(S_{t+1} | S_τ, A_τ^1, ..., A_τ^n, ∀τ ≤ t) = P(S_{t+1} | S_t, A_t^1, ..., A_t^n).
Algorithm 1 Population-based Training for Multi-Agent RL.
1: procedure PBT-MARL
2:   {A_i}_{i∈[1,..,N]}: N independent agents forming a population.
3:   for agent A_i in {A_i}_{i∈[1,..,N]} do
4:     Initialize agent network parameters θ_i and agent rating r_i to a fixed initial rating R_init.
5:     Sample the initial hyper-parameters θ_i^h from the initial hyper-parameter distribution.
6:   end for
7:   while true do
8:     Agents play TrainingMatches and update network parameters by Retrace-SVG0.
9:     for match (s_i, s_j) ∈ TrainingMatches do
10:      UpdateRating(s_i, s_j)
In PBT, under-performing agents, according to some fitness function, inherit network parameters and some hyperparameters from stronger agents, with additional mutation. Hyperparameters can continue to evolve during training, rather than committing to a single fixed value (we show that this is indeed the case in Section 5.1). PBT was extended to incorporate co-play BID21 as a method of optimizing agents for MARL: subsets of agents are selected from the population to play together in multi-agent games.
In any such game each agent in the population effectively treats the other agents as part of its environment and learns a policy π_θ to optimize its expected return, averaged over such games. In any game in which π_θ controls player i, if we denote by π_{\i} := {π_j}_{j∈{1,2,...,n}, j≠i} the policies of the other agents j ≠ i, we can write the expected cumulative return over a game as J^i(π_θ, π_{\i}) := E[ Σ_t γ^t R_i(s_t, a_t^i) ] (1), where the expectation is with respect to the environment dynamics and conditioned on the actions being drawn from policies π_θ and π_{\i}. Each agent in the population attempts to optimize Equation 1 averaged over the draw of all agents from the population P, leading to the PBT objective J(π_θ) := E_i[ E_{π_{\i}∼P}[ J^i(π_θ, π_{\i}) ] ], where the outer expectation is with respect to the probability that the agent with policy π_θ controls player i in the environment, and the inner expectation is over the draw of the other agents, conditioned on π_θ controlling player i in the game. PBT achieves some robustness to exploitability by training a population of learning agents against each other. Algorithm 1 describes PBT-MARL for a population of N agents {A_i}_{i∈[1,..,N]}, employed in this work. Throughout our experiments we use Stochastic Value Gradients (SVG0) BID15 as our reinforcement learning algorithm for continuous control. This is an actor-critic policy gradient algorithm, which in our setting estimates policy gradients using a learned action-value critic, trained with a separate target network for bootstrapping, as is also described in BID13. The identities of the other agents π_{\i} in a game are not explicitly revealed but are potentially vital for accurate action-value estimation (the value will differ when playing against weak rather than strong opponents). Thus, we use a recurrent critic to enable the Q-function to implicitly condition on the other players' observed behavior, better estimate the correct value for the current game, and generalize over the diversity of players in the PBT population and, to some extent, the diversity of behaviors in replay. We find in practice that a recurrent Q-function, learned from partial unrolls, performs very well. Details of our Q-critic updates, including how memory states are incorporated into replay, are given in Appendix A.2. Reinforcement learning agents learning in environments with sparse rewards often require an additional reward signal to provide more feedback to the optimizer. Reward can be provided to encourage agents to explore novel states, for instance (e.g. BID4), or via some other form of intrinsic motivation. Reward shaping is particularly challenging in continuous control (e.g. BID35), where obtaining sparse rewards is often highly unlikely with random exploration, but shaping can perturb objectives (e.g. BID1), resulting in degenerate behaviors. Reward shaping is yet more complicated in the cooperative multi-agent setting in which independent agents must optimize a joint objective. Team rewards can be difficult to co-optimize due to complex credit assignment, and can result in degenerate behavior where one agent learns a reasonable policy before its teammate, discouraging exploration which could interfere with the first agent's behavior, as observed by BID12. On the other hand, it is challenging to design shaping rewards which induce the desired co-operative behavior. We design n_r shaping reward functions {r_j: S × A → R}_{j=1,...,n_r}, weighted so that r(·) := Σ_{j=1}^{n_r} α_j r_j(·) is the agent's internal reward and, as in BID21, we use population-based training to optimize the relative weighting {α_j}_{j=1,...,n_r}.
Our shaping rewards are simple individual rewards to help with exploration, but which would induce degenerate behaviors if badly scaled. Since the fitness function used in PBT will typically be the true environment reward (in our case win/loss signal in soccer), the weighting of shaping rewards can in principle be automatically optimized online using the environment reward signal. One enhancement we introduce is to optimize separate discount factors {γ j} j=1,...,nr for each individual reward channel. The objective optimized is then (recalling Equation 1) DISPLAYFORM0 This separation of discount factors enables agents to learn to optimize the sparse environment reward far in the future with a high discount factor, but optimize dense shaping rewards myopically, which would also make value-learning easier. This would be impossible if discounts were confounded. The specific shaping rewards used for soccer are detailed in Section 5.1. We simulate 2v2 soccer using the MuJoCo physics engine BID46. The 4 players in the game are a single sphere (the body) with 2 fixed arms, and a box head, and have a 3-dimensional action space: accelerate the body forwards/backwards, torque can be applied around the vertical axis to rotate, and apply downwards force to "jump". Applying torque makes the player spin, gently for steering, or with more force in order to "kick" the football with its arms. At each timestep, proprioception (position, velocity, accelerometer information), task (egocentric ball position, velocity and angular velocity, goal and corner positions) and teammate and opponent (orientation, position and velocity) features are observed making a 93-dimensional input observation vector. Each soccer match lasts upto 45 seconds, and is terminated when the first team scores. We disable contacts between the players, but enable contacts between the players, the pitch and the ball. This makes it impossible for players to foul and avoids the need for a complicated contact rules, and led to more dynamic matches. There is a small border around the pitch which players can enter, but when the ball is kicked out-of-bounds it is reset by automatic "throw in" a small random distance towards the center of the pitch, and no penalty is incurred. The players choose a new action every 0.05 seconds. At the start of an episode the players and ball are positioned uniformly at random on the pitch. We train agents on a field whose dimensions are randomized in the range 20m × 15m to 28m × 21m, with fixed aspect ratio, and are tested on a field of fixed size 24m × 18m. We show an example frame of the game in FIG0. We use population-based training with 32 agents in the population, an agent is chosen for evolution if its expected win rate against another chosen agent drops below 0.47. The k-factor learning rate for Elo is 0.1 (this is low, due to the high stochasticity in the game ). Following evolution there is a grace period where the agent does not learn while its replay buffer refills with fresh data, and a further "burn-in" period before the agent can evolve again or before its weights can be copied into another agent, in order to limit the frequency of evolution and maintain diversity in the population. For each 2v2 training match 4 agents were selected uniformly at random from the population of 32 agents, so that agents are paired with diverse teammates and opponents. 
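For concreteness, the per-channel objective described earlier in this section, with weighted shaping channels each carrying its own discount factor, can be sketched as a simple Monte-Carlo return computation. The channel names and array shapes below are illustrative assumptions rather than the exact training code.

```python
import numpy as np

def per_channel_return(rewards, alphas, gammas):
    """Weighted sum of per-channel discounted returns.

    rewards : array of shape [T, n_r], one column per shaping-reward channel
              (e.g. scoring, conceding, vel-to-ball, vel-ball-to-goal)
    alphas  : length-n_r channel weights, evolved by PBT
    gammas  : length-n_r per-channel discount factors, also evolved by PBT
    """
    T, n_r = rewards.shape
    total = 0.0
    for j in range(n_r):
        discounts = gammas[j] ** np.arange(T)
        total += alphas[j] * np.sum(discounts * rewards[:, j])
    return total
```

Keeping the discounts separate per channel, as in this sketch, is what lets an agent remain myopic on dense shaping channels while still optimizing the sparse win/loss channel over a long horizon.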
Unlike multi-agent domains where we possess hand-crafted bots or human baselines, evaluating agent performance in novel domains where we do not possess such knowledge remains an open question. A number of solutions have been proposed: for competitive board games, there exits evaluation metrics such as Elo BID8 where ratings of two players should translate to their relative win-rates; in professional team sports, head-to-head tournaments are typically used to measure team performance; in BID0, survival-of-the-fittest is directly translated to multiagent learning as a proxy to relative agent performance. Unfortunately, as shown in BID2, in a simple game of rock-paper-scissors, a rock-playing agent will attain high Elo score if we simply introduce more scissor-play agents into a tournament. Survival-of-the-fittest analysis as shown in BID0 would lead to a cycle, and agent ranking would depend on when measurements are taken. Nash-Averaging Evaluators: One desirable property for multi-agent evaluation is invariance to redundant agents: i.e. the presence of multiple agents with similar strategies should not bias the ranking. In this work, we apply Nash-averaging which possesses this property. Nash-Averaging consists of a meta-game played using a pair-wise win-rate matrix between N agents. A row player and a column player simultaneously pick distributions over agents for a mixed strategy, aiming for a non-exploitable strategy (see BID2 .In order to meaningfully evaluate our learned agents, we need to bootstrap our evaluation process. Concretely, we choose a set of fixed evaluation teams by Nash-averaging from a population of 10 teams previously produced by diverse training schemes, with 25B frames of learning experience each. We collected 1M tournament matches between the set of 10 agents. FIG1 shows the pairwise expected goal difference among the 3 agents in the support set. Nash Averaging assigned nonzero weights to 3 teams that exhibit diverse policies with non-transitive performance which would not have been apparent under alternative evaluation schemes: agent A wins or draws against agent B on 59.7% of the games; agent B wins or draws against agent C on 71.1% of the games and agent C wins or draws against agent A on 65.3% of the matches. We show recordings of example tournament matches between agent A, B and C to demonstrate qualitatively the diversity in their policies (video 3 on the website 2). Elo rating alone would yield a different picture: agent B is the best agent in the tournament with an Elo rating of 1084.27, followed by C at 1068.85; Agent A ranks 5th at 1016.48 and we would have incorrectly concluded that agent B ought to beat agent A with a win-rate of 62%. All variants of agents presented in the experimental section are evaluated against the set of 3 agents in terms of their pair-wise expected difference in score, weighted by support weights. We describe in this section a set of experimental . We first present the incremental effect of various algorithmic components. We further show that population-based training with co-play and reward shaping induces a progression from random to simple ball chasing and finally coordinated behaviors. A tournament between all trained agents is provided in Appendix D. We incrementally introduce algorithmic components and show the effect of each by evaluating them against the set of 3 evaluation agents. We compare agent performance using expected goal difference weighted according to the Nash averaging procedure. 
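For reference, the Elo model mentioned above, which also serves as the fitness signal during PBT, follows the standard update rule. The sketch below assumes the conventional 400-point logistic scale and uses the k-factor of 0.1 reported for training; it is an illustration, not the exact implementation.

```python
def elo_expected_score(r_i, r_j):
    """Expected win rate of player (or team) i against j under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_j - r_i) / 400.0))

def elo_update(r_i, r_j, outcome, k=0.1):
    """Update ratings after a match; outcome is 1 (i wins), 0.5 (draw) or 0 (i loses)."""
    expected = elo_expected_score(r_i, r_j)
    delta = k * (outcome - expected)
    return r_i + delta, r_j - delta
```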
We annotate a number of algorithmic components as follows: ff: feedforward policy and action-value estimator; evo: population-based training with agents evolving within the population; rwd shp: providing dense shaping rewards on top of sparse environment scoring/conceding rewards; lstm: recurrent policy with recurrent action-value estimator; lstm q: feedforward policy with recurrent action-value estimator; channels: decomposed action-value estimation for each reward component; each with its own, individually evolving discount factor. Population-based Training with Evolution: We first introduce PBT with evolution. FIG2 (ff vs ff + evo) shows that Evolution kicks in at 2B steps, which quickly improves agent performance at the population level. We show in FIG3 that Population-based training coupled with evolution yields a natural progression of learning rates, entropy costs as well as the discount factor. Critic learning rate gradually decreases as training progresses, while discount factor increases over time, focusing increasingly on long-term return. Entropy costs slowly decreases which reflects a shift from exploration to exploitation over the course training. Reward Shaping: We introduced two simple dense shaping rewards in addition to the sparse scoring and conceding environment rewards: vel-to-ball: player's linear velocity projected onto its unit direction vector towards the ball, thresholded at zero; vel-ball-to-goal: ball's linear velocity projected onto its unit direction vector towards the center of opponent's goal. Furthermore the sparse goal reward and concede penalty are separately evolved, and so can receive separate weight that trades off between the importance of scoring versus conceding. Dense shaping rewards make learning significantly easier early in training. This is reflected by agents' performance against the dummy evaluator where agents with dense shaping rewards quickly start to win games from the start FIG2, ff + evo vs ff + evo + rwd shp). On the other hand, shaping rewards tend to induce sub-optimal policies BID34 BID35; We show in FIG4 however that this is mitigated by coupling training with hyper-parameter evolution which adaptively adjusts the importance of shaping rewards. Early on in the training, the population as a whole decreases the penalty of conceding a goal which evolves towards zero, assigning this reward relatively lower weight than scoring. This trend is subsequently reversed towards the end of training, where the agents evolved to pay more attention to conceding goals: i.e. agents first learn to optimize scoring and then incorporate defending. The dense shaping reward vel-to-ball however quickly decreases in relative importance which is mirrored in their changing behavior, see Section 5.2.Recurrence: The introduction of recurrence in the action-value function has a significant impact on agents' performance as shown in FIG2 (ff + evo + rwd shp vs lstm + evo + rwd shp reaching weighted expected goal difference of 0 at 22B vs 35B steps). A recurrent policy seems to underperform its feedforward counterpart in the presence of a recurrent action-value function. This could be due to out-of-sample evaluators which suggests that recurrent policy might overfit to the behaviors of agents from its own population while feedforward policy cannot. 
Decomposed Action-Value Function: While we observed empirically that the discount factor increases over time during the evolution process, we hypothesize that different reward components require different discount factor. We show in FIG5 that this is indeed the case, for sparse environment rewards and vel-ball-to-goal, the agents focus on increasingly long planning horizon. In contrast, agents quickly evolve to pay attention to short-term returns on vel-to-ball, once they learned the basic movements. Note that although this agent underperforms lstm + evo + rwd shp asymptot- ically, it achieved faster learning in comparison (reaching 0.2 at 15B vs 35B). This agent also attains the highest Elo in a tournament between all of our trained agents, see Appendix D. This indicates that the training population is less diverse than the Nash-averaging evaluation set, motivating future work on introducing diversity as part of training regime. Assessing cooperative behavior in soccer is difficult. We present several indicators ranging from behavior statistics, policy analysis to behavior probing and qualitative game play in order to demonstrate the level of cooperation between agents. We provide birds-eye view videos on the website 2 (video 1), where each agent's value-function is also plotted, along with a bar plot showing the value-functions for each weighted shaping reward component. Early in the matches the 2 dense shaping rewards (rightmost channels) dominate the value, until it becomes apparent that one team has an advantage at which point all agent's value functions become dominated by the sparse conceding/scoring reward (first and second channels) indicating that PBT has learned a balance between sparse environment and dense shaping rewards so that positions with a clear advantage to score will be preferred. There are recurring motifs in the videos: for example, evidence that agents have learned a "cross" pass from the sideline to a teammate in the centre (see Appendix F for example traces), and frequently appear to anticipate this and change direction to receive. Another camera angle is provided on the website 2 (video 2) showing representative, consecutive games played between two fixed teams. These particular agents generally kick the ball upfield, avoiding opponents and towards teammates. Statistics collected during matches are shown in FIG6. The vel-to-ball plot shows the agents average velocity towards the ball as training progresses: early in the learning process agents quickly maximize their velocity towards the ball (optimizing their shaping reward) but gradually fixate less on simple ball chasing as they learn more useful behaviors, such as kicking the ball upfield. The teammate-spread-out shows the evolution of the spread of teammates position on the pitch. This shows the percentage of timesteps where the teammates are spread at least 5m apart: both agents quickly learn to hog the ball, driving this lower, but over time learn more useful behaviors which in diverse player distributions. pass/interception shows that pass, where players from the same team consecutively kicked the ball and interception, where players from the opposing teams kicked the ball in sequence, both remain flat throughout training. To pass is the more difficult behavior as it requires two teammates to coordinate whereas interception only requires one of the two opponents to position correctly. 
pass/interception-10m logs pass/interception events over more than 10m, and here we see a dramatic increase in pass-10m while interception-10m remains flat, i.e. long range passes become increasingly common over the course of training, reaching equal frequency as long-range interception. In addition to analyzing behavior statistics, we could ask the following: "had a subset of the observation been different, how much would I have changed my policy?". This reveals the extent to which an agent's policy is dependent on this subset of the observation space. To quantify this, we analyze counterfactual policy divergence: at each step, we replace a subset of the observation with 10 valid alternatives, drawn from a fixed distribution, and we measure the KL divergence incurred in agents' policy distributions. This cannot be measured for a recurrent policy due to recurrent states and we investigate ff + evo + rwd shp instead FIG2, where the policy network is feedforward. We study the effect of five types of counterfactual information over the course of training.ball-position has a strong impact on agent's policy distribution, more so than player and opponent positions. Interestingly, ball-position initially reaches its peak quickly while divergence incurred by counterfactual player/opponent positions plateau until reaching 5B training steps. This phase coincides with agent's greedy optimization of shaping rewards, as reflected in FIG7. Counterfactual teammate/opponent position increasingly affect agents' policies from 5B steps, as they spread out more and run less directly towards the ball. Opponent-0/1-position incur less divergence than teammate position individually, suggesting that teammate position has relatively large impact than any single opponent, and increasingly so during 5B-20B steps. This suggests that comparatively players learn to leverage a coordinating teammate first, before paying attention to competing opponents. The gap between teammate-position and opponents-position eventually widens, as opponents become increasingly relevant to the game dynamics. The progression observed in counterfactual policy divergence provides evidence for emergent cooperative behaviors among the players. Qualitatively, we could ask the following question: would agents coordinate in scenarios where it's clearly advantageous to do so? To this end, we designed a probe task, to test our trained agents for coordination, where blue0 possesses the ball, while the two opponents are centered on the pitch in front. A teammate blue1 is introduced to either left or right side. In Figure 9 we show typical traces of agents' behaviors (additional probe task video shown at Video 4 on our website 2): at 5B steps, pass intercept 5B left 0 100 5B right 31 90 80B left 76 24 80B right 56 27Figure 9: L1: Comparison between two snapshots (5B vs 80B) of the same agent. L2: number of successful passes and interception occurred in the first 100 timesteps, aggregated over 100 episodes.when agents play more individualistically, we observe that blue0 always tries to dribble the ball by itself, regardless of the position of blue1. Later on in the training, blue0 actively seeks to pass and its behavior is driven by the configuration of its teammate, showing a high-level of coordination. In "8e10 left" in particular, we observe two consecutive pass (blue0 to blue1 and back), in the spirit of 2-on-1 passes that emerge frequently in human soccer games. 
The population-based training we use here was introduced by BID21 for the capturethe-flag domain, whereas our implementation is for continuous control in simulated physics which is less visually rich but arguably more open-ended, with potential for sophisticated behaviors generally and allows us to focus on complex multi-agent interactions, which may often be physically observable and interpretable (as is the case with passing in soccer). Other recent related approaches to multi-agent training include PSRO BID24 and NFSP BID18, which are motivated by game-theoretic methods (fictitious play and double oracle) for solving matrix games, aiming for some robustness by playing previous best response policies, rather than the (more data efficient and parallelizable) approach of playing against simultaneous learning agents in a population. The RoboCup competition is a grand challenge in AI and some top-performing teams have used elements of reinforcement learning BID37 BID29, but are not end-to-end RL. Our environment is intended as a research platform, and easily extendable along several lines of complexity: complex bodies; more agents; multi-task, transfer and continual learning. Coordination and cooperation has been studied recently in deepRL in, for example, BID28 BID11 BID12 BID42; BID32, but all of these require some degree of centralization. Agents in our framework perform fully independent asynchronous learning yet demonstrate evidence of complex coordinated behaviors. BID0 introduce a MuJoCo Sumo domain with similar motivation to ours, and observe emergent complexity from competition, in a 1v1 domain. We are explicitly interested in cooperation within teams as well as competition. Other attempts at optimizing rewards for multi-agent teams include BID27. We have introduced a new 2v2 soccer domain with simulated physics for continuous multi-agent reinforcement learning research, and used competition between agents in this simple domain to train teams of independent RL agents, demonstrating coordinated behavior, including repeated passing motifs. We demonstrated that a framework of distributed population-based-training with continuous control, combined with automatic optimization of shaping reward channels, can learn in this environment end-to-end. We introduced the idea of automatically optimizing separate discount factors for the shaping rewards, to facilitate the transition from myopically optimizing shaping rewards towards alignment with the sparse long-horizon team rewards and corresponding cooperative behavior. We have introduced novel method of counterfactual policy divergence to analyze agent behavior. Our evaluation has highlighted non-transitivities in pairwise match and the practical need for robustness, which is a topic for future work. Our environment can serve as a platform for multiagent research with continuous physical worlds, and can be easily scaled to more agents and more complex bodies, which we leave for future research. In our soccer environment the reward is invariant over player and we can drop the dependence on i. SVG requires the critic to learn a differentiable Q-function. The true state of the game s and the identity of other agents π \i, are not revealed during a game and so identities must be inferred from their behavior, for example. Further, as noted in BID10, off-policy replay is not always fully sound in multi-agent environments since the effective dynamics from any single agent's perspective changes as the other agent's policies change. 
Because of this, we generally model Q as a function of an agents history of observations -typically keeping a low dimensional summary in the internal state of an LSTM: Q π θ (·, ·; ψ): X × A → R, where X denotes the space of possible histories or internal memory state, parameterized by a neural network with weights ψ. This enables the Q-function to implicitly condition on other players observed behavior and generalize over the diversity of players in the population and diversity of behaviors in replay, Q is learned using trajectory data stored in an experience replay buffer B, by minimizing the k-step return TD-error with off-policy retrace correction BID33, using a separate target network for bootstrapping, as is also described in BID13;. Specifically we minimize: DISPLAYFORM0 where ξ:= ((s t, a t, r t)) i+k t=i is a k-step trajectory snippet, where i denotes the timestep of the first state in the snippet, sampled uniformly from the replay buffer B of prior experience, and Q retrace is the off-policy corrected retrace target: DISPLAYFORM1 where, for stability,Q(·, ·;ψ): X ×A → R andπ are target network and policies BID30 periodically synced with the online action-value critic and policy (in our experiments we sync after every 100 gradient steps), and c s:= min(1, π(as|xs) β(as|xs) ), where β denotes the behavior policy which generated the trajectory snippet ξ sampled from B, and i s=i+1 c s:= 1. In our soccer experiments k = 40. Though we use off-policy corrections, the replay buffer has a threshold, to ensure that data is relatively recent. When modelling Q using an LSTM the agent's internal memory state at the first timestep of the snippet is stored in replay, along with the trajectory data. When replaying the experience the LSTM is primed with this stored internal state but then updates its own state during replay of the snippet. LSTMs are optimized using backpropagation through time with unrolls truncated to length 40 in our experiments. We use Elo rating BID8 ), introduced to evaluate the strength of human chess players, to measure an agent's performance within the population of learning agents and determine eligibility for evolution. Elo is updated from pairwise match and can be used to predict expected win rates against the other members of the population. For a given pair of agents i, j (or a pair of agent teams), s elo estimates the expected win rate of agent i playing against agent j. We show in Algorithm 3 the update rule for a two player competitive game for simplicity, for a team of multiple players, we use their average Elo score instead. By using Elo as the fitness function, driving the evolution of the population's hyperparamters, the agents' internal hyperparameters (see Section 3.3) can be automatically optimized for the objective we are ultimately interested in -the win rate against other agents. Individual shaping rewards would otherwise be difficult to handcraft without biasing this objective. We parametrize each agent's policy and critic using neural networks. Observation preprocessing is first applied to each raw teammate and opponent feature using a shared 2-layer network with 32 and 16 neurons and Elu activations BID7 to embed each individual player's data into a consistent, learned 16 dimensional embedding space. The maximum, minimum and mean of each dimension is then passed as input to the remainder of the network, where it is concatenated with the ball and pitch features. 
This preprocessing makes the network architecture invariant to the order of teammates and opponents features. Both critic and actor then apply 2 feed-forward, elu-activated, layers of size 512 and 256, followed by a final layer of 256 neurons which is either feed-forward or made recurrent using an LSTM BID19. Weights are not shared between critic and actor networks. We learn the parametrized gaussian policies using SVG0 as detailed in Appendix A, and the critic as described in Section A.2, with the Adam optimizer BID22 used to apply gradient updates. We also ran a round robin tournament with 50,000 matches between the best teams from 5 populations of agents (selected by Elo within their population), all trained for 5e10 agent steps -i.e. each learner had processed at least 5e10 frames from the replay buffer, though the number of raw environment steps would be much lower than that) and computed the Elo score. This shows the advantage of including shaping rewards, adding a recurrent critic and separate reward and discount channels, and the further (marginal) contribution of a recurrent actor. The full win rate matrix for this tournament is given in FIG0. Note that the agent with full recurrence and separate reward channels attains the highest Elo in this tournament, though performance against our Nash evaluators in Section 5.1 is more mixed. This highlights the possibility for non-transitivities in this domain and the practical need for robustness to opponents. To assess the relative importance of hyperparameters we replicated a single experiment (using a feed-forward policy and critic network) with 3 different seeds, see FIG0. Critic learning rate and entropy regularizer evolve consistently over the three training runs. In particular the critic learning rate tends to be reduced over time. If a certain hyperparameter was not important to agent performance we would expect less consistency in its evolution across seeds, as selection would be driven by other hyperparameters: thus indicating performance is more sensitive to critic learning rate than actor learning rate. Elo lstm + evo + rwd shp + channels 1071 lstm q + evo + rwd shp + channels 1069 lstm q + evo + rwd shp 1006 ff + evo + rwd shp 956 ff + evo 898Figure 10: Win rate matrix for the Tournament between teams: from top to bottom, ordered by Elo, ascending: ff + evo; ff + evo + rwd shp; lstm q + evo + rwd shp; lstm q + evo + rwd shp + channels; lstm + evo + rwd shp + channels. ELo derived from the tournament is given in the table. Figure 11: Hyperparameter evolution for three separate seeds, displayed over three separate rows. As well as the videos at the website 3, we provide visualizations of traces of the agent behavior, in the repeated "cross pass" motif, see FIG0. | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BkG8sjR5Km | We introduce a new MuJoCo soccer environment for continuous multi-agent reinforcement learning research, and show that population-based training of independent reinforcement learners can learn cooperative behaviors |
Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive , this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited. The task of program synthesis is to automatically generate a program that is consistent with a specification such as a set of input-output examples, and has been studied since the early days of Artificial Intelligence BID34. There has been a lot of recent progress made on neural program induction, where novel neural architectures inspired from computation modules such as RAM, stack, CPU, turing machines, and GPU BID10 BID17 BID20 BID11 BID31 BID18 have been proposed to train these architectures in an end-to-end fashion to mimic the behavior of the desired program. While these approaches have achieved impressive , they do not return explicit interpretable programs, tend not to generalize well on inputs of arbitrary length, and require a lot of examples and computation for learning each program. To mitigate some of these limitations, neural program synthesis approaches BID16 BID28 BID7 have been recently proposed that learn explicit programs in a Domain-specific language (DSL) from as few as five input-output examples. These approaches, instead of using a large number of input-output examples to learn a single program, learn a large number of different programs, each from just a few input-output examples. During training, the correct program is provided as reference, but at test time, the learnt model generates the program from only the input-output examples. While neural program synthesis techniques improve over program induction techniques in certain domains, they suffer from two key limitations. First, these approaches use supervised learning with reference programs and suffer from the problem of Program Aliasing: For a small number of input-output examples, there can be many programs that correctly transform inputs to outputs. The problem is the discrepancy between the single supervised reference program and the multitude of correct programs. FIG0 shows an example of this: if maximizing the probability of ground truth program, predicting Program B would be assigned a high loss even though the two programs are semantically equivalent for the input-output example. 
Maximum likelihood training forces the model to learn to predict ground truth programs, which is different from the true objective of program synthesis: predicting any consistent program. To address this problem, we alter the optimization objective: instead of maximum likelihood, we use policy gradient reinforcement learning to directly encourage generation of any program that is consistent with the given examples. The second limitation of neural program synthesis techniques based on sequence generation paradigm BID7 ) is that they often overlook the fact that programs have a strict syntax, which can be checked efficiently. Similarly to the work of BID28, we explore a method for leveraging the syntax of the programming language in order to aggressively prune the exponentially large search space of possible programs. In particular, not all sequences of tokens are valid programs and syntactically incorrect programs can be efficiently ignored both during training and at test time. A syntax checker is an additional form of supervision that may not always be present. To address this limitation, we introduce a neural architecture that retains the benefits of aggressive syntax pruning, even without assuming access to the definition of the grammar made in previous work BID28. This model is jointly conditioned on syntactic and program correctness, and can implicitly learn the syntax of the language while training. We demonstrate the efficacy of our approach by developing a neural program synthesis system for the Karel programming language BID29, an educational programming language, consiting of control flow constructs such as loops and conditionals, making it more complex than the domains tackled by previous neural program synthesis works. This paper makes the following key contributions:• We show that Reinforcement Learning can directly optimize for generating any consistent program and improves performance compared to pure supervised learning.• We introduce a method for pruning the space of possible programs using a syntax checker and show that explicit syntax checking helps generate better programs.• In the absence of a syntax checker, we introduce a model that jointly learns syntax and the production of correct programs. We demonstrate this model improves performance in instances with limited training data. Program synthesis is one of the fundamental problems in Artificial Intelligence. To the best of our knowledge, it can be traced back to the work of BID34 where a theorem prover was used to construct LISP programs based on a formal specification of the input-output relation. As formal specification is often as complex as writing the original program, many techniques were developed to achieve the same goal with simpler partial specifications in the form of input-output (IO) examples BID1 BID33. Rule-based synthesis approaches have recently been successful in delivering on the promise of Programming By Example BID24, the most widely known example being the FlashFill system BID13 in Excel. However, such systems are extremely complicated to extend and need significant development time from domain experts to provide the pruning rules for efficient search. As a , the use of Machine Learning methods have been proposed, based on Bayesian probabilistic models BID23 or Inductive Logic programming BID25 BID26 to automatically generate programs based on examples. 
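To make the policy-gradient objective introduced at the start of this block concrete (rewarding any sampled program that is consistent with the given examples, rather than a single reference program), a minimal REINFORCE-style sketch is given below. The helpers sample_program and run_program are hypothetical stand-ins for the decoder's sampler and the program interpreter, not part of the described system.

```python
import torch

def reinforce_loss(model, io_examples, num_samples=5):
    """Policy-gradient surrogate loss that rewards any consistent program.

    `model.sample_program` is assumed to return a sampled token sequence and
    its total log-probability; `run_program` is assumed to execute a candidate
    program on an input state and return the resulting state.
    """
    losses = []
    for _ in range(num_samples):
        program, log_prob = model.sample_program(io_examples)
        # Reward is 1 if the sampled program maps every input to its output.
        reward = float(all(run_program(program, inp) == out
                           for inp, out in io_examples))
        losses.append(-reward * log_prob)  # REINFORCE: maximize expected reward
    return torch.stack(losses).mean()
```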
Recently, inspired by the success of Neural Networks in other applications such as vision BID19 or speech recognition BID9 ) differentiable controllers were made to learn the behaviour of programs by using gradient descent over differentiable version of traditional programming concepts such as memory addressing BID10, manipulating stacks BID17 BID12, register machines BID20, and data manipulation BID27. These approaches to program induction however tend to struggle with generalization, especially when presented with inputs of a different dimension than the one they were trained with and require a very large amount of training data. Some exceptions to this include Neural Programmer Interpreters BID31 and its extensions BID5 ) that learn from program traces rather than only examples. However, they still learn a different model for each program and are computationally expensive, unlike our system that uses a single model for learning a large number of programs. A series of recent works aim to infer explicit program source code with the assumption that code structure provides an inductive bias for better generalization. In particular, explicitly modeling control flow statements such as conditionals and loops can often lead to programs capable of generalizing, regardless of input size BID8 BID4 BID32. One remaining drawback of these approaches is the need to restart learning from scratch for each new program. Thus they are largely unsuited for the situation of synthesizing a new program on-the-fly from very few examples. The latest developments use large datasets of artificially generated programs and learn to map embeddings of IO examples to information about the programs to generate. BID3 produce scores over attributes, to be used as heuristics to speed up search-based techniques. BID28 use their dataset to learn probability over the expansion rules of a predefined grammar, while BID7 directly predict the source code of the programs. These last two methods use supervised training to maximize the likelihood of a single reference program, while we directly optimize for the generation of any consistent program. Our approach to optimize program correctness is similar in spirit to advances in Neural Machine Translation BID37 BID30 that leverage reinforcement learning to optimize directly for evaluation metrics. Taking advantage of the fact that programs can be syntactically checked and unit tested against the specification examples, we show how to improve on those REINFORCE based methods. Recently, BID14 proposed a method similar to ours based on Maximum Marginal Likelihood to generate programs based on a description in natural language. From an application point of view, our target domain is more complex as our DSL includes control flow operations such as conditionals and loops. Moreover, natural language utterances fully describe the steps that the program needs to take, while learning from IO examples requires planning over potentially long executions. Their approach is more akin to inferring a formal specification based on a natural language description, as opposed to our generation of imperative programs. Incorporating knowledge of the grammar of the target domain to enforce syntactical correctness has already proven useful to model arithmetic expressions, molecules BID21, and programs BID28 BID38. These approaches define the model over the production rules of the grammar; we instead operate directly over the terminals of the grammar. 
This allows us to learn the grammar jointly with the model in the case where no formal grammar specification is available. Our approach is extremely general and can be applied to the very recently proposed methods for inferring and executing programs for visual reasoning BID16, which, to the best of our knowledge, do not explicitly encourage grammar consistency of the resulting program.
Before describing our proposed methods, we establish the necessary notation, present the problem setting and describe the general paradigm of our approach. To avoid confusion with probabilities, we will use the letter λ to denote programs. I and O will be used to denote input states and output states respectively, and we will use the shortcut IO to denote a pair of corresponding input/output examples. A state constitutes what the programs are going to be operating on, depending on the application domain. In FlashFill-type applications BID28 BID7, input and output states would be strings of characters, while in our Karel environment, states are grids describing the presence of objects. If we were to apply our method to actual programming languages, states would represent the content of the machine's registers and the memory. At training time, we assume access to N training samples, each training sample consisting of a set of K Input/Output states and a program implementing the mapping correctly: DISPLAYFORM0 where λ i (I k i) denotes the resulting state of applying the program λ i to the input state I k i. Our goal is to learn a synthesizer σ that, given a set of input/output examples, produces a program: DISPLAYFORM1 We evaluate the programs on a set of test cases for which we have both specification examples and held-out examples: DISPLAYFORM2 At test time, we evaluate the performance of our learned synthesizer by generating, for each sample in the test set, a program λ j. The metric we care about is Generalization: DISPLAYFORM3
3.2 NEURAL PROGRAM SYNTHESIS ARCHITECTURE Similar to BID7, we use a sequential LSTM-based BID15 language model, conditioned on an embedding of the input-output pairs. Each pair is encoded independently by a convolutional neural network (CNN) to generate a joint embedding. A succinct description of the architecture can be found in section 6.1 and the exact dimensions are available in the supplementary materials. A program is written as a sequence of tokens, where each token comes from an alphabet Σ. We model the program one token at a time using an LSTM. At each timestep, the input consists of the concatenation of the embedding of the IO pair and of the last predicted token. One such decoder LSTM is run for each of the IO pairs, all using the same weights. The probability of the next token is defined as the Softmax of a linear layer over the max-pooled hidden state of all the decoder LSTMs. A schema representing this architecture can be seen in FIG1. The form of the model that we are learning is: DISPLAYFORM0 At test time, the most likely programs are obtained by running a beam search. One of the advantages of program synthesis is the ability to execute hypothesized programs. Through execution, we remove syntactically incorrect programs and programs that are not consistent with the observed examples. Among the remaining programs, we return the most likely according to the model. Results of all the decoders are maxpooled and the prediction is modulated by the mask generated by the syntax model. The probability over the next token is then obtained by a Softmax transformation.
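To make the decoding step just described concrete, here is a minimal PyTorch-style sketch of one prediction step: one LSTM cell per IO pair with shared weights, max-pooling over the per-pair hidden states, then a linear layer over the token alphabet. The module name, layer sizes and argument layout are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class PooledDecoderStep(nn.Module):
    """One decoding step: one LSTM cell per IO pair (shared weights),
    max-pooled hidden states, then a linear layer over the token alphabet."""
    def __init__(self, vocab_size=52, io_dim=512, tok_dim=256, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, tok_dim)
        self.lstm = nn.LSTMCell(tok_dim + io_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_token, io_embeddings, states):
        # prev_token: (batch,) last predicted token id
        # io_embeddings: (batch, K, io_dim), one embedding per IO pair
        # states: list of K (h, c) tuples, one per IO pair
        tok = self.embed(prev_token)                       # (batch, tok_dim)
        hs, new_states = [], []
        for k in range(io_embeddings.size(1)):
            inp = torch.cat([tok, io_embeddings[:, k]], dim=-1)
            h, c = self.lstm(inp, states[k])               # same weights for every pair
            hs.append(h)
            new_states.append((h, c))
        pooled = torch.stack(hs, dim=1).max(dim=1).values  # max-pool over IO pairs
        logits = self.out(pooled)                          # unnormalized token scores
        return logits, new_states
```

In this sketch the syntax mask (discussed later) would simply be added to `logits` before the Softmax.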
To estimate the parameters θ of our model, the default solution is to perform supervised training, framing the problem as Maximum Likelihood estimation. BID7 follow this approach and use stochastic gradient descent to solve: DISPLAYFORM0 However, this training objective exhibits several drawbacks. First, at training time, the model is only exposed to the training data distribution, while at test time, it is fed back the token from its own previous predictions. This discrepancy in distribution of the inputs is well known in Natural Language Processing under the name of exposure bias BID30.Moreover, this loss does not represent the true objective of program synthesis. In practice, any equivalent program should be as valid a prediction as the reference one. This property, that we call program aliasing, is not taken into account by the MLE training. Ideally, we would like the model to learn to reason about the necessary steps needed to map the input to the output. As a , the loss shouldn't penalize correct programs, even if they do not correspond to the ground truth. The first modification that we propose is to change the target objective to bring it more in line with the goal of program synthesis. We replace the optimization problem of by DISPLAYFORM0 where R i (λ) is a reward function designed to encode the quality of the sampled programs. Note that this formulation is extremely generic and would allow to represent a wide range of different objective functions. If we assume that we have access to a simulator to run our programs, we can design R i so as to optimize for generalization on held-out examples, preventing the model to overfit on its inputs. Additional property such as program conciseness, or runtime efficiency could also be encoded into the reward. DISPLAYFORM1 Step Figure 3: Approximation using a beamsearch. All possibles next tokens are tried for each candidates, the S (here 3) most likely according to p θ are kept. When an End-Of-Sequence token (green) is reached, the candidate is held out. At the end, the most likely complete sequences are used to construct an approximate distribution, through rescaling. However, this expressiveness comes at a cost: the inner sum in FORMULA7 is over all possible programs and therefore is not tractable to compute. The standard method consists of approximating the objective by defining a Monte Carlo estimate of the expected reward, using S samples from the model. To perform optimization, an estimator of the gradient of the expected reward is built based on the REINFORCE trick BID35. DISPLAYFORM2 However, given that we sample from a unique model, there is a high chance that we will sample the same programs repeatedly when estimating the gradient. This is especially true when the model has been pre-trained in a supervised manner. A different approach is to approximate the distribution of the learned distribution by another one with a smaller support. To obtain this smaller distribution, one possible solution is to employ the S most likely samples as returned by a Beam Search. We generate the embedding of the IO grids and perform decoding, keeping at each step the S most likely candidates prefixes based on the probability p θ given by the model. At step t, we evaluate p θ (s 1 . . . s t, IO k i k=1..K) for all the possible next token s t and all the candidates (s 1 . . . s t−1) previously obtained. The S most likely sequences will be the candidates at the (t + 1) step. FIG2 represents this process. 
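As a rough illustration of the REINFORCE-style estimate of the expected-reward gradient described above, the following sketch samples S programs and weights their log-likelihoods by the reward. `model.sample` and `reward_fn` are assumed helper interfaces (not part of the paper), and the mean-reward baseline is a common variance-reduction choice that the text does not specify.

```python
import torch

def reinforce_loss(model, io_embeddings, reward_fn, num_samples=5):
    """Monte Carlo surrogate for the expected-reward objective.

    `model.sample` is assumed to return sampled token sequences together with
    their log-probabilities; `reward_fn` runs a sampled program against the
    specification (e.g. 1 if all IO pairs are satisfied, 0 otherwise)."""
    programs, log_probs = model.sample(io_embeddings, num_samples)  # hypothetical API
    rewards = torch.tensor([reward_fn(p) for p in programs], dtype=torch.float)
    baseline = rewards.mean()  # simple baseline for variance reduction (an assumption)
    # REINFORCE: grad E[R] is approximated by E[(R - b) * grad log p(program)]
    loss = -((rewards - baseline) * log_probs).mean()
    return loss
```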
Based on the final samples obtained, we define a probability distribution to use as an approximation of p θ in the objective above. Unlike the Monte Carlo estimate, this approximation introduces a bias. It however has the advantage of aligning the training procedure more closely with the testing procedure, where only likely samples are going to be decoded. Formally, this corresponds to performing the following approximation of the objective function (BS(p θ, S) being the S samples returned by a beam search with beam size S): DISPLAYFORM3 where DISPLAYFORM4 With this approximation, the support of the distribution q θ is much smaller, as it contains only the S elements returned by the beam search. As a result, the sum becomes tractable and no estimator is needed. We can simply differentiate this new objective function to obtain the gradients of the loss with respect to p θ and use the chain rule to subsequently obtain the gradients with respect to θ needed for the optimization. Note that if S is large enough to cover all possible programs, we recover the original objective. Based on this more tractable distribution, we can define more complex objective functions. In program synthesis, we have the possibility to prune out several predictions by using the specification. Therefore, we can choose to go beyond optimizing the expected reward when sampling a single program and optimize the expected reward when sampling a bag of C programs and keeping the best one. This results in a new objective function: DISPLAYFORM5 where q θ is defined as previously. We argue that by optimizing this objective function, the model gets the capability of "hedging its bets" and assigning probability mass to several candidate programs, resulting in a higher diversity of outputs. In the special case where the reward function only takes values in {0, 1}, as it does when we are using correctness as a reward, this can be more easily computed as: DISPLAYFORM6 The derivation leading to this formulation, as well as a description of how to efficiently compute the more general loss, can be found in appendix A. Note that although this formulation brings the training objective function closer to the testing procedure, it is still not ideal. It indeed makes the assumption that if we have a correct program in our bag of samples, we can identify it, ignoring the fact that it is possible to have some incorrect programs consistent with the IO pairs' partial specification (and therefore not prunable). In addition, this represents a probability where the C programs are sampled independently, which in practice we wouldn't do. One aspect of the program synthesis task is that syntactically incorrect programs can be trivially identified and pruned before making a prediction. As a result, if we use stx to denote the event that the sampled program is syntactically correct, what we care about modeling correctly is p(λ | IO k i k=1..K, stx). Using Bayes' rule, we can rewrite this: DISPLAYFORM0 We drop the conditional dependency on the IO pairs in the second line, as the syntactical correctness of the program is independent from the specification when conditioned on the program. We can do the same operation at the token level, denoting by stx 1...t the event that the sequence of the first t tokens s 1 · · · s t doesn't contain any syntax error and may therefore be a prefix to a valid program. DISPLAYFORM1 Given a grammar, it is possible to construct a checker to determine valid prefixes of programs.
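For the {0, 1}-reward case discussed above, the bag-of-C objective over the renormalized beam distribution has a closed form (one minus the probability of drawing only incorrect programs C times). The sketch below is one way to compute it; argument names are illustrative and the renormalization assumes the beam log-probabilities are log p θ values.

```python
import torch

def beam_bag_reward(beam_log_probs, beam_rewards, C=5):
    """Best-of-C expected reward under the renormalized beam distribution q_theta,
    for rewards in {0, 1}.  Differentiable, so no sampling estimator is needed."""
    q = torch.softmax(beam_log_probs, dim=0)      # renormalize p_theta over the beam
    rewards = torch.as_tensor(beam_rewards, dtype=q.dtype)
    q_incorrect = (q * (1.0 - rewards)).sum()     # probability mass on incorrect programs
    expected_best = 1.0 - q_incorrect ** C        # P(at least one of C draws is correct)
    return -expected_best                         # negated so it can be minimized as a loss
```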
Example applications include compilers to signal syntax errors to the user and autocomplete features of Integrated Development Environments (IDEs) to restrict the list of suggested completions. The quantity p (stx 1...t | s 1 · · · s t) is therefore not a probability but can be implemented as a deterministic process for a given target programming language. In practice, this is implemented by getting at each timestep a mask M = {− inf, 0} | Σ | where M j = − inf if the j-th token in the alphabet is not a valid token in the current context, and 0 otherwise. This mask is added to the output of the network, just before the Softmax operation that normalizes the output to a probability over the tokens. Conditioning on the syntactical correctness of the programs provides several advantages: First, sampled programs become syntactically correct by construction. At test time, it allows the beam search to only explore useful candidates. It also ensures that when we are optimizing for correctness, the samples used to approximate the distribution are all going to be valid candidates. Restricting the dimension of the space on which our model is defined also makes the problem simpler to learn. It may not always be feasible to assume access to a syntax checker. In general, we wish to retain the syntax checker's ability to aggressively prune the search in program space without requiring access to the syntax itself. To this end, we propose to represent the syntax checker as a neural network module and learn it jointly. Similar to the base model, we implement learned syntax checking using an LSTM g φ. Comparing to the decoder LSTM, there are two major differences:• The syntaxLSTM is conditioned only on the program tokens, not on the IO pairs. This ensures that the learned checker models only the syntax of the language.• The output of the syntaxLSTM is passed through an elementwise x → − exp(x) activation function and added to the decoder LSTM's output. Similar to the mask in Section 5.1, the exponential activation function allows the syntaxLSTM to output high penalties to any tokens deemed syntactically incorrect. The addition of the syntaxLSTM doesn't necessitate any change to the training procedure as it is simply equivalent to a change of architecture. However, in the supervised setting, when we have access to syntactically correct programs, we have the possibility of adding an additional term to the loss to prevent the model from masking valid programs: DISPLAYFORM0 This loss penalizes the syntaxLSTM for giving negative scores to each token belonging to a known valid program. We use the reference programs as example of valid programs when we perform supervised training. The Karel programming language is an educational programming language , used for example in Stanford CS introductory classes (cs1) or in the Hour of Code initiative (hoc). It features an agent inside a gridworld (See FIG0, capable of moving (move, turn{Left,Right}), modifying world state ({pick,put}Marker), and querying the state of the nearby environment for its own markers (markerPresent, noMarkerPresent) or for natural obstacles (frontIsClear, leftIsClear, rightIsClear). Our goal is to learn to generate a program in the Karel DSL given a small set of input and output grids. The language supports for loops, while loops, and conditionals, but no variable assignment. Compared to the original Karel language, we only removed the possibility of defining subroutines. 
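Returning to the syntax conditioning described above, the following is a minimal sketch of how the handwritten {-inf, 0} mask and the learned syntaxLSTM penalty could be combined with the decoder scores before the Softmax. Function and argument names are illustrative, not taken from the paper's code.

```python
import torch

def masked_token_distribution(decoder_logits, syntax_scores=None, grammar_mask=None):
    """Combine decoder scores with syntax information before the Softmax.

    grammar_mask: {-inf, 0} vector from a handwritten checker (optional).
    syntax_scores: raw output of a learned syntaxLSTM, passed through x -> -exp(x)
    so it can only penalize tokens it deems syntactically invalid (optional)."""
    logits = decoder_logits
    if grammar_mask is not None:
        logits = logits + grammar_mask               # invalid tokens receive -inf
    if syntax_scores is not None:
        logits = logits - torch.exp(syntax_scores)   # learned, always non-positive penalty
    return torch.softmax(logits, dim=-1)
```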
The specification for the DSL can be found in appendix B. To evaluate our method, we follow the standard practice BID7 BID28 BID27 BID3 and use a synthetic dataset generated by randomly sampling programs from the DSL. We perform a few simple heuristic checks to ensure generated programs have an observable effect on the world and prune out programs performing spurious actions (e.g. executing a turnLeft just after a turnRight). For each program, a set of IO pairs is generated by sampling random input grids and executing the program on them to obtain the corresponding output grids. A large number of them are sampled and 6 are kept for each program, ensuring that all conditionals in a program are hit by at least one of the examples. The first 5 samples serve as the specification, and the sixth one is kept as a held-out test pair. 5000 programs are not used for training, and are split between a validation set and a test set. We represent the input and output elements as grids where each cell in the grid is a vector with 16 channels, indicating the presence of elements (AgentFacingNorth, AgentFacingSouth, · · ·, Obstacle, OneMarkerPresent, TwoMarkersPresent, · · ·). The input and output grids are initially passed through independent convolution layers, before being concatenated and passed through two convolutional residual blocks and a fully connected layer, mapping them to a final 512-dimensional representation. We run one decoder per IO pair and perform a maxpooling operation over the output of all the decoders, out of which we perform the prediction of the next token. Our models are implemented using the Pytorch framework (pyt). Code and data will be made available. Table 1: RL_beam optimization of program correctness results in consistent improvements in top-1 generalization accuracy over supervised learning MLE, even though the exact match of recovering the reference program drops. The improved objective function results in further improvements. We trained a variety of models on the full Karel dataset containing 1 million examples as well as a reduced dataset containing only 10,000 examples. In general, the small dataset serves to help understand the data efficiency of the program synthesis methods and is motivated by the expected difficulty of obtaining many labeled examples in real-world program synthesis domains. The Karel DSL was previously used by BID6 to study the relative performances of a range of methods depending on the available amount of data. The task considered was however different, as they attempted to perform program induction as opposed to program synthesis. Rather than predicting a program implementing the desired transformation, they simply output a specification of the changes that would result from applying the program, so a direct comparison of the numbers wouldn't be meaningful. Models are grouped according to training objectives. As a baseline, we use MLE, which corresponds to the maximum likelihood objective (Eq.6), similar to the method proposed by BID7. Unless otherwise specified, the reward considered for our other methods is generalization: +1 if the program matches all samples, including the held-out one, and 0 otherwise. RL uses the expected reward objective (Eq.7), using REINFORCE to obtain a gradient estimate (Eq.8). RL_beam attempts to solve the proxy problem described by Equation FORMULA11 and RL_beam_div the richer loss function of Equation.
RL_beam_div_opt also optimizes the loss of equation FORMULA12 but the reward additionally includes a term inversely proportional to the number of timesteps it takes for the program to finish executing. All RL models are initialized from pretrained supervised models. Optimizing for correctness (RL): Results in Table 1 show that optimizing for the expected program correctness consistently provides improvements in top-1 generalization accuracy. Top-1 Generalization Accuracy (Eq. 4) denotes the accuracy of the most likely program synthesized by beam search decoding having the correct behaviour across all input-output examples. We didn't perform any pruning of programs that were incorrect on the 5 specification examples. The improved performance of RL methods confirms our hypothesis that better loss functions can effectively combat the program aliasing problem. On the full dataset, when optimizing for correctness, Exact Match Accuracy decreases, indicating that the RL models no longer prioritize generating programs that exactly match the references. On the small dataset, RL_beam methods improve both exact match accuracy and generalization. Comparing RL_beam to standard RL, we note improvements across all levels of generalization. By better aligning the RL objective with the sampling that happens during beam search decoding, consistent improvements can be made in accuracy. Further improvements are made by encouraging diversity in the beam of solutions (RL_beam_div) and penalizing long running programs (RL_beam_div_opt). In the settings where little training data is available, RL methods show more dramatic improvements over MLE, indicating that the data efficiency of program synthesis methods can be greatly improved by using a small number of samples first for supervised training and again for Reinforcement Learning. As a side note, we were unable to achieve high performance when training the RL methods from scratch. The necessity of extensive supervised pretraining to get benefits from Reinforcement Learning fine-tuning is well-known in the Neural Machine Translation literature BID30 BID37 BID36 BID2. Table 3: Grammar prunes the space of possible programs: on the full dataset, handwritten syntax checking MLE_handwritten improves accuracy over no grammar MLE, although MLE_large shows that simply adding more parameters results in even greater gains. On the small dataset, learning the syntax MLE_learned outperforms the handwritten grammar and larger models. TAB3 examines the top-1, top-5, and top-50 generalization accuracy of supervised and RL models. RL_beam methods perform best for top-1 but their advantage drops for higher-rank accuracy. Inspection of generated programs shows that the top predictions all become slight variations of the same program, up to addition/removal of no-operations (turnLeft followed by turnRight, full circle obtained by a series of four turnLeft). The RL_beam_div objective helps alleviate this effect, as does RL_beam_div_opt, which penalizes redundant programs. This is important as, in the task of Program Synthesis, we may not necessarily need to return the most likely output if it can be pruned by our specification. We also compare models according to the use of syntax: MLE_handwritten denotes the use of a handwritten syntax checker (Sec 5.1), MLE_learned denotes a learned syntax (Sec 5.2), while no suffix denotes no syntax usage. Table 3 compares syntax models.
On the full dataset, leveraging the handwritten syntax leads to marginally better accuracies than learning the syntax or using no syntax. Given access to enough data, the network seems to be capable of learning to model the syntax using the sheer volume of training examples. On the other hand, when the amount of training data is limited, learning the syntax produces significantly better performance. By incorporating syntactic structure in the model architecture and objective, more leverage is gained from small training data. Interestingly, the learned syntax model even outperforms the handwritten syntax model. We posit the syntaxLSTM is free to learn a richer syntax. For example, the syntaxLSTM could learn to model the distribution of programs and discourage the prediction of not only syntactically incorrect programs, but also the unlikely ones. To control for the extra parameters introduced by the syntaxLSTM, we compare against MLE_large, which uses no syntax but features a larger decoder LSTM, ing in the same number of parameters as MLE_learned. Results show that the larger number of parameters is not enough to explain the difference in performance, which again indicates the utility of jointly learning syntax. Analysis of learned syntax: Section 5.2 claimed that by decomposing our models into two separate decoders, we could decompose the learning so that one decoder would specialize in picking the likely tokens given the IO pairs, while the other would enforce the grammar of the language. We now provide experimental evidence that this decomposition happens in practice. TAB5 shows the percentage of syntactically correct programs among the most likely predictions of the MLE + learned model trained on the full dataset. Both columns correspond to the same set of parameters but the second column doesn't apply the syntaxLSTM's mask to constrain the decoding process. The precipitous drop in syntax accuracy indicates the extent to which the program decoder has learned to rely on the syntaxLSTM to produce syntactically correct programs. FIG2 compares the syntax masks generated by the learned and handwritten syntax models while decoding Program A in FIG0. FORMULA2 FIG2 analyzes the difference between the handwritten and learned syntax masks. White indicates similar output, which occupies the majority of the visualization. Blue cells correspond to instances where the syntaxLSTM labeled a token correct when it actually was syntactically incorrect. This type of error can be recovered if the program decoder predicts those tokens as unlikely. On the other hand, red cells indicate the syntaxLSTM predicted a valid token is syntactically incorrect. This type of error is more dangerous because the program decoder cannot recover the valid token once it is declared incorrect. The majority of red errors correspond to tokens which are rarely observed in the training dataset, indicating that the syntaxLSTM learns to model more than just syntaxit also captures the distribution over programs. Given a large enough dataset of real programs, the syntaxLSTM learns to consider non-sensical and unlikely programs as syntactically incorrect, ensuring that generated programs are both syntactically correct and likely. In this section, we describe how the objective function of Equation FORMULA12 can be computed. We have a distribution q θ over S programs, as obtained by performing a beam search over p θ and renormalizing. 
We are going to draw C independent samples from this distribution and obtain a reward corresponding to the best performing of all of them. In the case where the reward function R i (λ r) is built on a boolean property, such as "correctness of the generated program", Equation can be simplified. As R i (λ r) can only take the values 0 or 1, the term max j∈1..C R i (λ j) is going to be equal to 0 only if all of the C sampled programs give a reward of zero. For each sample, there is a probability DISPLAYFORM0 of sampling a program with reward zero. The probability of not sampling a single correct program out of the C samples is q C incorrect. From this, we can derive the form of Equation. Note that this can be computed without any additional sampling steps, as we have a closed-form solution for this expectation. In the general case, a similar derivation can be obtained. Assume that the programs outputted by the beam search have associated rewards R 0, R 1,..., R S and assume without loss of generality that R 0 < R 1 < ... < R S. The probability of sampling a program with a reward no larger than R i is q ≤Ri = Σ λr∈BS(p θ,S) 1[R i (λ r) ≤ R i] q θ (λ r | {IO k i} k=1..K), so the probability of obtaining a final reward of less than or equal to R i when sampling C samples is q C ≤Ri. As a result, the probability of obtaining a reward of exactly R i is q C ≤Ri − q C ≤Ri−1.
The states of the grid world are represented as a 16 × 18 × 18 tensor. For each cell of the grid, the 16-dimensional vector corresponds to the features indicated in TAB7. The decoders are two-layer LSTMs with a hidden size of 256. Tokens of the DSL are embedded into a 256-dimensional vector. The input of the LSTM is therefore of dimension 768 (256 dimensions for the token embedding + 512 dimensions for the IO pair embedding). The LSTM used to model the syntax is similarly sized but doesn't take the embedding of the IO pairs as input, so its input size is only 256. One decoder LSTM is run on the embedding of each IO pair. The topmost activations are passed through a MaxPooling operation to obtain a 256-dimensional vector representing all pairs. This is passed through a linear layer to obtain a score for each of the 52 possible tokens. We add the output of the model and of the eventual syntax model, whether learned or handwritten, and pass it through a SoftMax layer to obtain a probability distribution over the next token. All training is performed using the Adam optimizer, with a learning rate of 10^-4. Supervised training used a batch size of 128 and RL methods used a batch size of 16. We used 100 rollouts per sample
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | H1Xw62kRZ | Using the DSL grammar and reinforcement learning to improve synthesis of programs with complex control flow. |
Time series forecasting plays a crucial role in marketing, finance and many other quantitative fields. A large amount of methodologies has been developed on this topic, including ARIMA, Holt–Winters, etc. However, their performance is easily undermined by the existence of change points and anomaly points, two structures commonly observed in real data, but rarely considered in the aforementioned methods. In this paper, we propose a novel state space time series model, with the capability to capture the structure of change points and anomaly points, as well as trend and seasonality. To infer all the hidden variables, we develop a Bayesian framework, which is able to obtain distributions and forecasting intervals for time series forecasting, with provable theoretical properties. For implementation, an iterative algorithm with Markov chain Monte Carlo (MCMC), Kalman filter and Kalman smoothing is proposed. In both synthetic data and real data applications, our methodology yields a better performance in time series forecasting compared with existing methods, along with more accurate change point detection and anomaly detection. Time series forecasting has a rich and luminous history, and is essentially important in most of business operations nowadays. The main aim of time series forecasting is to carefully collect and rigorously study the past observations of a time series to develop an appropriate model which could describe the inherent structure of the series, in order to generate future values. For instance, the internet companies are interested in the number of daily active users (DAU), say, what is DAU after certain period of time, or when will reach their target DAU goal. Time series forecasting is a fruitful research area with many existing methodologies. The most popular and frequently used time series model might be the Autoregressive Integrated Moving Average (ARIMA) BID4 BID31 BID8 BID16. Taking seasonality into consideration, BID4 proposed the Seasonal ARIMA. The Holt-Winters method BID30 ) is also very popular by using exponential smoothing. State space model BID10 BID24 BID5 also attracts much attention, which is a linear function of an underlying Markov process plus additive noise. Exponential Smoothing State Space Model (ETS) decomposes times series into error, trend, seasonal that change over time. Recently, deep learning is applied for time-series trend learning using LSTM BID26, bidirectional dynamic Boltzmann machine BID23 is applied for time-series long-term dependency learning, and coherent probabilistic forecast BID25 ) is proposed for a hierarchy or an aggregation-level comprising a set of time series. Orthogonal to these works, this paper focuses on robust ways of time series forecasting in presence of change points and anomalies. In Internet time series forecasting, Google develops the Bayesian structure time series (BSTS) model BID5 BID24 to capture the trend, seasonality, and similar components of the target series. Recently, Facebook proposes the Prophet approach BID27 based on a decomposable model with interpretable parameters that can be intuitively adjusted by analyst. However, as in the DAU example, some special events like Christmas Holiday or President Election, newly launched apps or features, may cause short period or long-term change of DAU, leading to weird forecasting of those traditional models. The aforementioned special cases are well known as• Anomaly points. 
The items, events or observations that do not conform to an expected pattern or to other items in the dataset, leading to a sudden spike or decrease in the series.
• Change points. A market intervention, such as a new product launch or the onset of an advertising (or ad) campaign, may lead to a level change of the original series.
Time series forecasting without change point and anomaly detection and adjustment may also lead to unreliable forecasts, since these models might learn the abrupt changes in the past. There is a literature on detecting anomaly or change points individually; examples can be found in BID29; BID22; BID3; BID21 BID29. However, the aforementioned change point detection models do not support detection in the presence of seasonality, while the presence of trend or change points is not handled by the anomaly detection models. Most importantly, there is a discrepancy between anomaly/change point detection and adjustment, and the commonly used manual adjustment can be somewhat arbitrary. Unfortunately, the forecasting gap caused by anomaly and change points has, to the best of our knowledge, not been given full attention, and no good solution has been found so far. This paper is strongly motivated by bridging this gap. In this paper, to overcome the limitation of most (if not all) current models that anomaly points and change points are not properly considered, we develop a state space time series forecasting model in the Bayesian framework that can simultaneously detect anomaly and change points and perform forecasting. The learned structural information related to anomaly and change points is automatically incorporated into the forecasting process, which naturally enhances the model prediction based on the feedback of the state-space model. To solve the resultant optimization problem, an iterative algorithm based on Bayesian approximate inference with Markov chain Monte Carlo (MCMC), Kalman filter and Kalman smoothing is proposed. The novel model can explicitly capture the structure of change points, anomaly points, trend and seasonality, as well as provide distributions and forecasting intervals thanks to the Bayesian forecasting framework. Both synthetic and real data sets show the better performance of the proposed model in comparison with existing baselines. Moreover, our proposed model outperforms state-of-the-art models in identifying anomaly and change points. To summarize, our work has the following contributions.
• We propose a robust Bayesian state-space time series forecasting model that is able to explicitly capture the structures of change points and anomalies (which are generally ignored in most current models), and therefore automatically adapts its forecasts by incorporating the prior information of trend and seasonality, as well as change points and anomalies, using state space modeling. Due to the enhanced descriptive capability of the model, the results of model prediction and of anomaly and change point detection are mutually improved.
• To solve the resultant optimization problem, an effective algorithm based on approximate inference using Markov chain Monte Carlo (MCMC) is proposed, with theoretically guaranteed forecasting paths.
• Our proposed method outperforms state-of-the-art methods in time series forecasting in the presence of change points and anomalies, and detects change points and anomalies with high accuracy and low false discovery rate on both tasks, outperforming popular change point and anomaly detection methods.
Our method is flexible enough to capture the structure of time series under various scenarios, with any combination of trend, seasonality, change points and anomalies. Therefore, our method can be applied in many practical settings. The state space time series model BID13 has been one of the most popular models in time series analysis. It is capable of fitting complicated time series structures, including linear trend and seasonality. However, time series observed in real life are almost always contaminated with outliers. Change points, though less frequent, are also widely observed in real time series analysis. Unfortunately, both structures are ignored in the classic state space time series model. In this section, we aim to address this issue by introducing a novel state space time series model. Let y = (y 1, y 2, . . ., y n) be a sequence of time series observations with length n. The ultimate goal is to forecast (y n+1, y n+2, . . .). The accuracy of forecasting relies on a successful decomposition of y into its components. Apart from the residuals, we assume the time series is composed of trend, seasonality, change points and anomaly points. In a nutshell, we have an additive model with time series = trend + seasonality + change point + anomaly point + residual. As in the classical state space model, we have an observation equation and transition equations to model y and the hidden variables. We use µ = (µ 1, µ 2, . . ., µ n) to model trend, and use γ = (γ 1, γ 2, . . ., γ n) to model seasonality. We use a binary vector z a = (z a 1, z a 2, . . ., z a n) to indicate anomaly points. Then we have Observation equation: DISPLAYFORM0 The deviation between the observation y t and its "mean" µ t + γ t is modeled by ε t and o t, depending on the value of z a t. If z a t = 1, then y t is an anomaly point; otherwise it is not. Distinguished from the residuals ε = (ε 1, ε 2, . . ., ε n), the anomaly is captured by o = (o 1, o 2, . . ., o n), which has relatively large magnitude. The hidden state variables µ and γ have intrinsic structures. There are two transition equations, for trend and seasonality separately. Transition Equations: Trend: DISPLAYFORM1 DISPLAYFORM2 In Equation, δ = (δ 1, δ 2, . . ., δ n) can be viewed as the "slope" of the trend, measuring how fast the trend changes over time. The change point component is also incorporated in Equation by a binary vector z c = (z c 1, z c 2, . . ., z c n). One may view the change point as one of the five additive components along with trend, seasonality, anomaly points and residuals; here we instead model the change point directly into the trend component. Though differing in formulation, the two choices are equivalent to each other. We choose to model it as in Equation FORMULA1 for simplicity, and for its similarity with the definition of anomaly points in Equation. The seasonality component is presented in Equation. Here S is the length of one season and w = (w 1, w 2, . . ., w n) is the noise for seasonality. The seasonality component is assumed to have almost zero average in each season. The observation equation and transition equations above define how y is generated from all the hidden variables, including change points and anomaly points. We continue to explore this new model under a Bayesian framework. Bayesian methods are widely used in many data analysis fields. They are easy to implement and interpret, and they also have the ability to produce posterior distributions. The Bayesian method for state space time series models has been investigated in BID24 BID5.
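The displayed equations referenced above did not survive extraction (the DISPLAYFORM placeholders). The following is one plausible rendering consistent with the verbal description of the observation and transition structure; treat it as a reconstruction, not the authors' exact notation.

```latex
% One plausible form of the observation and transition equations described in the text:
\begin{align}
  y_t &= \mu_t + \gamma_t + (1 - z^a_t)\,\epsilon_t + z^a_t\, o_t
        && \text{(observation)}\\
  \mu_t &= \mu_{t-1} + \delta_{t-1} + (1 - z^c_t)\, u_t + z^c_t\, r_t
        && \text{(trend with change points)}\\
  \delta_t &= \delta_{t-1} + v_t
        && \text{(slope)}\\
  \gamma_t &= -\textstyle\sum_{s=1}^{S-1} \gamma_{t-s} + w_t
        && \text{(seasonality, near-zero seasonal mean)}
\end{align}
```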
In this section, we also consider Bayesian framework for our novel state space time series model. We assume all the noises are normally distributed DISPLAYFORM0 where σ, σ o, σ u, σ r, σ v, σ w are parameters for standard deviation. As binary vectors, a natural choice is to model anomaly point indicator z a and change point indicator z c to the model them as Bernoulli random variables DISPLAYFORM1 where p a, p c are probabilities for each point to be an anomaly or change point. For simplicity, we denote α t = (µ t, δ t, γ t, γ t−1, . . ., γ t−(S−2) ) to include the main hidden variables (except z a t and z c t) in the transition equations. All the α t are well defined and can be generated from the previous status, except α 1. We denote a 1 to be the parameter for α 1, which can be interpreted as the "mean" for α 1.With Bayesian framework, we are able to represent our model graphically as in FIG2. As shown in FIG2, the only observations are y and all the others are hidden. In this paper, we assume there is no additional information on all the hidden states. If we have some prior information, for example, some points are more likely to be change points, then our model can be easily modified to incorporate such information, by using proper prior. In FIG2, we use squares and circles to classify unknown variables. Despite all being unknown, they actually behave differently according to their own functionality. For those in squares, they behave like turning parameters. Once they are initialized or given, those in circles behaves like latent variables. We call the former "parameters" and the latter "latent variable", as listed in TAB0. The "mean" for the initial trend and seasonality p = (pa, pc)Probabilities for each point to be anomaly or change point σ = (σ, σo, σu, σr, σv, σw) Standard deviationThe discrepancy between these two categories is clearly captured by the joint likelihood function. From FIG2, the joint distribution (i.e., the likelihood function) can be written down explicitly as DISPLAYFORM2 2 ) is the density function for normal distribution with mean x 1 and standard deviation x 2. Here we slightly abuse the notation by using µ 0, δ 0, γ 0, γ −1,..., γ 2−S, which are actually the corresponding coordinates of a 1.As long with other probabilistic graphical models, our model can also be viewed as a generative model. Given the parameters a 1, p, σ, we are able to generate time series. We present the generative procedure as follows. Algorithm 1: Generative Procedure Input: Parameters a 1, σ = (σ, σ o, σ u, σ r, σ v, σ w) and p a, p c, length of time series to generate m Output: Time series y = (y 1, y 2, . . ., y m) 1 Generate the indexes where anomalies or change points occur DISPLAYFORM3 2 Generate all the noises, o, u, r, v, w as independent normal random variables with mean zero and standard deviation σ, σ o, σ u, σ r, σ v, σ w respectively; 3 Generate {α t} m t=1 sequentially by the transition functions in Equation FORMULA1 This section is about inferring unknown variables from y, given the Bayesian setting described in the previous section. The main framework here is to sequentially update each hidden variable by fixing the remaining ones. As stated in the previous section, there are two different categories of unknown variables. Different update schemes need to be used due to the difference in their functionality. For the latent variables, we implement Markov chain Monte Carlo (MCMC) for inference. Particular, we use Gibbs sampler. 
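As a concrete, hedged sketch of the generative procedure (Algorithm 1) described above, the following Python function simulates the model forward under the assumed equation forms; the argument layout (a 1 as a vector of length S+1, sigma as a dict of noise scales) is illustrative.

```python
import numpy as np

def generate_series(a1, sigma, p_a, p_c, m, S=7, rng=None):
    """Sketch of Algorithm 1: simulate trend, slope and seasonality forward,
    flip Bernoulli indicators for change points / anomalies, and emit y.
    a1 = (mu_1, delta_1, gamma_1, ..., gamma_{2-S}); sigma has keys
    'eps', 'o', 'u', 'r', 'v', 'w'."""
    rng = rng or np.random.default_rng()
    mu, delta = a1[0], a1[1]
    season = list(a1[2:2 + (S - 1)])      # most recent S-1 seasonal states
    y = np.empty(m)
    for t in range(m):
        z_c = rng.random() < p_c          # change point indicator
        z_a = rng.random() < p_a          # anomaly indicator
        # trend: ordinary innovation u, or a large shift r at a change point
        mu = mu + delta + rng.normal(0, sigma['r'] if z_c else sigma['u'])
        delta = delta + rng.normal(0, sigma['v'])
        # seasonality: sums to roughly zero over one season
        gamma = -sum(season) + rng.normal(0, sigma['w'])
        season = [gamma] + season[:-1]
        # observation: ordinary noise eps, or a large outlier o at an anomaly
        noise = rng.normal(0, sigma['o'] if z_a else sigma['eps'])
        y[t] = mu + gamma + noise
    return y
```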
We will elaborate the details of the updates in the following sections. In this section, we focus on updating α, assuming all the other hidden variables are given and fixed. The essence of the Gibbs sampler is to sample from the posterior distribution p a1,p,σ (α|y, z). This can be achieved by a combination of Kalman filter, Kalman smoothing and the so-called "fake-path" trick. We provide some intuitive explanation here and refer the readers to BID10 for the detailed implementation. Kalman filter and Kalman smoothing are classic algorithms for Bayesian inference in signal processing and pattern recognition. They are closely related to other algorithms, especially message passing. The Kalman filter collects information forwards to obtain E(α t |y 1, y 2, . . ., y t), while Kalman smoothing distributes information backwards to obtain E(α t |y). However, the combination of Kalman filter and Kalman smoothing is not enough, as it only gives the expectations of the marginal distributions {E(α t |y)} n t=1, instead of the joint distribution required for the Gibbs sampler. To address this issue, we can use the "fake-path" trick described in Brodersen et al. FORMULA0; BID10. The main idea underlying this trick lies in the fact that the covariance structure of p(α t |y) does not depend on the means. If we are able to obtain the covariance in some other way, then we can add it to {E(α t |y)} n t=1 to obtain a sample from p(α|y). This trick involves three steps. Note that all the other hidden variables z, p, σ are given.
1. Pick some vector ã 1, and generate a sequence of time series ỹ from it by Algorithm 1. In this way, we also observe α̃.
2. Obtain {E(α t |ỹ)} n t=1 from ỹ by Kalman filter and Kalman smoothing.
3. We use {α̃ t − E(α t |ỹ) + E(α t |y)} n t=1 as our sample from the conditional distribution.
In this section, we update z by the Gibbs sampler, assuming α, a 1, p, σ are all given and fixed. We need to obtain the conditional distribution p a1,p,σ (z|y, α). Note that in the graphical model described in Section 2, the coordinates of z a and z c are still independent Bernoulli random variables, but possibly with different success probabilities. Thus, we can carry out the calculation point by point. For example, for the anomaly indicator of the t-th point, we have DISPLAYFORM0 . The prior on z a t is P(z a t = 1) = p a and P(z a t = 0) = 1 − p a. Let p a t = P(z a t = 1|y, α). Direct calculation leads to DISPLAYFORM1 This equality holds for all t = 1, 2,..., n. Similarly, for change point detection, let p c t = P(z c t = 1|y, α). As mentioned above, all the coordinates in z are still independent Bernoulli random variables conditioned on y, α. Thus, for the Gibbs sampler, we can generate z by sampling independently with DISPLAYFORM2 For change point detection, we have an additional segment control step. After obtaining {z c t} n t=1 as mentioned above, we need to make sure that the change points detected satisfy an additional requirement on the length of the segment between two consecutive change points. This issue arises from the ambiguity between the definitions of change points and anomaly points. For example, consider a time series whose values are all 0 except for three consecutive 1s. We can view it as having two change points, one increasing the trend by 1 and the other decreasing it by 1. Alternatively, we can also argue that the three 1s in this time series are anomalies, though next to each other. One way to address this ambiguity is to define a minimum segment length (denoted as l).
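Before the segment control step is discussed further, here is a small sketch of the pointwise posterior update just derived for the anomaly indicator, assuming the residual y t − (µ t + γ t) is compared under the two noise scales; the change point update is analogous with r t versus u t. Function and argument names are illustrative.

```python
from scipy.stats import norm

def anomaly_posterior(resid, p_a, sigma_eps, sigma_o):
    """Posterior probability that a point is an anomaly: weigh the residual
    under the outlier scale sigma_o (prior weight p_a) against the ordinary
    noise scale sigma_eps (prior weight 1 - p_a)."""
    w1 = p_a * norm.pdf(resid, loc=0.0, scale=sigma_o)
    w0 = (1.0 - p_a) * norm.pdf(resid, loc=0.0, scale=sigma_eps)
    return w1 / (w1 + w0)
```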
In this toy example, if we set the minimum length to be 4, then they are anomaly points; if we set it to be 3, then we regard them to be change points. But a more complicated criterion is needed than using minimum length as the time series usually own much more complex structure than this toy example. Consider time series FIG0 ) and the minimum time series parameter = 3. It is reasonable to view it with one change point with increment 1, and the two -1s should be regarded as anomalies. As a combination of all these factors, we propose the following segment control method. A default value for the parameter is the length of seasonality, i.e., = S.Algorithm 2: Segment control on change points Input: change point binary vector z c,trend µ, standard deviation for outliers σ r, change point minimum segment Output: change point binary vector z The parameters σ, a 1 and p need both initialization and update. We have different initializations and update schemes for each of them. For all the standard deviations, once we obtain α and z, we update them by taking the empirical standard deviation correspondingly. For σ δ and σ γ, the calculation is straightforward as they only involve δ and γ respectively. For σ, σ o, σ u and σ r, it is a bit more involved due to z. Nevertheless, we can obtain the following update equations for all of them: DISPLAYFORM0 DISPLAYFORM1 Note that in some iterations, when there is no change point or anomaly detected in z, then the updates above for σ o, σ r are not well-defined. In those cases, we simply let them remain the same. To initialize σ, we let them all equal to the standard deviation of y. For a 1, we initialize it by letting its first coordinate to be equal to the average of y 1, y 2,..., y S, and all the remaining coordinates to be equal to 0. Since a 1 can be interpreted as the mean vector of α 1, in this way the trend is initialized to be matched up with average of the first season, and the slope and seasonality are initialized to be equal to 0. We update a 1 by using information of α. We let the first two coordinates (trend and slope) of a 1 to be equal to those of α 1, and we let the remaining coordinates (seasonality) of a 1 to be equal to those of α S+1. The reason why we do not let a 1 to be equal to α 1 entirely is due to the consideration on convergence and robustness. Since we initialize the seasonality part in a 1 as 0, it will remain 0 if we let a 1 equals α 1 entirely (due to the mechanism how we update α 1 as described in Section 4.1. We can avoid such trouble via using α S+1 .For p, we initialize them to be equal to 1/n. If we have additional information on the number of change points or anomaly points, we can initiate them with different values, for example, 0.1/n, or 10/n. We can update p after obtaining z, but we choose not to, also for the sake of robustness. In the early iterations when the algorithm is far from convergence, it is highly possible that z a or z c may turn out to be all 0. If we update p, say, by taking the proportion of change point or anomaly points in z. Then p a or p c might be 0, and it may get stuck in 0 in the remaining iterations. Once we infer all the latent variables α, z and tune all the parameters p, a 1, σ, we are able to forecast the future time series y future . From the graphical model described in Section 3, the future forecasting only involves α n instead of the whole α. Note that we assume that there exists no change point and anomaly point in the future. 
This is reasonable as in most cases we have no additional information on the future time series. Given α n and σ, we can use our generative procedure (i.e., Algorithm 1) to generate future time series y future. We can further integrate out α n to obtain the posterior predictive distribution p σ (y future |y). The forecast of the future time series is not deterministic. There are two sources of randomness in y future. One comes from the inference of α n (and also σ) from y. Under the Bayesian framework in Section 3, we have a posterior distribution over α n rather than a single point estimate. The second one comes from the forecasting function itself. The forecasting involves intrinsic noise such as ε t, u t, v t and w t. Thus, the predictive density function p σ (y future |y, α n) will lead to different paths even with fixed σ and α n. In this way we are able to obtain a distribution and predictive intervals for forecasting. We also suggest taking the average of multiple forecasting paths as the posterior mean for the forecast. The average of multiple forecasting paths (denoted as ȳ future), if the number of paths is large enough, always takes the form of a combination of linear trend and seasonality. This can be observed in both our synthetic data (Section 7) and real data analysis (Section 8). This seems surprising at first glance, but makes sense intuitively. Under our assumption, we have no information on the future, and thus a reliable way to forecast the future is to use the information collected at the end of the observed time series, i.e., the trend µ n, the slope δ n and the seasonality structure. Theorem 1 gives a mathematical explanation of the linearity of ȳ future, in both mean and standard deviation. Theorem 1. Let N be the number of future time series paths we generate from Algorithm 1. Let m be the number of points we are going to forecast. Denote {y (i) future} N i=1 to be the future paths. Define ȳ future = (ȳ n+1, ȳ n+2, . . ., ȳ n+m) to be the average such that DISPLAYFORM0 Then for all j = 1, 2,..., m, ȳ n+j is normally distributed with mean and variance as DISPLAYFORM1 Consequently, for all j = 1, 2,..., m, E[ȳ n+j] is a linear form with respect to j, and the standard deviation of ȳ n+j also takes an approximately linear form with respect to j. Proof. Recall that α n, σ are given and fixed, and we assume there is no change point or anomaly in the future time series. Equation leads to δ n+j = δ n + Σ l=1..j v n+l, which implies that DISPLAYFORM2 For the seasonality part, simple linear algebra together with Equation 3 leads to γ n+j = γ n−S+(j mod S) + Σ l w n+l. Thus, DISPLAYFORM3 Due to the independence and Gaussian distribution of all the noises, ȳ n+j is also normally distributed and its mean and variance can be calculated accordingly. Our proposed method can be divided into three parts: initialization, inference, and forecasting. Sections 4 and 5 provide detailed explanations and reasoning for each of them. We present a whole picture of our proposed methodology in Algorithm 3. Algorithm 3: Proposed Algorithm Input: Observed time series y = (y 1, y 2, . . ., y n), seasonality length S, length of time series for forecasting m, number of predictive paths N, change point minimum segment l Output: Change point detection z c, anomaly points z a, forecasting y future = (y n+1, y n+2, . .
., y n+m) and its distribution or predictive intervals Part I: Initialization; 1 Initialize σ, σ o, σ u, σ r, σ v, σ w all with the empirical standard deviation of y; 2 Initialize a 1 such that its first coordinate equals to the average of (y 1, y 2, . . ., y S) and all the remaining S coordinates with 0; 3 Initialize p a and p c by 1/n. Then generate z a and z c as independent Bernoulli random variables with success probability p a and p c respectively; DISPLAYFORM0 Infer α by Kalman filter, Kalman smoothing and "fake-path" trick described in Section 4.1; Update z a and z c by sampling from DISPLAYFORM0, where the success probability {p Update σ by Equation FORMULA11 to; Update a 1 such that its first two coordinates equal to the those of α 1 and the remaining (S − 1) coordinates equals to those of α S+1; Calculate the likelihood function L a1,p,σ (y, α, z) given in Equation; end Part III: Forecasting; 10 With a n and σ, use the generate procedure in Algorithm 1 to generate future time series y future with length m. Repeat the generative procedure to obtain multiple future paths y DISPLAYFORM0 future,..., yfuture; 11 Combine all the predictive paths give the distribution for the future time series forecasting. If needed, calculate the point-wise quantile to obtain predictive intervals. Use the point-wise average as our final forecasting . It is worth mentioning that our proposed methodology is downward compatible with many simpler state space time series models. By letting p c = 0, we assume there is no change point in the time series. By letting p a = 0, we assume there is no anomaly point in the time series. If both p c and p a are set to be 0, then our model is reduced to the classic state space time series model. Also, the seasonality and slope can be removed from our model, if we know there exists no such structure in the data. In this section, we study the synthetic data generated from our model. We let S = 7 and provide values for σ and a 1. The change points and anomaly points are randomly generated. We use our generative procedure (Algorithm 1) to generate time series with total length 500 by fixed parameters. The first 350 points will be used as training set and the remaining 150 points will be used to evaluate the performance of forecasting. When generating, we let the time series have weekly seasonality with S = 7. For σ we have σ = 0.1, σ u = 0.1, σ v = 0.0004, σ w = 0.01, σ r = 1, σ o = 4. For α 1 we have value for µ as 20, value for δ as 0, and value for seasonality as (1, 2, 4, −1, −3, −2)/10. For p we have p c = 4/350 and p a = 10/350. Despite that, to make sure that at least one change point is in existence, we force z c 330 = 1 and r 330 = 2. That is, for each time series we generate, its 330th point is a change point with the mean shifted up by 3. Also to be consistence with our assumption, we force z c i = z a i = 0, ∀351 ≤ i ≤ 500 so there exists no change point or anomaly point in the testing part. The top panel of FIG9 shows one example of synthesis data. The blue line marks the separation between training and testing set. The blue dashed line indicates the locations for the change point, while the yellow dots indicate the positions of anomaly points. Also see FIG9 for illustration on the returned by implementing our proposed algorithm on the same dataset. The red line gives the fitting in the first 350 points and forecasting in the last 150 points. 
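Part III of Algorithm 3 above (forecasting by simulating and averaging N future paths, with pointwise quantiles for the predictive interval) can be sketched as follows, reusing the illustrative generate_series helper from the earlier sketch; as before, this is an assumed layout, not the authors' implementation.

```python
import numpy as np

def forecast(alpha_n, sigma, m, n_paths=200, rng=None):
    """Roll the generative procedure forward from the last state alpha_n,
    with p_a = p_c = 0 (no future change points or anomalies), then report
    the pointwise mean and a 90% predictive interval."""
    rng = rng or np.random.default_rng()
    paths = np.stack([
        generate_series(alpha_n, sigma, p_a=0.0, p_c=0.0, m=m, rng=rng)
        for _ in range(n_paths)
    ])
    mean = paths.mean(axis=0)                       # point forecast: pointwise average
    lo, hi = np.quantile(paths, [0.05, 0.95], axis=0)
    return mean, lo, hi
```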
The change points detected are marked with vertical red dotted line, and the anomaly detected are flagged with purple squares. FIG9 shows that on this dataset, our proposed algorithm yields perfect detection on both change points and anomaly points. In FIG9, the gray part indicates the 90% predictive interval for forecasting. We run our generative model 100 times to produce 100 different time series, and implement multiply methods on each of them, and aggregate the together for comparison. We include the following methodologies. For time series forecasting, we compare our method against Bayesian Structural Time Series (BSTS) BID24 BID5 ), Seasonal Decomposition of Time Series by Loess (STL) BID7 ), Seasonal ARIMA BID4, Holt-Winters , Exponential Smoothing State Space Model (ETS) ), and the Prophet R package by BID27. We evaluate the performances by mean absolute percentage error (MAPE), mean square error (MSE) and mean absolute error (MAE) on forecasting set. The mathematical definition of these three criterion is given as follows. Let x 1, x 2,..., x n be the true value andx 1,x 2,...,x n be the estimation or predictive values. Then we have DISPLAYFORM0 The comparison of our proposed algorithm and the aforementioned algorithms are included below in TAB1. As we mentioned in Section 6, our algorithm is downward compatible with the cases ignoring the existence of change point or anomaly, by setting p c = 0 or p a = 0. We also run proposed algorithm on the synthetic data with p c = 0 (no change point), or p a = 0 (no anomaly point), or p c = p a = 0 (no change and anomaly point), for the purpose of numeric comparison. From TAB1 it turns out that our proposed algorithm achieves the best performance compared to other existing methods. Our proposed algorithm also performs better compared with the cases ignoring change point or anomaly point. This is a convincing evidence on the importance of incorporating both change point structure and anomaly point structure when modeling, for time series forecasting. We also compare our proposed method with other existing change point detection methods and anomaly detection algorithm with respect to the performance of detections. We evaluate the performance by two criterions: True Positive Rate (TPR) and False Positive (FP). TPR measures the percentage of change points or anomalies to be correctly detected. FP count the number of points wrongly detected as change points or anomaly points. The mathematical definitions of TPR and FP are as follows. Let (z 1, z 2, . . ., z n) be the true binary vector for change points or anomalies, and (ẑ 1,ẑ 2, . . .,ẑ n) are the estimated ones. Then DISPLAYFORM1 From the definition, we can see high TPR and low FP means the algorithm has better performance in detection. The comparison on change point detection is shown in TAB2. We compare our against three popular change point detection methods: Bayesian Change Point (BCP) BID3, Change-Point (CP) BID21 and Breakout (twitter, 2017). From TAB2 our proposed method outperforms the most of the others by both TPR and FP. We have smaller TPR compared to CP, but we are better in FP. In TAB3, we also compare the performance of our algorithm on anomaly detection with three existing common anomaly detection methods: the AnomalyDetection package by BID29, RAD by BID22 and Tsoutlier by BID6. The comparison is listed in TAB3. We can see our method also outperforms most of the others with respect to anomaly detection, by both TPR and FP. 
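For reference, the forecasting and detection criteria discussed above can be computed as in the short sketch below. The function names are ours, and since the paper's own equations are replaced by placeholders in the text, the definitions follow the standard formulas implied by the surrounding description rather than a verbatim copy.

```python
import numpy as np

def mape(x, x_hat):
    """Mean absolute percentage error between true values x and predictions x_hat."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.mean(np.abs((x - x_hat) / x))

def mse(x, x_hat):
    """Mean squared error."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.mean((x - x_hat) ** 2)

def mae(x, x_hat):
    """Mean absolute error."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.mean(np.abs(x - x_hat))

def tpr_fp(z_true, z_hat):
    """TPR: fraction of true change/anomaly points that are detected.
    FP: number of points wrongly flagged as change/anomaly points."""
    z_true, z_hat = np.asarray(z_true, bool), np.asarray(z_hat, bool)
    tpr = (z_true & z_hat).sum() / max(int(z_true.sum()), 1)
    fp = int((~z_true & z_hat).sum())
    return tpr, fp
```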
RAD has slightly better TPR but its FP is much worse compared with ours. In this section, we implement our proposed method on real-world datasets. We also compare its performance against other existing time series forecasting methodologies. We consider two datasets: one is a public dataset called the Well-log dataset, and the other is an unpublished Internet traffic dataset. The bottom panels of FIG10 and FIG11 give the results of our proposed algorithm. The blue line separates the training set and the testing set. We use a red line to show our fitting and forecasting results, vertical red dashed lines to indicate change points and purple dots to indicate anomaly points. The gray part shows the 90% prediction interval. The Well-log dataset BID11 BID20 was collected when drilling a well. It measures the nuclear magnetic response, which provides geophysical information to analyze the structure of rock surrounding the well. This dataset is public and available online. It has 4050 points in total. We split it such that the first 3000 points are used as the training set and the last 1000 points are used to evaluate the forecasting performance. From FIG10, it is obvious that there exists no seasonality or slope structure in the dataset. This motivates us not to include these two components in our model. We implement our proposed algorithm without seasonality and slope, and compare the forecasting performance with other methods in TAB4. Our method outperforms BSTS, ARIMA, ETS and Prophet. However, in TAB4 the performance can be slightly improved if we ignore the existence of anomaly points by letting p a = 0. This may be caused by model mis-specification, as the data may be generated in a way not entirely captured by our model. Nevertheless, the performances of our method with and without considering anomaly points are comparable to each other. In this dataset there is no ground truth for the locations, or even the existence, of change points and anomaly points. However, from the bottom panel of FIG10, there are some obvious changes in the sequence and they are all successfully captured by our algorithm. Our second real dataset is Internet traffic data acquired from a major tech company (see FIG11). It is daily traffic data, with seasonality S = 7. We use the first 800 observations as the training set and evaluate the forecasting performance on the remaining 265 points. The bottom panel of FIG11 shows the results from implementing our algorithm. We also compare the forecasting performance of our proposed algorithm with other existing methods, shown in TAB5. We can also see that our algorithm outperforms all the other algorithms with respect to MAPE, MSE and MAE. From FIG11, our proposed algorithm identifies one change point (the 576th point, indicated by the vertical red dashed line), which is confirmed by external information to be exactly the only change point in this time series, caused by a change of counting methods. Thus, we achieve perfect change point detection on this Internet traffic data. For this Internet traffic dataset, since we have ground truth for the change point, we can compare the change point detection performance of different methodologies. BCP returns a posterior distribution, which peaks at the 576th point with posterior probability 0.5; it also returns many other points with posterior probability around 0.1. CP returns 4 change points, where the 576th point (the only true one) is one of them. Breakout returns 8 change points without including the 576th point. 
To sum up, our proposed method achieves the best change point detection in this real dataset. Compared to the aforementioned models, our work differs in its Bayesian modeling, which samples from the posterior to estimate hidden components given independent Bernoulli priors on change points and anomalies. | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rJLTTe-0W | We propose a novel state space time series model with the capability to capture the structure of change points and anomaly points, so that it has a better forecasting performance when there exist change points and anomalies in the time series. |
The complex world around us is inherently multimodal and sequential (continuous). Information is scattered across different modalities and requires multiple continuous sensors to be captured. As machine learning leaps towards better generalization to real world, multimodal sequential learning becomes a fundamental research area. Arguably, modeling arbitrarily distributed spatio-temporal dynamics within and across modalities is the biggest challenge in this research area. In this paper, we present a new transformer model, called the Factorized Multimodal Transformer (FMT) for multimodal sequential learning. FMT inherently models the intramodal and intermodal (involving two or more modalities) dynamics within its multimodal input in a factorized manner. The proposed factorization allows for increasing the number of self-attentions to better model the multimodal phenomena at hand; without encountering difficulties during training (e.g. overfitting) even on relatively low-resource setups. All the attention mechanisms within FMT have a full time-domain receptive field which allows them to asynchronously capture long-range multimodal dynamics. In our experiments we focus on datasets that contain the three commonly studied modalities of language, vision and acoustic. We perform a wide range of experiments, spanning across 3 well-studied datasets and 21 distinct labels. FMT shows superior performance over previously proposed models, setting new state of the art in the studied datasets. In many naturally occurring scenarios, our perception of the world is multimodal. For example, consider multimodal language (face-to-face communication), where modalities of language, vision and acoustic are seamlessly used together for communicative intent . Such scenarios are widespread in everyday life, where continuous sensory perceptions form multimodal sequential data. Each modality within multimodal data exhibits exclusive intramodal dynamics, and presents a unique source of information. Modalities are not fully independent of each other. Relations across two (bimodal) or more (trimodal, . . .) of them form intermodal dynamics; often asynchronous spatio-temporal dynamics which bind modalities together. Learning from multimodal sequential data has been an active, yet challenging research area within the field of machine learning (Baltrušaitis et al., 2018). Various approaches relying on graphical models or RNNs have been proposed for multimodal sequential learning. Transformer models are a new class of neural models that rely on a carefully designed non-recurrent architecture for sequential modeling . Their superior performance is attributed to a self-attention mechanism, which is uniquely capable of highlighting related information across a sequence. This self-attention is a particularly appealing mechanism for multimodal sequential learning, as it can be modified into a strong neural component for finding relations between different modalities (the cornerstone of this paper). In practice, numerous such relations may simultaneously exist within multimodal data, which would require increasing the number of attention units (i.e. heads). Increasing the number of attentions in an efficient and semantically meaningful way inside a transformer model, can boost the performance in modeling multimodal sequential data. In this paper, we present a new transformer model for multimodal sequential learning, called Factorized Multimodal Transformer (FMT). 
FMT is capable of modeling asynchronous intramodal and intermodal dynamics in an efficient manner, within one single transformer network. It does so by specifically accounting for possible sets of interactions between modalities (i.e. factorizing based on combinations) in a Factorized Multimodal Self-attention (FMS) unit. We evaluate the performance of FMT on multimodal language: a challenging type of multimodal data which exhibits idiosyncratic and asynchronous spatio-temporal relations across language, vision and acoustic modalities. FMT is compared to previously proposed approaches for multimodal sequential learning over multimodal sentiment analysis (CMU-MOSI) , multimodal emotion recognition (IEMOCAP) , and multimodal personality traits recognition (POM) . The related works to studies in this paper fall into two main areas. Modeling multimodal sequential data is among the core research areas within the field of machine learning. In this area, previous work can be classified into two main categories. The first category of models, and arguably the simplest, are models that use early or late fusion. Early fusion uses feature concatenation of all modalities into a single modality. Subsequently, the multimodal sequential learning task is treated as a unimodal one and tackled using unimodal sequential models such as Hidden Markov Models (HMMs) , Hidden Conditional Random Fields (HCRFs), and RNNs (e.g.). While such models are often successful for real unimodal data (i.e. not feature concatenated multimodal data), they lack the necessary components to deal with multimodal data often causes suboptimal performance . Contrary to early fusion which concatenates modalities at input level, late fusion models have relied on learning ensembles of weak classifiers from different modalities (; ;). Hybrid methods have also been used to combine early and late fusion together (; ; ;). The second category of models comprise of models specifically designed for multimodal data. Multimodal variations of graphical models have been proposed, including Multi-view HCRFs where the potentials of the HCRF are changed to facilitate multiple modalities Song et al. (2012; . Multimodal models based on LSTMs include , Memory Fusion Network (a) with its recurrent and graph variants c), as well as Multi-attention Recurrent Networks (b). Studies have also proposed generic fusion techniques that can be used in various models including Tensor Fusion and its approximate variants (Liang et al.;, as well as Compact Bilinear Pooling (; ;). Many of these models, from both first and second categories, are used as baselines in this paper. Transformer is a non-recurrent neural architecture designed for modeling sequential data . It has shown superior performance across multiple NLP tasks when compared to RNN-based or convolutional architectures . This superior performance of Transformer model is largely credited to a self-attention; a neural component that allows for efficiently extracting both short and long-range dependencies within its input sequence space. Transformer models have been successfully applied to various areas within machine learning including NLP and computer vision (; ;). Extending transformer to multimodal domains, specially for structured multimodal sequences is relatively understudied; with the previous works mainly focusing on using transformer models for modality alignment using cross-modal links between single transformers for each modality . 
In this section, we outline the proposed Factorized Multimodal Transformer 1 (FMT). Figure 1 shows the overall structure of the FMT model. The input first goes through an embedding layer, followed by multiple Multimodal Transformer Layers (MTL). Each MTL consists of multiple Factorized Multimodal Self-attentions (FMS). FMS explicitly accounts for intramodal and intermodal factors within its multimodal input. S1 and S2 are two summarization networks. They are necessary components of FMT which allow for increasing the number of attentions efficiently, without overparameterization of the FMT. Layer (MTL) Consider a multimodal sequential dataset with constituent modalities of language, vision and acoustic. The modalities are denoted as {L, V, A} from hereon for abbreviation. After resampling using a reference clock, modalities can follow the same frequency. Essentially, this resampling is often based on word timestamps (i.e. word alignment). Subsequently, the dataset can be denoted as: x i ∈ R Ti×dx, y i ∈ R dy are the inputs and labels. is a triplet of language, visual and audio inputs for timestamp t in i-th datapoint. N is the total number of samples within the dataset, and T i the total number of timestamps within i-th datapoint. Zero paddings (on the left) can be used to unify the length of all sequences to a desired fixed length A denotes the dimensionality of input at each timestep, which in turn is equal to the sum of dimensionality of each modality. d y denotes the dimensionality of the associated labels of a sequence. At the first step within the FMT model, each modality is passed to a unimodal embedding layer with the operation Positional embeddings are also added to the input at this stage. The output of the embeddings collectively form We denote the dimensionality of this output as e x = e L + e V + e A. After the initial embedding, FMT now consists of a stack of Multimodal Transformer Layers (MTL). MTL 1) captures factorized dynamics within multimodal data in parallel, and 2) aligns the timeasynchronous information both within and across modalities. Both of these are achieved using multiple Factorized Multimodal Self-attentions (FMS), each of which has multiple specialized selfattentions inside. The high dimensionality of the intermediate attention outputs within MTL and FMS is controlled using two distinct summarization networks. The continuation of this section provides detailed explanation of the inner-operations of MTL. ·,i) denote the input to the k-th MTL. We assume a total of K MTLs in a FMT (indexed 0 . . . K − 1), with k = 0 being the output of the embedding layer (input to k = 0 MTL). The input of MTL, immediately goes through one/multiple 2 Factorized 1 Code: github.com/removed-for-blind-review, Public Data: https://github.com/A2Zadeh/CMUMultimodalSDK 2 Multiple FMS have the same time-domain receptive field, which is equal to the length of the input. This is contrary to the implementations of the transformer model that split the sequence based on number of attention heads. Output (The grayed areas are for demonstration purposes, and not a part of the implementation. Figure 2 . For 3 modalities 3, there exist 7 distinct attentions inside a single FMS unit. Each attention has a unique receptive field with respect to modalities f ∈ F = {L,V,A,LV,LA,VA,LVA}; essentially denoting the modalities visible to the attention. Using this factorization, FMS explicitly accounts for possible unimodal, bimodal and trimodal interactions existing within the multimodal input space. 
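The factorization over modality subsets can be made concrete with a small sketch: for the three modalities {L, V, A} it enumerates the 7 non-empty combinations and builds, for each, a mask over the concatenated feature axis that marks which modality dimensions the corresponding attention may see. The dimension sizes and all names below are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations
import numpy as np

def modality_factors(modalities=("L", "V", "A")):
    """All non-empty subsets of the modalities: 7 factors for 3 modalities."""
    return [set(c) for r in range(1, len(modalities) + 1)
            for c in combinations(modalities, r)]

def factor_feature_masks(dims={"L": 20, "V": 20, "A": 20}):
    """For each factor, a boolean mask over the concatenated feature dimensions
    indicating which modality embeddings are visible to that attention."""
    order = list(dims)
    total = sum(dims.values())
    offsets, start = {}, 0
    for m in order:
        offsets[m] = (start, start + dims[m])
        start += dims[m]
    masks = {}
    for factor in modality_factors(tuple(order)):
        mask = np.zeros(total, dtype=bool)
        for m in factor:
            lo, hi = offsets[m]
            mask[lo:hi] = True
        masks[frozenset(factor)] = mask
    return masks

masks = factor_feature_masks()
print(len(masks))  # 7 factors: L, V, A, LV, LA, VA, LVA
```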
All attentions within a FMS extend to the length of the sequence, and therefore can extract asynchronous relations within and across modalities. For f ∈ F, each attention within a single FMS unit is controlled by the Key K f, Query Q f, and Value V f all with dimensionality R T ×T; parameterized respectively using affine maps W K f, W Q f, and W V f. After the attention is applied using Key, Query and Value operations , the output of each of the attentions goes through a residual addition with its perceived input (input in the attention receptive field), followed by a normalization. The output of the FMS contains the aligned and extracted information from the unimodal, bimodal and trimodal factors. This output is high-dimensional; essentially R 4×T ×ex (each dimension within input of shape T ×e x is present in 4 factors). Our goal is to reduce this high-dimensional data using a mapping from R 4×T ×ex → R T ×ex. Without overparameterizing the FMS, in practice, we observed this mapping can be efficiently done using a simple 1D convolutional network S1 M∈{L,V,A} (·); R 4 → R. Internally, S1(·) maps its input to multiple layers of higher dimensions and subsequently to R. Using language as an example, S1 L moves across language modality dimensions e L for t = 1... T and summarizes the information across all the factors. The output of this summarization applied on all modality dimensions and timesteps, is the output of FMS, which has the dimensionality R T ×ex. In practice, there can be various possible unimodal, bimodal or trimodal interactions within a multimodal input. For example, consider multiple sets of important interactions between L and V (e.g. smile + positive word, as well as eyebrows up + excited phrase), all of which need to be highlighted and extracted. A single FMS may not be able to highlight all these interactions without diluting its intrinsic attentions. Multiple FMS can be used inside a MTL to efficiently extract diverse multimodal interactions existing in the input data 4. Consider a total of U FMS units inside a MTL. The output of each FMS goes through a feedforward network (for each timestamp t of the FMS output). 73.9/-73.4/-1.040 0.633 MARN (b) 77.1/-77.0/-0.968 0.625 MFN (a) 77.4/-77.3/-0.965 0.632 RMFN 78.4/-78.0/-0.922 0.681 RAVEN 78.0/--/-0.915 0.691 MulT -/83.0 -/82.8 0.87 0.698 FMT (ours) 81.5/83.5 81.4/83.5 0.837 0.744 Table 1: FMT achieves superior performance over baseline models for CMU-MOSI dataset (multimodal sentiment analysis). We report BA (binary accuracy) and F1 (both higher is better), MAE (Mean-absolute Error, lower is better), and Corr (Pearson Correlation Coefficient, higher is better). For BA and F1, we report two numbers: the number on the left side of "/" is calculated based on approach taken by Zadeh et al. (2018b), and the right side is by. The output of this feedfoward network is residually added with its input, and subsequently normalized. The feedforward network is the same across all U FMS units and timestamps t. Subsequently, the dimensionality of the output of the normalizations collectively is R U ×T ×ex. Similar to operations performed by S1, a secondary summarization network S2 M∈{L,V,A} (·); R U → R can be used here. S2 is also a 1D convolutional network that moves across modality dimensions and different timesteps to map R U ×T ×ex to R T ×ex. The output of the secondary summarization network is the final output of MTL, and denoted asx be the output of last MTL in the stack. 
For supervision, we feed this input one timestamp at a time as input to a Gated Recurrent Unit (GRU) . The prediction is conditioned on output at timestamp t = T of the GRU, using an affine map to d y. In this section, we discuss the experimental methodology including tasks, datasets, computational descriptors, and comparison baselines. The following inherently multimodal tasks (and accompanied datasets) are studied in this paper. All the tasks are related to multimodal language: a complex and idiosyncratic sequential multimodal signal, where semantics are arbitrarily scattered across modalities . The first benchmark in our experiments is multimodal sentiment analysis, where the goal is to identify a speaker's sentiment based on the speaker's display of verbal and nonverbal behaviors. We use the well-studied CMU-MOSI (CMU Multimodal Opinion Sentiment Intensity) dataset for this purpose . There are a total of 2199 data points (opinion utterances) within CMU-MOSI dataset. The dataset has real-valued sentiment intensity annotations in the range [−3, +3]. It is considered a challenging dataset due to speaker diversity (1 video per distinct speaker), topic variations and low-resource setup. The second benchmark in our experiments is multimodal emotion recognition, where the goal is to identify a speaker's emotions based on the speaker's verbal and nonverbal behaviors. We use the well-studied IEMOCAP dataset . IEMOCAP consists of 151 sessions of recorded dialogues, of which there are 2 speaker's per session for a total of 302 videos across the dataset. We perform experiments for discrete emotions of Happy, Sad, Angry and Neutral (no emotions) -similar to previous works (; 24.1 31.0 31.5 34.5 24.6 25.6 27.6 29.1 MARN (b) 29.1 33.0 --31.5 ---MFN (a) 34.5 35.5 37.4 41.9 34.5 36.9 36.0 37.9 RMFN 37.4 38.4 37.4 -37.4 38.9 38.9 -MulT 34.5 34.5 36.5 38.9 37.4 36.9 37. 30.5 38.9 35.5 37.4 33.0 42.4 27.6 33.0 MARN (b) 36.9 -52.2 --47.3 31.0 44.8 MFN (a) 38. Table 3: FMT achieves superior performance over baseline models in POM dataset (multimodal personality traits recognition). For label abbreviations please refer to Section 4.3. MA denotes multi-class accuracy for-class personality labels (higher is better). The third benchmark in our experiments is speaker trait recognition based on communicative behavior of a speaker. It is a particularly difficult task, with 16 different speaker traits in total. We study the POM dataset which contains 1,000 movie review videos . Each video is annotated for various personality and speaker traits, specifically: Confident (Con), Passionate (Pas), Voice Pleasant (Voi), Dominant (Dom), Credible (Cre), Vivid (Viv), Expertise (Exp), Entertaining (Ent), Reserved (Res), Trusting (Tru), Relaxed (Rel), Outgoing (Out), Thorough (Tho), Nervous (Ner), Persuasive (Per) and Humorous (Hum). The short form of these speaker traits is indicated inside the parentheses and used for the rest of this paper. The following computational descriptors are used by FMT and baselines (all the baselines use the same descriptors in their original respective papers). Language: P2FA forced alignment model is used to align the text and audio at word level. From the forced alignment, the timing of words and sentences are extracted. Word-level alignment is used to unify the modality frequencies. GloVe embeddings are subsequently used for word representation. 
Visual: For the visual modality, the Emotient FACET (iMotions, 2017) is used to extract a set of visual features including Facial Action Units , visual indicators of emotions, and sparse facial landmarks. Acoustic: COVAREP is used to extract the following features: fundamental frequency, quasi open quotient , normalized amplitude quotient, glottal source parameters (H1H2, Rd, Rd conf) , Voiced/Unvoiced segmenting features (VUV) , maxima dispersion quotient (MDQ), the first 3 formants, parabolic spectral parameter (PSP), harmonic model and phase distortion mean (HMPDM 0-24) and deviations (HMPDD 0-12), spectral tilt/slope of wavelet responses (peak/slope), Mel Cepstral Coefficients (MCEP 0-24). The following strong baselines are compared to FMT: MV-LSTM . There are fundamental distinctions between FMT and MulT, chief among them: 1) MulT consists of 6 transformers, 3 cross-modal transformers and 3 unimodal. Naturally this increases the overall model size substantially. FMT consists of only one transformer, with components to avoid overparameterization. 2) FMT sees interactions as undirected (unlike MulT which has L → V and V → L), and therefore semantically combines two attentions in one. 3) MulT has no trimodal factors (which are important according to Section 5). 4) MulT has no direct unimodal path (e.g. only L), as input to unimodal transformers are outputs of cross-modal transformers. 5) All FMT attentions have full time-domain receptive field, while MulT splits the input based on the heads. In their original publication, all the models report 6 the performance over the datasets in Section 4.1, using the same descriptors discussed in Section 4.2. The models in this paper are compared using the following performance measures (depending on the dataset): (BA) denotes binary accuracyhigher is better, (MA5,MA7) are 5 and 7 multiclass accuracy -higher is better, (F1) denotes F1 score -higher is better, (MAE) denotes the Mean-Absolute Error -lower is better, (Corr) is Pearson Correlation Coefficient -higher is better. The hyperparameter space search for FMT (and baselines if retrained) is discussed in Appendix A.1. The of sentiment analysis experiments on CMU-MOSI dataset are presented in Table 1. FMT achieves superior performance than the previously proposed models for multimodal sentiment analysis. We use two approaches for calculating BA and F1 based on negative vs. non-negative sentiment (b) on the left side of /, and negative vs. positive on the right side. MAE and Corr are also reported. For multimodal emotion recognition, experiments on IEMOCAP are reported in Table 2. The performance of FMT is superior than other baselines for multimodal emotion recognition (with the exception of Happy emotion). The of experiments for personality traits recognition on POM dataset are reported in Table 3. We report MA5 and MA7, depending on the label. FMT outperforms baselines across all personality traits. We study the importance of the factorization in FMT. We first remove the unimodal, bimodal and trimodal attentions from the FMT model, ing in 3 alternative implementations of FMT. demonstrates the of this ablation experiment over CMU-MOSI dataset. Furthermore, we use only one modality as input for FMT, to understand the importance of each modality (all other factors removed). We also replace the summarization networks with simple vector addition operation. All factors, modalities, and summarization components are needed for achieving best performance. 
We also perform experiments to understand the effect of number of FMT units within each MTL. Table 5 shows the performance trend for different number of FMT units. The model with 6 number of FMS (42 attentions in total) achieves the highest performance (6 is also the highest number we experimented with). reports the best performance for CMU-MOSI dataset is achieved when using 40 attentions per cross-modal transformer (3 of each, therefore 120 attention, without counting the subsequent unimodal transformers). FMT uses fewer number of attentions than MulT, yet achieves better performance. We also experiment with number of heads for original transformer model and compare to FMT (Appendix A.3). In this paper, we presented the Factorized Multimodal Transformer (FMT) model for multimodal sequential learning. Using a Factorized Multimodal Self-attention (FMS) within each Multimodal Transformer Layer (MTL), FMT is able to model the intra-model and inter-modal dynamics within asynchronous multimodal sequences. We compared the performance of FMT to baselines approaches over 3 publicly available datasets for multimodal sentiment analysis (CMU-MOSI, 1 label), emotion recognition (IEMOCAP, 4 labels) and personality traits recognition (POM, 16 labels). Overall, FMT achieved superior performance than previously proposed models across the studied datasets. A APPENDIX The hyperparameters of FMT include the Adam learning rate ({0.001, 0.0001}), structure of summarization network (randomly picked 5 architectures from {1, 2, 3} layers of conv, with kernel shapes of {2, 5, 10, 15, 20}), number of MTL layers ({4, 6, 8} except for ablation experiments which was 2... 8), number of FMT units ({4, 6}, except for ablation experiment which was 1... 6), e M ∈{L,V,A} ({20, 40}), dropout (0, 0.1). The same parameters (when applicable) are used for training MulT for POM dataset (e.g. num encoder layers same as number of MTL). Furthermore, for MulT specific hyperparameters, we use similar values as Table 5 in the original paper. All models are trained for a maximum of 200 epochs. The hyperparameter validation is similar to Zadeh et al. (2018b). We study the effect of number of MTL on FMT performance. Table 6 shows the of this experiment. The best performance is achieved using 8 MTL layers (which was also the maximum layers we tried in our hyperparameter search). In this section, we discuss the effect of increasing the number of heads on the original transformer model . Please note that we implement the OTF to allow for all attention heads to have full input receptive field (from 1 . . . T), similar to FMT. We increase the attention heads from 1 to 35 (after 35 does not fit on a Tesla-V100 GPU with batchsize of 20). Table 7 shows the of increasing number of attention heads for both models. We observe that achieving superior performance is not a matter of increasing the attention heads. Even using 1 FMS unit, which leads to 7 total attention, FMT achieves higher performance than counterpart OTF. In many scenarios in nature, as well as what is currently pursued in machine learning, the number of modalities goes as high as 3 (mostly language, vision and acoustic, as studied in this paper). This leads to 7 attentions within each FMS, well manageable for successful training of FMT as demonstrated in this paper. However, as the number of modalities increases, the underlying multimodal phenomena becomes more challenging to model. This causes complexities for any competitive multimodal model, regardless of their internal design. 
While studying these cases is beyond the scope of this paper, due to the rare nature of having more than 3 main modalities, for FMT the complexity can be managed thanks to the factorization in FMS. We propose two approaches: 1) for a high number of modalities, the involved factors can be reduced based on domain knowledge, the nature of the problem, and the assumed dependencies between modalities (e.g. removing factors between modalities that are deemed weakly related); 2) alternatively, without making assumptions about inter-modality dependencies, a greedy approach may be taken for adding factors, similar to stepwise regression, iteratively adding the next most important factor. Using these two methods, the model can cope with a higher number of modalities with a controllable compromise between performance and overparameterization.
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJxD11HFDS | A multimodal transformer for multimodal sequential learning, with strong empirical results on multimodal language metrics such as multimodal sentiment analysis, emotion recognition and personality traits recognition. |
We develop a novel and efficient algorithm for optimizing neural networks inspired by a recently proposed geodesic optimization algorithm. Our algorithm, which we call Stochastic Geodesic Optimization (SGeO), utilizes an adaptive coefficient on top of Polyak's Heavy Ball method effectively controlling the amount of weight put on the previous update to the parameters based on the change of direction in the optimization path. Experimental on strongly convex functions with Lipschitz gradients and deep Autoencoder benchmarks show that SGeO reaches lower errors than established first-order methods and competes well with lower or similar errors to a recent second-order method called K-FAC (Kronecker-Factored Approximate Curvature). We also incorporate Nesterov style lookahead gradient into our algorithm (SGeO-N) and observe notable improvements. First order methods such as Stochastic Gradient Descent (SGD) with Momentum and their variants are the methods of choice for optimizing neural networks. While there has been extensive work on developing second-order methods such as Hessian-Free optimization and Natural Gradients , they have not been successful in replacing them due to their large per-iteration costs, in particular, time and memory. Although Nesterov's accelerated gradient and its modifications have been very effective in deep neural network optimization , some research have shown that Nesterov's method might perform suboptimal for strongly convex functions without looking at local geometry of the objective function. Further, in order to get the best of both worlds, search for optimization methods which combine the the efficiency of first-order methods and the effectiveness of second-order updates is still underway. In this work, we introduce an adaptive coefficient for the momentum term in the Heavy Ball method as an effort to combine first-order and second-order methods. We call our algorithm Geodesic Optimization (GeO) and Stochastic Geodesic Optimization (SGeO) (for the stochastic case) since it is inspired by a geodesic optimization algorithm proposed recently . The adaptive coefficient effectively weights the momentum term based on the change in direction on the loss surface in the optimization process. The change in direction can contribute as implicit local curvature information without resorting to the expensive second-order information such as the Hessian or the Fisher Information Matrix. Our experiments show the effectiveness of the adaptive coefficient on both strongly-convex functions with Lipschitz gradients and general non-convex problems, in our case, deep Autoencoders. GeO can speed up the convergence process significantly in convex problems and SGeO can deal with illconditioned curvature such as local minima effectively as shown in our deep autoencoder benchmark experiments. SGeO has similar time-efficiency as first-order methods (e.g. Heavy Ball, Nesterov) while reaching lower reconstruction error. Compared to second-order methods (e.g., K-FAC), SGeO has better or similar reconstruction errors while consuming less memory. The structure of the paper is as follows: In section 2, we give a brief on the original geodesic and contour optimization introduced in , neural network optimization methods and the conjugate gradient method. In section 3, we introduce our adaptive coefficient specifically designed for strongly-convex problems and then modify it for general non-convex cases. In section 4, we discuss some of the related work in the literature. 
Section 5 illustrates the algorithm's performance on convex and non-convex benchmarks. More details and insights regarding the algorithm and the experiments can be found in the Appendix. The goal is to solve the optimization problem min θ∈R f (θ) where f: R D → R is a differentiable function. approach the problem by following the geodesics on the loss surface guided by the gradient. In order to solve the geodesic equation iteratively, the authors approximate it using a quadratic. In the neighbourhood of θ t, the solution of the geodesic equation can be approximated as: Clearly, one can see the conjugate gradient method as a momentum method where t is the learning rate and t γ t−1 is the momentum parameter: We added the prime notation to avoid confusion with d t = θ t+1 − θ t throughout the paper). To avoid calculating the Hessian ∇ 2 f which can be very expensive in terms of computation and memory, is usually determined using a line search, i.e. by approximately calculating t = arg min f (θ t + d t) and several approximations to γ have been proposed. For example, have proposed the following: Note that γ F R (with an exact line search) is equivalent to the original conjugate gradient algorithm in the quadratic case. The adaptive coefficient that appears before the unit tangent vector in equation 2 has an intuitive geometric interpretation: where φ t is the angle between the previous update d t = θ t − θ t−1 and the negative of the current gradient −g t. Since 0 ≤ φ ≤ π, thus −1 ≤ cos (π − φ t) ≤ 1. The adaptive coefficient embeds a notion of direction change on the path of the algorithm which can be interpreted as implicit second-order information. The change in direction at the current point tells us how much the current gradient's direction is different from the previous gradients which is similar to what second-order information (e.g. the Hessian) provide. For more details on the adaptive coefficient we refer the reader to Appendix C. We propose to apply this implicit second-order information to the Heavy Ball method of as an adaptive coefficient for the momentum term such that, in strongly-convex functions with Lipschitz gradients, we reinforce the effect of the previous update when the directions align, i.e. in the extreme case: φ = 0 and decrease when they don't, i.e. the other extreme case: φ = π. Thus, we write the coefficient as γ C t = 1 −ḡ t ·d t with C indicating "convex". It's obvious that 0 ≤ γ C t ≤ 2. Note that we will use the bar notation (e.g.d) throughout the paper indicating normalization by magnitude. Aμ-strongly convex function f with L-Lipschitz gradients has the following properties: Applying the proposed coefficient to the Heavy Ball method, we have the following algorithm which we call GeO (Geodesic Optimization): Algorithm 1: GEO (STRONGLY CONVEX AND LIPSCHITZ) Calculate the gradient g t = ∇f (θ t) Calculate adaptive coefficient γ where T is total number of iterations and α is a tunable parameter set based on the function being optimized and is the learning rate. Incorporating Nesterov's momentum We can easily incorporate Nesterov's lookahead gradient into GeO by modifying line 4 to g t = ∇f (θ t + µd t) which we call GeO-N. In GeO-N the gradient is taken at a further point θ t + µd t where µ is a tunable parameter usually set to a value close to 1. However, the algorithm proposed in the previous section would be problematic for non-convex functions such as the loss function when optimizing neural networks. 
Even if the gradient information was not partial (due to minibatching), the current direction of the gradient cannot be trusted because of non-convexity and poor curvature (such as local minima, saddle points, etc). To overcome this issue, we propose to alter the adaptive coefficient to with N C indicating "non-convex". By applying this small change we are reinforcing the previous direction when the directions do not agree thus avoiding sudden and unexpected changes of direction (i.e. gradient). In other words, we choose to trust the previous history of the path already taken more, thus acting more conservatively. To increase efficiency, we use minibatches, calling the following algorithm SGeO (Stochastic Geodesic Optimization): Draw minibatch from training set Calculate the gradient g t = ∇f (θ t) Calculate adaptive coefficient γ Further we found that using the unit vectors for the gradientḡ and the previous updated, when calculating the next update in the non-convex case makes the algorithm more stable. In other words, the algorithm behaves more robustly when we ignore the magnitude of the gradient and the momentum term and only pay attention to their directions. Thus, the magnitudes of the updates are solely determined by the corresponding step sizes, which are in our case, the learning rate and the adaptive geodesic coefficient. Same as the strongly convex case, we can integrate Nesterov's lookahead gradient into SGeO by replacing line 5 with g t = ∇f (θ t + µd t) which we call SGeO-N. There has been extensive work on large-scale optimization techniques for neural networks in recent years. A good overview can be found in. Here, we discuss some of the work more related to ours in three parts. Adagrad is an optimization technique that extends gradient descent and adapts the learning rate according to the parameters. Adadelta and RMSprop improve upon Adagrad by reducing its aggressive deduction of the learning rate. Adam improves upon the previous methods by keeping an additional average of the past gradients which is similar to what momentum does. Adaptive Restart proposes to reset the momentum whenever rippling behaviour is observed in accelerated gradient schemes. AggMo keeps several velocity vectors with distinct parameters in order to damp oscillations. AMSGrad on the other hand, keeps a longer memory of the past gradients to overcome the suboptimality of the previous algorithms on simple convex problems. We note that these techniques are orthogonal to our approach and can be adapted to our geodesic update to further improve performance. Several recent works have been focusing on acceleration for gradient descent methods. propose an adaptive method to accelerate Nesterov's algorithm in order to close a small gap in its convergence rate for strongly convex functions with Lipschitz gradients adding a possibility of more than one gradient call per iteration. , the authors propose a differential equation for modeling Nesterov inspired by the continuous version of gradient descent, a.k.a. gradient flow. take this further and suggest that all accelerated methods have a continuous time equivalent defined by a Lagrangian functional, which they call the Bregman Lagrangian. They also show that acceleration in continuous time corresponds to traveling on the same curve in spacetime at different speeds. It would be of great interest to study the differential equation of geodesics in the same way. 
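A compact sketch of one (S)GeO step under this reading is given below. Only the adaptive coefficient (γ = 1 − ḡ·d̄ in the convex case, γ = 1 + ḡ·d̄ in the non-convex case) and the use of unit vectors in the non-convex case are taken directly from the text; how the learning rate, the tunable parameter α and γ are combined in the update is our assumption, since the paper's exact update line is not reproduced here.

```python
import numpy as np

def sgeo_step(theta, d_prev, grad, lr, alpha, convex=False, eps=1e-12):
    """One (S)GeO update: Heavy Ball with an adaptive coefficient on the momentum
    term, derived from the angle between the previous update d_prev = theta_t -
    theta_{t-1} and the current (mini-batch) gradient."""
    g_unit = grad / (np.linalg.norm(grad) + eps)
    d_unit = d_prev / (np.linalg.norm(d_prev) + eps)
    dot = float(np.dot(g_unit, d_unit))
    gamma = (1.0 - dot) if convex else (1.0 + dot)     # adaptive coefficient
    if convex:
        d_new = -lr * grad + alpha * gamma * d_prev    # full magnitudes (GeO)
    else:
        d_new = -lr * g_unit + alpha * gamma * d_unit  # unit vectors (SGeO, non-convex)
    return theta + d_new, d_new
```

For a GeO-N / SGeO-N style step, the gradient would simply be evaluated at a lookahead point such as theta + mu * d_prev before calling the same update.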
In a recent work, proposes a differential geometric interpretation of Nesterov's method for strongly-convex functions with links to continuous time differential equations mentioned earlier and their Euler discretization. Second-order methods are desirable because of their fine convergence properties due to dealing with bad-conditioned curvature by using local second-order information. Hessian-Free optimization is based on the truncated-Newton approach where the conjugate gradient algorithm is used to optimize the quadratic approximation of the objective function. The natural gradient method reformulates the gradient descent in the space of the prediction functions instead of the parameters. This space is then studied using concepts in differential geometry. K-FAC approximates the Fisher information matrix which is based on the natural gradient method. Our method is different since we are not using explicit second-order information but rather implicitly deriving curvature information using the change in direction. We evaluated SGeO on strongly convex functions with Lipschitz gradients and benchmark deep autoencoder problems and compared with the Heavy-Ball and Nesterov's algorithms and K-FAC. We borrow these three minimization problems from where they try to accelerate Nesterov's method by using adaptive step sizes. The problems are Anisotropic Bowl, Ridge Regression and Smooth-BPDN. The learning rate for all methods is set to 1 L except for Nesterov which is set to 4 3L+μ and the momentum parameter µ for Heavy Ball, Nesterov and GeO-N is set to the following: where L is the Lipschitz parameter andμ is the strong-convexity parameter. The adaptive parameter γ t for Fletcher-Reeves is set to γ and for GeO and GeO-N is γ C t = 1 −ḡ t ·d t. The functionspecific parameter α is set to 1, 0.5 and 0.9 in that order for the following problems. It's important to note that the approximate conjugate gradient method is only exact when an exact line search is used, which is not the case in our experiments with a quadratic function (Ridge Regression). Anisotropic Bowl The Anisotropic Bowl is a bowl-shaped function with a constraint to get Lipschitz continuous gradients: As in , we set n = 500, τ = 4 and and µ = 1. Figure 1 shows the convergence for our algorithms and the baselines. The algorithms terminate when f (θ) − f * < 10 −12. GeO-N and GeO take only 82 and 205 iterations to converge, while the closest is that of Heavy-Ball and Fletcher-Reeves which take approximately 2500 and 3000 iterations respectively. Ridge Regression The Ridge Regression problem is a linear least squares function with Tikhonov regularization: where A ∈ R m×n is a measurement matrix, b ∈ R m is the response vector and γ > 0 is the ridge parameter. The function f (θ) is a positive definite quadratic function with the unique solution of 2 + λ and strong convexity parameterμ = λ. , m = 1200, n = 2000 and λ = 1. A is generated from U ΣV T where U ∈ R m×m and V ∈ R n×m are random orthonormal matrices and Σ ∈ R m×m is diagonal with entries linearly distanced in while b = randn(m, 1) is drawn (i.i.d) from the standard normal distribution. Thusμ = 1 and L ≈ 1001. Figure 2 shows the where Fletcher-Reeves, which is a conjugate gradient algorithm, performs better than other methods but we observe similar performances overall except for gradient descent. The tolerance is set to f (θ) − f * < 10 −13. 
Smooth-BPDN Smooth-BPDN is a smooth and strongly convex version of the BPDN (basis pursuit denoising) problem: where Since we cannot find the solution analytically, Nesterov's method is used as an approximation to the solution (f * N) and the tolerance is set to f (θ) − f * N < 10 −12. Figure 3 shows the for the algorithms. GeO-N and GeO converge in 308 and 414 iterations respectively, outperforming all other methods. Closest to these two is Fletcher-Reeves with 569 iterations and Nesterov and Heavy Ball converge similarly in 788 iterations. To evaluate the performance of SGeO, we apply it to 3 benchmark deep autoencoder problems first introduced in which use three datasets, MNIST, FACES and CURVES. Due to the difficulty of training these networks, they have become standard benchmarks for neural network optimization. To be consistent with previous literature (; ;), we use the same network architectures as in and also report the reconstruction error instead of the log-likelihood objec- Our baselines are the Heavy Ball algorithm (SGD-HB) , SGD with Nesterov's Momentum (SGD-N) and K-FAC , a second-order method utilizing natural gradients using an approximation of the Fisher information matrix. Both the baselines and SGeO were implemented using MATLAB on GPU with single precision on a single machine with a 3.6 GHz Intel CPU and an NVIDIA GeForce GTX 1080 Ti GPU with 11 GBs of memory. The are shown in Figures 4 to 6. Since we are mainly interested in optimization and not in generalization, we only report the training error, although we have included the test set performances in the Appendix B. We report the reconstruction relative to the computation time to be able to compare with K-FAC, since each iteration of K-FAC takes orders of magnitude longer than SGD and SGeO. The per-iteration graphs can be found in the Appendix A. All methods use the same parameter initialization scheme known as "sparse initialization" introduced in. The experiments for the Heavy Ball algorithm and SGD with Nesterov's momentum follow which were tuned to maximize performance for these problems. For SGeO, we chose a fixed momentum parameter and used a simple multiplicative schedule for the learning rate: where the initial value was chosen from {0.1,0.15,0.2,0.3,0.4,0.5} and is decayed (K) every 2000 iterations (parameter updates). The decay parameter (β) was set to 0.95. For the momentum parameter µ, we did a search in {0.999,0.995,0.99}. The minibatch size was set to 500 for all methods except K-FAC which uses an exponentially increasing schedule for the minibatch size. For K-FAC we used the official code provided 1 by the authors with default parameters to reproduce the . The version of K-FAC we ran was the Blk-Tri-Diag approach which achieves the best in all three cases. To do a fair comparison with other methods, we disabled iterate averaging for K-FAC. It is also worth noting that K-FAC uses a form of momentum . In all three experiments, SGeO-N is able to outperform the baselines (in terms of reconstruction error) and performs similarly as (if not better than) K-FAC. We can see the effect of the adaptive coefficient on the Heavy Ball method, i.e. SGeO, which also outperforms SGD with Nesterov's momentum in two of the experiments, MNIST and FACES, and also outperforms K-FAC in the MNIST experiment. Use of Nesterov style lookahead gradient significantly accelerates training for the MNIST and CURVES dataset, while we see this to a lesser extent in the FACES dataset. This is also the case for the other baselines . 
Further, we notice an interesting phenomena for the MNIST dataset (Figure 4). Both SGeO and SGeO-N reach very low error rates, after only 900 seconds of training, SGeO and SGeO-N arrive at an error of 0.004 and 0.0002 respectively. We proposed a novel and efficient algorithm based on adaptive coefficients for the Heavy Ball method inspired by a geodesic optimization algorithm. We compared SGeO against SGD with Nesterov's Momentum and regular momentum (Heavy Ball) and a recently proposed second-order method, K-FAC, on three deep autoencoder optimization benchmarks and three strongly convex functions with Lipschitz gradients. We saw that SGeO is able to outperform all first-order methods that we compared to, by a notable margin. SGeO is easy to implement and the computational overhead it has over the first-order methods, which is calculating the dot product, is marginal. It can also perform as effectively as or better than second-order methods (here, K-FAC) without the need for expensive higher-order operations in terms of time and memory. We believe that SGeO opens new and promising directions in high dimensional optimization research and in particular, neural network optimization. We are working on applying SGeO to other machine learning paradigms such as CNNs, RNNs and Reinforcement Learning. It remains to analyse the theoretical properties of SGeO such as its convergence rate in convex and non-convex cases which we leave for future work. Here we include the per-iteration for the autoencoder experiments in Figures 7 to 9. We reported the reconstruction error vs. running time in the main text to make it easier to compare to K-FAC. K-FAC, which is a second-order algorithm, converges in fewer iterations but has a high per-iteration cost. All other methods are first-order and have similar per-iteration costs. We include generalization experiments on the test set here. However, as mentioned before, our focus is optimization and not generalization, we are aware that the choice of optimizer can have a significant effect on the performance of a trained model in practise. Results are shown in Figures 10 to 12. SGeO-N shows a significant better predictive performance than SGeO on the CURVES data set and both perform similarly on the two other datasets.. Note that the algorithms are tuned for best performance on the training set. Overfitting can be dealt with in various ways such as using appropriate regularization during training and using a small validation set to tune the parameters. C ADAPTIVE COEFFICIENT BEHAVIOUR C.1 GEOMETRIC INTERPRETATION Figure 13 (b) shows the dot product valueḡ ·d which is equivalent to cos (π − φ) (where φ is the angle between the previous update and the negative of the current gradient) for different values of φ. Figure 13 (a) shows the adaptive coefficient (γ) behaviour for different values of φ for both convex and non-convex cases. Recall that the adaptive coefficient is used on top the Heavy Ball method. For strongly convex function with Lipschitz gradients we set γ C = 1 −ḡ ·d and for non-convex cases γ N C = 1 +ḡ ·d. Here we include the values of the adaptive coefficient during optimization from our experiments. where w i = 1 + θi−1 4 for i = 1, 2. We initialize all three methods at. Scaled Goldstein-Price Function The scaled Godstein-Price function (Surjanovic & Bingham;) features several local minima, ravines and plateaus which can be representative whereθ i = 4θ i − 2 for i = 1, 2. We initialize all methods at (1.5, 1.5). 
Details The momentum parameter µ for both Nesterov and Geodesic-N was set to 0.9. The learning rate for all methods is fixed and is tuned for best performance. The results from both experiments indicate that Geodesic is able to effectively escape local minima and recover from basins of attraction, while Nesterov's method gets stuck at local minima in both cases. We can also observe the effect of the lookahead gradient on our method, where the path taken by Geodesic-N is much smoother than that of Geodesic.
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Hyee-0VFPH | We utilize an adaptive coefficient on top of regular momentum inspired by geodesic optimization which significantly speeds up training in both convex and non-convex functions. |
We introduce the masked translation model (MTM) which combines encoding and decoding of sequences within the same model component. The MTM is based on the idea of masked language modeling and supports both autoregressive and non-autoregressive decoding strategies by simply changing the order of masking. In experiments on the WMT 2016 Romanian-English task, the MTM shows strong constant-time translation performance, beating all related approaches with comparable complexity. We also extensively compare various decoding strategies supported by the MTM, as well as several length modeling techniques and training settings. Neural machine translation (NMT) has been developed under the encoder-decoder framework with an intermediary attention mechanism . The encoder learns contextualized representations of source tokens, which are used by the decoder to predict target tokens. These two components have individual roles in the translation process, and they are connected via an encoder-decoder attention layer . Many advances in NMT modeling are based on changes in the internal layer structure (; ; ; ;, tweaking the connection between the layers (; ; ; a), or appending extra components or latent variables (; ; ;) -all increasing the overall architectural complexity of the model while keeping the encoder and decoder separated. Our goal is to simplify the general architecture of machine translation models. For this purpose, we propose the masked translation model (MTM) -a unified model which fulfills the role of both the encoder and decoder within a single component. The MTM gets rid of the conventional decoder as well as the encoder-decoder attention mechanism. Its architecture is only a sequence encoder with self-attention layers, trained with an objective function similar to masked language modeling . In order to model the translation problem, the MTM is given the concatenation of the source and target side from a parallel sentence pair. This approach is similar to the translation language model presented by , but focuses on the target side, i.e. the masking is applied to some selected positions in the target sentence. The MTM is trained to predict the masked target words relying on self-attention layers which consider both the source sentence and a masked version of the target sentence. Trained in this way, the model is perfectly suitable for non-autoregressive decoding since the model learned to predict every position in parallel, removing the dependency on decisions at preceding target positions. Within its extremely simple architecture, one can realize various decoding strategies, e.g., using left-to-right, non-autoregressive, or iterative decoding by merely adjusting the masking schemes in search. We present a unified formulation of the MTM for different decoding concepts by factorizing the model probability over a set of masked positions. The MTM has several advantages over the conventional encoder-decoder framework: • A simpler architecture • A variety of decoding strategies including constant-time approaches (Section 3.3.1) On the WMT 2016 Romanian→English translation task, our MTM achieves a better performance than comparable non-autoregressive/constant-time methods while keeping its simple architecture. Using our general formulation of the MTM, we compare the translation performance of various decoding strategies. Moreover, we show that this model allows for decoding speed-up by merely adjusting the number of iterations at the small cost of translation performance. 
There have been some attempts to combine the encoder and decoder into a single component for simplified translation modeling. share the encoder and decoder parameters of a Transformer translation model and allow the encoder-decoder attention to access the inner layers of the encoder as well. extend this idea by adding locality constraints in all attention layers. train a Transformer decoder on a large monolingual corpus as a language model and use it as an unsupervised translation model on pairs of source and target sentences. Similarly to our work, all these approaches couple the encoding and decoding on the self-attention level. However, their decoding considers only left-side target context, enabling only left-to-right autoregressive translation. Furthermore, their encoding of a source sentence is limited to the source side itself, while our MTM can refine the source representations according to a partial target hypothesis represented in the decoder states. Both aspects hinder their methods from making the best use of the bidirectional representation power of the combined model. Non-autoregressive NMT, which predicts all target words in parallel, potentially exploits full bidirectional context in decoding. To make the parallel decoding produce a reasonable hypothesis, reuse the source words as inputs to the decoder and insert an additional attention module on the positional embeddings. use a separate decoder to revise the target hypotheses iteratively, where train a single decoder with MLM objectives for both the first prediction and its refinements. To improve the integrity of the hypotheses, one could also employ an autoregressive teacher to guide the states of the non-autoregressive decoder b), apply sentence-level rewards in training , or integrate generative flow latent variables . The self-attention layers of their decoders attend to all target positions, including past and future contexts. However, all these methods still rely on the encoder-decoder framework. In this work, we collapse the boundary of the encoding and decoding of sequences and realize non-autoregressive NMT with a unified model. Regardless of the encoding or decoding, the self-attention layers of our MTM attend to all available source and target words for flexible information flow and a model of simplicity. A common problem in non-autoregressive sequence generation is that the length of an output should be predefined beforehand. The problem has been addressed by averaging length difference (b), estimating fertility , dynamically inserting blank outputs via connectionist temporal classification (CTC) (Libovickỳ &), or directly predicting the total length from the encoder representations (; ;). In this work, we train a separate, compact length model on given bilingual data. The training of an MTM is based on the MLM objective , developed for pretraining representations for natural language understanding tasks. concatenate source and target sentences and use them together as the input to an MLM, where both source and target tokens are randomly masked for the training. This improves cross-lingual natural language inference in combination with the original MLM objective, but has not been applied to translation tasks. We use the concatenated input sentences but selectively mask out only target tokens to implement source→target translation. As for inference with an and propose to build up an output sequence iteratively by adjusting the input masking for each iteration. 
In this work, the generation procedures of both works are tested and compared within our MTM, along with other autoregressive/non-autoregressive decoding strategies (Section 4.1). We introduce the masked translation model (MTM) focusing on three aspects: model architecture, training, and decoding. In the corresponding sections, we show how the MTM 1) relaxes the conventional architectural constraint of the encoder-decoder framework, 2) learns to perform both non-autoregressive translation and its refinements, and 3) does translation with various decoding strategies in a variable number of iterations. Given a source sentence f_1^J = f_1, ..., f_j, ..., f_J (source input), the goal of an MTM p_θ is to generate the correct translation e_1^I = e_1, ..., e_i, ..., e_I (target output) by modeling p_θ(e_i | f_1^J, ẽ_1^I) for each position i, where the target input ẽ_1^I is a corrupted version (a subset) of the surrounding context (e_1^{i-1}, e_{i+1}^I) in the target sentence e_1^I. Therefore, the MTM models the true target sentence e_1^I independently for each position, given both the true source and a noisy version of the target context. Figure 1 illustrates the MTM network architecture, which is the same as the MLM or the encoder of a Transformer network with an output softmax layer on top of all corrupted positions. In particular, an MLM consists of N transformer encoder layers, each containing two blocks: a self-attention layer with full bidirectional context as well as two linear layers with a ReLU activation in between. Layer normalization is applied before every block, and a residual connection is used to combine the output of a block with its input. The source f_1^J and the target input ẽ_1^I are concatenated and embedded jointly, with an embedding space that is shared over the source and target languages by a joint subword vocabulary. A positional encoding vector is added, where the position index is reset at the first target token. Similarly to prior work, we add language embeddings to distinguish source and target vocabularies efficiently. The embedded representations pass through N transformer encoder layers, where the attention has no direction constraints; this allows the model to use the full bidirectional context beyond the language boundary. Note that the hidden representations of source words can also attend to those of (partially hypothesized) target tokens, which is impossible in the encoder-decoder architecture. During training, the target-side input ẽ_1^I may be 1) fully masked out, resembling the initial stage of translation before hypothesizing any target words, 2) partially corrupted, simulating intermediate hypotheses of the translation process that need to be corrected, 3) or even the original target sentence e_1^I, representing the final stage of translation which should not be refined further. The different levels of corruption are used to model all plausible cases which we encounter in the decoding process, from primitive hypotheses to high-quality translations (Section 3.3). Given bilingual training data D = {(f_1^J, e_1^I)}, the MTM is trained to reconstruct the target sentence from a corrupted version of it, given the source. Note that a model for length prediction is trained separately; its details can be found in Appendix A. We cannot expect a single step of parallel decoding to output an optimal translation from only the source context. Therefore, we consider the second scenario of refining the hypothesis, where we simulate a partial hypothesis in training by artificially corrupting the given target sentence in the input. For training an MTM, we formulate this corruption by probabilistic models to 1) select the positions to be corrupted (p_s) and 2) decide how such a position is corrupted (p_c).
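Before turning to the training objective, the sketch below illustrates the input construction just described: concatenated source and target token ids with a position index that resets at the first target token and a per-token language id. It is a plain NumPy illustration written by us, not the authors' implementation; the embedding tables and token ids are placeholders.

```python
import numpy as np

def build_mtm_input(src_ids, tgt_ids, emb, pos_emb, lang_emb):
    """Concatenate source and (corrupted) target token ids and embed them.

    emb:      [vocab, d]   shared subword embedding table (joint vocabulary)
    pos_emb:  [max_len, d] positional encodings; the index is reset at the first target token
    lang_emb: [2, d]       language embeddings (0 = source, 1 = target)
    """
    tokens = np.concatenate([src_ids, tgt_ids])                    # [J + I]
    positions = np.concatenate([np.arange(len(src_ids)),           # 0 .. J-1
                                np.arange(len(tgt_ids))])          # reset: 0 .. I-1
    languages = np.concatenate([np.zeros(len(src_ids), dtype=int),
                                np.ones(len(tgt_ids), dtype=int)])
    # Sum of token, positional, and language embeddings forms the encoder input.
    return emb[tokens] + pos_emb[positions] + lang_emb[languages]

# Toy usage with random tables (shapes only; a real model uses learned tables).
rng = np.random.default_rng(0)
emb, pos_emb, lang_emb = rng.normal(size=(100, 8)), rng.normal(size=(50, 8)), rng.normal(size=(2, 8))
x = build_mtm_input(np.array([5, 6, 7]), np.array([1, 2, 3, 4]), emb, pos_emb, lang_emb)
print(x.shape)  # (7, 8)
```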
The goal of the MTM is now to reconstruct the original target sentence e I 1. This leads to the training loss: where C is a set of positions to be corrupted. The corrupted target inputẽ I 1 is generated in two steps of random decisions : 1. Target positions i ∈ C for the corruption are sampled from a uniform distribution until ρ s · I samples are drawn, with hyperparameter ρ s ∈. We denote this selection process by C ∼ p s (C|e I 1) = p s (C|I) as a simplified notation. 2. The specific type of corruption for each selected position i ∈ C is chosen independently by samplingẽ i from p c (ẽ i |e i). Note that we train the model to reconstruct the original token e i only for the corrupted positions i ∈ C. For the remaining positions i / ∈ C, the corrupted sentence is filled with the original word, i.e.ẽ i = e i. These uncorrupted positions provide the context to the network for the denoising, and no loss is applied for these positions to prevent a bias towards copying. We optimize this criterion in a stochastic way, where C and theẽ i are sampled anew for each epoch and each training instance. In principle, the MTM is a denoising autoencoder of the target sentence conditioned on a source sentence. The MTM training can be customized by selecting p s and p c appropriately. , the probability p s is defined as a uniform distribution over all target positions, without considering the content of the sentence. For the corruption model p c, we define a set of operations and assign a probability mass ρ o ∈ to each operation o. We use the three operations presented by: • Replace with a mask token: • Replace with a random word e * uniformly sampled from the target vocabulary V e: Random: • Keep unchanged: Original: e Figure 2 shows an example of corrupting a target input sentence in the MTM training. Here all positions except 1 are corrupted, i.e. C = {2, 3, 4}. At Position 2 the original word is kept by the Keep operation of p c but in contrast to Position 1 (No Operation) there is a training loss added for Position 2. 3.3 DECODING As described above, the MTM is designed and trained to deal with intermediate hypotheses of varying quality as target input. Accordingly, decoding with an MTM consists of multiple iterations τ = 1,..., T: A non-autoregressive generation of the initial hypothesis (for τ = 1) and several steps of iterative refinements (inspired by) of the hypothesis (for τ > 1). In the context of this work, an iteration of the MTM during decoding refers to one forward pass of the model based on a given source and target input, followed by the selection of a target output based on the predictions of the MTM. To simplify the notation, we denote the given source sentence by F = f and the generated target translation byÊ =ê I 1. Similar to traditional translation models, the goal of decoding with an MTM is to find a hypothesisÊ that maximizes the translation probability: {p(E, I|F)} = arg max We approximate this by a two stage maximization process where we first determine the most likely target lengthÎ:= arg max I {p(I|F)} followed by: Instead of a left-to-right factorization of the target sentence which is common in autoregressive decoding, we perform a step-wise optimization on the whole sequence. 
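Before continuing with the decoding derivation, the two-step training-time corruption described above can be written compactly. The sketch below is illustrative only, with names (corrupt_target, mask_id) chosen by us and not taken from the authors' code; it samples the corrupted positions C and then applies the Mask / Random / Original operations with probabilities ρ_mask, ρ_rand, ρ_keep.

```python
import numpy as np

def corrupt_target(e, rho_s, rho_mask, rho_rand, rho_keep, mask_id, vocab_size, rng):
    """Two-step corruption: sample positions C, then corrupt each sampled position."""
    e = np.asarray(e)
    I = len(e)
    n_corrupt = int(round(rho_s * I))
    C = rng.choice(I, size=n_corrupt, replace=False)       # step 1: p_s, uniform over positions
    e_tilde = e.copy()                                      # positions not in C keep the original word
    for i in C:                                             # step 2: p_c per selected position
        op = rng.choice(["mask", "random", "keep"], p=[rho_mask, rho_rand, rho_keep])
        if op == "mask":
            e_tilde[i] = mask_id
        elif op == "random":
            e_tilde[i] = rng.integers(vocab_size)
        # "keep": the word stays unchanged, but the position still contributes to the loss
    return e_tilde, C                                       # the training loss is applied only on C

rng = np.random.default_rng(0)
print(corrupt_target([11, 12, 13, 14, 15], 0.6, 0.6, 0.3, 0.1, mask_id=0, vocab_size=100, rng=rng))
```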
For this we define the sequenceÊ (τ) starting fromÊ:=</M>,...,</M> by selecting the best hypothesis given the predecessorÊ (τ −1), i.e.:Ê (τ):= arg max Introducing an intermediate representationẼ allows for a more fine-grained control of the decoding process: where the probability p θ is modeled by a neural network with parameters θ and the masked sequencẽ is modelled by p m, which defines a specific search strategy (Section 3.3.1). In most scenarios, the search strategy is defined to be deterministic, which has the effect that all probability mass of p m is concentrated on one masked sequenceẼ (τ −1) and we can reformulate Equation as: and thus the score is defined solely by the MTM network. The iterative procedure presented in Equation describes a greedy optimization, as it selects the currently best hypothesis in each iteration. This does not provide any guarantee for the quality of Sample a masked sequence: 1) ) Compute the model output greedily for every position: e the final output. However, if we assume that the model score improves in each iteration, i.e. if we can show that: then we know that the maximum score is obtained in the last iteration T. To the best of our knowledge, it is not possible to provide a theoretical proof for this property, yet we will show empirical evidence that it holds in practice (see Section 4.1). Thus, in order to find the best hypothesis, it is sufficient to follow the recursive definition presented in Equation, which can be computed straight forward ing in an iterative decoding scheme. Algorithm 1 describes this process of MTM decoding. In short, 1) generate a hypothesis (Ê), 2) select positions to be masked, and 3) feed the masked hypothesis (Ẽ) back to the next iteration. Note that the output for each position e (τ) i is computed in the same forward pass without a dependency on other words e (τ) i (i = i) from the same decoding step. This means that the first iteration (τ = 1) is non-autoregressive decoding (; Libovickỳ &). Nonautoregressive models tend to suffer from the multimodality problem , where conditionally independent models are inadequate to capture the highly multimodal distribution of target translations. Our MTM decoding prevents this problem making iterative predictionsÊ (τ) each conditioned on the previous sequence. EachÊ (τ) from one forward pass, yielding a complexity of O(T) decoding steps instead of O(I) as in traditional autoregressive NMT. Thus the MTM decoding can potentially be much faster than conditional decoding of a standard encoder-decoder NMT model, as the number of iterations is not dictated by the target output length I. Furthermore, compared to the pure non-autoregressive decoding with only one iteration, our decoding algorithm may collapse the multimodal distribution of the target translation by conditioning on the previous output . The masking probability p m introduced in Equation resembles the two-step corruption of the training (Equation): where C is a set of positions to be masked. Similarly to the training, the corruption is performed only for i ∈ C (τ) and the remaining positions i / ∈ C (τ) are kept untouched. For the corruption model p c in decoding, only the Mask operation is activated, i.e. ρ mask = 1 and ρ o = 0 for o = mask. This leads to the following simple decisions: The ing masked sequenceẼ (τ) is supposed to shift the model's focus towards a selected number of words, chosen by the decoding strategy p s. 
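Algorithm 1 itself is not reproduced in this text, so the following sketch (our plain NumPy illustration, not the authors' RETURNN code) spells out the iterative decoding loop, together with the confidence-based fixed-T masking schedule that appears as one of the strategies in the next subsection; `model_log_probs` stands in for a single forward pass of the MTM.

```python
import numpy as np

def confidence_based_masks(tau, T, e_hat, log_probs):
    """Keep the K(tau) lowest-confidence positions masked, with K decreasing linearly in tau."""
    I = len(e_hat)
    K = int(round(I * (T - tau) / T))
    confidence = log_probs[np.arange(I), e_hat]        # model score of the chosen words
    return np.argsort(confidence)[:K]                  # least confident positions stay masked

def mtm_decode(model_log_probs, I, T, mask_id, select_masks=confidence_based_masks):
    """Iterative MTM decoding: start fully masked, predict all positions, re-mask, repeat."""
    e_tilde = np.full(I, mask_id)                      # E~(0): fully masked target input
    e_hat = None
    for tau in range(1, T + 1):
        log_probs = model_log_probs(e_tilde)           # one forward pass, shape [I, vocab]
        e_hat = log_probs.argmax(axis=-1)              # greedy choice for every position in parallel
        if tau == T:
            break
        C = select_masks(tau, T, e_hat, log_probs)
        e_tilde = e_hat.copy()                         # unmasked positions carry the current hypothesis
        e_tilde[C] = mask_id                           # masked positions are re-predicted next iteration
    return e_hat

# Toy usage with a dummy "model" that ignores its input.
rng = np.random.default_rng(0)
dummy = lambda e_tilde: np.log(rng.dirichlet(np.ones(20), size=len(e_tilde)))
print(mtm_decode(dummy, I=6, T=4, mask_id=0))
```

Swapping `select_masks` for a left-to-right, right-to-left, or random schedule reproduces the other strategies discussed below without changing the loop.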
Given this definition of p c above, a masked (intermediate) hypothesis in decoding is determined solely by the position selection p s, which differs by decoding strategy. Each decoding strategy starts from a fully masked target input, i.e. C = {1, ..., I}, and uncovers positions incrementally in each iteration. The simplest solution is to simply feed back the completely unmasked sequence in each iteration : This method works with the richest context from the output of the previous iteration. This may, however, hurt the model's performance as the first output is often quite poor, and the focus of the model is spread across the whole sentence. Instead of unmasking all positions from the beginning, one can unmask the sequence randomly one position at a time , inspired by Gibbs sampling : Note that this method is nondeterministic and it takes at least I iterations before the output is conditioned on the completely unmasked sequence. A deterministic alternative to the random strategy is to unveil the sequence step-wise, starting from the left-most position in the target sequence. In every decoding iteration τ = 1,..., T, the index i = τ − 1 is removed from the set of masked positions: This corresponds to the traditional autoregressive NMT, but the parallel nature of our MTM decoding inherently enables to update the prediction for any position at any time, e.g. the prediction for the first position can change in the last iteration. Furthermore, it allows for different step-wise strategies -revealing the sequence right-to-left (R2L) or starting from the middle in both directions (middleout) -without re-training the model. L2R decoding ignores a huge advantage of the MTM, which is the property that the fully bidirectional model can predict sequence elements in any given order. This characteristic is leveraged by masking a decreasing number of K(τ) positions in each iteration: At each iteration, K(τ) positions with the lowest model score p θ (confidence) remain masked: where the number of masked positions K(τ) is chosen to be linearly decreasing over the number of iterations T : One can also unmask positions one by one, i.e. K(τ) = I − τ, sacrificing potential improvements in decoding speed. Fertility-based 29.1 -CTC (Libovickỳ &) 24.7 -Imitation learning 28.9 -Reinforcement learning 27.9 -Generative flow 30. We implemented the MTM in the RETURNN framework and evaluate the performance on the WMT 2016 Romanian→English translation task 1. All data used in the experiments are preprocessed using the MOSES tokenizer and frequent-casing . We learn a joint source/target byte pair encoding (BPE) with 20k merge operations on the bilingual training data. Unless mentioned otherwise, we report on the newstest2016 test set, computed with case sensitivity and tokenization using the software SacreBLEU 2 . The MTMs in our experiments follow the base configuration of , however, with a depth of 12 layers and 16 attention heads. They are trained using Adam with an initial learning rate of 0.0001 and a batch size of 7,200 tokens. Dropout is applied with a probability of 0.1 to all hidden layers and word embeddings. We set a checkpoint after every 400k sentence pairs of training data and reduce the learning rate by a factor of 0.7 whenever perplexity on the development set (newsdev2016) does not improve for nine consecutive checkpoints. The final models are selected after 200 checkpoints based on the development set perplexity. During training, we select a certain percentage ρ s of random target tokens to be corrupted. 
This parameter is selected randomly from a uniform distribution ρ s ∼ U in every training step. We further deviate from , by selecting the hyperparameters for corruption to be ρ mask = 0.6, ρ rand = 0.3, and ρ keep = 0.1, which performed best for MTMs in our preliminary experiments. The main of the MTM are given in Table 1 along with comparative baselines. In total, our MTM, despite its extremely simple architecture, outperforms comparable constant-time NMT methods which do not depend on sequence lengths. Compared to the conventional autoregressive baseline, the MTM falls behind by only -2.4% BLEU with a constant number of decoding steps and the lower model complexity. Furthermore, a control experiment using the gold length instead of a predicted length improves the from the same MTM model by 1.5% BLEU. This minimizes the gap between our MTM and a comparable encoder-decoder model down to 0.9% BLEU, while our model has the ability to improve the decoding speed without retraining by simply reducing the number of iterations, thus trading in performance against speed. Note that all other methods shown in Table 1 are based on the encoder-decoder architecture, which is more sophisticated. Moreover, the performance of , , and relies heavily on knowledge distillation from a well-trained autoregressive model, which involves building an additional NMT model and translating the entire training data with that model. This causes a lot more effort and computation time in training, while the MTM requires no such complicated steps, and its training is entirely end-to-end. et al. demonstrate in their work that even better could be possible by computing multiple translations in parallel for a set of most likely length hypotheses. This approach or even a beam-search variant of our iterative unmasking, will be a focus in our future work. As described in Section 3.3.1, the flexibility of the MTM allows us to easily implement different decoding strategies within a single model. Pure non-autoregressive decoding, i.e., a single forward pass to predict all target positions simultaneously, yields poor translation performance of 13.8% BLEU on the validation set (newsdev2016), which implies that several decoding iterations are needed to produce a good hypothesis. We know this to be true if the inequality in Equation holds, i.e. if we see our model score improving in every iteration. Figure 3 shows that we can actually observe this property by monitoring the average model score throughout all iterations. Outputs for individual sentences might still worsen between two iterations. The overall score, however, shows a steady improvement in each iteration. In Figure 4, we compare various decoding strategies and plot their performance for different number of decoding iterations T. "Fully unmasking", i.e. re-predicting all positions based on the previous hypothesis, improves the hypothesis fast in the early iterations but stagnates at 22% BLEU. L2R, R2L, and confidence-based one-by-one all unmask one position at a time and show a very similar tendency with confidence-based one-by-one decoding reaching the strongest final performance of 31.9% BLEU. Confidence-based fixed-T unmasks several target positions per time step and achieves similar performance. In contrast to position-wise unmasking, the decoding with a fixed number of T (and linear unmasking) only needs ten decoding iterations to reach close to optimal performance. 
We test "middle-out" a variation of the L2R strategy to see whether the generation order is negligible as long as the sentence is generated contiguously. While this improves the hypothesis faster -most likely due to its doubled rate of unmasking -the final performance is worse than those of L2R or R2L decoding. Random selection of the decoding positions shows comparably fast improvements up to the 10th iterations, keeping up with middle-out, even though it reveals the hypothesis for a single position per iteration, however its performance saturates below most other decoding strategies. Overall the best for a low iteration count is obtained with confidence-based decoding with a fixed number of iterations. This shows that it is possible and sometimes even beneficial to hypothesize several positions simultaneously. We conclude that the choice of the decoding strategy has substantial impacts on the performance and hypothesize that a good decoding strategy relies on the model score to choose which target positions should be unmasked. In this work we simplify the existing Transformer architecture by combining the traditional encoder and decoder elements into a single component. The ing masked translation model is trained by concatenating source and target and applying BERT-style masking to the target sentence. The novel training strategy introduced with the MTM requires a rethinking of the search process and allows for various new decoding strategies to be applied in the theoretical framework we developed in this work. A detailed comparison shows that unmasking the sequence one-by-one gives the overall best performance, be it left-to-right, right-to-left, or confidence-based. Unveiling a constant number of tokens based on confidence in each decoding step, however, can achieve reasonable performance with a fixed, much smaller number of iterations. We show that there is a potential of at least 1.5 % BLEU improvement that can be achieved by more elaborate length models, which yields itself as a good start for further research. Furthermore, we plan to extend the decoding strategies to work with beam search and verify our observations on further language pairs. In this section we present several ablation studies as well as a deeper look into the decoding to further investigate the MTM. Autoregressive models determine the output length by stopping the generation process once a special token that marks the end of the sentence (e.g. '</S>') is predicted. This approach is not compatible with the MTM decoding as target tokens for all positions are predicted in parallel. Therefore we assume a given output length I and a train length model p(I|f J 1) on the bilingual training data. In training, the true length is used, and in decoding, we choose arg max I p(I|f • Count-based Table. For the unseen source lengths in training, we assume I = J. • Poisson Distribution: a more smoothly parametrized model. For J's not appearing in the training data, we back off to a global distribution with the parameter λ that is learned via maximum likelihood overall source/target length pairs in the training data, i.e. • Recurrent Neural Network (RNN): We take the last hidden state as the input to a target length classifier, i.e. a linear projection with a softmax layer over the possible target lengths I ∈ {1, 2, ..., 200}. • Bidirectional RNN: a variation of the above which employs a BiLSTM and uses the last hidden state of the forward and the backward LSTM for length prediction. 
Table 2 verifies that the translation performance depends highly on the length model. We also report the averaged absolute difference of the predicted length and the reference length. The simplest count-based model shows the worst performance, where the target length is merely determined by a frequency table. The Poisson distribution models the length more systematically, giving +1.1% BLEU against the count-based model. RNN models consider not only the source length but also the whole sentence semantics, and slightly outperform the Poisson distribution. The bidirectional RNN model predicts the target length even better, but the translation performance only improves marginally. Note that using the Reference length improves even further by +1.6% BLEU over our strongest system, which shows that a good length model is a crucial component of the MTM system. As discussed earlier, the MTM architecture is equivalent to a traditional transformer encoder. Nevertheless, the way it is applied to the task at hand differs very much, even compared to an MLM. Thus it was crucial to do a thorough hyperparameter search to obtain an optimal model performance. The baseline MTM configuration we present here is already the product of many preliminary experiments, setting the number of attention heads h = 16, dropout P drop = 0.1, and the learning rate reduction scheme. The ablation study presented in table 3 highlights the importance of both the number of attention heads and especially dropout. It also shows that it was crucial to increase the model depth N compared to the standard transformer encoder, by matching the total number of layers N = 12 as they are used in an encoder-decoder model. In this section, we show a detailed derivation of the decoding process which justifies our iterative optimization procedure and modularizes the unmasking procedure to apply the application of various decoding strategies. Assuming we have a hypothesized target lengthÎ, the goal is to find an optimal target sequenceÊ given lengthÎ and source sentence F: The MTM decoding to find such a sequence is performed in T iterations, whose intermediate hypotheses are introduced as latent variables E,..., E (T −1): = arg max where the last iteration should provide the final prediction, i.e. E (T):= E. Applying the chain rule and a first-order Markov assumption on E (τ) yields: = arg max with E 0:= </M>,..., </M> a sequence of lengthÎ. In a next step, we approximate the sum by a maximization and subsequently apply a logarithm to get: To simplify further derivations, we introduce the score function Q here and do another approximation by considering only the score Q from a single maximum timestep instead of the full sum over τ = 1,..., T. ≈ arg max Even with this approximation, we are still trying to find a value for each E (τ) that optimizes the score of another iterationτ via the connection of dependencies in Q. As this is impractical to compute, we alleviate the problem by focusing on a step-wise maximization. 
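The displayed equations of this derivation did not survive extraction, so the LaTeX block below gives one consistent way to typeset the chain of steps described in words above (our reconstruction from the surrounding prose, not the authors' exact equations): introduce the intermediate hypotheses as latent variables, apply the chain rule with a first-order Markov assumption, replace the sum by a maximization, take logarithms, and finally keep only the score of a single maximum iteration.

```latex
\begin{align}
\hat{E}(F,\hat{I})
  &= \operatorname*{arg\,max}_{E^{(T)}} \; p\big(E^{(T)} \mid F, \hat{I}\big) \\
  &= \operatorname*{arg\,max}_{E^{(T)}} \sum_{E^{(1)},\dots,E^{(T-1)}}
     \prod_{\tau=1}^{T} p\big(E^{(\tau)} \mid E^{(\tau-1)}, F, \hat{I}\big),
     \quad E^{(0)} := \texttt{</M>}, \dots, \texttt{</M>} \\
  &\approx \operatorname*{arg\,max}_{E^{(T)}} \; \max_{E^{(1)},\dots,E^{(T-1)}}
     \sum_{\tau=1}^{T} \log p\big(E^{(\tau)} \mid E^{(\tau-1)}, F, \hat{I}\big) \\
  &\approx \operatorname*{arg\,max}_{E^{(T)}} \; \max_{E^{(1)},\dots,E^{(T-1)}} \; \max_{\tau}
     \; Q\big(E^{(\tau)} \mid E^{(\tau-1)}, F, \hat{I}\big),
     \quad \text{with } Q := \log p
\end{align}
```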
For this we define the sequence Ê^(τ) (with Ê^(0) := E^(0)) as the best hypothesis of iteration τ given Ê^(τ−1), i.e.: Ê^(τ) := arg max_{E^(τ)} Q(E^(τ) | Ê^(τ−1), F, Î). If we use Ê^(τ−1) instead of E^(τ−1) as the dependency in Q, we restrict the optimization to maximizing each step independently, given its predecessor's optimum. Ideally, this iterative procedure should improve the score Q in each iteration, i.e.: Q(Ê^(τ+1) | Ê^(τ), F, Î) ≥ Q(Ê^(τ) | Ê^(τ−1), F, Î) for all τ = 1, ..., T − 1. While this inequality does not hold in the general case, we observe empirically that the statement is true on average (see Figure 3). This means that the maximum score is obtained in the last iteration T: Ê(F, Î) ≈ arg max_{E^(T)} max_{τ=1,...,T} Q(E^(τ) | Ê^(τ−1), F, Î) (with E^(τ) = Ê^(τ) for τ < T), which can be simplified to arg max_{E^(T)} Q(E^(T) | Ê^(T−1), F, Î) = Ê^(T). For completeness, we report the strongest result for each decoding strategy from Figure 4 in Table 4. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HygaSxHYvH | We use a transformer encoder to do translation by training it in the style of a masked translation model. |
We present local ensembles, a method for detecting extrapolation at test time in a pre-trained model. We focus on underdetermination as a key component of extrapolation: we aim to detect when many possible predictions are consistent with the training data and model class. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class. We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is extrapolating on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning. As machine learning is deployed in increasingly vital areas, there is increasing demand for metrics that draw attention to potentially unreliable predictions. One important source of unreliability is extrapolation. Extrapolation can be formalized in a number of ways: it can refer to making predictions on inputs outside the support of the training data, making predictions with high Bayesian or Frequentist uncertainty, or making predictions that depend strongly on arbitrary choices outside of the learning problem specification (e.g., a random seed). In this paper, we develop a method for detecting this last form of extrapolation. Specifically, we say that a trained model is extrapolating on a test input if the prediction at this input is underdetermined -meaning that many different predictions are all equally consistent with the constraints posed by the training data and the learning problem specification (i.e., the model architecture and the loss function). Underdetermination is just one form of extrapolation, but it is particularly relevant in the context of overparameterized model classes (e.g. deep neural networks). Recently, simple (but computationally expensive) ensembling methods , which train many models on the same data from different random seeds, have proven highly effective at uncertainty quantification tasks . This suggests that underdetermination is a key threat to reliability in deep learning, and motivates flexible methods that can detect underdetermined predictions cheaply. With this motivation, we present local ensembles, a post-hoc method for measuring the extent to which a pre-trained model's prediction is underdetermined for a particular test input. Given a trained model, our method returns an extrapolation score that measures the variability of test predictions across a local ensemble, i.e. a set of local perturbations of the trained model parameters that fit the training data equally well. Local ensembles are a computationally cheap, post-hoc alternative to fully trained ensembles, and do not require special training procedures of approximate ensembling methods that measure related, but distinct, notions of uncertainty . Local ensembles also address a gap in approximate methods for estimating prediction uncertainty. Specifically, whereas exact Bayesian or Frequentist uncertainty includes underdetermination as one component, approximate methods such as Laplace approximations or influence function-based methods break down when underdetermination is present. In contrast, our method leverages the pathology that makes these methods struggle (an ill-conditioned Hessian). 
Our contributions in this paper are as follows: • We present local ensembles, a test-time method for detecting underdetermination-based extrapolation in overparameterized models. • We demonstrate theoretically that our method approximates the variance of a trained ensemble with local second-order information. • We give a practical method for tractably approximating this quantity, which is simpler and cheaper than alternative second-order reliability methods. • Through experiments aimed at testing underdetermination, we show our method approximates the behavior of trained ensembles, and can detect extrapolation in a range of scenarios. 2.1 SETUP Let z = (x, y) be an example input-output pair, where x is a vector of features and y is a label. We define a model in terms of a loss function L with parameters θ as a sum over training examples, where is an example-wise loss (e.g., mean-squared error or cross entropy). Let θ be the parameters of the trained model, obtained by, e.g., minimizing the loss over this dataset, i.e., θ = arg min θ L(θ). We write the prediction function given by parameters θ at an input x asŷ(x, θ). We consider the problem of auditing a trained model, where unlabeled test points x arrive one at a time in a stream, and we wish to assess extrapolation on a point-by-point basis. Figure 1: In this quadratic bowl, arrows denote the small eigendirection, where predictions are slow to change. We argue this direction is key to extrapolation. In this section, we introduce our local ensemble extrapolation score E m (x) for an unlabeled test point x (significance of m explained below). The score is designed to measure the variability that would be induced by randomly choosing predictions from an ensemble of models with similar training loss. Our score has a simple form: it is the norm of the prediction gradient g θ (x):= ∇ θŷ (x, θ) multiplied by a matrix of Hessian eigenvectors spanning a subspace of low curvature U m (defined in more detail below). Here, we show that this score is proportional to the standard deviation of predictions across a local ensemble of models with near-identical training loss, and demonstrate that this approximation holds in practice. Our derivation proceeds in two steps. First, we define a local ensemble of models with similar training loss, then we state the relationship between our extrapolation score and the variability of predictions within this ensemble. The spectral decomposition of the Hessian H θ plays a key role in our derivation. Let where U is a square matrix whose columns are the orthonormal eigenvectors of H θ, written (ξ, · · ·, ξ (p) ), and Λ is a square, diagonal matrix with the eigenvalues of H θ, written (λ, · · ·, λ (p) ), along its diagonal. As a convention, we index the eigenvectors and eigenvalues in decreasing order of the eigenvalue magnitude. To construct a local ensemble of loss-preserving models, we exploit the fact that eigenvectors with large corresponding eigenvalues represent directions of high curvature, whereas eigenvectors with small corresponding eigenvalues represent directions of low curvature. In particular, under the assumption that the model has been trained to a local minimum or saddle point, parameter perturbations in flat directions (those corresponding to small eigenvalues λ (j) ) do not change the training loss substantially (see Fig. 1). 
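Several displayed equations in this section were lost in extraction, so the LaTeX block below restates the quantities just introduced (training loss, Hessian eigendecomposition, and the extrapolation score, anticipating the definition of U_m spelled out below). It is a reconstruction from the surrounding prose rather than a copy of the original equations.

```latex
\begin{align}
L(\theta) &= \sum_{i=1}^{n} \ell(z_i, \theta), \qquad
g_\theta(x) := \nabla_\theta\, \hat{y}(x, \theta), \qquad
H_\theta := \nabla^2_\theta\, L(\theta) = U \Lambda U^\top \\
U &= \big(\xi^{(1)}, \dots, \xi^{(p)}\big), \qquad
\Lambda = \operatorname{diag}\big(\lambda^{(1)}, \dots, \lambda^{(p)}\big), \qquad
|\lambda^{(1)}| \ge \dots \ge |\lambda^{(p)}| \\
E_m(x) &:= \big\lVert U_m^\top\, g_\theta(x) \big\rVert_2
  = \Big(\sum_{j > m} \big(\xi^{(j)\top} g_\theta(x)\big)^2\Big)^{1/2},
\qquad U_m := \big(\xi^{(m+1)}, \dots, \xi^{(p)}\big)
\end{align}
```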
1 We characterize this subspace by the span of eigenvectors with 1 To see this, consider a second-order Taylor approximation to the training loss when the parameters are Note that the corresponding small eigenvalues. Formally, let m be the eigenvalue index such that the eigenvalues {λ (j): j > m}, are sufficiently small to be considered "flat". 2 We call the subspace spanned by {ξ (j): j ∈ σ} the ensemble subspace. Parameter perturbations in the ensemble subspace generate an ensemble of models with near-identical training loss. Our score E m (x) characterizes the variance of test predictions with respect to random parameter perturbations ∆ θ that occur in the ensemble subspace. We now show that our extrapolation score E m (x) is, to the first order, proportional to the standard deviation of predictions at a point x. Proposition 1. Let ∆ θ be a random parameter perturbation with mean zero and covariance proportional to the identity · I, projected into the ensemble subspace spanned by {ξ (j): j > m}. Let P ∆ (x) be the linearized change in a test prediction at x induced by perturbing the learned parameters θ by ∆ θ: Proof. First, we characterize the distribution of the loss-preserving parameter perturbation ∆ θ. Let U m be the matrix whose columns {ξ (j): j > m} span the ensemble subspace. Then U m U m is a projection matrix that projects vectors into the ensemble subspace, so the projected perturbation ∆ θ has covariance · U m U m. From this, we derive the variance of the linearized change in prediction We test this hypothesized relationship on several tabular datasets. We train an ensemble of twenty neural networks using the same architecture and training set, varying only the random seed. Then, for each model in the ensemble, we calculate our local ensemble extrapolation score E m (x) for each input x in the test set (see Sec. 3 for details). For each x, we compare, across the ensemble, the mean value of E m (x) to the standard deviationŷ(x). In Fig. 2, we plot these two quantities against each other for one of the datasets, finding a nearly linear relationship. On each dataset, we found a similar, significantly linear relationship (see Table 1 and Appendix A). We note the relationship is weaker with the Diabetes dataset; the standard deviations of these ensembles are an order of magnitude higher than the other datasets, indicating much noisier data. Finally, we note that we can obtain similar for the standard deviation of the loss at test points if we redefine g as the loss gradient rather than the prediction gradient. We now discuss a practical method for computing our extrapolation scores. The key operation is constructing the set of eigenvectors that span a suitably loss-preserving ensemble subspace (i.e. have sufficiently small corresponding eigenvalues). Because eigenvectors corresponding to large eigenvalues are easier to estimate, we construct this subspace by finding the top m eigenvectors and defining the ensemble subspace as their orthogonal complement. Our method can be implemented with any algorithm that returns the top m eigenvectors. See below for discussion on some tradeoffs in the choice of m, as well as the algorithm we choose (the Lanczos iteration ). first-order term drops out because we assume θ lies at a local minimum or saddle point. Perturbations ∆ θ that lie in the subspace spanned by eigenvectors ξ (j) whose corresponding eigenvalues λ (j) are close to zero contribute negligibly to this sum. 
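Proposition 1 can also be checked numerically. The NumPy sketch below is illustrative only: the symmetric stand-in Hessian, the cut-off m, and the perturbation scale eps are arbitrary choices of ours. It samples perturbations with covariance proportional to the identity, projects them into the small-eigenvalue subspace, and compares the empirical standard deviation of the linearized prediction change with sqrt(eps) times E_m(x).

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, eps = 20, 5, 1e-3

# Stand-in Hessian (symmetric) and prediction gradient at some test point x.
A = rng.normal(size=(p, p)); H = A + A.T
g = rng.normal(size=p)

lam, U = np.linalg.eigh(H)                      # eigh returns eigenvalues in ascending order
order = np.argsort(-np.abs(lam))                # sort by decreasing eigenvalue magnitude
U = U[:, order]
U_m = U[:, m:]                                  # ensemble subspace: the p - m flattest directions

# Monte Carlo estimate of the std of the linearized prediction change g . Delta.
Delta = rng.normal(scale=np.sqrt(eps), size=(100000, p)) @ (U_m @ U_m.T)
print("empirical std :", (Delta @ g).std())
print("sqrt(eps)*E_m :", np.sqrt(eps) * np.linalg.norm(U_m.T @ g))
```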
These eigenvectors define an m-dimensional subspace which is the orthogonal complement to the ensemble subspace. We use these eigenvectors to construct a matrix that projects gradients into the ensemble subspace. Specifically, let U m ⊥ be the matrix whose columns are these large-eigenvalue eigenvectors {ξ (j): j ≤ m}. Then U m ⊥ U m ⊥ is the projection matrix that projects vectors into the "large eigenvalue" subspace, and I − U m ⊥ U m ⊥ projects into its complement: the ensemble subspace. Now, for any test input x, we take the norm of this projected gradient to compute our score The success of this approach depends on the choice of m. Specifically, the extrapolation score E m (x) is the most sensitive to underdetermination in the region of the trained parameters if we set m to be the smallest index for which the training loss is relatively flat in the implied ensemble subspace. If m is set too low, the ensemble subspace will include well-constrained directions, and E m (x) will over-estimate the prediction's sensitivity to loss-preserving perturbations. If m is set too high, the ensemble subspace will omit some under-constrained directions, and E m (x) will be less sensitive. For models where all parameters are well-constrained by the training data, a suitable m may not exist. This will usually not be the case for deep neural network models, which are known to have very ill-conditioned Hessians (see, e.g.,). Whether a direction is "well-constrained" is ultimately a judgment call for the user. One potential heuristic is to consider a small parameter perturbation of a fixed norm in the ensemble subspace, and to set a tolerance for how much that perturbation can change the training loss; for small perturbations, the change in loss is a linear function of the curvature in the direction of the perturbation, which is upper bounded by λ (m). We use the Lanczos iteration to estimate the top m eigenvectors, which presents a number of practical advantages for usage in our scenario. Firstly, it performs well under early stopping, returning good estimates of the top m eigenvectors after m iterations. Secondly, we can cache intermediate steps, meaning that computing the m + 1-th eigenvector is fast once we have computed the first m. Thirdly, it requires only implicit access to the Hessian through a function which applies matrix multiplication, meaning we can take advantage of efficient Hessian-vector product methods . Finally, the Lanczos iteration is simple -it can be implemented in less than 20 lines of Python code (see Appendix B.1). It contains only one hyperparameter, the stopping value m. Fortunately, tuning this parameter is efficient -given a maximum value M, we can try many values m < M at once, by estimating M eigenvectors and then calculating E m by using the first m eigenvectors. The main constraint of our method is space rather than time -while estimating the first m eigenvectors enables easy caching for later use, it may be difficult to work with these eigenvectors in memory as m and model size p increase. This tradeoff informed our choice of m in this paper; we note in some cases that increasing m further could have improved performance (see Appendix E). This suggests that further work on techniques for mitigating this tradeoff, e.g. online learning of sparse representations , could improve the performance of our method. See Appendix B for more details on the Lanczos iteration. 
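Since the eigenvector estimation and the projection are the computational core of the method, the sketch below gives an illustrative NumPy version of both, in the spirit of the short implementation the paper refers to in Appendix B.1 (Figure 9) but written by us under stated assumptions: `hvp` is any function computing Hessian-vector products (in practice on minibatches), `num_iters` plays the role of the maximum M, and the returned eigenvector estimates can be cached and reused for every test input.

```python
import numpy as np

def lanczos(hvp, dim, num_iters, rng=None):
    """Estimate leading eigenpairs of a symmetric operator accessed only via v -> H @ v."""
    rng = rng or np.random.default_rng(0)
    Q = np.zeros((num_iters, dim))
    alpha, beta = np.zeros(num_iters), np.zeros(num_iters)
    q = rng.normal(size=dim); q /= np.linalg.norm(q)
    for i in range(num_iters):
        Q[i] = q
        w = hvp(q)
        alpha[i] = q @ w
        w -= alpha[i] * q
        if i > 0:
            w -= beta[i - 1] * Q[i - 1]
        for _ in range(2):                        # two-step Gram-Schmidt reorthogonalization
            w -= Q[: i + 1].T @ (Q[: i + 1] @ w)
        beta[i] = np.linalg.norm(w)
        if beta[i] < 1e-10:                       # invariant subspace found early
            break
        q = w / beta[i]
    k = i + 1
    T = np.diag(alpha[:k]) + np.diag(beta[: k - 1], 1) + np.diag(beta[: k - 1], -1)
    evals, evecs = np.linalg.eigh(T)              # eigendecomposition of the small tridiagonal matrix
    ritz_vectors = Q[:k].T @ evecs                # map back to parameter space
    return evals[::-1], ritz_vectors[:, ::-1]     # largest eigenvalue first

def extrapolation_score(grad, U_top):
    """E_m(x): norm of the gradient component outside the span of the top-m eigenvectors."""
    return np.linalg.norm(grad - U_top @ (U_top.T @ grad))

# Toy demonstration on an explicit symmetric "Hessian"; in practice hvp would be a
# minibatch Hessian-vector product and U_top would be computed once and cached.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200)); H = A + A.T
evals, vecs = lanczos(lambda v: H @ v, dim=200, num_iters=30, rng=rng)
U_top = vecs[:, :20]                              # keep the estimates with the largest eigenvalues
print(evals[:3])
print(extrapolation_score(rng.normal(size=200), U_top))
```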
4 RELATED WORK It is instructive to compare our extrapolation score to two other approximate reliability quantification methods that are aimed at Bayesian and Frequentist notions of extrapolation, respectively. Like our extrapolation score, both of these methods make use of local information in the Hessian to make an inference about the variance of a prediction. First, consider the Laplace approximation of the posterior predictive variance. This metric is derived by interpreting the loss function as being equivalent to a Bayesian log-posterior distribution over the model parameters θ, and approximating it with a Gaussian. Specifically, (see, e.g.,) Second, consider scores such as RUE (Resampling Under Uncertainty) designed to approximate the variability of predictions by resampling the training data . These methods approximate the change in trained parameter values induced by perturbing the training data via influence functions . Specifically, the gradient of the parameters with respect to the weight of a given training example z i is given by combine this influence function with a specific random distribution of weights to approximate the variance of predictions under bootstrap resampling; other similar formulations are possible, sometimes with theoretical guarantees . Importantly, both of these methods work well when model parameters are well-constrained by the training data, but they struggle when predictions are (close to) underdetermined. This is because, in the presence of underdetermination, the Hessian becomes ill-conditioned. Practical advice for dealing with this ill-conditioning is available , but we note that this not merely a numerical pathology; by our argument above, a poorly conditioned Hessian is a clear signal of extrapolation. In contrast to these methods, our method focuses specifically on prediction variability induced by underconstrained parameters. Our extrapolation score incorporates only those terms with small eigenvalues, and removes the inverse eigenvalue weights that make inverse-Hessian methods break down. This is clear from its summation representation: Our method also has computational advantages over approaches that rely on inverting the Hessian. Firstly, implicitly inverting the Hessian is a complex and costly process -by finding only the important components of the projection explicitly, our method is simpler and more efficient. Furthermore, we only need to find these components once; we can reuse them for future test inputs. This type of caching is not possible with methods which require us to calculate the inverse Hessian-vector product for each new test input (e.g. conjugate gradient descent). Some recent works explore the relationship between test points, the learned model, and the training set. Several papers examine reliability criteria that are based on distance in some space: within/betweengroup distances , a pre-specified kernel in a learned embedding space , or the activation space of a neural network . We implement some nearest-neighbor baselines inspired by this work in Sec. 5. Additionally, a range of methods exist for related tasks, such as OOD detection (; ; Schölkopf et al., 2001) and calibration . Some work using generative models for OOD detection uses related second-order analysis . A line of work explores the benefits of training ensemble methods explicitly, discussed in detail in. These methods have been discussed for usage in some of the applications we present in Sec. 
5, including uncertainty detection , active learning and OOD detection . A number of approximate ensembling methods have also been proposed. For example, MC-Dropout and Bayes by Backprop are approximate Bayesian model averaging methods that leverage special training procedures to represent an ensemble of prediction functions. These target a distinct notion of uncertainty from our loss-preserving ensembles (see Appendix F). Finally, a line of work on "Rashomon sets" explores loss-preserving ensembles more formally in simpler model classes . In this section, we give evidence that local ensembles can detect extrapolation due to underdetermination in trained models. In order to explicitly evaluate underdetermination, we present a range of experiments where a pre-trained model has a "blind spot", and evaluate its ability to detect when an input is in that blind spot. We probe our method's ability to detect a range of extrapolation, exploring cases where the blind spot is: 1. easily visualized, 2. well-defined by the feature distribution, 3. well-defined by a latent distribution, and 4. unknown, but where we can evaluate our model's detection performance through an auxiliary task. See Appendix D for experimental details. (Fig. 3a). We compute extrapolation scores (solid line), which correlate with the standard deviation of the ensemble (dotted line) (Fig. 3b). Our OOD performance achieves high AUC (solid line) even though some of our eigenvector estimates have low cosine similarity to ground truth (dotted line) (Fig. 3c). We note that the first few eigenvalues account for most of the variation, and that our estimates are accurate. We begin with an easily visualized toy experiment. In Fig. 3a, we show our data (y = sin 4x + N (0, 1 4)). We generate in-distribution training data from x ∈ [−1, 0] ∪, but at test time, we consider all x ∈ [−3, 4]. We train 20 neural networks with the same architecture (two hidden layers of three units each). As shown in Fig. 3a, the ensemble disagrees most on x < −1, x > 2. This means that we should most mistrust predictions from this model class on these extreme values, since there are many models within the class that perform equally well on the training data, but differ greatly on those inputs. We should also mistrust predictions from x ∈, although detecting this extrapolation may be harder since the ensemble agrees more strongly on these points. For each model in the ensemble, we test our method on an OOD task: can we flag test points which fall outside the training distribution? Since OOD examples may be uncommon in practice, we use AUC to measure our performance. We show that the extrapolation score is empirically related to the standard deviation of the ensemble's predictions at the input, which in turn is related to whether the input is OOD (Fig. 3b). Examining one model from this ensemble, we observe that by estimating only m = 2 eigenvectors, we achieve > 90% AUC (Fig. 3c). It turns out that m = 10 performs best on this task/model. As we complete more iterations (m > 10) we start finding smaller eigenvalues, which are more important to the ensemble subspace and whose eigenvector we do not wish to project out. We note our AUC improves even with some eigenvector estimates having low cosine similarity to the ground truth eigenvectors (the Lanczos iteration has some stochasticity due to minibatch estimationsee Appendix B for details). We hypothesize this robustness is because the ensemble subspace of this model class is relatively low-dimensional. 
Even if an estimated vector is noisy, the non-overlapping parts of the projection will likely be mostly perpendicular to the ensemble subspace, due to the properties of high-dimensional space. In this experiment, we create a blind spot in a given dataset by extending and manipulating the data's feature distribution. We induce a collinearity in feature space by generating new features which are a linear combination of two other randomly selected features in the training data. This means there is potential for underdetermination: multiple, equally good learnable relationships exist between those features and the target. However, at test time, we will sometimes sample these simulated features from their marginal distribution instead. This breaks the linear dependence, requiring extrapolation (the model is by definition underconstrained), without making the new data trivially out-of-distribution. We can make this extrapolation detection task easier by generating several features this way, or make it harder by adding some noise ∼ N (0, σ 2) to these features. We run this experiment on four tabular datasets, training a two hidden-layer neural network for each task. We compare to three nearest-neighbour baselines, where the metric is the distance (in some space) of the test point to its nearest neighbour by Euclidean distance in an in-distribution validation set. NN (Inputs) uses input space; NN (Reprs) uses hidden representation space, which is formed by concatenating all the activations of the network together (inspired by , who propose a similar method for adversarial robustness); and NN (Final Layer), uses just the final hidden layer of representations. We note that since our method is post-hoc and can be applied to any twice-differentiable pre-trained model, we do not compare to training-based methods e.g. those producing Bayesian predictive distributions . Our metric is AUC: we aim to assign higher scores to inputs which break the collinearity (where the feature is drawn from the marginal), than those which do not. In Figure 5, we show that local ensembles (LE) outperform the baselines for each number of extra simulated features, and that this performance is fairly robust to added noise. Here, we extend the experiment from Section 5.2 by inducing a blind spot in latent space. The rationale is similar: if two latent factors are strongly correlated at training time, the model may grow reliant on that correlation, and at test-time may be underdetermined if that correlation is broken. We use the CelebA dataset of celebrity faces, which annotates every image with 40 latent binary attributes describing the image (e.g. "brown hair"). To induce the blind spot, we choose two attributes: a label L and a spurious correlate C. We then create a training set where L and C are perfectly correlated: a point is only included in the training set if L = C. We train a convolutional neural network (CNN), with two convolutional layers and a fully-connected layer at the end, as a binary classifier to predict L. Then, we create a test set of held-out data where The test data where L = C is in our model's blind spot; these are the inputs for which we want to output high extrapolation scores. We show in Appendix E that the models dramatically fail to classify these inputs (L = C). We compare to four baseline extrapolation scores: the three nearest-neighbour methods described in Sec. 5.2, as well as MaxProb, where we use 1− the maximum outputted probability of the softmax. 
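The blind-spot construction for this latent-factor experiment reduces to a simple filter on the attribute annotations. The sketch below is our illustration of that split, not the authors' data pipeline; attribute names and variable names are placeholders.

```python
import numpy as np

def split_by_latent_agreement(labels_L, labels_C):
    """Return indices where the label L and the spurious correlate C agree / disagree.

    Training uses only the agreeing subset (L == C); at test time the disagreeing
    subset (L != C) is the model's blind spot and should receive high scores.
    """
    agree = np.flatnonzero(labels_L == labels_C)
    disagree = np.flatnonzero(labels_L != labels_C)
    return agree, disagree

L = np.array([1, 0, 1, 0, 1]); C = np.array([1, 0, 0, 0, 0])
print(split_by_latent_agreement(L, C))   # (array([0, 1, 3]), array([2, 4]))
```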
We test two values of L (Male and Attractive) and two values of C (Eyeglasses and WearingHat). We chose these specific values of L because they are difficult to predict and holistic i.e.and not localized to particular areas of image space. In Table 2, we present for each of the four L, C settings, showing both the loss gradient and the prediction gradient variant of local ensembles. Note that the loss gradient cannot be calculated at test time since we do not have labels available -instead, we calculate a separate extrapolation score using the gradient for the loss with respect to each possible label, and take the minimum. Our method achieves the best performance on most settings, and is competitive with the best baseline on each. However, the variation between the tasks is quite noteworthy. We note two patterns in particular. Firstly, we note the performance of MaxProb and the loss gradient variant of our method are quite correlated, which we hypothesize relates to ∇Ŷ. Additionally, the effect of increasing m is inconsistent between experiments: we discuss possible relationships to the eigenspectrum of the trained models. See Appendix E for a discussion on these patterns. Table 2: AUC for Latent Factors OOD detection task. Column heading denotes in-distribution definitions: labels are M (Male) and A (Attractive); spurious correlates are E (Eyeglasses) and H (Wearing Hat). Image is in-distribution iff label = spurious correlate. LE stands for local ensembles. Each Lanczos iteration uses 3000 eigenvectors. 500 examples from each test set are used. 95% CI is bolded. Finally, we consider the case where we know our model has blind spots, but do not know where they are. We use active learning to probe this situation. If underdetermination is an important factor in extrapolation, then local ensembles should be useful for selecting the most useful points to add to our training set. We use MNIST and FashionM-NIST for our active learning experiments. We begin the first round with a training set of twenty: two labelled data points from each of the ten classes. In each round, we train our model (a small CNN with two convolutional layers and a fully connected layer at the end) to a minimum validation loss using the current training set. After each round, we select ten new points from a randomly selected pool of 500 unlabelled points, and add those ten points and their labels to our training set. We compare local ensembles (selecting the points with the highest extrapolation scores using the loss-gradient variant) to a random baseline selection mechanism. In Fig. 6, we show that our method outperforms the baseline on both datasets, and this improvement increases in later rounds of active learning. We only used 10 eigenvectors in our Lanczos approximation, which we found to be a surprisingly effective approximation; we did not observe improvement with more eigenvectors. This experiment serves to emphasize the flexibility of our method: by detecting an underlying property of the model, we can use the method for a range of tasks (active learning as well as OOD detection). We present local ensembles, a post-hoc method for detecting extrapolation due to underdetermination in a trained model. Our method uses local second-order information to approximate the variance of an ensemble. We give a tractable implementation using the Lanczos iteration to estimate the largest eigenvectors of the Hessian, and demonstrate its practical flexibility and utility. 
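For the active-learning use case, the acquisition step amounts to ranking a pool of unlabelled points by their extrapolation scores. The sketch below is a simplified illustration using prediction gradients; the experiments above use the loss-gradient variant with a minimum over candidate labels, and `U_top` stands for cached top-eigenvector estimates.

```python
import numpy as np

def select_queries(pool_grads, U_top, k=10):
    """Pick the k pool points whose gradients have the largest component
    outside the span of the top-m Hessian eigenvectors."""
    resid = pool_grads - pool_grads @ U_top @ U_top.T        # project out well-constrained directions
    scores = np.linalg.norm(resid, axis=1)                   # local-ensemble extrapolation scores
    return np.argsort(scores)[-k:]                           # indices of the k highest-scoring points

rng = np.random.default_rng(0)
U_top, _ = np.linalg.qr(rng.normal(size=(50, 10)))           # stand-in eigenvector estimates
pool_grads = rng.normal(size=(500, 50))                      # gradients for a pool of 500 unlabelled points
print(select_queries(pool_grads, U_top))
```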
Although this method is not a full replacement for ensemble methods, which can characterize more complexity (e.g. multiple modes), we believe it fills an important role in isolating one component of prediction unreliability. In future work, we hope to scale these methods to larger models and to further explore the properties of different stopping points m. We also hope to explore applications in fairness and interpretability, where understanding model and training bias is of paramount importance. Bernhard Schölkopf, John C. Platt, John Shawe-Taylor, Alex J. Smola, and Robert C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13, 2001. The Lanczos iteration is a method for tridiagonalizing a Hermitian matrix. It can be thought of as a variant of power iteration, iteratively building a larger basis through repeatedly multiplying an initial vector by M, ensuring orthogonality at each step. Once M has been tridiagonalized, computing the final eigendecomposition is relatively simple: a number of specialized algorithms exist which are relatively fast (O(p^2)). The Lanczos iteration is simple to implement, but presents some challenges. The first challenge is numerical instability: when computed in floating-point arithmetic, the algorithm is no longer guaranteed to return a good approximation to the true eigenbasis, or even an orthogonal one. As such, the standard implementation of the Lanczos iteration is unstable, and can be inaccurate even on simple problems. Fortunately, solutions exist: a procedure known as two-step classical Gram-Schmidt orthogonalization, which involves ensuring twice that each new vector is linearly independent of the previous ones, is guaranteed to produce an orthogonal basis, with errors on the order of machine roundoff. A second potential challenge is presented by the stochasticity of minibatch computation. Since we access our Hessian H only through Hessian-vector products, we must use only minibatch computation at each stage. This means that each iteration will be stochastic, which will decrease the accuracy (but not the orthogonality) of the eigenvector estimates provided. However, in practice, we found that even fairly noisy estimates were nonetheless useful; see Sec. 5.1 for more discussion. The Lanczos algorithm is quite simple to implement. Figure 9 shows a short implementation using Python/NumPy. In Section 5.2, we present results for an experiment where we aim to detect broken collinearities in the feature space. In Figures 10, 11, and 12, we show results on three more tabular datasets. See the main text for more experimental details. Here we give more experimental details on the datasets and models used. We subtracted the mean and divided by the standard deviation for each feature (as calculated from the training set). Boston and Diabetes. These datasets were loaded from Scikit-Learn. Abalone. This dataset was downloaded from the UCI repository at http://archive.ics.uci.edu/ml/datasets/Abalone. We converted sex to a three-dimensional one-hot vector (M, F, I). WineQuality. This dataset was downloaded from the UCI repository at http://archive.ics.uci.edu/ml/datasets/Wine+Quality. We used both red and white wines. These datasets were loaded using Tensorflow Datasets (https://github.com/tensorflow/datasets). We divided the pixel values by 255 and, for MNIST and FashionMNIST, binarized by thresholding at 0.7. For the first experiment, we train a two-layer neural network with 3 hidden units in each layer and tanh units.
We train for 400 optimization steps using minibatch size 32. Our data is generated from y = sin(4x) + N(0, 1/4). We generate 200 training points, 100 test points, and 200 OOD points. We aggregate Y over a grid of 10 points from -1 to 1, with aggregation function min. We run the Lanczos algorithm until convergence. For the second experiment, we train a two-layer neural network with 5 hidden units in each layer and ReLU units. Our data is generated from y = βx^2 plus Gaussian noise. Our training set consists of x drawn uniformly from [−0.5, 0.5] and [2.5, 3.5]. However, at test time, we will consider x ∈ [−3, 6]. We generate 200 training points, 100 test points, and 200 OOD points. We aggregate Y over a grid of 5 points from -6 to 9, with aggregation function min. For each dataset we use the same setup. We use a two-layer MLP with ReLU activations and hidden layer sizes of 20 and 100. We trained all models with mean squared error loss. We use batch size 64, patience 100 and a 100-step running average window for estimating current performance. For the Lanczos iteration, we run up to 2000 iterations. We always report numbers from the final iteration run. For estimating the Hessian in the HVPs in the Lanczos iteration, we use batch size 32 and sample 5 minibatches. To pre-process the data, we first split randomly out 30% of the dataset as OOD. We choose 2 random features i, j and a number β drawn from a uniform distribution, and generate the new feature x̃. We also normalize this feature by its mean and standard deviation as calculated on the training set. We add random noise to the features after splitting into in-distribution and OOD, meaning we are not redrawing from the same marginal noise distribution. We use 1000 examples from in-distribution and OOD for testing. We use a CNN with ReLU hidden activations. We use two convolutional layers with 50 and 20 filters each and stride size 2 for a total of 1.37 million parameters. We trained all models with cross entropy loss. We use an extra dense layer on top with 30 units. We use batch size 32, patience 100 steps, and a 100-step running average window for estimating current performance. We sample the validation set randomly as 20% of the training set. For the Lanczos iteration, we run 3000 iterations. We always report numbers from the final iteration run. We use 500 examples from in-distribution and OOD for testing. For estimating the Hessian in the HVPs in the Lanczos iteration, we use batch size 16 and sample 5 minibatches. We use a CNN with ReLU hidden activations. We use two convolutional layers with 16 and 32 filters, stride size 5, and a dense layer on top with 64 units. We trained all models with mean squared error loss. We use batch size 32, patience 100 steps, and a 100-step running average window for estimating current performance. In Section 5.3, we discuss an experiment where we correlated a latent label L and confounder C attribute. Table 3 shows the in-distribution and out-of-distribution test error. Table 3: Error rate for in- and out-of-distribution test sets with the correlated latent factors setup. Column heading denotes in-distribution definitions: labels are M (Male) and A (Attractive); spurious correlates are E (Eyeglasses) and H (Wearing Hat). Image is in-distribution iff label == spurious correlate. These are drastically different, meaning that learning to detect this type of extrapolation is critical to maintain model performance. In Fig. 13 and 14, we show that the tasks present differing behaviours as more eigenvectors are estimated.
We observe that for the Male/Eyeglasses and Attractive/WearingHat tasks, we get improved performance with more eigenvectors, but for the others we do not necessarily see the same improvements. Interestingly, this upward trend occurs both times that our method achieves a statistically significant improvement over baselines. It is unclear why this occurs for some settings of the task and not others, but we hypothesize that this is a sign that the method is working more correctly in these settings. E.3 RELATIONSHIP BETWEEN LOSS GRADIENT AND MaxProb METHOD As discussed in Sec. 5.3, we have the relationship between the loss ℓ, the prediction Ŷ, and the parameters θ: ∇θ ℓ = ∇Ŷ ℓ · ∇θ Ŷ. Using min as an aggregation function, we find that min over Y ∈ {0, 1} of |∇Ŷ ℓ(Y, Ŷ)| has an inverted V-shape (Fig. 15). This is a similar shape to 1 − max(Ŷ, 1 − Ŷ), which is the metric implicitly measured by MaxProb. In Fig. 16, we examine the estimated eigenspectra of the four tasks we present in the correlated latent factors experiment, to see if we can detect a reason why performance might differ on these tasks. In Fig. 16a, we show that for the two tasks with the label Attractive, the eigenvalues are larger. These are also the tasks where the loss-gradient-based variant of local ensembles failed, indicating that that variant of the method may be worse at handling larger eigenvalues. In Fig. 16b, we show that for the two tasks where the local ensembles methods performed best (M/E, A/H, achieving statistically significant improvements over the baselines at a 95% confidence level, and also showing improvement as more eigenvectors were estimated), the most prominent negative eigenvalue is of relatively smaller magnitude compared to the most prominent positive eigenvalue. This could mean that the local ensembles method was less successful in the other tasks (M/H, A/E) simply because those models were not trained close enough to a convex minimum and still had fairly significant eigenvalues. Figure 16: We show the estimated eigenspectrum of the four CNNs we train on the correlated latent factors task. In Fig. 16a, we show the absolute estimated eigenvalues sorted by absolute value, on a log scale. In Fig. 16b, we show the estimated eigenvalues divided by the maximum estimated eigenvalues. We only show 3000 estimated eigenvalues because we ran the Lanczos iteration for only 3000 iterations, meaning we did not estimate the rest of the eigenspectrum. One of the strengths of our method is that it can be applied post-hoc, making it usable in a wide range of situations. Therefore, we choose to compare the local ensembles method to only baselines which can also be applied post-hoc. However, one might still be interested in how some of these methods compare against ours. One such method is the MC-Dropout method, which constructs an ensemble at test time from a model trained with dropout, by averaging over several random dropout inferences. This method is not considered post-hoc, since it is only applicable to methods which have been trained with dropout (and works best for models with dropout on each layer, rather than just a single layer). Both MC-dropout and local ensembles estimate the variance of an ensemble achieved through perturbations of a trained model. In MC-dropout, those perturbations take the form of stochastically setting some weights of the model to 0; in local ensembles, they are Gaussian perturbations projected into the ensemble subspace.
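To make the contrast between the two kinds of perturbation concrete, they can be sketched as follows. This is a simplified illustration treating the parameters as a single flat vector; U_m (the estimated top-m Hessian eigenvectors) and sigma (the noise scale) are inputs assumed by the sketch rather than fixed choices of either method.

    import numpy as np

    def dropout_perturbation(theta, p=0.1, rng=np.random):
        """MC-dropout-style perturbation: each weight is zeroed with probability p."""
        mask = (rng.rand(theta.shape[0]) > p).astype(theta.dtype)
        return theta * mask

    def local_ensemble_perturbation(theta, U_m, sigma, rng=np.random):
        """Local-ensembles perturbation: Gaussian noise of scale sigma, projected onto
        the ensemble subspace, i.e. the orthogonal complement of the span of the
        top-m Hessian eigenvectors U_m (shape (p, m))."""
        eps = sigma * rng.randn(theta.shape[0])
        eps_proj = eps - U_m @ (U_m.T @ eps)   # remove the top-m eigendirections
        return theta + eps_proj

The first perturbation moves the model along arbitrary axis-aligned directions, while the second only moves it within the directions the Hessian estimates to be flat, which is what the comparison below measures.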
However, despite these similarities, we provide empirical evidence here that our method is categorically different from MC-Dropout, by showing that our perturbations find a "flatter" (more loss-preserving) ensemble. This means that our method is more directly measuring underdetermination, since we are comparing nearby models which are equally justified by the training data. We use the four tabular datasets discussed in Sec 5.2. For each dataset, we train a neural network with dropout on each weight (as in MC-dropout), to minimize a mean squared error objective. We use a dropout parameter p = 0.1, as suggested in prior work. We then create an MC-dropout ensemble of size 50: that is, we randomly drop out each weight with probability 0.1, and repeat 50 times. We calculate the training loss for each model in the MC-dropout ensemble. To estimate how "flat" this ensemble is (i.e. how loss-preserving on the training set), we simply subtract the training loss of the full model (no weights dropped out) from each model in the dropout ensemble, and average these differences. We repeat this process 20 times, changing only the random seeds. We can now estimate a similar quantity (average increase in training loss across the ensemble) for our method. For each dataset, we train a neural network with the same architecture as in the dropout experiment. We then choose a standard deviation σ equal to the average magnitude of the perturbations created through MC-dropout on the corresponding task. This is to ensure a fair comparison: for each method, we are using perturbations on the same scale. We run the Lanczos iteration to convergence. We then compute the ensemble subspace for each Lanczos iteration m (recall that p − m is the dimensionality of the ensemble subspace, and m is the number of dimensions projected out). For each ensemble subspace of dimensionality p − m, we sample Gaussian noise from N(0, σ²), project it into the ensemble subspace, and add it to the model's trained parameters. This ensures that when projected into the ensemble subspace, the Gaussian perturbation has the same expected magnitude as the dropout perturbations for the corresponding task. Using these projected Gaussian perturbations, we create an ensemble of size 50 for each m, and calculate the training loss for each model in the local ensemble. We subtract the training loss of the original model (perturbation 0) from each model in the local ensemble, and average across the ensemble. We repeat this process 20 times, changing only the random seeds. In Figure 18, we show the results of this comparison for each of the four datasets. The decreasing curve shows that as we project out more eigenvectors (increasing m), the resulting ensemble subspace becomes flatter, i.e. perturbations in this subspace increase training loss by a smaller amount. The horizontal dotted line is the average training loss increase for a model in an MC-dropout ensemble. We see that this horizontal line is well above the descending curve representing local ensembles. Recall that we scaled the perturbations to be the same size for both MC-Dropout and all values of m for local ensembles. This means that, for perturbations of the same magnitude, for most values of m, the ensembles found by our method are much flatter than those found by MC-dropout. While MC-dropout ensembles may be useful for other tasks, this shows that our method is more directly measuring underdetermination in the model. Additionally, we run the MC dropout method on the simulated features task from Sec. 5.2.
We find (see Fig. 17) that it is unable to match the performance of our method, reinforcing the point that it is concerned mostly with other types of uncertainty. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJl6bANtwH | We present local ensembles, a method for detecting extrapolation in trained models, which approximates the variance of an ensemble using local second-order information.
Coalition operations are essential for responding to the increasing number of world-wide incidents that require large-scale humanitarian assistance. Many nations and non-governmental organizations regularly coordinate to address such problems but their cooperation is often impeded by limits on what information they are able to share. In this paper, we consider the use of an advanced cryptographic technique called secure multi-party computation to enable coalition members to achieve joint objectives while still meeting privacy requirements. Our particular focus is on a multi-nation aid delivery scheduling task that involves coordinating when and where various aid provider nations will deliver relief materials after the occurrence of a natural disaster. Even with the use of secure multi-party computation technology, information about private data can leak. We describe how the emerging field of quantitative information flow can be used to help data owners understand the extent to which private data might become vulnerable as the result of possible or actual scheduling operations, and to enable automated adjustments of the scheduling process to ensure privacy requirements are met. Coalition operations are becoming an increasing focus for many nations. The need for collaboration derives from increasing awareness of mutual interests among both allies and nations that traditionally have not worked closely together. For example, coalition operations for Humanitarian Assistance and Disaster Relief (HADR) have increased substantially in recent years in both numbers and scope. With the impact of global warming, it is anticipated that there will be even more large-scale humanitarian crises resulting from adverse weather-related events and sea-level rises. These coalitions often involve not just government and military organizations but also non-governmental organizations (NGOs) and commercial entities. A key challenge facing coalitions is how to collaborate without releasing information that could jeopardize national (or organizational) interests. Information security mechanisms for the military to date have focused on safeguarding information by limiting its access and use. This approach has led to a significant undersharing problem, which impedes effective joint operations. Recent work on defining access control mechanisms is useful for enabling selective sharing of information that is safe to release BID4. However, many coalition tasks require information that participants do not wish to make available to others. For example, consider the problem of scheduling the delivery of aid (food, water, medicine, fuel) to impacted nations after a natural disaster. Historically, international response has involved dozens of countries and NGOs, each with the desire to contribute in interconnected and potentially conflicting ways.
But the owners of this data may be unwilling to directly share it with coalition partners, for fear of revealing information that impacts national security (e.g., ship locations) or competitive advantages (e.g., a company's backlog of inventory). To address this problem of coalition collaboration without revealing private information, we exploit a cryptographic technology called secure multi-party computation (MPC) BID6. MPC protocols enable mutually distrusting parties to perform joint computations on private information while it remains encrypted. In our work, we use MPC to enable privacy-preserving computations over several types of coordination tasks related to scheduling aid delivery. While participants cannot directly learn others' private inputs from the process/protocol of a secure multi-party computation, they may be able to infer something about private data based on the result of the computation. For this reason, our privacy-aware scheduling solution makes use of a complementary technology called quantitative information flow (QIF) to measure the degree to which private data used in the scheduling task might leak to other participants. Insights gained from this analysis are folded back into the scheduling process via an adaptive workflow capability, to ensure that the vulnerability of private data stays within acceptable thresholds. The paper is organized as follows. We begin by summarizing our aid distribution task, including a description of the core scheduling problem and the data (private and non-private) belonging to the various coalition members. We then provide a short overview of secure multi-party computation followed by a description of how we employ it within a broader adaptive workflow framework to address the aid delivery scheduling task. Next, we describe our use of quantitative information flow to assess the vulnerability of private information as the scheduling progresses and to adapt the scheduling workflow in light of those assessments to ensure adherence to predefined privacy requirements. We conclude by identifying directions for future work and summarizing our contributions. The aid delivery problem that we consider involves two categories of participants: • N Aid Provider nations, each of which has some number S_n of ships with aid (e.g., food, medicine) to be delivered; • a single Aid Recipient nation that has P ports to which aid can be delivered. Collectively, these participants need to formulate a schedule for delivering aid on board the Aid Provider ships to ports belonging to the Aid Recipient, ensuring delivery prior to a specified deadline. As summarized in FIG0, a solution involves assigning to each Aid Provider ship: a port to which its aid should be delivered, a berth at the port, and a docking time. These assignments are subject to various constraints on ship and port characteristics to ensure physical compatibility between the ship and the port/berth, schedule availability of the berth, and the ability of the ship to complete the docking before the assigned deadline. We further seek an assignment that optimizes according to the following load-balancing criteria. Optimization Criteria: Load-balancing across ports. Let Assigned(Port_i) designate the number of aid provider ships assigned to aid recipient port Port_i.
An optimal solution is a set of assignments that minimizes the imbalance of Assigned(Port_i) across the Aid Recipient's ports. Generating a solution to this scheduling problem requires information from the various parties about their assets (ships, ports), some of which they would prefer not to share with other coalition members. FIG0 summarizes this private data. In both figures the problem data is color-coded, with blue used for private data belonging to an Aid Provider nation and green for private data belonging to the Aid Recipient nation. As the coloring clearly shows, determining a solution requires combining private information from multiple parties, which motivates our use of secure multi-party computation within our scheduling algorithm. To address privacy concerns in our aid delivery use case, we leverage MPC. MPC protocols compute a function and reveal the output of the computation without revealing private inputs to any other participant. Early MPC work was based on two-party problems BID6, and subsequent work produced general approaches for any number of participants to participate in shared computation without learning private information BID0. Most MPC approaches involve modeling the desired algorithm as a boolean circuit (fixed-time algorithms expressed as a combination of AND, OR, and NOT logic gates). However, circuit-based approaches may require unrolling loops, introducing potential performance implications. Alternate approaches based on Oblivious RAM (Goldreich and Ostrovsky 1996) do not require this full unrolling though have other performance trade-offs. We use a circuit-based MPC approach in our scenario, though our privacy analysis and adaptive scheduling do not depend on the specifics of the underlying MPC approach. MPC has proven to be a powerful tool to enable a wide range of use cases. For example, private keys can be split among several hosts in order to reduce the number of hosts that must be compromised in order to obtain the key. A computation that requires the decryption operation can then be done under MPC between the hosts, ensuring that the whole key is never revealed in the clear BID0. MPC can also be used between companies in collaborative supply-chain management, where the reluctance to share sensitive data (such as costing and capacities) can lead to sub-optimal production plans. The use of MPC allows for collaborative supply-chain planning without compromising sensitive data BID0. In our setting, each data owner provides their private input to an MPC circuit designed such that their inputs are not revealed to other participants. Then participants follow the protocol for the multi-party computation, which allows them to collectively compute the solution to the planning and scheduling optimization problem. In the end, only the final result (the schedule) is revealed to the relevant parties. Creating a single MPC circuit to solve the full optimized scheduling problem in a general way is not practical at this point in time. For this reason, our solution approach consists of a workflow that decomposes the scheduling task into the following sequence of steps. Step A. Collect relevant inputs: • Aid Provider: determine ports that can be reached by each ship before the deadline D; • Aid Recipient: select ports to which aid will be accepted. Step B. Determine ports that satisfy joint ship/harbor physical compatibility constraints: • Aid Provider ship draft is compatible with the harbor depth in the port; • Aid Recipient port has sufficient offload capacity. Step C.
Determine all berths within each feasible port from Step B that satisfy joint ship/berth compatibility and berth availability constraints: • Aid Provider ship fits within the berth. For a Ship to be scheduled to dock at a given Docking Time at a Berth belonging to a Port, the following must hold: for the Port, ship-earliest-arrival ≤ deadline; for the Berth in the Port, ship-length ≤ berth-length; for the Docking Time at the Berth, ship-earliest-arrival ≤ docking-time ≤ deadline, and berth-availability (pairs of berth-occupied-start and berth-occupied-end) holds for the berth at the docking-time. Step D. Schedule the aid ships across possible ship-port-berth-arrival-time options (from Step C) in accordance with the optimization goal of load-balancing across ports. Step A is a simple information gathering/filtering task performed locally by individual participants. Step B requires computation over private inputs from both the Aid Provider and the Aid Recipient. Steps C and D combine private inputs with intermediate results from earlier steps. For these reasons, we use three secure multi-party circuits within our workflow: • a two-party circuit for the physical compatibility test in Step B (Aid Provider and Aid Recipient); • a two-party circuit for the viability test in Step C (Aid Provider and Aid Recipient); • an N-party circuit for optimizing berth allocation in Step D (all N Aid Providers). The circuits for Steps B and C are straightforward but must be run for each possible ship/port (Step B) and ship/berth (Step C) combination. The circuit for Step D is more complex. Because it is not possible to implement a truly optimized solution in the MPC circuit model, we instead opted for the following greedy approach to load balancing across ports. 1. Initialization: set the list of ship-port-berth solutions to be empty and the working set of options to be the set of ship-port-berth-unload-time entries from Step C. 2. Select a port P with the fewest number of assignments and for which there remain options. 3. Select the earliest ship-port-berth-unload-time entry for P and add it to the solutions list. 4. Remove from the set of options all entries for the ship and berth selected in Step 3. 5. Repeat steps 2-4 until no more ship-port-berth solutions remain. 6. Output the solution list (a sketch of this procedure in ordinary code is given at the end of this section). We use the Lumen agent technology (described in (Myers et al. 2011)) to provide an adaptive workflow capability for executing this scheduling process. Although we depict only one workflow here, more generally our adaptive workflow capability draws from a library of alternative approaches to a range of aid distribution scheduling problems, enabling solution approaches to be matched to the specifics of a given situation. For example, we have defined alternative workflows that embed different scheduling and optimization strategies and that make use of secure multi-party computation in different ways as a means of investigating tradeoffs between efficiency and privacy.
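The greedy Step D procedure referenced above can be sketched in ordinary (non-MPC) Python as follows; the tuple layout of the Step C options, the per-port naming of berths, and the arbitrary tie-breaking are illustrative assumptions, and the deployed version evaluates the same logic inside the MPC circuit over encrypted inputs.

    from collections import defaultdict

    def greedy_load_balanced_schedule(options):
        """Greedy port load-balancing over feasible (ship, port, berth, unload_time)
        options produced by Step C."""
        solutions = []
        assigned = defaultdict(int)          # port -> number of ships assigned so far
        remaining = list(options)
        while remaining:
            # Step 2: port with the fewest assignments that still has options.
            open_ports = {opt[1] for opt in remaining}
            port = min(open_ports, key=lambda p: assigned[p])
            # Step 3: earliest unload time among this port's remaining options.
            ship, _, berth, t = min((o for o in remaining if o[1] == port),
                                    key=lambda o: o[3])
            solutions.append((ship, port, berth, t))
            assigned[port] += 1
            # Step 4: the chosen ship and the chosen berth are no longer available.
            remaining = [o for o in remaining
                         if o[0] != ship and not (o[1] == port and o[2] == berth)]
        return solutions

Because each pass assigns exactly one ship to the currently least-loaded port, ports fill up at roughly the same rate, which is the load-balancing criterion defined earlier.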
For example, if Alice and Bob use MPC to instead compute the arithmetic mean of their net worths, Alice could use the input she provided and the ing mean from the computation to solve for Bob's exact net worth. Alice may not be able to extract Bob's input via examination of the MPC circuit itself, but MPC cannot change the relationship between inputs and output that exists outside the circuit. In our Aid Delivery scenario, private "inputs" may involve sensitive capability and operational details. As the scenarios increase in scope and complexity, it becomes difficult to reason informally about what an adversary may be able to infer.1 We use Quantitative Information Flow (QIF) to address these concerns by characterizing an adversary's ability to make these inferences based on information-theoretic bounds on the relationship between and private inputs. A QIF analysis begins by transforming a program (such as our aid distribution workflow) into a model that represents the relationship between the (private) inputs and outputs. Specifically, these models are based on informationtheoretic channels, which we use to construct a mapping from prior 2 distributions on private data to distributions on posterior distributions BID0 Smith 2009 ).We use these models to support "predictive-mode" adaptive workflows in which we reason about what an adversary could learn from any possible set of private inputs, as well as "posterior-mode" in which we determine how much an adversary may learn from some specific concrete . We can construct games in which we quantify the adversary's inference capability by measuring how their chances of "winning" (i.e., guessing a piece of private data) improve given additional information. We call the probability that an adversary can with one chance correctly guess a piece of private data the vulnerability of that variable. In the "prior game," the defender picks a private input value by sampling from the prior distribution of possible input values. The adversary makes their best guess of the defender's input based only on their knowledge of the prior distribution of possible input values. The prior vulnerability 1 In this work, an'adversary' is a so-called honest-but-curious actor who attempts to infer the values of private data by observing the public of each step in the workflow. Adversaries could be other coalition partners or an unrelated third party that has access to the computational framework.2 A'prior' can be thought of as the initial set of beliefs that an adversary has about a system. An adversary may have a prior over the lengths of naval ships (e.g., between 10 and 1,500 feet long), or the possible locations of the ships. If an adversary has no reason to believe one value is more likely than any other then the prior is a uniform distribution over the space of secrets, hence it is known as a uniform prior.is the probability the adversary will correctly guess the defender's private input (without any additional information).In the "posterior game," the defender again picks a private input, but then performs the computation using that input and shares (only) the with the adversary. We also assume the adversary has full knowledge of how the inputs relate to outputs in the computation (e.g., could construct an identical channel-based model of the computation). Here the chance the adversary correctly guesses the defender's private input is called the posterior vulnerability. 
To summarize, the prior game is how likely an adversary is to guess the private input before seeing the result of the computation; the posterior game is how likely the adversary is to guess the private input after seeing the specific output of the computation. Vulnerability as a metric of risk is appealing because of its relatively intuitive nature. The caveat to vulnerability as a metric is that it is extremely sensitive to the chosen prior belief. Therefore, care must be taken in deciding upon a prior in order to assure that the metric reflects reality as much as possible. Our predictive-mode adaptation strategy is based on QIF predictive leakage, which attempts to predict how much will be leaked about the private data before the computation is run on the concrete private values. We can also approximate predictive leakage using Monte Carlo simulation, permitting the use of these metrics even in scenarios where operational requirements make it infeasible to complete the precise predictive leakage computation within a desired time frame. Incorporating predictive leakage metrics into our workflows enables identification of potentially higher-risk situations where it may be preferable to adapt or halt the workflow (based on some policy) rather than participating in a computation that may reveal too much. Our posterior-mode adaptation strategy is based on QIF dynamic leakage, which takes into account the actual result of a computation. As opposed to predictive leakage's assessments, which average over all possible private input values, dynamic leakage enables our workflows to incorporate more accurate assessments of what was actually revealed in the specific ongoing workflow. As discussed previously, the HADR aid delivery problem provides a strong foundation for analyzing adaptive workflows in a privacy-aware setting. In this section we describe how the notion of vulnerability can be used to inform the cooperating parties of risk to their private data. To begin our discussion we first describe a simplified version of the HADR aid delivery problem involving a single ship, S, and a single port, P. In this simplified scenario, the only private data is S's location (ship_loc). We assume that other relevant data (such as ship maximum speed) is known publicly.

    def reachable(deadline):
        dist = distance(ship_loc, port_loc)
        return dist <= (max_speed * deadline)

Figure 2: The pseudo-code for a simple query. The channel is "is S able to reach port P within d hours?". As pseudo-code we write this channel as a function of d, over some global set of private (ship_loc) and public (port_loc, max_speed) variables. Prior Belief In our simplified scenario an adversary who wants to determine S's private data will have some prior belief about that data. In this case, the prior belief will be some area of the ocean where S could be located. This may or may not be informed by other information available to the adversary. While QIF can be calculated over non-uniform priors, in this exposition we assume uniform priors for the sake of simplicity. Figure 3a shows a graphical representation of a possible prior for the simplified scenario. Because our prior is uniform, the adversary's chance of guessing the ship's position (the prior vulnerability, V_prior) is simply V_prior = 1 / (number of possible locations in the prior). This makes intuitive sense: under a uniform prior, the adversary's likelihood of winning the prior game is equivalent to choosing a point out of the prior at random.
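The pseudo-code of Figure 2 and the prior vulnerability formula above can be made concrete by discretizing the prior over locations; the grid resolution, the coordinates, and the Euclidean distance function below are illustrative assumptions rather than part of the real scenario.

    import itertools
    import math

    # A discretized uniform prior over possible ship locations: a grid over the
    # ocean sector the adversary considers possible.
    candidate_locations = list(itertools.product(range(100), range(100)))

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def reachable(ship_loc, port_loc, max_speed, deadline):
        """The query from Figure 2: can the ship reach the port before the deadline?"""
        return distance(ship_loc, port_loc) <= max_speed * deadline

    # Under a uniform prior, the prior vulnerability is one over the number of
    # candidate locations the adversary considers possible.
    V_prior = 1.0 / len(candidate_locations)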
Posterior Belief When ship S, cooperating in the simplified scenario, responds to the reachable query in Figure 2, the adversary is able to observe the output of the channel. This observation allows the adversary to rule out entire sections of the space of possible locations. If the result of the query is True, then the adversary can infer that the location of S is within a circle of radius r, where r is the distance that the ship can travel at max_speed in the deadline amount of time. Inversely, if the result of the query is False, then the adversary could infer that the ship must be outside of the same circle. Observing False is the circumstance illustrated in Figure 3b. The important point is to see that regardless of the result of the query, the adversary may be able to rule out subsets of possible values for the private data. The probability the adversary has of guessing the ship's position after seeing the channel output (the posterior vulnerability, V_post) is simply V_post = 1 / (number of possible locations in the posterior). Further Queries Because cooperation is the goal, it is likely that the coordinator of our simple scenario will need to query S multiple times. This is particularly true if the initial query's result was False. The coordinator may query S with reachable(d2), where d2 is a new, longer, deadline. If the result is then True, then the adversary is able to infer that S resides within the ring formed by two overlapping circles: an inner circle with radius r, described above, and a wider circle with radius r2, where r2 is the distance that S is able to travel at max speed in time d2. This circumstance is illustrated in Figure 3c. Worst-Case Simple Scenario If we add another port, P2, to the simple scenario above, but keep all other details the same, it may be possible for S's location to be determined within a very small tolerance. If the result of reachable is True for both P and P2, then (using the same process as above) the adversary would intersect the appropriate circles. The smaller the intersection, the more the adversary knows about the ship's position. The simplified scenario above only has one piece of private data: the ship's position. As a reminder, FIG0 shows the private data in the full HADR scenario. In this section we present some results from an analysis over a model of the full scenario, using a single ship (Ship #9) as a running example. Prior Vulnerability As discussed above, an adversary has some notion of the possible values that a secret can take on even before running any computations over that secret data. This is referred to as the Prior. When analyzing a system, the analyst must choose appropriate ranges for the private data in question. For some variables this may be straightforward (e.g., the ocean sector in question when deciding on a prior for a ship's location), for others it may depend on domain knowledge (e.g., the appropriate range for naval ship drafts). We can visualize the prior vulnerability over location easily. Figure 5 shows several aspects of the HADR scenario. The boundaries of the map in the image are the boundaries of the prior belief over ship position. It is important to remind ourselves that the choice of prior (part of which is determined by the geographic area under consideration) affects the vulnerability metric significantly. Increasing the area decreases the prior vulnerability, while decreasing the area increases the prior vulnerability. Because the prior models the adversary's view of the world, it should be constructed carefully.
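Continuing the sketch above (and reusing its candidate_locations and reachable), the posterior vulnerability after one or more reachable queries can be computed by simply filtering out the candidate locations that are inconsistent with the observed results; the example observations are illustrative, and the sketch assumes at least one candidate remains consistent.

    def posterior_vulnerability(candidates, observations, max_speed):
        """Posterior vulnerability after the adversary sees query results.
        `observations` is a list of (port_loc, deadline, result) triples."""
        consistent = [
            loc for loc in candidates
            if all(reachable(loc, port, max_speed, d) == res
                   for (port, d, res) in observations)
        ]
        return 1.0 / len(consistent), consistent

    # Example: a False answer for deadline d1 followed by a True answer for a
    # longer deadline d2 confines the ship to the ring discussed above.
    V_post, ring = posterior_vulnerability(
        candidate_locations,
        [((50, 50), 10, False), ((50, 50), 20, True)],
        max_speed=1.0)

Adding a second port's query simply adds another triple to the observation list, and the intersection of the consistent regions shrinks accordingly, which is the worst-case situation described above.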
Predictive Vulnerability for Steps A+B In resource-constrained systems, it is useful to be able to predict how much of a given resource would be used if a certain action were to be executed. Private data can similarly be viewed as a form of resource for which predictive analysis can be applied to assess the impact of planned or possible actions. The ability to predict the future vulnerability can be implemented in several ways; the practicality and efficiency of different methods depends heavily on the constraints of the system, imposed by the state-space and the adversarial model. Figure 3: The effects of observing query results on a prior belief over the simplified scenario. (a) The initial prior belief: no reason to believe the ship is in one location over any other. (b) Posterior belief after observing a False: the ship must be outside the distance of the reachable query. (c) If a subsequent query for a further distance returns True, the ship must be within the newly formed ring. In our system we have chosen an approximate method that enables an analyst to calculate a histogram over the possible vulnerability outcomes without any access to the private data. This method uses Monte Carlo simulation to run our analysis over randomly sampled points from the space of possible private data values. FIG1 shows the results of a Monte Carlo simulation of the predictive vulnerability of running Steps A+B (using 13788 samples). The vertical dotted line shows the median value (6.847603e−6%) of all the samples. This method of approximation provides analysts with various options. In a scenario where the preservation of privacy is paramount, the analyst may focus on the right-hand side of the figure, where the potential vulnerability is higher, 0.03039514%, though still unlikely. This method also adapts well to use cases where an analyst may have access to some of the private data. For example, an analyst for one of the coalition partner nations might have access to that nation's private data, but not the private data of the ports. Using this same method of sampling from the space of (unknown) private data, such an analyst would be able to approximate the future vulnerability of their data if they were to respond to a query. Posterior Vulnerability for Steps A+B Once a step is taken, we can calculate the posterior vulnerability of the private data. Unlike the predictive analysis in the previous section, this analysis is 'exact' in that the real vulnerability cannot be more than what the analysis reports. Ship #9 position: 6.847603e−6%. In this case, the posterior vulnerability coincides with the median of the predictive vulnerability. This is not too surprising as the median was also the most likely value by a significant margin. Predictive Vulnerability for Step C Figure 6 shows the Monte Carlo simulation for vulnerability of a ship after Step C is completed. The median predicted value in this instance, 6.644298e−4%, is two orders of magnitude higher than the vulnerability after Steps A+B alone. This makes intuitive sense as much more information is revealed after Step C that can be used to infer a ship's private data. Unsurprisingly, the maximum sampled predictive vulnerability is also substantially higher: 0.2012072%. Posterior Vulnerability for Step C As with the posterior vulnerability for Steps A+B, the posterior vulnerability for Step C is based on a sound analysis using the real results of the workflow step, and is not simulated as in the predictive vulnerability.
Ship #9 position: 2.049806e−4%. In the case of Step C, the posterior vulnerability for Ship #9 is lower than the median of the predictive estimate (6.644298e−4%). From an analyst's perspective, this could mean that Ship #9 has revealed even less about its private data than the 'average' ship would in this scenario. Step D has no meaningful consequences on the vulnerability of the private data for any stakeholder in the HADR scenario if the results of Step C have been observed by the adversary. The reason for this is that Step D's algorithm can be computed completely from the results of previous steps in the workflow, i.e. it does not require the values of the private data directly. Interestingly, this point reinforces an important aspect of QIF analysis: even though Step C's result was computed with private data, the vulnerability metrics from the QIF analysis of Step C already take this into account. The vulnerability assessment computed by the QIF capability provides insights to data owners as to the security of private information that they wish to protect. We exploit these insights within our workflow manager to adapt the scheduling process in order to ensure adherence to privacy objectives. More specifically, we use the QIF capability in two ways: in a predictive mode to estimate the amount of leakage associated with potentially performing a particular query or task, and in a posterior mode to track actual leakage based on the specific values that a query or task returns. When executing a particular task our workflow manager invokes the predictive mode to estimate leakage. If the estimate of aggregate leakage for designated private data does not exceed set thresholds, then the workflow proceeds and the posterior leakage analysis is invoked to determine actual leakage values. If the estimate does exceed the threshold then the workflow is either terminated or (if possible) modified via a remediation strategy to keep leakage below the threshold. The idea behind a remediation strategy is to modify the problem or state in ways that will likely reduce impacts on private data. For example, our aid delivery problem requires computing the reachability of a port by a given deadline. Knowing that a given ship can reach the port by that deadline reveals information about the combination of ship position and maximum speed. One simple remediation strategy is to postpone the deadline, which will reveal less information about the position and speed values (e.g., the fact that a ship can reach a port by a given deadline reveals something about the lower bound for its max-speed; a later deadline introduces greater uncertainty as to what that speed might be by decreasing that lower bound). Our Lumen-based adaptive workflow engine includes such remediation strategies to enable adaptivity based on QIF predictive analyses. Privacy thresholds are implemented using an existing policy framework within Lumen that was developed previously to enable users to impose boundaries on the behaviors of autonomous agents BID2. The privacy policies have the general form: "Keep below <percentage> the probability of knowing <private-data> within <tolerance>". Below we show two examples used in the system, one for the Aid Provider nation and one for the Aid Recipient nation.
Aid Provider Sample Policy: "Keep below 10% the probability of knowing the location of my ships within 50 NMs". Aid Recipient Sample Policy: "Keep below 20% the probability of knowing the port harbor depth within 40 feet". FIG3 shows sample vulnerability assessments for two types of private data (max-speed, location) for a select set of ships belonging to an individual Aid Provider nation. The top image shows the initial vulnerabilities of the data, prior to performing any computations; the bottom image shows the vulnerabilities after workflow completion. The display shows the QIF-derived vulnerability level as a colored bar representing the adversary's likelihood of guessing the private data within the specified tolerance. The vertical line bisecting the display for each piece of private data marks the policy-prescribed threshold of acceptability for the vulnerability. We note that the initial vulnerabilities for the ship locations are non-zero but so small as to not be perceptible in the image. Here, we consider two avenues for future work. Analogs can be drawn between our use of the QIF analysis to predict and track vulnerabilities of private data within the scheduling workflow and prior work on estimating resource usage in workflows (Morley, Myers, and Yorke-Smith 2006). Although we currently consider individual actions incrementally as the workflow executes, we envision performing predictive vulnerability assessments of entire workflows prior to execution, to enable informed choices about alternative approaches before any information usage has occurred. Generating useful assessments will require predictive QIF techniques that consider cases beyond worst-case leakage, whose inherent pessimism can make them of limited value for certain vulnerability assessment tasks. Longer term, such analyses could also potentially open the door to using first-principles planning techniques to synthesize privacy-aware workflows on an as-needed basis that are tailored to the specifics of a given task and privacy requirements. The scalability of QIF techniques can be an issue for systems where there are complex relationships between sets of variables. Some work has been done on attaining scalability by enhancing static analysis techniques with approximations that speed up analysis with probabilistic bounds on certainty BID5. However, there is still further work required before the QIF analysis of arbitrary channels could scale to use cases such as the end-to-end HADR scenario considered in this paper. In particular, the methods described in this paper utilize bespoke analyses for the channels under consideration, providing a more scalable approach at the cost of generality. One future direction may be to design Domain Specific Languages that enable description of a scenario and its analysis in tandem. A key challenge facing coalitions is how to collaborate without releasing information that could jeopardize national (or organizational) interests. In this paper, we consider this challenge for a realistic scheduling problem tied to aid delivery. Our work makes several contributions. First, we show how state-of-the-art secure multi-party computation can be used to safeguard private information within an overall distributed scheduling solution to the aid delivery problem.
We show how QIF can be applied to assess the vulnerability of private data for both prospective (i.e., where results are not known) and actual (i.e., where results are known) computations. As a third contribution, these assessments can be used to adapt the scheduling algorithm to ensure it remains within accepted vulnerability thresholds established by data owners. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | H1x8VWq5KE | Privacy can be thought about in the same way as other resources in planning |
The challenge of learning disentangled representation has recently attracted much attention and boils down to a competition. Various methods based on variational auto-encoder have been proposed to solve this problem, by enforcing the independence between the representation and modifying the regularization term in the variational lower bound. However, recent work has demonstrated that the proposed methods are heavily influenced by randomness and the choice of the hyper-parameter. This work is built upon the same framework in Stage 1, but with different settings; to make it self-contained, we provide this manuscript, which is unavoidably very similar to the report for Stage 1. In detail, in this work, instead of designing a new regularization term, we adopt the FactorVAE but improve the reconstruction performance and increase the capacity of the network and the number of training steps. The strategy turns out to be very effective in achieving disentanglement. The great success of unsupervised learning heavily depends on the representation of features in the real world. It is widely believed that the real-world data is generated by a few explanatory factors which are distributed, invariant, and disentangled. The challenge of learning disentangled representation boils down to a competition to build the best disentangled model. The key idea in disentangled representation is that the perfect representation should be a one-to-one mapping to the ground truth disentangled factor. Thus, if one factor changes while the other factors are held fixed, then the representations of the fixed factors should remain fixed accordingly, while the representation of the changed factor should change. As a result, it is essential to find representations that (i) are independent of each other, and (ii) align to the ground truth factor. Recent lines of work in disentangled representation learning are commonly focused on enforcing the independence of the representation by modifying the regularization term in the variational lower bound, as in FactorVAE. See Appendix A for more details of these models. To evaluate the performance of disentanglement, several metrics have been proposed, including the FactorVAE metric, Mutual Information Gap (MIG), the DCI metric, the IRS metric, and the SAP score. However, one of our findings is that these methods are heavily influenced by randomness and the choice of the hyper-parameter. This phenomenon has also been reported in prior work. Therefore, rather than designing a new regularization term, we simply use FactorVAE but at the same time improve the reconstruction performance. We believe that, the better the reconstruction, the better the alignment to the ground-truth factors. Therefore, the larger the capacity of the encoder and decoder networks, the better the result would be. Furthermore, after increasing the capacity, we also try to increase the number of training steps, which also shows a significant improvement in the evaluation metrics. The final architecture of FactorVAE is given in Figure 1. Note that this work is built upon the same framework in Stage 1, but with different settings; to make it self-contained, we provide this manuscript, which is unavoidably very similar to the report for Stage 1. Overall, our contribution can be summarized as follows: we found that the performance of the reconstruction is also essential for learning disentangled representation, and we achieve state-of-the-art performance in the competition. In this section, we explore the effectiveness of different disentanglement learning models and the performance of the reconstruction for disentangled representation learning.
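All of the VAE variants compared below share the same overall objective structure (a reconstruction loss plus a regularization term, cf. Appendix A). As a concrete sketch of the FactorVAE objective adopted in this work, the per-batch loss can be written as follows; this is a simplified PyTorch sketch in which the encoder/decoder/discriminator definitions, the Bernoulli reconstruction term, and the value gamma = 10 are assumptions made for illustration rather than the competition settings.

    import torch
    import torch.nn.functional as F

    def factorvae_loss(x, x_recon_logits, mu, logvar, disc_logits, gamma=10.0):
        """Reconstruction + KL + gamma * total-correlation estimate.
        disc_logits are the two-class outputs of the auxiliary discriminator
        on samples z ~ q(z|x)."""
        n = x.size(0)
        recon = F.binary_cross_entropy_with_logits(x_recon_logits, x,
                                                   reduction='sum') / n
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / n
        # Density-ratio estimate of the total correlation from the discriminator:
        # log D(z) - log(1 - D(z)), expressed with the two class logits.
        tc = (disc_logits[:, 0] - disc_logits[:, 1]).mean()
        return recon + kl + gamma * tc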
We first employ different kinds of variational autoencoder including BottleneckVAE, AnnealedVAE, DIPVAE, BetaTCVAE, and BetaVAE with 30000 training steps. Second, we want to know whether the capacity plays an important role in disentanglement. The hypothesis is that the larger the capacity, the better reconstruction can be obtained, which further reinforces the disentanglement. In detail, we control the number of latent variables. In this section, we present our experiments in stage 1 and stage 2 of the competition. We first present the performance of different kinds of VAEs in stage 1, which is given in Table 1. It shows that FactorVAE achieves the best result when the number of training steps is 30000. In the following experiment, we choose FactorVAE as the base model. Then, as shown in Table 2, we increase the step size and we find that the best result was achieved at 1000k training steps. The experiment in this part may not be sufficient, but it still suggests the fact that the larger the capacity is, the better the disentanglement performance. Since we increase the capacity of the model, it is reasonable to also increase the training steps at the same time. Furthermore, as shown in Table 3, using a sufficiently large number of training steps (≥ 800k), we investigate the effectiveness of the number of latent variables. This experiment is performed in stage 2 and suggests that the FactorVAE and DCI metrics improve as the number of latent variables increases, while the other metrics decrease. The best result in the ranking is marked in bold, which suggests that we should choose an appropriate number of latent variables. In this work, we conducted an empirical study on disentangled representation learning. We first conducted several experiments with different disentanglement learning methods and selected the FactorVAE as the base model; and second, we improved the performance of the reconstruction by increasing the capacity of the model and the number of training steps. Finally, our results appear to be competitive. The variational autoencoder (VAE) is a generative model that maximizes the evidence lower bound E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)) to approximate the intractable posterior p_θ(z|x) using q_φ(z|x), where q_φ(z|x) denotes the encoder with parameters φ and p_θ(x|z) denotes the decoder with parameters θ. As shown in Table 4, all the lower bounds of the VAE variants can be described as Reconstruction Loss + Regularization, where the regularization terms and the hyper-parameters are given in this table. | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | r1griYFhiB | disentangled representation learning |
We propose a generative adversarial training approach for the problem of clarification question generation. Our approach generates clarification questions with the goal of eliciting new information that would make the given context more complete. We develop a Generative Adversarial Network (GAN) where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question. We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training. A goal of natural language processing is to develop techniques that enable machines to process naturally occurring language. However, not all language is clear and, as humans, we may not always understand each other BID10; in cases of gaps or mismatches in knowledge, we tend to ask questions BID9. In this work, we focus on the task of automatically generating clarification questions: questions that ask for information that is missing from a given linguistic context. Our clarification question generation model builds on the sequence-to-sequence approach that has proven effective for several language generation tasks BID37 BID39 BID5. Unfortunately, training a sequence-to-sequence model directly on context/question pairs yields generated questions that are highly generic 1, corroborating a common finding in dialog systems BID17. Our goal is to be able to generate questions that are useful and specific. To achieve this, we begin with a recent observation of BID30, who considered the task of question reranking: the system should learn to generate clarification questions whose answers have high utility, which they defined as the likelihood that this question would lead to an answer that will make the context more complete (§2.3). Inspired by this, we construct a question generation model that first generates a question given a context, and then generates a hypothetical answer to that question. Given this (context, question, answer) tuple, we train a utility calculator to estimate the usefulness of this question. We then show that this utility calculator can be generalized using ideas for generative adversarial networks BID8 for text BID40, wherein the utility predictor plays the role of the "discriminator" and the question generator is the "generator" (§2.2), which we train using the MIXER algorithm BID29.We evaluate our approach on two question generation datasets: for posts on Stack Exchange and for Amazon product descriptions (Figure 1). Using both automatic metrics and human evaluation, we demonstrate that our adversarially trained model generates a more diverse set of questions than all the baseline models. Furthermore, we find that although all models generate questions that are relevant to the context at hand, our adversarially-trained model generates questions that are more specific to the context. Our goal is to build a model that, given a context, can generate an appropriate clarification question. As a running example, we will use the Amazon setting: where the dataset consists of (context, question, answer) triples where the context is the product description, question is clarification question about that product that (preferably) is not already answered in the description and answer is the seller's (or other users') reply to the question. 
Representationally, our question generator is a standard sequence-to-sequence model with attention (§2.1). The learning problem is: how to train the sequence-to-sequence model to produce good question. An overview of our training setup is shown in FIG1. Given a context, our question generator outputs a question. In order to evaluate the usefulness of this question, we then have a second sequence-to-sequence model called the "answer generator" that generates a hypothetical answer based on the context and the question (§2.5). This (context, question and answer) triple is fed into a UTILITY calculator, whose initial goal is to estimate the probability that this question/answer pair is useful in this context (§2.3). This UTILITY is treated as a reward, which is used to update the question generator using the MIXER BID29 algorithm (§2.2). Finally, we reinterpret the answer-generator-plus-utility-calculator component as a discriminator for differentiating between true (context, question, answer) triples and synthetic triples (§ 2.4), and optimize this adversarial objective using MIXER. We use a standard attention based sequence-to-sequence model BID21 for our question generator. Given an input sequence (context) c = (c 1, c 2, ..., c N), this model generates an output sequence (question) q = (q 1, q 2, ..., q T). The architecture of this model is an encoder-decoder with attention. The encoder is a recurrent neural network (RNN) operating over the input word embeddings to compute a source context representationc. The decoder uses this source representation to generate the target sequence one word at a time: DISPLAYFORM0 In Eq 1,h t is the attentional hidden state of the RNN at time t and W s and W c are parameters of the model (details in Appendix A). The predicted token q t is the token in the vocabulary that is assigned the highest probability using the softmax function. The standard training objective for sequence-tosequence model is to maximize the log-likelihood of all (c, q) pairs in the training data D which is equivalent to minimizing the loss, DISPLAYFORM1 2.2 TRAINING THE GENERATOR TO OPTIMIZE QUESTION UTILITYTraining sequence-to-sequence models for the task of clarification question generation (with context as input and question as output) using maximum likelihood objective unfortunately leads to the generation of highly generic questions, such as "What are the dimensions?" when asking questions about home appliances. This issue has been observed in dialog generation as well BID17. Recently BID30 observed that usefulness of a question can be better measured as the utility that would be obtained if the context were updated with the answer to the proposed question. We use this observation to define a UTILITY based reward function and train the question generator to optimize this reward. We train the UTILITY reward to predict the likelihood that a question would generate an answer that would increase the utility of the context by adding useful information to it (see §2.3 for details).Similar to optimizing metrics like BLEU and ROUGE, this UTILITY function also operates on discrete text outputs, which makes optimization difficult due to non-differentiability. A successful recent approach dealing with the non-differentiability while also retaining some advantages of maximum likelihood training is the Mixed Incremental Cross-Entropy Reinforce BID29 algorithm (MIXER). 
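The attentional decoding step and the maximum-likelihood loss described in §2.1 above can be sketched compactly; the sketch assumes standard Luong-style attention (combining the decoder state with the attention-weighted source context before the output softmax), since only the parameter names W_s and W_c are given, and it is illustrative rather than the exact parameterization of Eq. 1.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def decoder_step(h_t, c_tilde, W_c, W_s):
        """One decoding step: fuse the decoder RNN state h_t with the source
        context c_tilde into an attentional hidden state, then project to a
        distribution over the output vocabulary, p(q_t | q_<t, c)."""
        h_att = np.tanh(W_c @ np.concatenate([c_tilde, h_t]))
        return softmax(W_s @ h_att)

    def mle_loss(gold_token_probs):
        """Maximum-likelihood loss for one (context, question) pair: the negative
        log-likelihood of the reference question tokens, i.e. the objective the
        generator minimizes before any utility-based fine-tuning."""
        return -np.sum(np.log(gold_token_probs))

As noted above, training on this objective alone tends to produce generic questions, which is what the utility-based reward and MIXER training described next are meant to correct.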
In MIXER, the overall loss L is differentiated as in REINFORCE BID38: DISPLAYFORM2 where y s is a random output sample according to the model p θ, where θ are the parameters of the network. We then approximate the expected gradient using a single sample q s = (q DISPLAYFORM3 In REINFORCE, the policy is initialized random, which can cause long convergence times. To solve this, MIXER starts by optimizing maximum likelihood and slowly shifts to optimizing the expected reward from Eq 3. For the initial ∆ time steps, MIXER optimizes L mle and for the remaining (T − ∆) time steps, it optimizes the external reward. In our model, we minimize the UTILITY-based loss L max-utility defined as: DISPLAYFORM4 where r(q p) is the UTILITY based reward on the predicted question and r(q b) is a baseline reward introduced to reduce the high variance otherwise observed when using REINFORCE.In MIXER, the baseline is estimated using a linear regressor that takes in the current hidden states of the model as input and is trained to minimize the mean squared error (||r( DISPLAYFORM5 Instead we use a self-critical training approach BID32 where the baseline is estimated using the reward obtained by the current model under greedy decoding during test time. Given a (context, question, answer) triple, BID30 introduce a utility function UTILITY(c, q, a) to calculate the value of updating a context c with the answer a to a clarification question q. The inspiration for thier utility function is to estimate the probability that an answer would be a meaningful addition to a context, and treat this as a binary classification problem where the positive instances are the true (context, question, answer) triples in the dataset whereas the negative instances are contexts paired with a random (question, answer) from the dataset. The model we use is to first embed of the words in the context c, then use an LSTM (long-short term memory) BID12 to generate a neural representationc of the context by averaging the output of each of the hidden states. Similarly, we obtain a neural representationq and a of q and a respectively using question and answer LSTM models. Finally, a feed forward neural network F UTILITY (c,q,ā) predicts the usefulness of the question. The UTILITY function trained on true vs random samples from real data (as described in the previous section) can be a weak reward signal for questions generated by a model due to the large discrepancy between the true data and the model's outputs. In order to strengthen the reward signal, we reinterpret the UTILITY function (coupled with the answer generator) as a discriminator in an adversarial learning setting. That is, instead of taking the UTILITY calculator to be a fixed model that outputs the expected quality of a question/answer pair, we additionally optimize it to distinguish between true question/answer pairs and model-generated ones. This reinterpretation turns our model into a form of a generative adversarial network (GAN) BID8.A GAN is a training procedure for "generative" models that can be interpreted as a game between a generator and a discriminator. The generator is an arbitrary model g ∈ G that produces outputs (in our case, questions). The discriminator is another model d ∈ D that attempts to classify between true outputs and model-generated outputs. The goal of the generator is to generate data such that it can fool the discriminator; the goal of the discriminator is to be able to successfully distinguish between real and generated data. 
In the process of trying to fool the discriminator, the generator produces data that is as close as possible to the real data distribution. Generically, the GAN objective is: DISPLAYFORM0 where x is sampled from the true data distributionp, and z is sampled from a prior defined on input noise variables p z.Although GANs have been successfully used for image tasks, training GANs for text generation is challenging due to the discrete nature of outputs in text. The discrete outputs from the generator make it difficult to pass the gradient update from the discriminator to the generator. Recently, BID40 proposed a sequence GAN model for text generation to overcome this issue. They treat their generator as an agent and use the discriminator as a reward function to update the generative model using reinforcement learning techniques. Our GAN-based approach is inspired by this sequence GAN model with two main modifications: a) We use the MIXER algorithm as our generator (§2.2) instead of policy gradient approach; and b) We use the UTILITY function (§2.3) as our discriminator instead of a convolutional neural network (CNN).In our model, the answer is an latent variable: we do not actually use it anywhere except to train the discriminator. Because of this, we train our discriminator using (context, true question, generated answer) triples as positive instances and (context, generated question, generated answer) triples as the negative instances. Formally, our objective function is:LGAN DISPLAYFORM1 where U is the UTILITY discriminator, M is the MIXER generator,p is our data of (context, question, answer) triples and A is our answer generator. Question Generator. We pretrain our question generator using the sequence-to-sequence model §2.1 where we define the input sequence as the context and the output sequence as the question. This answer generator is trained to maximize the log-likelihood of all ([context+question], answer) pairs in the training data. Parameters of this model are updated during adversarial training. Answer Generator. We pretrain our answer generator using the sequence-to-sequence model §2.1 where we define the input sequence as the concatenation of the context and the question and the output sequence as the answer. This answer generator is trained to maximize the log-likelihood of all (context, question) pairs in the training data. Unlike the question generator, the parameters of the answer generator are kept fixed during the adversarial training. Discriminator. We pretrain the discriminator using (context, question, answer) triples from the training data. For positive instances, we use a context and its true question, answer and for negative instances, we use the same context but randomly sample a question from the training data (and use the answer paired with that random question). We base our experimental design on the following research questions:1. Do generation models outperform simpler retrieval baselines? 2. Does optimizing the UTILITY reward improve over maximum likelihood training? 3. Does using adversarial training improve over optimizing the pretrained UTILITY? 4. How do the models perform when evaluated for nuances such as specificity and usefulness? We evaluate our model on two datasets. The first is from StackExchange and was curated by BID30; the second is from Amazon, curated by BID22, and has not previously been used for the task of question generation. StackExchange. 
This dataset consists of posts, questions asked to that post on stackexchange.com (and answers) collected from three related subdomains on stackexchage.com (askubuntu, unix and superuser). Additionally, for 500 instances each from the tune and the test set, the dataset includes 1 to 5 other questions identified as valid questions by expert human annotators from a pool of candidate questions. This dataset consists of 61, 681 training, 7710 validation and 7709 test examples. Amazon. Each instance consists of a question asked about a product on amazon.com combined with other information (product ID, question type "Yes/No", answer type, answer and answer time).To obtain the description of the product, we use the metadata information contained in the amazon reviews dataset BID23. We consider at most 10 questions for each product. This dataset includes several different product categories. We choose the Home and Kitchen category since it contains a high number of questions and is relatively easy category for human based evaluation. This dataset consists of 19, 119 training, 2435 validation and 2305 test examples, and each product description contains between 3 and 10 questions (average: 7). We compare three variants (ablations) of our proposed approach, together with an information retrieval baseline: GAN-Utility is our full model which is a UTILITY function based GAN training (§ 2.4) including the UTILITY discriminator, a MIXER question generator and a sequence-tosequence based answer generator. Max-Utility is our reinforcement learning baseline with a pretrained question generator described model (§ 2.2) without the adversarial training. MLE is the question generator model pretrained on context, question pairs using maximum likelihood objective (§2.1). Lucene 3 is a TF-IDF (term frequency-inverse document frequency) based document ranking system which given a document, retrieves N other documents that are most similar to the given document. Given a context, we use Lucene to retrieve top 10 contexts that are most similar to the given context. We randomly choose a question from the 10 questions paired with these contexts to construct our Lucene baseline BID45. Experimental details of all our models are described in Appendix B. We evaluate initially with several automated evaluation metrics, and then more substantially based on crowdsourced human judgments. Automatic metrics include: Diversity, which calculates the proportion of unique trigrams 5 in the output to measure the diversity as commonly used to evaluate dialogue generation BID17; BLEU BID27, which evaluate n-gram precision between a predicted sentence and reference sentences; and METEOR BID1, which is similar to BLEU but includes stemmed and synonym matches when measuring the similarity between the predicted sequence and the reference sequences. Table 1: DIVERSITY as measured by the proportion of unique trigrams in model outputs. BLEU and METEOR scores using up to 10 references for the Amazon dataset and up to six references for the StackExchange dataset. Numbers in bold are the highest among the models. All for Amazon are on the entire test set whereas for StackExchange they are on the 500 instances of the test set that have multiple references. Human judgments involve showing contexts and generated questions to crowdworkers 6 and asking them to evaluate the questions along several axes. 
Roughly, we ask for the following five judgments for each question (exact wordings in Appendix C): Is it relevant (yes/no); Is it grammatical (yes/comprehensible/incomprehensible); How specific is it to this product (four options from "specific to only this product" to "generic to any product"); Does this question ask for new information not contained in the discription (completely/somewhat/no); and How useful is this question to a potential buyer (four options from "should be included in the description" to "useful only to the person asking"). For the last three questions, we also allowed a "not applicable" response in the case that the question was either ungrammatical or irrelevant. Table 1 shows the on the two datasets when evaluated according to automatic metrics. In the Amazon dataset, GAN-Utility outperforms all ablations on DIVERSITY, suggesting that it produces more diverse outputs. Lucene, on the other hand, has the highest DIVERSITY since it consists of human generated questions, which tend to be more diverse because they are much longer compared to model generated questions. This comes at the cost of lower match with the reference as visible in the BLEU and METEOR scores. In terms of BLEU and METEOR, there is inconsistency. Although GAN-Utility outperforms all baselines according to METEOR, the fully ablated MLE model has a higher BLEU score. This is because BLEU score looks for exact n-gram matches and since MLE produces more generic outputs, it is much more likely that it will match one of 10 references compared to the specific/diverse outputs of GAN-Utility, since one of those ten is highly likely to itself be generic. In the StackExchange dataset GAN-Utility outperforms all ablations on both BLEU and METEOR. Unlike in the Amazon dataset, MLE does not outperform GAN-Utility in BLEU. This is because the MLE outputs in this dataset are not as generic as in the amazon dataset due to the highly technical nature of contexts in StackExchange. As in the Amazon dataset, GAN-Utility outperforms MLE on DIVERSITY. Interestingly, the Max-Utility ablation achieves a higher DIVERSITY score than GAN-Utility. On manual analysis we find that Max-Utility produces longer outputs compared to GAN-Utility but at the cost of being less grammatical. Table 2 shows the numeric of human-based evaluation performed on the reference and the system outputs on 500 random samples from the test set of the Amazon dataset. 7 These overall show that the GAN-Utility model successfully generates the most specific questions, while being equally good at seeking new information and being useful to potential buyers. All approaches produce relevant, grammatical questions. All our models are all equally good at seeking new information, but are weaker than Lucene, which performs better according to new information but at Table 2: Results of human judgments on model generated questions on 500 sample Home & Kitchen product descriptions. The options described in §3.3 are converted to corresponding numeric range (as described in Appendix C). The difference between the bold and the non-bold numbers is statistically insignificant with p <0.001. Reference is excluded in the significance calculation. the cost of much lower specificity and slightly lower relevance. 
Our models are all equally good also at generating useful questions: their usefulness score is significantly better than both Lucene and Reference, largely because Lucene and Reference tend to ask questions that are more often useful only to the person asking the question, making them less useful for potential other buyers (see FIG3). Our full model, GAN-Utility, performs significantly better when measured by specificity to the product, which aligns with the higher DIVERSITY score obtained by GAN-Utility under automatic metric evaluation. Question Generation. Most previous work on question generation has been on generating reading comprehension style questions i.e. questions that ask about information present in a given text BID11 BID33 BID11 BID6. Outside reading comprehension questions, BID15 use crowdsourcing to generate question templates, BID19 use templated questions to help authors write better related work sections, BID24 introduced visual question answer tasking that focuses on generating natural and engaging questions about an image. BID25 introduced an extension of this task called the Image Grounded Conversation task where they use both the image and some initial textual context to generate a natural follow-up question and a response to that question. BID3 propose an active question answering model where they build an agent that learns to reformulate the question to be asked to a question-answering system so as to elicit the best possible answers. BID6 extract large number of question-answer pairs from community question answering forums and use them to train a model that can generate a natural question given a passage. Neural Models and Adversarial Training for Text Generation. Neural network based models have had significant success at a variety of text generation tasks, including machine translation BID0 BID21, summarization BID26 ), dialog (BID2 BID16 BID36, textual style transfer BID13 BID14 BID31 and question answering BID39 .Our task is most similar to dialog, in which a wide variety of possible outputs are acceptable, and where lack of specificity in generated outputs is common. We addresses Table 3 : Example outputs from each of the systems for a single product description this challenge using an adversarial network approach BID8, a training procedure that can generate natural-looking outputs, which have been effective for natural image generation BID4 . Due to the challenges in optimizing over discrete output spaces like text, BID40 introduced a Seq(uence)GAN approach where they overcome this issue by using RE-INFORCE to optimize. BID18 train an adversarial model similar to SeqGAN for generating next utterance in a dialog given a context. However, unlike our work, their discriminator is a binary classifier trained to distinguish between human and machine generated utterances. Finally, BID7 introduce an actor-critic conditional GAN for filling in missing text conditioned on the surrounding context. In this work, we describe a novel approach to the problem of clarification question generation. Given a context, we use the observation of BID30 that the usefulness of a clarification question can be measured by the value of updating the context with an answer to the question. We use a sequence-to-sequence model to generate a question given a context and a second sequenceto-sequence model to generate an answer given the context and the question. 
Given the (context, predicted question, predicted answer) triple we calculator the utility of this triple and use it as a reward to retrain the question generator using reinforcement learning based MIXER model. Further, to improve upon the utility function, we reinterpret it as a discriminator in an adversarial setting and train both the utility function and the MIXER model in a minimax fashion. We find that our adversarial training approach produces more diverse questions compared to both a model trained using maximum likelihood objective and a model trained using utility reward based reinforcement learning. There are several avenues of future work in this area. Following BID24, we could combine text input with image input to generate more relevant questions. Because some questions can be answered by looking at the product image in the Amazon dataset BID22, this could help generate more relevant and useful questions. As in most One significant research challenge in the space of free text generation problems when the set of possible outputs is large, is that of automatic evaluation BID20: in our we saw some correlation between human judgments and automatic metrics, but not enough to trust the automatic metrics completely. Lastly, integrating such a question generation model into a real world platform like StackExchange or Amazon to understand the real utility of such models and to unearth additional research questions. A DETAILS OF SEQUENCE-TO-SEQUENCE MODELIn this section, we describe the attention based sequence-to-sequence model introduced in §2.1 of the main paper. In Eq 1,h t is the attentional hidden state of the RNN at time t obtained by concatenating the target hidden state h t and the source-side context vectorc t, and W s is a linear transformation that maps h t to an output vocabulary-sized vector. The predicted token q t is the token in the vocabulary that is assigned the highest probability using the softmax function. Each attentional hidden stateh t depends on a distinct input context vectorc t computed using a global attention mechanism over the input hidden states as: DISPLAYFORM0 a nt h n DISPLAYFORM1 The attention weights a nt is calculated based on the alignment score between the source hidden state h n and the current target hidden state h t. In this section, we describe the details of our experimental setup. We preprocess all inputs (context, question and answers) using tokenization and lowercasing. We set the max length of context to be 100, question to be 20 and answer to be 20. Our sequence-to-sequence model (§ 2.1) operates on word embeddings which are pretrained on in domain data using Glove BID28. We use embeddings of size 200 and a vocabulary with cut off frequency set to 10. During train time, we use teacher forcing. During test time, we use beam search decoding with beam size 5. We use a hidden layer of size two for both the encoder and decoder recurrent neural network models with size of hidden unit set to 100. We use a dropout of 0.5 and learning ratio of 0.0001 In the MIXER model, we start with ∆ = T and decrease it by 2 for every epoch (we found decreasing ∆ to 0 is ineffective for our task, hence we stop at 2). In this section, we describe in detail the human based evaluation methodology introduced in §3.3 of the main paper. | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1eKJ3R5KQ | We propose an adversarial training approach to the problem of clarification question generation which uses the answer to the question to model the reward. |
Pattern databases are the foundation of some of the strongest admissible heuristics for optimal classical planning. Experiments showed that the most informative way of combining information from multiple pattern databases is to use saturated cost partitioning. Previous work selected patterns and computed saturated cost partitionings over the ing pattern database heuristics in two separate steps. We introduce a new method that uses saturated cost partitioning to select patterns and show that it outperforms all existing pattern selection algorithms. A * search BID10 with an admissible heuristic BID23 ) is one of the most successful methods for solving classical planning tasks optimally. An important building block of some of the strongest admissible heuristics are pattern database (PDB) heuristics. A PDB heuristic precomputes all goal distances in a simplified state space obtained by projecting the task to a subset of state variables, the pattern, and uses these distances as lower bounds on the true goal distances. PDB heuristics were originally introduced for solving the 15-puzzle BID2 and have later been generalized to many other combinatorial search tasks (e.g., BID21 BID7 and to the setting of domainindependent planning BID3 .Using a single PDB heuristic of reasonable size is usually not enough to cover sufficiently many aspects of challenging planning tasks. It is therefore often beneficial to compute multiple PDB heuristics and to combine their estimates admissibly BID15 . The simplest approach for this is to choose the PDB with the highest estimate in each state. Instead of this maximization scheme, we would like to sum estimates, but this renders the ing heuristic inadmissible in general. However, if two PDBs are affected by disjoint sets of operators, they are independent and we can admissibly add their estimates BID19 BID7 . BID11 later generalized this idea by introducing the canonical heuristic for PDBs, which computes all maximal subsets of pairwise independent PDBs and then uses the maximum over the sums of independent PDBs as the heuristic value. Cost partitioning BID17 BID40) is a generalization of the independence-based methods above. It makes the sum of heuristic estimates admissible by distributing the costs of each operator among the heuristics. The literature contains many different cost partitioning algorithms such as zero-one cost partitioning BID4 BID11 ), uniform cost partitioning BID17, optimal cost partitioning BID17 BID16 BID18 BID25, posthoc optimization BID26 and delta cost partitioning BID6.In previous work BID34, we showed experimentally for the benchmark tasks from previous International Planning Competitions (IPC) that saturated cost partitioning (SCP) BID30 BID37 is the cost partitioning algorithm of choice for PDB heuristics. Saturated cost partitioning considers an ordered sequence of heuristics. Iteratively, it gives each heuristic the minimum amount of costs that the heuristic needs to justify all its estimates and then uses the remaining costs for subsequent heuristics until all heuristics have been served this way. Before we can compute a saturated cost partitioning over pattern database heuristics, we need to select a collection of patterns. The first domain-independent automated pattern selection algorithm is due to BID3. It partitions the state variables into patterns via best-fit bin packing. 
BID5 later used a genetic algorithm to search for a pattern collection that maximizes the average heuristic value of a zero-one cost partitioning over the PDB heuristics. BID11 proposed an algorithm that performs a hill-climbing search in the space of pattern collections (HC). HC evaluates a collection C by estimating the search effort of the canonical heuristic over C based on a model of IDA * runtime BID20. BID8 presented the Complementary PDBs Creation (CPC) method, that combines bin packing and genetic algorithms to create a pattern collection minimizing the estimated search effort of an A * search BID22. BID28 repeatedly compute patterns using counterexample-guided abstraction refinement (CEGAR): starting from a random goal variable, their CEGAR algorithm iteratively finds solutions in the corresponding projection and executes them in the original state space. Whenever a solution cannot be executed due to a violated precondition, it adds the missing precondition variable to the pattern. Finally, BID26 systematically generate all interesting patterns up to a given size X (SYS-X). Experiments showed that cost-partitioned heuristics over SYS-2 and SYS-3 yield accurate estimates BID26 BID34, but using all interesting patterns of larger sizes is usually infeasible. We introduce SYS-SCP, a new pattern selection algorithm based on saturated cost partitioning that potentially considers all interesting patterns, but only selects useful ones. SYS-SCP builds multiple pattern sequences that together form the ing pattern collection. For each sequence σ, it considers the interesting patterns in increasing order by size and adds a pattern P to σ if P is not part of an earlier sequence and the saturated cost partitioning heuristic over σ plus P is more informative than the one over σ alone. We consider optimal classical planning tasks in a SAS + -like notation BID1 and represent a planning task Π as a tuple V, O, s 0, s. Each variable v in the finite set of variables V has a finite domain dom(v). A partial state s is defined over a subset of variables vars(s) ⊆ V and maps each v ∈ vars(s) to a value in dom(v), written as s [v]. We call the pair v, s [v] an atom and interchangeably treat partial states as mappings from variables to values or as sets of atoms. If vars(s) = V, we call s a state. We write S(Π) for the set of all states in Π.Each operator o in the finite set of operators O has a precondition pre(o) and an effect eff(o), both of which are partial states, and a cost cost(o) ∈ R Transition systems assign semantics to planning tasks. Definition 1 (Transition Systems). A transition system T is a labeled digraph defined by a finite set of states S(T), a finite set of labels L(T), a set T (T) of labeled transitions s − → s with s, s ∈ S(T) and ∈ L(T), an initial state s 0 (T), and a set S (T) of goal states. A planning task Π = V, O, s 0, s induces a transition system T with states S(Π), labels O, transitions {s DISPLAYFORM0 Separating transition systems from cost functions allows us to evaluate the same transition system under different cost functions, which is important for cost partitioning. A cost function for transition system T is a function cost : L(T) → R ∪ {−∞, ∞}. It is finite if −∞ < cost < ∞ for all labels. It is nonnegative if cost ≥ 0 for all labels. 
We write C(T) for the set of all cost functions for T.Note that we assume that the cost function of the planning task is non-negative and finite, but as in previous work we allow negative BID25 and infinite costs BID32 in cost partitionings. The generalization to infinite costs is necessary to cleanly state some of our definitions. Definition 3 (Weighted Transition Systems). A weighted transition system is a pair T, cost where T is a transition system and cost ∈ C(T) is a cost function for T.The cost of a path π = s DISPLAYFORM0 It is ∞ if the sum contains both +∞ and −∞. If s n is a goal state, π is called a goal path for s 0.Definition 4 (Goal Distances and Optimal Paths). The goal distance of a state s ∈ S(T) in a weighted transition system T, cost is defined as inf π∈Π (T,s) cost(π), where Π (T, s) is the set of goal paths from s in T. (The infimum of the empty set is ∞.) We write h * DISPLAYFORM1 Optimal classical planning is the problem of finding an optimal goal path from s 0 or showing that s 0 is unsolvable. We use heuristics to estimate goal distances BID23. DISPLAYFORM2 Cost partitioning makes adding heuristics admissible by distributing the costs of each operator among the heuristics. Definition 6 (Cost Partitioning). Let T be a transition system. A cost partitioning for a cost function cost ∈ C(T) is a tuple cost 1,..., cost n ∈ C(T) n whose sum is bounded by cost: DISPLAYFORM3 n over the heuristics h 1,..., h n for T induces the cost-partitioned heuristic DISPLAYFORM4. If the sum contains +∞ and −∞, it evaluates to the leftmost infinite value. One of the cost partitioning algorithms from the literature is saturated cost partitioning BID31. It is based on the insight that we can often reduce the amount of costs given to a heuristic without changing any heuristic estimates. Saturated cost functions formalize this idea. Definition 7 (Saturated Cost Function). Consider a transition system T, a heuristic h for T and a cost function cost ∈ C(T). A cost function scf ∈ C(T) is saturated for h and cost if 1. scf ≤ cost for all labels ∈ L(T) and 2. h(scf, s) = h(cost, s) for all states s ∈ S(T).A saturated cost function scf is minimal if there is no other saturated cost function scf for h and cost with scf ≤ scf for all labels ∈ L(T).Whether we can efficiently compute a minimal saturated cost function depends on the type of heuristic. In earlier work BID31, we showed that this is possible for explicitly-represented abstraction heuristics BID12, which include PDB heuristics. Definition 8 (Minimum Saturated Cost Function for Abstraction Heuristics). Let T, cost be a weighted transition system and h an abstraction heuristic for T with abstract transition system T. The minimum saturated cost function mscf for h and cost is DISPLAYFORM5 Given a sequence of abstraction heuristics, the saturated cost partitioning algorithm iteratively assigns to each heuristic only the costs that the heuristic needs to preserve its estimates and uses the remaining costs for subsequent heuristics. Definition 9 (Saturated Cost Partitioning). Consider a transition system T and a sequence of abstraction heuristics DISPLAYFORM6 receives a cost function rem and returns the minimum saturated cost function for h i and rem. The saturated cost partitioning cost 1,..., cost n of a function cost ∈ C(T) over H is defined as: DISPLAYFORM7 where the auxiliary cost functions rem i represent the remaining costs after processing the first i heuristics in H. 
We write h SCP H for the saturated cost partitioning heuristic over the sequence of heuristics H. In this work, we compute saturated cost partitionings over pattern database heuristics. A pattern for task Π with variables V is a subset P ⊆ V. By syntactically removing all variables from Π that are not in P, we obtain the projected task Π| P inducing the abstract transition system T P. The PDB heuristic h P for a pattern P is defined as h P (cost, s) = h * T P (cost, s| P), where s| P is the abstract state that s is projected to in Π| P. For the pattern sequence P 1,..., P n we define h DISPLAYFORM8 One of the simplest pattern selection algorithms is to generate all patterns up to a given size X (Felner, Korf, and Hanan 2004) and we call this approach SYS-NAIVE-X. It is easy to see that for tasks with n variables, SYS-NAIVE-X generates X i=1 n i patterns. Usually, many of these patterns do not add much information to a cost-partitioned heuristic over the patterns. Unfortunately, there is no efficiently computable test that allows us to discard such uninformative patterns. Even patterns without any goal variables can increase heuristic estimates in a cost partitioning BID27.However, in the setting where only non-negative cost functions are allowed in cost partitionings, there are efficiently computable criteria for deciding whether a pattern Algorithm 1 SYS-SCP: Given a planning task with states S(T), cost function cost and interesting patterns SYS, select a subset C ⊆ SYS.1: function SYS-SCP(Π) 2: DISPLAYFORM9 repeat for at most T x seconds 4: DISPLAYFORM10 for P ∈ ORDER(SYS) and at most T y seconds do 6:if P / ∈ C and PATTERNUSEFUL(σ, P) then 7: DISPLAYFORM11 until σ = 10: DISPLAYFORM12 is interesting, i.e., whether it cannot be replaced by a set of smaller patterns that together yield the same heuristic estimates BID26. The criteria are based on the causal graph CG(Π) of a task Π BID13. CG(Π) is a directed graph with a node for each variable in Π. If there is an operator with a precondition on u and an effect on v = u, CG(Π) contains a precondition arc from u to v. If an operator affects both u and v, CG(Π) contains co-effect arcs from u to v and from v to u. Definition 10 (Interesting Patterns). A pattern P is interesting if 1. CG(Π| P) is weakly connected, and 2. CG(Π| P) contains a directed path via precondition arcs from each node to some goal variable node. The systematic pattern generation method SYS-X generates all interesting patterns up to size X. We let SYS denote the set of all interesting patterns for a given task. On IPC benchmark tasks, SYS-X often generates much fewer patterns than SYS-NAIVE-X for the same size limit X. Still, it is usually infeasible to compute all SYS-X patterns and the corresponding projections for X > 3 within reasonable amounts of time and memory. Also, we hypothesize that even when considering only interesting patterns, usually only a small percentage of the systematic patterns up to size 3 contribute much information to the ing heuristic. For these two reasons we propose a new pattern selection algorithm that potentially considers all interesting patterns, but only selects the ones that it deems useful. Our new pattern selection algorithm repeatedly creates a new empty pattern sequence σ and only appends those interesting patterns to σ that increase any finite heuristic values of a saturated cost partitioning heuristic computed over σ. Algorithm 1 shows pseudo-code for the procedure, which we call SYS-SCP. 
It starts with an empty pattern collection C. In each iteration of the outer loop, SYS-SCP creates a new empty pattern sequence σ and then loops over the interesting patterns P ∈ SYS in the order chosen by ORDER (see Section 3.2) for at most T y seconds. SYS-SCP appends a pattern P to σ and includes it in C if there is a state s for which the saturated cost partitioning over σ extended by P has a higher finite heuristic value than the one over σ alone. Once an iteration selects no new patterns or SYS-SCP hits the time limit T x, the algorithm stops and returns C.We impose a time limit T x on the outer loop of the algorithm since the number of interesting patterns is exponential in the number of variables and therefore SYS-SCP usually cannot evaluate them all in a reasonable amount of time. By imposing a time limit T y on the inner loop, we allow SYS-SCP to periodically start over with a new empty pattern sequence. The most important component of the SYS-SCP algorithm is the PATTERNUSEFUL function that decides whether to select a pattern P. The function enumerates all states s ∈ S(Π), which is obviously infeasible for all but the smallest tasks Π. Fortunately, we can efficiently compute an equivalent test in the projection to P. Lemma 1. Consider a planning task Π with non-negative cost function cost and induced transition system T. Let s ∈ S(T) be a state, P be a pattern for Π and σ be a (possibly empty) sequence of patterns P 1,..., P n for Π. Finally, let rem be the remaining cost function after computing h DISPLAYFORM0 ⇔ 0 < h * T P (rem, s| P) < ∞ Step 1 substitutes P 1,..., P n for σ and Step 2 uses the definition of saturated cost partitioning heuristics. For Step 3 we need to show that x = n i=1 h Pi (cost i, s) is finite. The inequality states x < ∞. We now show x ≥ 0, which implies x > −∞. Using requirement 1 for saturated cost functions from Definition 7 and the fact that rem 0 = cost is non-negative, it is easy to see that all remaining cost functions are non-negative. Consequently, h Pi (cost i, s) = h Pi (rem i−1, s) ≥ 0 for all s ∈ S(T), which uses requirement 2 from Definition 7 and the fact that goal distances are non-negative in transition systems with non-negative weights. Step 4 uses the definition of PDB heuristics. Consider a planning task Π with non-negative cost function cost and induced transition system T. Let P be a single pattern and σ be a (possibly empty) sequence of patterns. Finally, let rem be the remaining cost function after computing DISPLAYFORM0 Follows directly from Lemma 1 and the fact that projections are induced abstractions: for each abstract state s in an induced abstraction there is at least one concrete state s which is projected to s.We use Theorem 1 in our SYS-SCP implementation by keeping track of the cost function rem, i.e., the costs that remain after computing h SCP σ. We select a pattern P if there are any goal distances d with 0 < d < ∞ in T P under rem. Theorem 1 also removes the need to compute h SCP σ⊕P from scratch for every pattern P. This is important since we want to decide whether or not to add P quickly and this operation should not become slower when σ contains more patterns. To obtain high finite heuristic values for solvable states it is important to choose good cost partitionings. In contrast, cost functions are irrelevant for detecting unsolvable states. This is the underlying reason why Lemma 1 only holds for finite values and therefore why SYS-SCP ignores unsolvable states. 
However, we can still use the information about unsolvable states contained in projections. It is easy to see that each abstract state in a projection corresponds to a partial state in the original task. If an abstract state is unsolvable in a projection, we call the corresponding partial state a dead end. Since projections preserve all paths, any state in the original task subsuming a dead end is unsolvable. We can extract all dead ends from the projections that SYS-SCP evaluates and use this information to prune unsolvable states during the A * search BID24. We showed in earlier work that the order in which saturated cost partitioning considers the component heuristics has a strong influence on the quality of the ing heuristic BID35. Choosing a good order is even more important for SYS-SCP, since it usually only sees a subset of interesting patterns within the allotted time. To ensure that this subset of interesting patterns covers different aspects of the planning task, we let the ORDER function generate the interesting patterns in increasing order by size. This leaves the question how to sort patterns of the same size. We propose four methods for making this decision. The first one (random) simply orders patterns of the same size randomly. The remaining three assign a key to each pattern, allowing us to sort by key in increasing or decreasing order. Causal Graph. The first ordering method is based on the insight that it is often more important to have accurate heuristic estimates near the goal states rather than elsewhere in the state space (e.g., BID15 BID39 . We therefore want to focus on patterns containing goal variables or variables that are closely connected to goal variables. To quantify "goalconnectedness" we use an approximate topological ordering ≺ of the causal graph CG(Π). We let the function cg: V → N + 0 assign each variable v ∈ V to its index in ≺. For a given pattern P, the cg ordering method returns the key cg(v 1),..., cg(v n), where v i ∈ P and cg(v i) < cg(v j) for all 1 ≤ i < j ≤ n. Since the keys are unique, they define a total order. Sorting the patterns by cg in decreasing order (cg-down), yields the desired order which starts with "goal-connected" patterns. States in Projection. Given a pattern P, the ordering method states returns the key |S(Π| P)|, i.e., the number of states in the projection to P. We use cg-down to break ties. Active Operators. Given a pattern P, the ops ordering method returns the number of operators that affect a variable in P. We break ties with cg-down. We implemented the SYS-SCP pattern selection algorithm in the Fast Downward planning system BID14 and conducted experiments with the Downward Lab toolkit on Intel Xeon Silver 4114 processors. Our benchmark set consists of all 1827 tasks without conditional effects from the optimization tracks of the 1998-2018 IPCs. The tasks belong to 48 different domains. We limit time by 30 minutes and memory by 3.5 GiB. All benchmarks 1, code 2 and experimental data 3 have been published online. To fairly compare the quality of different pattern collections, we use the same cost partitioning algorithm for all collections. Saturated cost partitioning is the obvious choice for the evaluation since experiments showed that it is preferable to all other cost partitioning algorithms for HC, SYS-2 and CPC patterns in almost all evaluated benchmark domains BID34 BID28.Diverse Saturated Cost Partitioning Heuristics. 
For a given pattern collection C, we compute diverse saturated cost partitioning heuristics using the diversification procedure by BID35: we start with an empty family of saturated cost partitioning heuristics F and a setŜ of 1000 sample states obtained with random walks ). Then we iteratively sample a new state s and compute a greedy order ω of C that works well for s BID36. If h SCP ω has a higher heuristic estimate for any state s ∈Ŝ than all heuristics in F, we add h SCP ω to F. We stop this diversification procedure after 200 seconds and then perform an A * search using the maximum over the heuristics in F. TAB1: Number of tasks solved by SYS-SCP using different time limits T x and T y for the outer loop (x axis) and inner loop (y axis).cg-up states-up random ops-down states-down ops-up cg-down Coverage cg-up -5 6 5 4 3 3 1140.0 states-up 6 -6 8 5 2 2 1153.0 random 10 10 -8 7 6 3 1148.2 ops-down 7 8 9 -4 7 3 1141.0 states-down 9 8 9 7 -4 2 1152.0 ops-up 11 12 12 11 11 -6 1166.0 cg-down 12 10 12 10 9 6 -1168.0 TAB2: Per-domain coverage comparison of different orders for patterns of the same size. The entry in row r and column c shows the number of domains in which order r solves more tasks than order c. For each order pair we highlight the maximum of the entries (r, c) and (c, r) in bold. Right: Total number of solved tasks. The for random are averaged over 10 runs (standard deviation: 3.36).Before we compare SYS-SCP to other pattern selection algorithms, we evaluate the effects of changing its parameters in four ablation studies. We use at most 2M states per PDB and 20M states in the PDB collection for all SYS-SCP runs. TAB1 shows that a time limit for the outer loop is more important than one for the inner loop, but for maximum coverage we need both limits. The combination that solves the highest number of tasks is 10s for the inner and 100s for the outer loop. We use these values in all other experiments. All configurations from Instead of discarding the computed pattern sequences when SYS-SCP finishes, we can turn each pattern sequence σ into a full pattern order by randomly appending all SYS-SCP patterns missing from σ to σ and pass the ing order to the diversification procedure. Feeding the diversification exclusively with such orders leads to solving 1130 tasks, while using only greedy orders for sample states BID36 ) solves 1156 tasks. We obtain the best by diversifying both types of orders, solving 1168 tasks, and we use this variant in all other experiments. In the next experiment, we evaluate the obvious baseline for SYS-SCP: selecting all (interesting) patterns up to a fixed size. TAB3 holds coverage of SYS-NAIVE-X and SYS-X for 1 ≤ X ≤ 5. We also include variants (*-LIM) that use at most 100 seconds, no more than 2M states in each projection and at most 20M states per collection. For the *-LIM variants, we sort the patterns in the cg-down order. The show that interesting patterns are always preferable to naive patterns, both with and without limits, which is why we only consider interesting patterns in SYS-SCP. Imposing limits is not important for SYS-1 and SYS-2, but leads to solving many more tasks for X ≥ 3. Overall, SYS-3-LIM has the highest total coverage (1088 tasks). In Table 4 we compare SYS-SCP to the strongest pattern selection algorithms from the literature: HC, SYS-3-LIM, CPC and CEGAR. (See Table 6 for per-domain coverage .) 
We run each algorithm with its preferred parameter values, which implies using at most 900s for HC and CPC and 100s for the other algorithms. HC is outperformed by all other algorithms. Interestingly, already the simple SYS-3-LIM approach is competitive with Table 4: Per-domain coverage comparison of pattern selection algorithms. For an explanation of the data see the caption of TAB2.CPC and CEGAR. However, we obtain the best with SYS-SCP. It is preferable to all other pattern selection algorithms in per-domain comparisons: no algorithm has higher coverage than SYS-SCP in more than three domains, while SYS-SCP solves more tasks than each of the other algorithms in at least 21 domains. SYS-SCP also has the highest total coverage of 1168 tasks, solving 70 more tasks than the strongest contender. This is a considerable improvement in the setting of optimal classical planning, where task difficulty tends to scale exponentially. In our final experiment, we evaluate whether Scorpion BID37, one of the strongest optimal planners in IPC 2018, benefits from using SYS-SCP patterns. Scorpion computes diverse saturated cost partitioning heuristics over HC and SYS-2 PDB heuristics and Cartesian abstraction heuristics (CART) BID31. We abbreviate this combination with COMB=HC+SYS-2+CART. In TAB6 we compare the original Scorpion planner, three Scorpion variants that use different sets of heuristics and the top three optimal planners from IPC 2018, Delfi 1 (Sievers et al. 2019), Complementary 1 BID9 and Complementary 2 BID8 ). (Table 6 holds perdomain coverage .) In contrast to the configurations we evaluated above, all planners in TAB6 prune irrelevant operators in a preprocessing step BID0.The show that all Scorpion variants outperform the top three IPC 2018 planners in per-domain comparisons. We also see that Scorpion benefits from using SYS-SCP PDBs instead of the COMB heuristics in many domains. Using the union of both sets is clearly preferable to using either COMB or SYS-SCP alone, since it raises the total coverage to 1261 by 56 and 44 tasks, respectively. For maximum coverage (1265 tasks), Scorpion only needs SYS-SCP PDBs and Cartesian abstraction heuristics. We introduced a new pattern selection algorithm based on saturated cost partitioning and showed that it outperforms Table 6: Number of tasks solved by different planners. | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HkfUEWPpDE | Using saturated cost partitioning to select patterns is preferable to all existing pattern selection algorithms. |
State-of-the-art results in neural machine translation are often obtained with attentional sequence-to-sequence models that use some form of convolution or recurrence. Vaswani et al. propose a new architecture that avoids recurrence and convolution completely. Instead, it uses only self-attention and feed-forward layers. While the proposed architecture achieves state-of-the-art results on several machine translation tasks, it requires a large number of parameters and training iterations to converge. We propose the Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15-40% faster. Specifically, we replace the multi-head attention by multiple self-attention branches that the model learns to combine during the training process. Our model improves the state-of-the-art performance by 0.5 BLEU points on the WMT 2014 English-to-German translation task and by 0.4 on the English-to-French translation task.

Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs) BID12, form an important building block for many tasks that require modeling of sequential data. RNNs have been successfully employed for several such tasks including language modeling BID23 BID24 BID25, speech recognition BID9 BID19, and machine translation BID34. RNNs make output predictions at each time step by computing a hidden state vector h_t based on the current input token and the previous states. This sequential computation underlies their ability to map arbitrary input-output sequence pairs. However, because of their auto-regressive property of requiring previous hidden states to be computed before the current time step, they cannot benefit from parallelization. Variants of recurrent networks that use strided convolutions eschew the traditional time-step-based computation BID15 BID20 BID4 BID7 BID6 BID16. However, in these models, the operations needed to capture dependencies between distant positions can be difficult to learn BID13 BID11. Attention mechanisms, often used in conjunction with recurrent models, have become an integral part of complex sequential tasks because they facilitate learning of such dependencies BID22 BID26 BID27 BID17.

In BID33, the authors introduce the Transformer network, a novel architecture that avoids the recurrence equation and maps the input sequences into hidden states solely using attention. Specifically, the authors use positional encodings in conjunction with a multi-head attention mechanism. This allows for increased parallel computation and reduces time to convergence. The authors report results for neural machine translation that show the Transformer network achieves state-of-the-art performance on the WMT 2014 English-to-German and English-to-French tasks while being orders of magnitude faster than prior approaches. Transformer networks still require a large number of parameters to achieve state-of-the-art performance. In the case of the newstest2013 English-to-German translation task, the base model required 65M parameters, and the large model required 213M parameters. We propose a variant of the Transformer network, which we call the Weighted Transformer, that uses self-attention branches in lieu of the multi-head attention. The branches replace the multiple heads in the attention mechanism of the original Transformer network, and the model learns to combine these branches during training. This branched architecture enables the network to achieve comparable performance at a significantly lower computational cost.
Indeed, through this modification, we improve the state-of-the-art performance by 0.5 and 0.4 BLEU scores on the WMT 2014 English-to-German and English-to-French tasks, respectively. Finally, we present evidence that suggests a regularizing effect of the proposed architecture.

Most architectures for neural machine translation (NMT) use an encoder and a decoder that rely on deep recurrent neural networks like the LSTM BID22 BID34 BID3. Several architectures have been proposed to reduce the computational load associated with recurrence-based computation BID7 BID6 BID15 BID16. Self-attention, which relies on dot-products between elements of the input sequence to compute a weighted sum BID21 BID26 BID17, has also been a critical ingredient in modern NMT architectures. The Transformer network BID33 avoids recurrence completely and uses only self-attention. We propose a modified Transformer network wherein the multi-head attention layer is replaced by a branched self-attention layer. The contributions of the various branches are learned as part of the training procedure. The idea of multi-branch networks has been explored in several domains BID0 BID6 BID35. To the best of our knowledge, this is the first model using a branched structure in the Transformer network. Earlier work uses a large network, with billions of weights, in conjunction with a sparse expert model to achieve competitive performance. BID0 analyze learned branching, through gates, in the context of computer vision, while in BID6, the author analyzes a two-branch model with randomly sampled weights in the context of image classification.

The original Transformer network uses an encoder-decoder architecture with each layer consisting of a novel attention mechanism, which the authors call multi-head attention, followed by a feed-forward network. We describe both these components below. From the source tokens, learned embeddings of dimension d_model are generated, which are then modified by an additive positional encoding. The positional encoding is necessary since the network does not otherwise possess any means of leveraging the order of the sequence, as it contains no recurrence or convolution. The authors use an additive encoding which is defined as:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model)),   PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)),   (1)

where pos is the position of a word in the sentence and i is the dimension of the vector. The authors also experiment with learned positional embeddings BID7 BID6 but found no benefit in doing so. The encoded word embeddings are then used as input to the encoder, which consists of N layers, each containing two sub-layers: (a) a multi-head attention mechanism, and (b) a feed-forward network.

A multi-head attention mechanism builds upon scaled dot-product attention, which operates on a query Q, key K and a value V:

Attention(Q, K, V) = softmax(Q K^T / √d_k) V,   (2)

where d_k is the dimension of the key. In the first layer, the inputs are concatenated such that each of (Q, K, V) is equal to the word vector matrix. This is identical to dot-product attention except for the scaling factor √d_k, which improves numerical stability. Multi-head attention mechanisms obtain h different representations of (Q, K, V), compute scaled dot-product attention for each representation, concatenate the results, and project the concatenation with a feed-forward layer. This can be expressed in the same notation as Equation (2):

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O,
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V),   (3)

where the W_i and W^O are parameter projection matrices that are learned. Note that

W_i^Q ∈ R^(d_model × d_k),   W_i^K ∈ R^(d_model × d_k),   W_i^V ∈ R^(d_model × d_v),   W^O ∈ R^(h·d_v × d_model),   (4)

where h denotes the number of heads in the multi-head attention.
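To make Equations (1)-(4) concrete, the following is a minimal NumPy sketch of the positional encoding, scaled dot-product attention, and multi-head attention described above. The dimensions, random seed, and variable names are illustrative choices for this example and are not taken from the paper's implementation.

import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal encoding of Equation (1): one row per position."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Equation (2): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k), axis=-1) @ V

def multi_head_attention(Q, K, V, head_projections, W_o):
    """Equations (3)-(4): attend per head, concatenate, project with W_o."""
    heads = [scaled_dot_product_attention(Q @ W_q, K @ W_k, V @ W_v)
             for (W_q, W_k, W_v) in head_projections]
    return np.concatenate(heads, axis=-1) @ W_o

# Toy usage with d_model = 8, h = 2 heads, and d_k = d_v = d_model / h = 4.
rng = np.random.default_rng(0)
n, d_model, h, d_k = 5, 8, 2, 4
X = rng.normal(size=(n, d_model)) + positional_encoding(n, d_model)
head_projections = [tuple(rng.normal(size=(d_model, d_k)) for _ in range(3))
                    for _ in range(h)]
W_o = rng.normal(size=(h * d_k, d_model))
print(multi_head_attention(X, X, X, head_projections, W_o).shape)  # (5, 8)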
BID33 proportionally reduce d_k = d_v = d_model / h so that the computational load of the multi-head attention is the same as that of simple self-attention.

The second component of each layer of the Transformer network is a feed-forward network. The authors propose using a two-layered network with a ReLU activation. Given trainable weights W_1, W_2, b_1, b_2, the sub-layer is defined as:

FFN(x) = max(0, x W_1 + b_1) W_2 + b_2.   (5)

The dimension of the inner layer is d_ff, which is set to 2048 in their experiments. For the sake of brevity, we refer the reader to BID33 for additional details regarding the architecture. For regularization and ease of training, the network uses layer normalization BID1 after each sub-layer and a residual connection around each full layer. Analogously, each layer of the decoder contains the two sub-layers mentioned above as well as an additional multi-head attention sub-layer that receives as inputs (V, K) from the output of the corresponding encoding layer. In the case of the decoder multi-head attention sub-layers, the scaled dot-product attention is masked to prevent future positions from being attended to, or in other words, to prevent illegal leftward information flow.

One natural question regarding the Transformer network is why self-attention should be preferred to recurrent or convolutional models. BID33 state three reasons for the preference: (a) computational complexity of each layer, (b) concurrency, and (c) path length between long-range dependencies. Assuming a sequence length of n and vector dimension d, the complexity of each layer is O(n^2 d) for self-attention layers while it is O(n d^2) for recurrent layers. Given that typically d > n, the complexity of self-attention layers is lower than that of recurrent layers. Further, the number of sequential computations is O(1) for self-attention layers and O(n) for recurrent layers. This helps improve utilization of parallel computing architectures. Finally, the maximum path length between dependencies is O(1) for the self-attention layer while it is O(n) for the recurrent layer. This difference is instrumental in impeding recurrent models' ability to learn long-range dependencies.

We now describe the proposed architecture, the Weighted Transformer, which is more efficient to train and makes better use of representational power. In Equations (3)-(5), we described the attention layer proposed in BID33, comprising the multi-head attention sub-layer and an FFN sub-layer. For the Weighted Transformer, we propose a branched attention that modifies the entire attention layer in the Transformer network (including both the multi-head attention and the feed-forward network). The proposed attention layer can be mathematically described as:

head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V),   (6)
head'_i = FFN_i(κ_i head_i W_i^O),   (7)
BranchedAttention(Q, K, V) = Σ_{i=1}^{M} α_i head'_i,   (8)

where M denotes the total number of branches, κ_i, α_i ∈ R_+ are learned parameters and W_i^O ∈ R^(d_v × d_model). The FFN_i functions above are identical in form to Equation (5), but they have commensurately reduced dimensionality to ensure that no additional parameters are added to the network. Further, we require that Σ_i κ_i = 1 and Σ_i α_i = 1 so that Equation (8) is a weighted sum of the individual branch attention values.
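As a concrete illustration of Equations (6)-(8), the following is a minimal NumPy sketch of the branched attention. The branch count, the reduced per-branch FFN width, and all variable names are assumptions made for the example; this is a sketch of the mechanism as described above, not the authors' implementation.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, as in Equation (2)."""
    return softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1) @ V

def branch_ffn(x, W1, b1, W2, b2):
    """Same form as Equation (5), with a reduced inner dimension per branch."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def branched_attention(Q, K, V, branches, kappa, alpha):
    """Equations (6)-(8): attend per branch, scale by kappa, apply a
    per-branch FFN, and combine the branch outputs with the alpha weights."""
    out = 0.0
    for (W_q, W_k, W_v, W_o, ffn), k_i, a_i in zip(branches, kappa, alpha):
        head_i = attention(Q @ W_q, K @ W_k, V @ W_v)            # Eq (6)
        head_i_prime = branch_ffn(k_i * (head_i @ W_o), *ffn)    # Eq (7)
        out = out + a_i * head_i_prime                           # Eq (8)
    return out

# Toy usage: d_model = 8, M = 4 branches, d_k = d_v = d_model / M = 2.
rng = np.random.default_rng(0)
n, d_model, M, d_k, d_inner = 5, 8, 4, 2, 4
X = rng.normal(size=(n, d_model))
branches = []
for _ in range(M):
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
    W_o = rng.normal(size=(d_k, d_model))
    ffn = (rng.normal(size=(d_model, d_inner)), np.zeros(d_inner),
           rng.normal(size=(d_inner, d_model)), np.zeros(d_model))
    branches.append((W_q, W_k, W_v, W_o, ffn))
kappa = np.full(M, 1.0 / M)  # learned in the paper; uniform here for the demo
alpha = np.full(M, 1.0 / M)  # both lie on the probability simplex
print(branched_attention(X, X, X, branches, kappa, alpha).shape)  # (5, 8)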
In the same notation as-, the attention layer in the base model can be described as: DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 Instead of aggregating the contributions from the different heads through W O right away and using a feed-forward sub-layer, we retain head i for each of the M heads, learn to amplify or diminish their contribution, use a feed-forward sub-layer and then aggregate them, again in a learned fashion. In the equations above, κ can be interpreted as a learned concatenation weight and α as the learned addition weight. Indeed, κ scales the contribution of the various branches before α is used to sum them in a weighted fashion. We ensure that the simplex constraint is respected during each training step by projection. Finally, note that our modification does not add depth (i.e., through the FFN sublayers) to any of the attention head transformation since the feed-forward computation is merely split and not stacked. One interpretation of our proposed architecture is that it replaces the multi-head attention by a multibranch attention. Rather than concatenating the contributions of the different heads, they are instead treated as branches that a multi-branch network learns to combine. While it is possible that α and κ could be merged into one variable and trained, we found better training outcomes by separating them. It also improves the interpretability of the models gives that (α, κ) can be thought of as probability masses on the various branches. This mechanism adds O(M) trainable weights. This is an insignificant increase compared to the total number of weights. Indeed, in our experiments, the proposed mechanism added 192 weights to a model containing 213M weights already. Without these additional trainable weights, the proposed mechanism is identical to the multi-head attention mechanism in the Transformer. The proposed attention mechanism is used in both the encoder and decoder layers and is masked in the decoder layers as in the Transformer network. Similarly, the positional encoding, layer normalization, and residual connections in the encoder-decoder layers are retained. We eliminate these details from FIG0 for clarity. Instead of using (α, κ) learned weights, it is possible to also use a mixture-ofexperts normalization via a softmax layer. However, we found this to perform worse than our proposal. Unlike the Transformer, which weighs all heads equally, the proposed mechanism allows for ascribing importance to different heads. This in turn prioritizes their gradients and eases the optimization process. Further, as is known from multi-branch networks in computer vision BID6, such mechanisms tend to cause the branches to learn decorrelated input-output mappings. This reduces co-adaptation and improves generalization. This observation also forms the basis for mixture-ofexperts models. The weights κ and α are initialized randomly, as with the rest of the Transformer weights. In addition to the layer normalization and residual connections, we use label smoothing with ls = 0.1, attention dropout, and residual dropout with probability P drop = 0.1. Attention dropout randomly drops out elements BID31 from the softmax in.As in BID33, we used the Adam optimizer BID18 with (β 1, β 2) = (0.9, 0.98) and = 10 −9. 
We also use the learning rate warm-up strategy for Adam wherein the learning rate lr takes on the form: DISPLAYFORM0 for the all parameters except (α, κ) and DISPLAYFORM1 for (α, κ).This corresponds to the warm-up strategy used for the original Transformer network except that we use a larger peak learning rate for (α, κ) to compensate for their bounds. Further, we found that freezing the weights (κ, α) in the last 10K iterations aids convergence. During this time, we continue training the rest of the network. We hypothesize that this freezing process helps stabilize the rest of the network weights given the weighting scheme. We note that the number of iterations required for convergence to the final score is substantially reduced for the Weighted Transformer. We found that Weighted Transformer converges 15-40% faster as measured by the total number of iterations to achieve optimal performance. We train the baseline model for 100K steps for the smaller variant and 300K for the larger. We train the Weighted Transformer for the respective variants for 60K and 250K iterations. We found that the objective did not significantly improve by running it for longer. Further, we do not use any averaging strategies employed in BID33 and simply return the final model for testing purposes. In order to reduce the computational load associated with padding, sentences were batched such that they were approximately of the same length. All sentences were encoded using byte-pair encoding BID29 and shared a common vocabulary. Weights for word embeddings were tied to corresponding entries in the final softmax layer BID14 BID28. We trained all our networks on NVIDIA K80 GPUs with a batch containing roughly 25,000 source and target tokens. We benchmark our proposed architecture on the WMT 2014 English-to-German and English-toFrench tasks. The WMT 2014 English-to-German data set contains 4.5M sentence pairs. The English-to-French contains 36M sentence pairs. Transformer (small) BID33 27.3 38.1 Weighted Transformer (small) 28.4 38.9Transformer (large) BID33 28.4 41.0 Weighted Transformer (large) 28.9 41.4ByteNet BID16 23.7 -Deep-Att+PosUnk BID37 -39.2 GNMT+RL BID34 24.6 39.9 ConvS2S BID8 25.2 40.5 MoE 26.0 40.6 Table 1: Experimental on the WMT 2014 English-to-German (EN-DE) and English-toFrench (EN-FR) translation tasks. Our proposed model outperforms the state-of-the-art models including the Transformer BID33. The small model corresponds to configuration (A) in TAB1 while large corresponds to configuration (B).Results of our experiments are summarized in Table 1. The Weighted Transformer achieves a 1.1 BLEU score improvement over the state-of-the-art on the English-to-German task for the smaller network and 0.5 BLEU improvement for the larger network. In the case of the larger English-toFrench task, we note a 0.8 BLEU improvement for the smaller model and a 0.4 improvement for the larger model. Also, note that the performance of the smaller model for Weighted Transformer is close to that of the larger baseline model, especially for the English-to-German task. This suggests that the Weighted Transformer better utilizes available model capacity since it needs only 30% of the parameters as the baseline transformer for matching its performance. Our relative improvements do not hinge on using the BLEU scores for comparison; experiments with the GLEU score proposed in BID34 also yielded similar improvements. Finally, we comment on the regularizing effect of the Weighted Transformer. 
Given the improved , a natural question is whether the stem from improved regularization of the model. To investigate this, we report the testing loss of the Weighted Transformer and the baseline Transformer against the training loss in FIG1. Models which have a regularizing effect tend to have lower testing losses for the same training loss. We see this effect in our experiments suggesting that the proposed architecture may have better regularizing properties. This is not unexpected given similar outcomes for other branching-based strategies such as Shake-Shake Gastaldi FORMULA1 BID33 architecture and our proposed Weighted Transformer. Reported BLEU scores are evaluated on the English-to-German translation development set, newstest2013. Weighted Transformer 24.8 Train κ, α fixed to 1 24.5 Train α, κ fixed to 1 23.9 α, κ both fixed to 1 23.6 Without the simplex constraints 24.5 Table 3: Model ablations of Weighted Transformer on the newstest2013 English-to-German task for configuration (C). This shows that the learning both (α, κ) and retaining the simplex constraints are critical for its performance. In TAB1, we report sensitivity on the newstest2013 English-to-German task. Specifically, we vary the number of layers in the encoder/decoder and compare the performance of the Weighted Transformer and the Transformer baseline. Using the same notation as used in the original Transformer network, we label our configurations as (A), (B) and (C) with (C) being the smallest. The clearly demonstrate the benefit of the branched attention; for every experiment, the Weighted Transformer outperforms the baseline transformer, in some cases by up to 1.3 BLEU points. As in the case of the baseline Transformer, increasing the number of layers does not necessarily improve performance; a modest improvement is seen when the number of layers N is increased from 2 to 4 and 4 to 6 but the performance degrades when N is increased to 8. Increasing the number of heads from 8 to 16 in configuration (A) yielded an even better BLEU score. However, preliminary experiments with h = 16 and h = 32, like in the case with N, degrade the performance of the model. In Figure 3, we present the behavior of the weights (α, κ) for the second encoder layer of the configuration (C) for the English-to-German newstest2013 task. The figure shows that, in terms of relative weights, the network does prioritize some branches more than others; circumstantially by as much as 2×. Further, the relative ordering of the branches changes over time suggesting that the network is not purely exploitative. A purely exploitative network, which would learn to exploit a subset of the branches at the expense of the rest, would not be preferred since it would effectively reduce the number of available parameters and limit the representational power. Similar are seen for other layers, including the decoder layers; we omit them for brevity. Finally, we present an ablation study to highlight the ingredients of our proposal that assisted the improved BLEU score in Table 3. The show that having both α and κ as learned, in conjunction with the simplex constraint, was necessary for improved performance. Figure 3: Convergence of the (α, κ) weights for the second encoder layer of Configuration (C) for the English-to-German newstest2013 task. We smoothen the curves using a mean filter. This shows that the network does prioritize some branches more than others and that the architecture does not exploit a subset of the branches while ignoring others. 
Weights (α, κ) BLEU Learned 24.8 Random 21.1 Uniform 23.4 Table 4: Performance of the architecture with random and uniform normalization weights on the newstest2013 English-to-German task for configuration (C). This shows that the learned (α, κ) weights of the Weighted Transformer are crucial to its performance. The proposed modification can also be interpreted as a form of Shake-Shake regularization proposed in BID6. In this regularization strategy, random weights are sampled during forward and backward passes for weighing the various branches in a multi-branch network. During test time, they are weighed equally. In our strategy, the weights are learned instead of being sampled randomly. Consequently, no changes to the model are required during test time. In order to better understand whether the network benefits from the learned weights or if, at test time, random or uniform weights suffice, we propose the following experiment: the weights for the Weighted Transformer, including (α, κ) are trained as before, but, during test time, we replace them with (a) randomly sampled weights, and (b) 1/M where M is the number of incoming branches. In Table 4, we report experimental on the configuration (C) of the Weighted Transformer on the English-to-German newstest2013 data set (see TAB1 for details regarding the configuration). It is evident that random or uniform weights cannot replace the learned weights during test time. Preliminary experiments suggest that a Shake-Shake-like strategy where the weights are sampled randomly during training also leads to inferior performance. In order to analyze whether a hard (discrete) choice through gating will outperform our normalization strategy, we experimented with using gates instead of the proposed concatenation-addition strategy. Specifically, we replaced the summation in Equation FORMULA8 by a gating structure that sums up the contributions of the top k branches with the highest probabilities. This is similar to the sparselygated mixture of experts model in. Despite significant hyper-parameter tuning of k and M, we found that this strategy performs worse than our proposed mechanism by a large margin. We hypothesize that this is due to the fact that the number of branches is low, typically less than 16. Hence, sparsely-gated models lose representational power due to reduced capacity in the model. We plan to investigate the setup with a large number of branches and sparse gates in future work. We present the Weighted Transformer that trains faster and achieves better performance than the original Transformer network. The proposed architecture replaces the multi-head attention in the Transformer network by a multiple self-attention branches whose contributions are learned as a part of the training process. We report numerical on the WMT 2014 English-to-German and English-to-French tasks and show that the Weighted Transformer improves the state-of-the-art BLEU scores by 0.5 and 0.4 points respectively. Further, our proposed architecture trains 15 − 40% faster than the baseline Transformer. Finally, we present evidence suggesting the regularizing effect of the proposal and emphasize that the relative improvement in BLEU score is observed across various hyper-parameter settings for both small and large models. | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SkYMnLxRW | Using branched attention with learned combination weights outperforms the baseline transformer for machine translation tasks. |
Imitation learning provides an appealing framework for autonomous control: in many tasks, demonstrations of preferred behavior can be readily obtained from human experts, removing the need for costly and potentially dangerous online data collection in the real world. However, policies learned with imitation learning have limited flexibility to accommodate varied goals at test time. Model-based reinforcement learning (MBRL) offers considerably more flexibility, since a predictive model learned from data can be used to achieve various goals at test time. However, MBRL suffers from two shortcomings. First, the model does not help to choose desired or safe outcomes -- its dynamics estimate only what is possible, not what is preferred. Second, MBRL typically requires additional online data collection to ensure that the model is accurate in those situations that are actually encountered when attempting to achieve test time goals. Collecting this data with a partially trained model can be dangerous and time-consuming. In this paper, we aim to combine the benefits of imitation learning and MBRL, and propose imitative models: probabilistic predictive models able to plan expert-like trajectories to achieve arbitrary goals. We find this method substantially outperforms both direct imitation and MBRL in a simulated autonomous driving task, and can be learned efficiently from a fixed set of expert demonstrations without additional online data collection. We also show our model can flexibly incorporate user-supplied costs at test-time, can plan to sequences of goals, and can even perform well with imprecise goals, including goals on the wrong side of the road. Reinforcement learning (RL) algorithms offer the promise of automatically learning behaviors from raw sensory inputs with minimal engineering. However, RL generally requires online learning: the agent must collect more data with its latest strategy, use this data to update a model, and repeat. While this is natural in some settings, deploying a partially-trained policy on a real-world autonomous system, such as a car or robot, can be dangerous. In these settings the behavior must be learned offline, usually with expert demonstrations. How can we incorporate such demonstrations into a flexible robotic system, like an autonomous car? One option is imitation learning (IL), which can learn policies that stay near the expert's distribution. Another option is model-based RL (MBRL) BID8 BID2, which can use the data to fit a dynamics model, and can in principle be used with planning algorithms to achieve any user-specified goal at test time. However, in practice, model-based and model-free RL algorithms are vulnerable to distributional drift BID32 BID24: when acting according to the learned model or policy, the agent visits states different from those seen during training, and in those it is unlikely to determine an effective course of action. This is especially problematic when the data intentionally excludes adverse events, such as crashes. A model ignorant to the possibility of a crash cannot know how to prevent it. Therefore, MBRL algorithms usually require online collection and training BID6 BID12. Imitation learning algorithms use expert demonstration data and, despite similar drift shortcomings BID26, can sometimes learn effective policies without additional online data collection BID35. However, standard IL offers little task flexibility since it only predicts low-level behavior. 
While several works augmented IL with goal conditioning BID4 BID1, these goals must be specified in advance during training, and are typically simple (e.g., turning left or right).Figure 1: We apply our approach to navigation in CARLA BID5. Columns 1,2: Images depicting the current scene. The overhead image depicts a 50 m 2 area. Column 3: LIDAR input and goals are provided to our deep imitative trajectory model, and plans to the goals are computed under the model's likelihood objective, and colored according to their ranking under the objective, with red indicating the best plan. The red square indicates the chosen high-level goal, and the yellow cross indicates a point along our plan used as a setpoint for a PID controller. The LIDAR map is 100 m 2, and each goal is ≥20 m away from the vehicle. Column 4: Our model can incorporate arbitrary test-time costs, and use them to adjust its planning objective and plan ranking. Figure 2: A brief taxonomy of learning-based control methods. In our scenario, we avoid online data collection, specifically from the policy we seek to imitate. We structure our imitation learner with a model to make it flexible to new tasks at test time. We compare against other offline approaches (front face).The goal in our work is to devise a new algorithm that combines the advantages of IL and MBRL, affording both the flexibility to achieve new user-specified goals at test time and the ability to learn entirely from offline data. By learning a deep probabilistic predictive model from expert-provided data, we capture the distribution of expert behaviors without using manually designed reward functions. To plan to a goal, our method infers the most probable expert state trajectory, conditioned on the current position and reaching the goal. By incorporating a model-based representation, our method can easily plan to previously unseen user-specified goals while respecting rules of the road, and can be flexibly repurposed to perform a wide range of test-time tasks without any additional training. Inference with this model resembles trajectory optimization in model-based reinforcement learning, and learning this model resembles imitation learning. Our method's relationship to other work is illustrated in Fig. 2. We demonstrate our method on a simulated autonomous driving task (see FIG0 . A high-level route planner provides navigational goals, which our model uses to automatically generate plans that obey the rules of the road, inferred entirely from data. In contrast to IL, our method produces an interpretable distribution over trajectories and can follow a variety of goals without additional training. In contrast to MBRL, our method generates human-like behaviors without additional data collection or learning. In our experiments, our approach substantially outperforms both MBRL and IL: it can efficiently learn near-perfect driving through the static-world CARLA simulator from just 7,000 trajectories obtained from 19 hours of driving. We also show that our model can flexibly incorporate and achieve goals not seen during training, and is robust to errors in the high-level navigation system, even when the high-level goals are on the wrong side of the road. Videos of our are available. To learn robot dynamics that are not only possible, but preferred, we construct a model of expert behavior. We fit a probabilistic model of trajectories, q, to samples of expert trajectories drawn from an unknown distribution p. 
A probabilistic model is necessary because expert behavior is often stochastic and multimodal: e.g., choosing to turn either left or right at an intersection are both common decisions. Because an expert's behavior depends on their perception, we condition our model, q, on observations φ. In our application, φ includes LIDAR features χ ∈ R H×W ×C and a small window of previous positions s −τ :0 = {s −τ, . . ., s 0}, such that φ = {χ, s −τ :0}.By training q(s 1:T |φ) to forecast expert trajectories with high likelihood, we model the sceneconditioned expert dynamics, which can score trajectories by how likely they are to come from the expert. At test time, q(s 1:T |φ) serves as a learned prior over the set of undirected expert trajectories. To execute samples from this distribution is to imitate an expert driver in an undirected fashion. We first describe how we use the generic form of this model to plan, and then discuss our particular implementation in Section 2.2. Besides simply imitating the expert demonstrations, we wish to direct our agent to desired goals at test time, and have the agent reason automatically about the mid-level details necessary to achieve these goals. In general, we can define a driving task by a set of goal variables G. We will instantiate examples of G concretely after the generic goal planning derivation. The probability of a plan conditioned on the goal G is given as posterior distribution p(s 1:T |G, φ). Planning a trajectory under this posterior corresponds to MAP inference with prior q(s 1:T |φ) and likelihood p(G|s 1:T, φ). We briefly derive the MAP inference starting from the posterior maximization objective, which uses the learned Imitative Model to generate plans that achieve abstract goals: DISPLAYFORM0 Waypoint planning: One example of a concrete inference task is to plan towards a specific goal location, or waypoint. We can achieve this task by using a tightly-distributed goal likelihood function centered at the user's desired final state. This effectively treats a desired goal location, g T, as if it were a noisy observation of a future state, with likelihood p(G|s 1:T, φ) = N (g T |s T, I). The ing inference corresponds to planning the trajectory s 1:T to a likely point under the distribution N (g T |s T, I). We can also plan to successive states with DISPLAYFORM1 ) if the user (or program) wishes to specify the desired end velocity or acceleration when reached the final goal g T location FIG3. Alternatively, a route planner may propose a set of waypoints with the intention that the robot should reach any one of them. This is possible using a Gaussian mixture likelihood and can be useful if some of those waypoints along a route are inadvertently located at obstacles or potholes (Fig. 4).Waypoint planning leverages the advantage of conditional imitation learning: a user or program can communicate where they desire the agent to go without knowing the best and safest actions. The planning-as-inference procedure produces paths similar to how an expert would acted to reach the given goal. In contrast to black-box, model-free conditional imitation learning that regresses controls, our method produces an explicit plan, accompanied by an explicit score of the plan's quality. This provides both interpretability and an estimate of the feasibility of the plan. Costed planning: If the user desires more control over the plan, our model has the additional flexibility to accept arbitrary user-specified costs c at test time. 
For example, we may have updated knowledge of new hazards at test time, such as a given map of potholes (Fig. 4) or a predicted cost map. Given costs c(s i |φ), this can be treated by including an optimality variable C in G, where BID33 BID11. The goal log-likelihood is log p({g T, C = 1}|s 1:T, φ) = log N (g T |s T, I) + Figure 4: Imitative planning to goals subject to a cost at test time. The cost bumps corresponds to simulated "potholes," which the imitative planner is tasked with avoiding. The imitative planner generates and prefers routes that curve around the potholes, stay on the road, and respect intersections. Demonstrations of this behavior were never observed by our model. DISPLAYFORM2 The primary structural requirement of an Imitative Model is the ability to compute q(s 1:T |φ). The ability to also compute gradients ∇ s 1:T q(s 1:T |φ) enables gradient-based optimization for planning. Finally, the quality and efficiency of learning are important. One deep generative model for Imitation Learning is the Reparameterized Pushforward Policy (R2P2) BID23 ). R2P2's use of pushforward distributions BID16, employed in other invertible generative models BID21 BID3 allows it to efficiently minimize both false positives and false negatives (type I and type II errors) BID18. Optimization of KL(p, q), which penalizes mode loss (false negatives), is straightforward with R2P2, as it can evaluate q(s 1:T |φ). Here, p is the sampleable, but unknown, distribution of expert behavior. Reducing false positives corresponds to minimizing KL(q, p), which penalizes q heavily for generating bad DISPLAYFORM0 DISPLAYFORM1 end while 6: return s 1:T samples under p. As p is unknown, R2P2 first uses a spatial cost modelp to approximate p, which we can also use as c in our planner. The learning objective is KL(p, q) + βKL(q,p).Figure 5: Architecture of m t and σ t, modified from BID23 with permission. In R2P2, q(s 1:T |φ) is induced by an invertible, differentiable function: f (z; φ): R 2T → R 2T, which warps latent samples from a base distribution z ∼ q 0 = N (0, I 2T ×2T) to the output space over s 1:T. f embeds the evolution of learned discrete-time stochastic dynamics; each state is given by: DISPLAYFORM2 The m t ∈ R 2 and σ t ∈ R 2×2 are computed by expressive, nonlinear neural networks that observe previous states and LIDAR input. The ing trajectory distribution is complex and multimodal. We modified the RNN method described by BID23 and used LIDAR features χ = R 200×200×2, with χ ij representing a 2-bin histogram of points below and above the ground in 0.5 m 2 cells (Fig 5). We used T = 40 trajectories at 5Hz (8 seconds of prediction or planning), τ = 19. At test time, we use three layers of spatial abstractions to plan to a faraway destination, common to model-based (not end-to-end) autonomous vehicle setups: coarse route planning over a road map, path planning within the observable space, and feedback control to follow the planned path BID19 BID29. For instance, a route planner based on a conventional GPSbased navigation system might output waypoints at a resolution of 20 meters -roughly indicating the direction of travel, but not accounting for the rules of the road or obstacles. The waypoints are treated as goals and passed to the Imitative Planner (Algorithm 1), which then generates a path chosen according to the optimization in Eq. 1. These plans are fed to a low-level controller (we use a PID-controller) that follows the plan. In Fig. 6 we illustrate how we use our model in our application. 
Figure 6: Illustration of our method applied to autonomous driving. Our method trains an Imitative Model from a dataset of expert examples. After training, the model is repurposed as an Imitative Planner. At test time, a route planner provides waypoints to the Imitative Planner, which computes expert-like paths to each goal. The best plan chosen according to the planning objective, and provided to a low-level PID-controller in order to produce steering and throttle actions. Previous work has explored conditional IL for autonomous driving. Two model-free approaches were proposed by BID1, to map images to actions. The first uses three network "heads", each head only trained on an expert's left/straight/right turn maneuvers. The robot is directed by a route planner that chooses the desired head. Their second method input the goal location into the network, however, this did not perform as well. While model-free conditional IL can be effective given a discrete set of user directives, our model-based conditional IL has several advantages. Our model has flexibility to handle more complex directives post training, e.g. avoiding hazardous potholes (Fig. 4) or other costs, the ability to rank plans and goals by its objective, and interpretability: it can generate entire planned and unplanned (undirected) trajectories. Work by BID12 also uses multi-headed model-free conditional imitation learning to "warm start" a DDPG driving algorithm BID13. While warm starting hastens DDPG training, any subsequent DDPG post fine-tuning is inherently trial-and-error based, without guarantees of safety, and may crash during this learning phase. By contrast, our method never executes unlikely transitions w.r.t. expert behavior at training time nor at test time. Our method can also stop the car if no plan reaches a minimum threshold, indicating none are likely safe to execute. While our target setting is offline data collection, online imitation learning is an active area of research in the case of hybrid IL-RL BID25 BID31 and "safe" IL BID30 BID17 BID34. Although our work does not consider multiagent environments, several methods predict the behavior of other vehicles or pedestrians. Typically this involves recurrent neural networks combined with Gaussian density layers or generative models based on some context inputs such as LIDAR, images, or known positions of external agents BID28 BID37 BID7 BID14. However, none of these methods can evaluate the likelihood of trajectories or repurpose their model to perform other inference tasks. Other methods include inverse reinforcement learning to fit a probabilistic reward model to human demonstrations using the principle of maximum entropy BID36 BID27 BID22. We evaluate our method using the CARLA urban driving simulator BID5. Each test episode begins with the vehicle randomly positioned on a road in the Town01 or Town02 maps. The task is to drive to a goal location, chosen to be the furthest road location from the vehicle's initial position. As shown in Fig. 6, we use three layers of spatial abstractions to plan to the goal location, common to model-based (not end-to-end) autonomous vehicle setups: coarse route planning over a road map, path planning within the observable space, and feedback control to follow the planned path BID19 BID29. First, we compute a route to the goal location using A * given knowledge of the road graph. Second, we set waypoints along the route no closer than 20 m of the vehicle at any time to direct the vehicle. 
Finally, we use a PID-controller to compute the vehicle steering value. The PID-controller was tuned to steer the vehicle towards a setpoint (target) 5 meters away along the planned path. We consider four metrics for this task: 1) Success rate in driving to the goal location without any collisions. 2) Proportion of time spent driving in the correct lane. 3) Frequency of crashes into obstacles. 4) Passenger comfort, by comparing the distribution of accelerations (and higher-order terms) between each method. To contrast the benefits of our method against existing approaches, we compare against several baselines that all receive the same inputs and training data as our method. Since our approach bridges model-free IL and MBRL, we include an IL baseline algorithm, and a MBRL baseline algorithm. PID control: The PID baseline uses the PID-controller to follow the high-level waypoints along the route. This corresponds to removing the middle layer of autonomous vehicle decision abstraction, which serves as a baseline for the other methods. The PID controller is effective when the setpoint is several meters away, but fails when the setpoint is further away (i.e. at 20 m), causing the vehicle to cut corners at intersections. We designed an IL baseline to control the vehicle. A common straightforward approach to IL is behavior-cloning: learning to predict the actions taken by a demon-strator BID20 BID0 BID15 BID1. Our setting is that of goal-conditioned IL: in order to achieve different behaviors, the imitator is tasked with generating controls after observing a target high-level waypoint and φ. We designed two baselines: one with the branched architecture of BID1, where actions are predicted based on left/straight/right "commands" derived from the waypoints, and other that predicts the setpoint for the PID-controller. Each receives the same φ and is trained with the same set of trajectories as our main method. We found the latter method very effective for stable control on straightaways. When the model encounters corners, however, prediction is more difficult, as in order to successfully avoid the curbs, the model must implicitly plan a safe path. In the latter method, we used a network architecture nearly identical to our approach's.. To compare against a purely model-based reinforcement learning algorithm, we propose a model-predictive control baseline. This baseline first learns a forwards dynamics model f: (s t−3, s t−2, s t−1, s t, a t) → s t+1 given observed expert data (a t are recorded vehicle actions). We use an MLP with two hidden layers, each 100 units. Note that our forwards dynamics model does not imitate the expert preferred actions, but only models what is physically possible. Together with the same LIDAR map χ our method uses to locate obstacles, this baseline uses its dynamics model to plan a reachability tree BID9 through the free-space to the waypoint while avoiding obstacles. We plan forwards over 20 time steps using a breadth-first search search over CARLA steering angle {−0.3, −0.1, 0., 0.1, 0.3}, noting valid steering angles are normalized to [−1, 1], with constant throttle at 0.5, noting the valid throttle range is. Our search expands each state node by the available actions and retains the 50 closest nodes to the waypoint. The planned trajectory efficiently reaches the waypoint, and can successfully plan around perceived obstacles to avoid getting stuck. 
To convert the LIDAR images into obstacle maps, we expanded all obstacles by the approximate radius of the car, 1.5 meters. Performance that compare our methods against baselines according to multiple metrics are includes in TAB1. With the exception of the success rate metric, lower numbers are better. We define success rate as the proportion of episodes where the vehicles navigated across the road map to a goal location on the other side without any collisions. In our experiments we do not include any other drivers or pedestrians, so a collision is w.r.t. a stationary obstacle. Collision impulse (in N · s) is the average cumulative collision intensities over episodes. " Wrong lane" and "Off road" percentage of the vehicle invading other lanes or offroad (averaged over time and episodes). While safety metrics are arguably the most important metric, passenger comfort is also relevant. Passenger comfort can be ambiguous to define, so we simply record the second to sixth derivatives of the position vector with respect to time, respectively termed acceleration, jerk, snap, crackle, and pop. In TAB1 we note the 99th percentile of each statistic given all data collected per path planning method. Generally speaking, lower numbers correspond to a smoother driving experience. The poor performance of the PID baseline indicates that the high-level waypoints do not communicate sufficient information about the correct driving direction. Imitation learning achieves better levels of comfort than MBRL, but exhibits substantially worse generalization from the training data, since it does not reason about the sequential structure in the task. Model-based RL succeeds on most of the trials in the training environment, but exhibits worse generalization. Notably, it also scores much worse than IL in terms of staying in the right lane and maintaining comfort, which is consistent with our hypothesis: it is able to achieve the desired goals, but does not capture the behaviors in the data. Our method performs the best under all metrics, far exceeding the success and comfort metrics of imitation learning, and far exceeding the lane-obeyance and comfort metrics of MBRL. To further illustrate the capability of our method to incorporate test-time costs, we designed a pothole collision experiment. We simulated 2m-wide potholes in the environment by randomly inserting them in the cost map offset from each waypoint, distributed N (µ = [−15m, 2m], Σ = diag([1, 0.01])), (i.e. the mean is centered on the right side of the lane 15m before each waypoint). We ran our method that incorporates a test-time cost map of the simulated potholes, and compared to our method that did not incorporate the cost map (and thus had no incentive to avoid potholes). In addition to the other metrics, we recorded the number of collisions with potholes. In TAB2, we see that our method with cost incorporated achieved nearly perfect pothole avoidance, while still avoiding collisions with the environment. To do so, it drove closer to the centerline, and occasionally dipped into the opposite lane. Our model internalized obstacle avoidance by staying on the road, and demonstrated its flexibility to obstacles not observed during training. FIG4 shows an example of this behavior. As another test of our model's capability to stay in the distribution of demonstrated behavior, we designed a "decoy waypoints" experiment, in which half of the waypoints are highly perturbed versions of the other half, serving as distractions for our planner. 
The planner is tasked with planning to all of the waypoints under the Gaussian mixture likelihood. The perturbation distribution is N (0, σ = 8m): each waypoint is perturbed with a standard deviation of 8 meters. We observed the imitative model to be surprisingly robust to decoy waypoints. Examples of this robustness are shown in Fig. 8. One failure mode of this approach is when decoy waypoints lie on a valid off-route path at intersections, which temporarily confuses the planner about the best route. In TAB3, we report the success rate and the mean number of planning rounds for successful and failed episodes. These numbers indicate our method can execute dozens to hundreds of planning rounds without decoy waypoints derailing it. We also designed an experiment to test our method under systemic bias in the route planner. Our method is provided waypoints on the wrong side of the road. We model this by increasing the goal likelihood observation noise. After tuning the noise, we found our method to still be very effective at navigating, and report in TAB3. This further illustrates our method's tendency to stay near the distribution of expert behavior, as our expert never drove on the wrong side of the road. Our method with waypoints on wrong side, Town01 10 / 10 0.338% 0.002% Our method with waypoints on wrong side, Town02 7 / 10 3.159% 0.044% We proposed a method that combines elements of imitation learning and model-based reinforcement learning (MBRL). Our method first learns what preferred behavior is by fitting a probabilistic model to the distribution of expert demonstrations at training time, and then plans paths to achieve userspecified goals at test time while maintaining high probability under this distribution. We demonstrated several advantages and applications of our algorithm in autonomous driving scenarios. In the context of MBRL, our method mitigates the distributional drift issue by explicitly preferring plans that stay close to the expert demonstration data. This implicitly allows our method to enforce basic safety properties: in contrast to MBRL, which requires negative examples to understand the potential for adverse outcomes (e.g., crashes), our method automatically avoids such outcomes specifically because they do not occur (or rarely occur) in the training data. In the context of imitation learning, our method provides a flexible, safe way to generalize to new goals by planning, compared to prior work on black-box, model-free conditional imitation learning. Our algorithm produces an explicit plan within the distribution of preferred behavior accompanied with a score: the former offers interpretability, and the latter provides an estimate of the feasibility of the plan. We believe our method is broadly applicable in settings where expert demonstrations are available, flexibility to new situations is demanded, and safety is critical. Figure 8: Tolerating bad waypoints. The planner prefers waypoints in the distribution of expert behavior: on the road at a reasonable distance. Columns 1,2: Planning with 1 /2 decoy waypoints. Columns 3,4: Planning with all waypoints on the wrong side of the road. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SyehMhC9Y7 | Hybrid Vision-Driven Imitation Learning and Model-Based Reinforcement Learning for Planning, Forecasting, and Control |
Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into the broad comparison, we introduce the deep ensemble equivalent (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of very few independently trained networks in terms of the test log-likelihood.

Deep neural networks (DNNs) have become one of the most popular families of machine learning models. The predictive performance of DNNs for classification is often measured in terms of accuracy. However, DNNs have been shown to yield inaccurate and unreliable probability estimates, or predictive uncertainty. This has brought considerable attention to the problem of uncertainty estimation with deep neural networks. There are many faces to uncertainty estimation. Different desirable uncertainty estimation properties of a model require different settings and metrics to capture them. Out-of-domain uncertainty of the model is measured on data that does not follow the same distribution as the training dataset (out-of-domain data). Out-of-domain data can include images corrupted with rotations or blurring, adversarial attacks, or data points from a completely different dataset. The model is expected to be resistant to data corruptions and to be more uncertain on out-of-domain data than on in-domain data. This setting was explored in a recent study. On the contrary, in-domain uncertainty of the model is measured on data taken from the training data distribution, i.e. data from the same domain. In this case, a model is expected to provide correct probability estimates: it should not be overconfident in the wrong predictions, and should not be too uncertain about the correct predictions.

Ensembles of deep neural networks have become a de-facto standard for uncertainty estimation and improving the quality of deep learning models. There are two main directions in the field of training ensembles of DNNs: training stochastic computation graphs and obtaining separate snapshots of neural network weights. Methods based on the paradigm of stochastic computation graphs introduce noise over weights or activations of deep learning models. When the model is trained, each sample of the noise corresponds to a member of the ensemble. During test time, the predictions are averaged across the noise samples. These methods include (test-time) data augmentation, dropout, variational inference, batch normalization, Laplace approximation and many more. Snapshot-based methods aim to obtain sets of weights for deep learning models and then to average the predictions across these weights. The weights can be trained independently (e.g., deep ensembles), collected at different stages of a training trajectory (e.g., snapshot ensembles and fast geometric ensembles), or obtained from a sampling process (e.g., MCMC-based methods). These two paradigms can be combined. Some works suggest construction of ensembles of stochastic computation graphs, while others make use of the collected snapshots to construct a stochastic computation graph.
In this paper, we focus on assessing the quality of in-domain uncertainty estimation. We show that many common metrics in the field are either not comparable across different models or fail to provide a reliable ranking, and then address some of the stated pitfalls. Following that, we perform a broad evaluation of modern DNN ensembles on the CIFAR-10/100 and ImageNet datasets. To aid interpretability, we introduce the deep ensemble equivalent score that essentially measures the number of "independent" models in an ensemble of DNNs. We draw a set of conclusions with regard to ensembling performance and metric reliability to guide future research practices. For example, we find that methods specifically designed to traverse different "optima" of the loss function (snapshot ensembles and cyclical SGLD) come close to matching the performance of deep ensembles, while methods that only explore the vicinity of a single "optimum" (Dropout, FGE, K-FAC Laplace and variational inference) fall far behind.

We use standard benchmark problems of image classification, as it is a common setting in papers on learning ensembles of neural networks. There are other practically relevant settings where the correctness of probabilistic estimates can be a priority. These settings include, but are not limited to, regression, image segmentation, language modelling, active learning and reinforcement learning. We focus on in-domain uncertainty, as opposed to out-of-domain uncertainty. Out-of-domain uncertainty includes detection of inputs that come from a completely different domain or have been corrupted by noise or adversarial attacks. This setting has been thoroughly explored in prior work. We only consider methods that are trained on clean data with simple data augmentation. Some other methods use out-of-domain data or more elaborate data augmentation, e.g., mixup or adversarial training, to improve accuracy, robustness and uncertainty. We use conventional training procedures. We use stochastic gradient descent (SGD) and batch normalization, both being de-facto standards in modern deep learning. We refrain from using more elaborate optimization techniques, including superconvergence and stochastic weight averaging. These techniques can be used to drastically accelerate training and improve the predictive performance. Because of that, we do not comment on the training time of different ensembling methods, since the use of more efficient training techniques would render such a comparison obsolete. A number of related works study ways of approximating and accelerating prediction in ensembles. The distillation mechanism allows one to approximate the prediction of an ensemble by a single neural network, whereas fast dropout and deterministic variational inference allow one to approximate the predictive distribution of specific stochastic computation graphs. We measure the raw power of ensembling techniques without these approximations. All of the aforementioned alternative settings are orthogonal to the scope of this paper and are promising points of interest for further research.

No single metric measures all desirable properties of uncertainty estimates obtained with a model. Because of this, the community has used different metrics that aim to measure the quality of uncertainty estimation, e.g. the Brier score, log-likelihood, different calibration metrics, performance of misclassification detection, and threshold-accuracy curves.
We consider a classification problem with a dataset that consists of N training and n testing pairs (x_i, y*_i) ~ p(x, y), where x_i is an object and y*_i ∈ {1, . . ., C} is a discrete class label. A probabilistic classifier maps an object x_i into a predictive distribution p̂(y | x_i). The predictive distribution p̂(y | x_i) of deep neural networks is usually defined as a softmax function p̂(y | x) = Softmax(z(x)/T), where z(x) is a vector of logits and T is a scalar parameter standing for the temperature of the predictive distribution. The maximum probability max_c p̂(y = c | x_i) is called the confidence of the classifier p̂ on object x_i. The indicator function is denoted by I[·] throughout the text.

The average test log-likelihood

LL = (1/n) Σ_{i=1}^{n} log p̂(y = y*_i | x_i)

is a popular metric for measuring the quality of in-domain uncertainty of deep learning models. It directly penalizes high probability scores assigned to incorrect labels and low probability scores assigned to the correct labels y*_i. LL is sensitive to the temperature T. The temperature that has been learned during training can be far from optimal for the test data. However, a nearly optimal temperature can be found post-hoc by maximizing the log-likelihood on validation data. This approach is called temperature scaling or calibration. Despite its simplicity, temperature scaling results in a marked improvement in the LL. While ensembling techniques tend to have a better temperature than single models, the default choice of T = 1 is still sub-optimal. Comparing the LL at sub-optimal temperatures, which is often the case, can produce an arbitrary ranking of different methods. Comparison of the log-likelihood should only be performed at the optimal temperature. Empirically, we demonstrate that the overall ordering of methods and also the best ensembling method according to the LL can vary depending on the temperature T. While this applies to most ensembling techniques (see Appendix C), this effect is most noticeable in experiments with data augmentation on ImageNet (Figure 1). We will call the log-likelihood at the optimal temperature the calibrated log-likelihood. We show how to obtain an unbiased estimate of the calibrated log-likelihood without a held-out validation set in Section 3.5. LL also demonstrates a high correlation with accuracy (ρ > 0.86), which in the case of the calibrated LL becomes even stronger (ρ > 0.95). This suggests that while the (calibrated) LL measures the uncertainty of the model, it still significantly depends on the accuracy and vice versa. A model with higher accuracy would likely have a higher log-likelihood even if the quality of its uncertainty is lower in some respects.

The Brier score

BS = (1/n) Σ_{i=1}^{n} Σ_{c=1}^{C} (I[y*_i = c] − p̂(y = c | x_i))²

has been known for a long time as a metric for verification of predicted probabilities. Similarly to the log-likelihood, the Brier score penalizes low probabilities assigned to correct predictions and high probabilities assigned to wrong ones. It is also sensitive to the temperature of the softmax distribution and behaves similarly to the log-likelihood. While these metrics are not strictly equivalent, they show a high empirical correlation for a wide range of models on the CIFAR-10, CIFAR-100 and ImageNet datasets (see Appendix A).

Detection of wrong predictions of the model, or misclassifications, is a popular downstream problem aiding in assessing the quality of in-domain uncertainty.
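To make the temperature-scaling procedure above concrete, the following is a minimal sketch of how the calibrated log-likelihood can be computed from precomputed logits. This is not the paper's implementation: the tensor names (val_logits, val_labels, test_logits, test_labels) and the choice of optimizer are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=500, lr=0.01):
    """Find the temperature T that maximizes the log-likelihood of softmax(logits / T)."""
    log_t = torch.zeros(1, requires_grad=True)            # optimize log T so that T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)  # cross-entropy is the negative LL
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def log_likelihood(logits, labels, temperature=1.0):
    """Average log-likelihood of the correct labels at a given softmax temperature."""
    log_probs = F.log_softmax(logits / temperature, dim=1)
    return log_probs[torch.arange(len(labels)), labels].mean().item()

# Calibrated LL: evaluate the test LL at the temperature fitted on held-out data.
# T = fit_temperature(val_logits, val_labels)
# calibrated_ll = log_likelihood(test_logits, test_labels, temperature=T)
```

The same two helpers apply to an ensemble once its averaged predictive probabilities are converted back to log-probabilities, since temperature scaling is applied to the final predictive distribution.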
Since misclassification detection is essentially a binary classification problem, some papers measure its quality using conventional metrics for binary classification such as AUC-ROC and AUC-PR (e.g., Możejko et al., 2018). These papers use an uncertainty criterion like confidence or predictive entropy H[p̂(y | x_i)] as a prediction score. While these metrics can be used to assess the misclassification detection performance of a single model, they cannot be used to directly compare misclassification performance across different models. Correct and incorrect predictions are specific to every model; therefore, every model induces its own binary classification problem.

Figure 2: Thresholded adaptive calibration error (TACE) is highly sensitive to the threshold and the number of bins and does not provide a stable ranking for different methods. TACE is reported for the VGG16BN model on the CIFAR-100 dataset and is evaluated at the optimal temperature.

The induced problems can differ significantly from each other since different models produce different confidences and misclassify different objects. AUCs for misclassification detection cannot be directly compared between different models. While comparing AUCs is incorrect in this setting, it is correct to compare these metrics in many out-of-domain data detection problems. In that case, both objects and targets of the induced binary classification problems remain fixed for all models. Note however that this condition still usually breaks down in the problem of detection of adversarial attacks, since different models generally have different inputs after an adversarial attack.

Accuracy-confidence curves are another way to measure the performance of misclassification detection. These curves measure the accuracy on the set of objects with confidence max_c p̂(y = c | x_i) above a certain threshold τ, while ignoring or rejecting the others. The main problem with accuracy-confidence curves is that they rely too much on calibration and the actual values of confidence. Models with different temperatures have different numbers of objects at each confidence level, which does not allow for a meaningful comparison. To overcome this problem, one can switch from thresholding by the confidence level to thresholding by the number of rejected objects. The corresponding curves are then less sensitive to temperature scaling and allow one to compare the rejection ability in a more meaningful way. Such curves have been known as accuracy-rejection curves. In order to obtain a scalar metric for easy comparisons, one can compute the area under this curve, resulting in the area under the accuracy-rejection curve (AU-ARC).

Informally speaking, a probabilistic classifier is calibrated if any predicted class probability is equal to the true class probability according to the underlying data distribution (formal definitions can be found in the literature). Any deviation from perfect calibration is called miscalibration. For brevity, we will use p̂_{i,c} to denote p̂(y = c | x_i) in the current section. Expected Calibration Error (ECE) is a metric that estimates model miscalibration by binning the assigned probability scores and comparing them to average accuracies inside these bins. Assuming B_m denotes the m-th bin and M is the overall number of bins, the ECE is defined as follows:

ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|,

where acc(B) = (1/|B|) Σ_{i∈B} I[argmax_c p̂_{i,c} = y*_i] and conf(B) = (1/|B|) Σ_{i∈B} max_c p̂_{i,c}.

A recent line of works on measuring calibration in deep learning outlines several problems of the ECE score. Firstly, ECE is a biased estimate of the true calibration.
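Below is a small sketch of the ECE estimate defined above, written in plain NumPy. It is only meant to illustrate the binning procedure; the default bin count and the equal-width binning scheme follow the standard formulation rather than any particular implementation from this paper.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE with M equal-width confidence bins.

    probs: (n, C) array of predicted probabilities; labels: (n,) array of true classes."""
    confidence = probs.max(axis=1)                 # max_c p_hat_{i,c}
    predictions = probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for low, high in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > low) & (confidence <= high)
        if in_bin.any():
            acc_bin = correct[in_bin].mean()       # acc(B_m)
            conf_bin = confidence[in_bin].mean()   # conf(B_m)
            ece += in_bin.mean() * abs(acc_bin - conf_bin)   # |B_m| / n weighting
    return ece
```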
Secondly, ECE-like scores cannot be optimized directly since they are minimized by a model with constant uniform predictions, making the infinite temperature T = +∞ its global optimum. Thirdly, ECE only estimates miscalibration in terms of the maximum assigned probability, whereas practical applications may require the full predicted probability vector to be calibrated. Finally, biases of ECE on different models may not be equal, rendering the miscalibration estimates incompatible.

Thresholded Adaptive Calibration Error (TACE) was proposed as a step towards solving some of these problems. TACE disregards all predicted probabilities that are less than a certain threshold (hence thresholded), chooses the bin locations adaptively so that each bin has the same number of objects (hence adaptive), and estimates miscalibration of probabilities across all classes in the prediction (not just the top-1 predicted class as in ECE). Assuming that B^TA_m denotes the m-th thresholded adaptive bin and M is the overall number of bins, TACE is defined as follows:

TACE = (1 / (CM)) Σ_{c=1}^{C} Σ_{m=1}^{M} |objs(B^TA_m, c) − conf(B^TA_m, c)|,

where objs(B^TA_m, c) = (1/|B^TA_m|) Σ_{i∈B^TA_m} I[y*_i = c] is the empirical frequency of class c in the bin and conf(B^TA_m, c) = (1/|B^TA_m|) Σ_{i∈B^TA_m} p̂_{i,c} is the average predicted probability of class c in the bin.

Although TACE does solve several problems of ECE and is useful for measuring calibration of a specific model, it still cannot be used as a reliable criterion for comparing different models. Theory suggests that it is still a biased estimate of true calibration with a different bias for each model. In practice, TACE is sensitive to its two parameters, the number of bins and the threshold, and does not provide a consistent ranking of different models, as shown in Figure 2.

There are two common ways to perform temperature scaling using a validation set when training on datasets that only feature public training and test sets (e.g. CIFARs). The public training set might be divided into a smaller training set and a validation set, or the public test set can be split into test and validation parts. The problem with the first method is that the resulting models cannot be directly compared with all the other models that have been trained on the full training set. The second approach, however, provides an unbiased estimate of metrics such as log-likelihood and Brier score but introduces more variance. In order to reduce the variance of the second approach, we perform a "test-time cross-validation". We randomly divide the test set into two equal parts, then compute metrics for each half of the test set using the temperature optimized on the other half. We repeat this procedure five times and average the results across different random partitions to reduce the variance of the computed metrics.

In this paper we consider the following ensembling techniques: deep ensembles, snapshot ensembles (SSE), fast geometric ensembling (FGE), SWA-Gaussian (SWAG), cyclical SGLD (cSGLD), variational inference (VI), dropout and test-time data augmentation. These techniques were chosen to cover a diverse set of approaches while keeping their predictive performance in mind. All these techniques can be summarized as distributions q_m(ω) over some parameters ω of computation graphs z_ω(x), where m stands for the technique. During testing, one can average the predictions across parameters ω ∼ q_m(ω) to approximate the predictive distribution

p̂(y | x) = E_{q_m(ω)} p̂(y | x, ω) ≈ (1/K) Σ_{k=1}^{K} p̂(y | x, ω_k), ω_k ∼ q_m(ω).   (3)

For example, a deep ensemble of S networks can be represented in this form as a mixture of S Dirac deltas q_DE(ω) = (1/S) Σ_{s=1}^{S} δ(ω − ω_s), centered at independently trained snapshots ω_s.
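As an illustration of the "test-time cross-validation" procedure described above, the following sketch fits the temperature on one half of the test set and evaluates the log-likelihood on the other half, repeating over random splits. The helper names and the use of SciPy's bounded scalar minimizer are assumptions made for this example, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

def ll_at_temperature(logits, labels, t):
    """Average log-likelihood of the correct labels at temperature t."""
    log_probs = log_softmax(logits / t, axis=1)
    return log_probs[np.arange(len(labels)), labels].mean()

def calibrated_ll_test_time_cv(logits, labels, n_repeats=5, seed=0):
    """Average calibrated LL over random 50/50 splits of the test set."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(labels))
        half = len(labels) // 2
        for fit_idx, eval_idx in [(idx[:half], idx[half:]), (idx[half:], idx[:half])]:
            # fit the temperature on one half, evaluate the LL on the other half
            res = minimize_scalar(
                lambda t: -ll_at_temperature(logits[fit_idx], labels[fit_idx], t),
                bounds=(0.05, 20.0), method="bounded")
            scores.append(ll_at_temperature(logits[eval_idx], labels[eval_idx], res.x))
    return float(np.mean(scores))
```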
Similarly, a Bayesian neural network with a fully-factorized Gaussian approximate posterior distribution over the weight matrices and convolutional kernels ω is represented as q VI (ω) = N (ω | µ, diag(σ 2)), µ and σ 2 being the optimal variational means and variances respectively. If one considers data augmentation as a part of the computational graph, it can be parameterized by the coordinates of the random crop and the flag for whether to flip the image horizontally or not. Sampling from the corresponding q aug (ω) would generate different ways to augment the data. However, as data augmentation is present by default during the training of all othe mentioned ensembling techniques, it is suitable to study it in combination with these methods and not as a separate ensembling technique. We perform such an evaluation in Section 4.3. Typically, the approximation (equation 3) requires K independent forward passes through a neural network, making the test-time budget directly comparable across all methods. Most ensembling techniques under consideration are either bounded to a single mode, or provide positively correlated samples. Deep ensembles, on the other hand, is a simple technique that provides independent samples from different modes of the loss landscape, which can intuitively in a better ensemble. Therefore deep ensembles can be considered as a strong baseline for performance of other ensembling techniques given a fixed test-time budget. Instead of comparing the values of uncertainty estimation metrics directly, we ask the following question aiming to introduce perspective and interpretability in our comparison: What number of independently trained networks combined yields the same performance as a particular ensembling method? Following insights from the previous sections, we use the calibrated log-likelihood (CLL) as the main measure of uncertainty estimation performance of the ensemble. We define the Deep Ensemble Equivalent (DEE) for an ensembling method m and its upper and lower bounds as follows: . We use PyTorch for implementation of these models, building upon available public implementations. Our implementation closely matches the quality of methods that has been reported in original works. Technical details on training, hyperparameters and implementations can be found in Appendix D. We plan to make all computed metrics, source code and trained models publicly available. As one can see on Figure 3, ensembling methods clearly fall into three categories. SSE and cSGLD outperform all other techniques except deep ensembles and enjoy a near-linear scaling of DEE with the number of samples. The investigation of weight-space trajectories of cSGLD and SSE suggests that these methods can efficiently explore different modes of the loss landscape. In terms of deep ensemble equivalent, these methods do not saturate unlike other methods that are bound to a single mode. More verbose are presented in Appendix E. In our experiments SSE typically outperforms cSGLD. This is mostly due to the fact that SSE has a much larger training budget. The cycle lengths and learning rates of SSE and cSGLD are comparable, however, SSE collects one snapshot per cycle while cSGLD collects three snapshots. This makes samples from SSE less correlated with each other while increasing the training budget. Both SSE and cSGLD can be adjusted to obtain a different trade-off between the training budget and the DEE-to-samples ratio. We reused the schedules provided in the original papers . 
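The deep ensemble equivalent used above can be computed by inverting the calibrated log-likelihood of deep ensembles as a function of the number of members; the sketch below is our own reconstruction (the function name and the choice of piecewise-linear interpolation are assumptions), not the authors' code.

```python
import numpy as np

def deep_ensemble_equivalent(cll_method, de_sizes, de_cll):
    """
    cll_method: calibrated log-likelihood achieved by the ensembling method under study.
    de_sizes:   increasing deep-ensemble sizes, e.g. [1, 2, ..., 100].
    de_cll:     mean calibrated log-likelihood of deep ensembles at each size.
    Returns the (possibly fractional) number of independent networks matching cll_method.
    """
    de_sizes = np.asarray(de_sizes, dtype=float)
    de_cll = np.asarray(de_cll, dtype=float)
    if cll_method <= de_cll[0]:
        return 1.0
    if cll_method >= de_cll[-1]:
        return float(de_sizes[-1])
    # The CLL of deep ensembles is (empirically) increasing in the ensemble size,
    # so it can be inverted with piecewise-linear interpolation.
    return float(np.interp(cll_method, de_cll, de_sizes))
```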
Being more "local" methods, FGE and SWAG perform worse than SSE and cSGLD, but still significantly outperform "single-snapshot" methods like dropout, K-FAC Laplace approximation and variational inference. We hypothesize that by covering a single mode with a set of snapshots, FGE and SWAG provide a better fit for the local geometry than methods based on stochastic computation graphs. This implies that the performance of FGE and SWAG should be achievable by methods that approximate the geometry of a single mode. However, one might need more elaborate posterior approximations and better inference techniques in order to match the performance of FGE and SWAG by training a stochastic computation graph end-to-end (as opposed to SWAG that constructs a stochastic computation graph post-hoc). Data augmentation is a time-honored technique that is widely used in deep learning, and is a crucial component for training modern DNNs. Test-time data augmentation have been used for a long time to improve the performance of convolutional networks. For example, multi-crop evaluation has long been a standard procedure for the ImageNet challenge (; ;). It, however, is not very popular in the literature on ensembling techniques in deep learning. In this section, we study the effect of test-time data augmentation on the aforementioned ensembling techniques. We report the on combination of ensembles and test-time data augmentation for CIFAR-10 in Interestingly, test-time data augmentation on ImageNet improves accuracy but decreases the (uncalibrated) log-likelihood of the deep ensembles (Figure 1, Table REF). It breaks the nearly optimal temperature of deep ensembles and requires temperature scaling to show the actual performance of the method, as discussed in Section 3.1. We show that test-time data augmentation with temperature scaling significantly improves predictive uncertainty of ensembling methods and should be considered as a baseline for them. It is a striking example that highlights the importance of temperature scaling. Our experiments demonstrate that ensembles may be severely miscalibrated by default while still providing superior predictive performance after calibration. We have explored the field of in-domain uncertainty estimation and performed an extensive evaluation of modern ensembling techniques. Our main findings can be summarized as follows: • Temperature scaling is a must even for ensembles. While ensembles generally have better calibration out-of-the-box, they are not calibrated perfectly and can benefit from the procedure. Comparison of log-likelihoods of different ensembling methods without temperature scaling might not provide a fair ranking especially if some models happen to be miscalibrated. • Many common metrics for measuring in-domain uncertainty are either unreliable (ECE and analogues) or cannot be used to compare different methods (AUC-ROC, AUC-PR for misclassification detection; accuracy-confidence curves). In order to perform a fair comparison of different methods, one needs to be cautious of these pitfalls. • Many popular ensembling techniques require dozens of samples for test-time averaging, yet are essentially equivalent to a handful of independently trained models. Deep ensembles dominate other methods given a fixed test-time budget. The indicate in particular that exploration of different modes in the loss landscape is crucial for good predictive performance. 
• Methods that are stuck in a single mode are unable to compete with methods that are designed to explore different modes of the loss landscape. Would more elaborate posterior approximations and better inference techniques shorten this gap? • Test-time data augmentation is a surprisingly strong baseline for in-domain uncertainty estimation and can significantly improve other methods without increasing training time or model size since data augmentation is usually already present during training. Our takeaways are aligned with the take-home messages of that relate to indomain uncertainty estimation. We also observe a stable ordering of different methods in our experiments, and observe that deep ensembles with few members outperform methods based on stochastic computation graphs. A large number of unreliable metrics inhibits a fair comparison of different methods. Because of this, we urge the community to aim for more reliable benchmarks in the numerous setups of uncertainty estimation. Implied probabilistic model Conventional neural networks for classification are usually trained using the average cross-entropy loss function with weight decay regularization hidden inside an optimizer in a deep learning framework like PyTorch. The actual underlying optimization problem can be written as follows: where is the training dataset of N objects x i with corresponding labels y * i, λ is the weight decay scale andp(y * i = j | x i, w) denotes the probability that a neural network with parameters w assigns to class j when evaluated on object x i. The cross-entropy loss defines a likelihood function p(y * | x, w) and weight decay regularization, or L 2 regularization, corresponds to a certain Gaussian prior distribution p(w). The whole optimization objective then corresponds to maximum a posteriori inference in the following probabilistic model: log p(y As many of the considered methods are probabilistic in nature, we use the same probabilistic model for all of them. We use the SoftMax-based likelihood for all models, and use the fully-factorized zero-mean Gaussian prior distribution with variances σ 2 = (N λ) −1, where the number of objects N and the weight decay scale λ are dictated by the particular datasets and neural architectures, as defined in the following paragraph. In order to make the comparable across all ensembling techniques, we use the same prababilistic model for all methods, choosing fixed weight decay parameters for each architecture. Conventional networks On CIFAR-10/100 datasets all networks were trained by SGD optimizer with batch size of 128, momentum 0.9 and model-specific parameters i.e., initial learning rate (lr init), weight decay (wd), and number of optimization epoch (epoch). The specific hyperparameters are shown in Table 2. The models used a unified learning rate scheduler that is shown in equation 10. All models have been trained using data augmentation that consists of horizontal flips, random crop of size 32 with padding 4. The standard data normalization has also been applied. Weight decays, initial learning rates, and the learning rate scheduler were taken from paper. Compared with hyperparameters of , the number of optimization epochs has been increased since we found that all models were underfitted. While original WideResNet28x10 includes number of dropout layers with p = 0.3 and 200 training epoch, in this setting we find that WideResNet28x10 underfits, and requires a longer training. 
Thus, we used p = 0; this effectively does not affect the final performance of the model in our experiments, but reduces training time. On the ImageNet dataset we used ResNet50 with the default hyperparameters from the PyTorch examples repository: the SGD optimizer with momentum 0.9, a batch size of 256, an initial learning rate of 0.1, and weight decay 1e-4. Training also includes data augmentation (random crops of size 224 × 224, horizontal flips, and normalization) and the learning rate scheduler lr = lr_init · 0.1^(epoch // 30), where // denotes integer division. We only deviated from the standard parameters by increasing the number of training epochs from 90 to 130. Our models achieved a top-1 error of 23.81 ± 0.15, which closely matches the 23.85 top-1 error of the ResNet50 provided by PyTorch. Training one model on a single NVIDIA Tesla V100 GPU takes approximately 5.5 days. Deep Ensembles Deep ensembles average the predictions across networks trained independently starting from different initializations. To obtain deep ensembles, we repeat the procedure of training standard networks 128 times for all architectures on the CIFAR-10 and CIFAR-100 datasets (1024 networks overall) and 50 times for the ImageNet dataset. Every member of a deep ensemble was trained with exactly the same hyperparameters as conventional models of the same architecture. Dropout Binary dropout (or MC dropout) is one of the best-known ensembling techniques. It puts multiplicative Bernoulli noise with parameter p over the activations of either fully-connected or convolutional layers, averaging the predictions of the network w.r.t. the noise at test time. Dropout layers have been applied to the VGG and WideResNet networks on the CIFAR-10 and CIFAR-100 datasets. For VGG, dropout has been applied to the fully-connected (fc) layers with p = 0.5, with two dropout layers overall: one before the first fc layer and one before the second. While the original version of VGG for CIFARs uses more dropout layers, we observed that any additional dropout layer deteriorates the performance of the model in either deterministic or stochastic mode. For the WideResNet network we applied dropout consistently with the original paper, with p = 0.3. Dropout usually increases the time to convergence; thus, VGG and WideResNet networks with dropout were trained for 400 epochs instead of the 300 epochs used in the deterministic case. All other hyperparameters were the same as for the conventional models. Variational Inference VI approximates the true posterior distribution p(w | Data) with a tractable variational approximation q_θ(w) by maximizing the so-called variational lower bound L (eq. 11) w.r.t. the parameters θ of the variational approximation. We used a fully-factorized Gaussian approximation q(w) and a Gaussian prior distribution p(w). With such a prior p(w), the probabilistic model remains consistent with conventional training, which corresponds to MAP inference in the same probabilistic model. We used variational inference for both convolutional and fully-connected layers, where the variances of the weights were parameterized by log σ. For fully-connected layers we applied the local reparameterization trick (LRT). While variational inference provides a theoretically grounded way to approximate the true posterior, in practice it tends to underfit deep learning models.
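To make the test-time behaviour of this approximation concrete, the following is a minimal sketch of sampling weights from a fully-factorized Gaussian posterior and averaging predictions; the function names and the assumption that the model can be run from a flat parameter vector are our own illustrative simplifications, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def vi_predict(model_fn, mu, log_sigma, x, n_samples=20):
    """Average predictions over weight samples w = mu + sigma * eps, eps ~ N(0, I).

    model_fn(x, w) is assumed to run the network with a flat parameter vector w;
    mu and log_sigma are the variational means and log-standard-deviations.
    """
    probs = 0.0
    for _ in range(n_samples):
        w = mu + torch.exp(log_sigma) * torch.randn_like(mu)
        probs = probs + F.softmax(model_fn(x, w), dim=1)
    return probs / n_samples
```

As noted above, fitting such an approximation end-to-end tends to underfit, which motivates the tricks described next.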
The following tricks are applied to deal with it: pre-training Consistently with the practical tricks we use a pre-training, specifically, we initialize µ with a snapshot of the weights of pretrained conventional model, and initialize log σ with model-specific constant log σ init. The KL-divergence -except the term that corresponds to a weight decay -was scaled on model specific parameter β. The weigh decay term was implemented as a part of the optimizer. We used a fact that KL-divergence between two Gaussian distributions can be rewritten as two terms one of which is equal to wd regularization. On CIFAR-10 and CIFAR-100 we used β 1e-4 for VGG, ResNet100 and ResNet164 networks, and β 1e-5 for WideResNet. The initialization of log-variance log σ init was set to −5 for all models. Parameters µ were optimized with conventional SGD (with the same parameters as conventional networks, except initial learning rate lr init that was set to 1e-3). We used a separate Adam optimizer with constant learning rate 1e-3 to optimize log-variances of the weights log σ. The training was held for 100 epochs, that corresponds to 400 epochs of training (including pre-training). On ImageNet we used β = 1e-3, lr init = 0.01, log σ init = −6, and held training for a 45 epoch form a per-trained model. The Laplace approximation uses the curvature information of the appropriately scaled loss function to construct a Gaussian approximation to the posterior distribution. Ideally, one would use the Hessian of the loss function as the covariance matrix and use the maximum a posteriori estimate w M AP as the mean of the Gaussian approximation: log p(w | x, y *) = log p(y * | x, w) + log p(w) + const In order to keep the method scalable, we use the Fisher Information Matrix as an approximation to the true Hessian . For K-FAC Laplace, we use the whole dataset to construct an approximation to the empirical Fisher Information Matrix, and use the π correction to reduce the bias . Following , we find the optimal noise scale for K-FAC Laplace on a held-out validation set by averaging across five random initializations. We then reuse this scale for networks trained without a hold-out validation set. We report the optimal values of scales in Table 3. Note that the optimal scale is different depending on whether we use test-time data augmentation or not. Since the data augmentation also introduces some amount of additional noise, the optimal noise scale for K-FAC Laplace with data augmentation is lower. Snapshot Ensembles Snapshot Ensembles (SSE) is a simple example of an array of methods which collect samples from a training trajectory of a network in weight space to construct an ensemble. Samples are collected in a cyclical manner: each cycle learning rate goes from a large value to near-zero and weights snapshot is taken at the end of the cycle. SSE uses SGD with a cosine learning schedule defined as follows: where α 0 is the initial learning rate, T is the total number of training iterations and M is the number of cycles. On CIFAR-10/100 parameters from the original paper are reused, length of cycle is 40 epochs, maximum learning rate is 0.2, batch size is 64. On ResNet50 on ImageNet we used hyperparameters from the original paper which are 45 epoch per cycle, maximum learning rate 0.1, and cosine scheduler of learning rate (eq. 16). All other parameters are equal to the ones as were used conventional networks. 
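Equation 16 itself is not reproduced in this excerpt; the sketch below is our reconstruction of the shifted-cosine cyclical schedule from the original snapshot-ensembles paper, and the exact indexing conventions are assumptions on our part. The same schedule shape is reused by cyclical SGLD, described next.

```python
import math

def snapshot_cosine_lr(iteration, total_iterations, n_cycles, lr_max):
    """Cyclical cosine schedule: the learning rate restarts at lr_max at the start
    of each cycle and anneals towards zero, with one snapshot taken per cycle."""
    cycle_length = math.ceil(total_iterations / n_cycles)
    t = iteration % cycle_length
    return 0.5 * lr_max * (math.cos(math.pi * t / cycle_length) + 1.0)
```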
Cyclical SGLD Cyclical Stochastic Gradient Langevin Dynamics (cSGLD) is a state-of-the-art ensembling method for deep neural networks pertaining to stochastic Markov Chain Monte Carlo family of methods. It bears similarity to SSE, e.g. it employs SGD with a learning rate schedule described with the equation 16 and training is cyclic in the same manner. Its main differences from SSE are introducing gradient noise and capturing several snapshots per cycle, both of which aid in sampling from posterior distribution over neural network weights efficiently. Some parameters from the original paper are reused: length of cycle is 50 epochs, maximum learning rate is 0.5, batch size is 64. Number of epochs with gradient noise per cycle is 3 epochs. This was found to yield much higher predictive performance and better uncertainty estimation compared to the original paper choice of 10 epochs for CIFAR-10 and 3 epochs for CIFAR-100. Finally, cyclical Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) which reportedly has marginally better performance compared with cyclical SGLD could not be reproduced with a wide range of values of SGD momentum term. Because of this, we only include cyclical SGLD in our benchmark. FGE Fast Geometric Ensembling (FGE) is an ensembling method that is similar to SSE in that it collects samples from a training trajectory of a network in weight space to construct an ensemble. Table 3: Optimal noise scale for K-FAC Laplace for different datasets and architectures. For ResNet50 on ImageNet, the optimal scale found was 2.0 with test-time augmentation and 6.8 without test-time augmentation. Its main differences from SSE are pretraining, a short cycle length and a piecewise-linear learning rate schedule Original hyperparameters are reused. Model pretraining is done with SGD for 160 epochs according to the standard learning rate schedule described in equation 10 with maximum learning rates from Table 2. After that, a desired number of FGE cycles is done with one snapshot per cycle collected. Learning rate in a cycle is changed with parameters α 1 = 1e − 2, α 2 = 5e − 4, cycle length of 2 epochs for VGG and α 1 = 5e − 2, α 2 = 5e − 4, cycle length of 4 epochs for other networks. Batch size is 128. SWAG SWA-Gaussian (SWAG) is an ensembling method based on fitting a Gaussian distribution to model weights on the SGD training trajectory and sampling from this distribution to construct an ensemble. Like FGE, SWAG has a pretraining stage which is done according to the standard learning rate schedule described in equation 10 with maximum learning rates from Table 2. After that, training continues with a constant learning rate of 1e-2 for all models except for PreResNet110 and PreResNet164 on CIFAR-100 where it continues with a constant learning rate of 5e-2 in accordance with the original paper. Rank of the empirical covariance matrix which is used for estimation of Gaussian distribution parameters is set to be 20. Area between DEE lower and DEE upper is shaded. Lines 2-4 correspond to DEE based on other metrics, defined similarly to the log-likelihoodbased DEE. Note that while the actual scale of DEE varies from metric to metric, the ordering of different methods and the overall behaviour of the lines remain the same. SSE outperforms deep ensembles on CIFAR-10 on the WideResNet architecture. It possibly indicates that the cosine learning rate schedule of SSE is more suitable for this architecture than the piecewise-linear learning rate schedule used in deep ensembles. 
We will change the learning rate schedule on WideResNets to a more suitable option in further revisions of the paper.
(Per-architecture numeric results for ResNet110, ResNet164, VGG16 and WideResNet omitted; only the table captions are retained.)
Table 7: Results before and after data augmentation on CIFAR-10.
Error (%) on the CIFAR-100 dataset (100 samples).
Table 9: Results before and after data augmentation on ImageNet. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJxI5gHKDr | We highlight the problems with common metrics of in-domain uncertainty and perform a broad study of modern ensembling techniques. |
Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results. Although most techniques developed so far require knowledge of the architecture of the machine learning model and remain hard to scale to complex prediction pipelines, the method of randomized smoothing has been shown to overcome many of these obstacles. By requiring only black-box access to the underlying model, randomized smoothing scales to large architectures and is agnostic to the internals of the network. However, past work on randomized smoothing has focused on restricted classes of smoothing measures or perturbations (like Gaussian or discrete) and has only been able to prove robustness with respect to simple norm bounds. In this paper we introduce a general framework for proving robustness properties of smoothed machine learning models in the black-box setting. Specifically, we extend randomized smoothing procedures to handle arbitrary smoothing measures and prove robustness of the smoothed classifier by using $f$-divergences. Our methodology achieves state-of-the-art certified robustness on MNIST, CIFAR-10 and ImageNet, and also on an audio classification task, Librispeech, with respect to several classes of adversarial perturbations. Predictors obtained from machine learning algorithms have been shown to be vulnerable to making errors when the inputs are perturbed by carefully chosen, small but imperceptible amounts. This has motivated a significant amount of research on improving the adversarial robustness of machine learning models. While significant advances have been made, it has been shown that models that were estimated to be robust have later been broken by stronger attacks. This has led to the need for methods that offer provable guarantees that the predictor cannot be forced to misclassify an example by any attack algorithm restricted to produce perturbations within a certain set (for example, within an ℓp norm ball). While progress has been made, leading to methods that are able to compute provable guarantees for several image and text classification tasks, these methods require extensive knowledge of the architecture of the predictor and are not easy to extend to new models or architectures, requiring specialized algorithms for each new class of models. Further, the computational complexity of these methods grows significantly with input dimension and model size. Consequently, to deal with these obstacles, recent work has proposed the randomized smoothing strategy for verifying the robustness of classifiers. It has been shown that robustness properties can be more easily verified for the smoothed version of a base classifier h: h_s(x) = arg max_{y∈Y} P_{X∼µ(x)}[h(X) = y], where the labels returned by the smoothed classifier h_s are obtained by taking a "majority vote" over the predictions of the original classifier h on random inputs drawn from a probability distribution µ(x), called the smoothing measure (here Y denotes the set of classes in the problem). It has further been shown that verifying the robustness of this smoothed classifier is significantly simpler than verifying the original classifier h: it only requires estimating the distribution of outputs of the classifier under random perturbations of the input, but does not require access to the internals of the classifier h. We refer to this as black-box verification.
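To make the smoothed classifier h_s concrete, the following is a minimal Monte-Carlo sketch for a Gaussian smoothing measure; the sample count, function names and the absence of the statistical machinery needed for a rigorous certificate (confidence bounds, abstention) are our own simplifications for illustration.

```python
import torch

@torch.no_grad()
def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, n_classes=10):
    """Estimate h_s(x) = argmax_y P_{X ~ N(x, sigma^2 I)}[h(X) = y] by sampling."""
    counts = torch.zeros(n_classes)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        pred = base_classifier(noisy).argmax(dim=1)   # hard label of the base classifier
        counts[pred] += 1
    return counts.argmax().item(), counts / n_samples
```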
In this work, we develop a general framework for black-box verification that recovers prior work as special cases, and improves upon previous in various ways. Contributions Our contributions are summarized as follows: 1. We formulate the general problem of black-box verification via a generalized randomized smoothing procedure, which extends existing approaches to allow for arbitrary smoothing measures. Specifically, we show that robustness certificates for smoothed classifiers can be obtained by solving a small convex optimization problem when allowed adversarial perturbations can be characterized via divergence-based bounds on the smoothing measure. 2. We prove that our certificates generalize previous obtained in related work (; ;), and vastly extend the class of perturbations and smoothing measures that can be used while still allowing certifiable guarantees. 3. We introduce the notion of full-information and information-limited settings, and show that the information-limited setting that has been the main focus of prior work leads to weaker certificates for smoothed probabilistic classifiers, and can be improved by using additional information (the distribution of label scores under randomized smoothing). 4. We evaluate our framework experimentally on image and classification tasks, obtaining robustness certificates that improve upon other black-box methods either in terms of certificate tightness or computation time on robustness to 0, 1 or 2 perturbations on MNIST, CIFAR-10 and ImageNet. 2 perturbations from worst-case realizations of white noise that is common in many image, speech and video processing. 0 perturbations can model missing data (missing pixels in an image, or samples in a time-domain audio signal) while 1 perturbations can be used to model convex combinations of discrete perturbations in text classification . We also obtain the first, to the best of our knowledge, certifiably robust model for an audio classification task, Librispeech , with variable-length inputs. Consider a binary classifier h: X → {±1} given to us as a black box, so we can only access the inputs and outputs of h but not its internals. We are interested in investigating the robustness of the smoothed classifier h s (defined in Eq. 1) against adversarial perturbations of size at most with respect to a given norm ·. To determine whether a norm-bounded adversarial attack on a fixed input x ∈ X with h s (x) = +1 could be successful, we can solve the optimization problem and check whether the minimum value can be smaller than 1 2. This is a non-convex optimization problem for which we may not even be able to compute gradients since we only have black-box access to h. While techniques have been developed to address this problem, obtaining provable guarantees on whether these algorithms actually find the worst-case adversarial perturbation is difficult since we do not know anything about the nature of h. Motivated by this difficulty, we take a different approach: Rather than studying the adversarial attack in the input space X, we study it in the space of probability measures over inputs, denoted by P(X). Formally, this amounts to rewriting Eq. 2 as This is an infinite dimensional optimization problem over the space of probability measures ν ∈ P(X) subject to the constraint ν ∈ D = {µ(x): x − x ≤ }. 
While this set is still intractable to deal with, we can consider relaxations of this set defined by divergence constraints between ν and ρ = µ(x), i.e., D ⊆ {ν : D(ν ρ) ≤ D } where D denotes some divergence between probability distributions. We will show in Section 3 that for several commonly used divergences (in fact, for any f -divergence; cf.), the relaxed problem can be solved efficiently. To formulate the general verification problem, consider a specification φ: X → Z ⊆ R: a generic function over the input space (that typically is a function of the classifier output) that we want to verify has certain properties. Unless otherwise specified, we will assume that X ⊆ R d (we work in a d dimensional input space). Our framework also involves a reference measure ρ (in the above example we would take ρ = µ(x)) and a collection of perturbed distributions D (in the above example we would take D = D x, = {µ(x): x − x ≤ }). Verifying that a given specification φ is robustly certified is equivalent to checking whether the optimal value of the optimization problem is non-negative. Solving problems of this form is the key workhorse of our general framework for black-box certification of adversarial robustness for smoothed classifiers. Using these ingredients we introduce two closely related certification problems: information-limited robust certification and full-information robust certification. In the former case, we assume that we are given only given access to In the latter case, we are given full-access to specification φ. The definitions are below. Definition 2.1 (Information-limited robust certification). Given reference distribution ρ ∈ P(X), probabilities θ a, θ b that satisfy θ a, θ b ≥ 0, θ a + θ b ≤ 1 and collection of perturbed distributions D ⊂ P(X) containing ρ, define the class of specifications S as We say that S is information-limited robustly certified at ρ with respect to D if the following condition holds: Note since we don't have access to φ, we need to prove that E X∼ν [φ(X)] ≥ 0 ∀ν ∈ D is satisfied for all specifications in set S. Although the information-limited case may seem challenging because we need to provide guarantees that hold simultaneously over a whole class of specifications, it turns out that, for perturbation sets D specified by an f -divergence bound, this certification task can be solved efficiently using convex optimization. Definition 2.2 (Full-information robust certification). Given a reference distribution ρ ∈ P(X), a specification φ: X → Z ⊆ R and a collection of perturbed distributions D ⊂ P(X) containing ρ, we say that φ is full-information robustly certified at ρ with respect to D if the following condition holds: Most often we are dealing with the case where we have full access to the specification φ, thus we should be able to certify using full-information robust certification. However, prior works, and , have only provided solutions to certify with respect to the information-limited case where we cannot use all of the information about φ. The framework we develop is a more general method that can be used in both information-limited and full-information scenarios. We will demonstrate that our framework recovers certificates provided by Cohen et al., Li et al. (2019 and dominates in the information-limited setting (see section 5). Further, it can utilize full-information about the specification φ to provide tighter certificates for smoothed probabilistic classifiers (see section 6). 
We first note that the definitions above are sufficient to capture the standard usage of randomized smoothing as it has been used in past work (e.g. ;) to verify the robustness of smoothed multi-class classifiers. Specifically, consider smoothing a classifier h: X → Y with a finite set of labels Y using a smoothing measure µ: X → P(X). The ing randomly smoothed classifier h s is defined in Eq. 1. Our goal is to certify that the prediction h s (x) is robust to perturbations of size at most measured by distance function To pose this question within our framework, we choose the reference distribution ρ = µ(x), the set of perturbed distributions D x, = {µ(x): d(x, x) ≤ }, and the following specifications. Let c = h s (x). For every c ∈ Y \ {c}, we define the specification φ c,c: X → {−1, 0, +1} as follows: Then, Eq. 5 holds if and only if every φ c,c, c = c, is robustly certified at µ(x) with respect to D x, (see Appendix A.1). Dealing with the set D x, directly is difficult due to its possibly non-convex geometry. In this section, we discuss specific relaxations of this set, i.e., choices for sets D such that D x, ⊆ D that are easier to optimize over. In particular, we focus on a general family of constraint sets defined in terms of f -divergences. These divergences satisfy a number of useful properties and include many well-known instances (e.g. relative entropy, total variation); see Appendix A.2 for details. Definition 2.3. (f -divergence constraint set). Given ρ, ν ∈ P(X), their f -divergence is defined as where f: R + → R is a convex function with f = 0. Given a reference distribution ρ, an f -divergence D f and a bound f ≥ 0, we define the f -divergence constraint set to be: Technically, this definition depends on the Radon-Nikodym derivative of ν with respect to ρ, but we ignore measure-theoretic issues in this paper for simplicity of exposition. For continuous distributions, ν and ρ should be treated as densities, and for discrete distributions as probability mass functions. Relaxations using f -divergence This construction immediately allows us to obtain relaxations of D x,. For example, by choosing f (u) = u log(u), we have the KL-divergence. Using KL-divergence yields the following relaxation between norm-based and divergence-based constraint sets for Gaussian smoothing measures, i.e. µ(x) = N (x, σ 2 I): Tighter relaxations can be constructed by combining multiple divergence-based constraints. In particular, suppose F is a collection of convex functions each defining an f -divergence, and assume each f ∈ F has a bound f associated with it. Then we can define the constraint set containing perturbed distributions where all the bounds hold simultaneously (Fig. 1): In this paper, we work with the following divergences: Rényi 1 d is an arbitrary distance function (not necessarily a metric e.g. 0). where f (x) = x α − 1 (for α ≥ 1) and with f (x) = 1 − x α (for 0 ≤ α ≤ 1). The limit α → ∞ yields the infinite order Rényi divergence It turns out that the Rényi and KL divergences are computationally attractive for a broad class of smoothing measures, while the Hockey-Stick divergences are theoretically attractive as they lead to optimal certificates in the information-limited setting. However, Hockey-Stick divergences are harder to estimate in general, so we only use them for Gaussian smoothing measures. In general, our framework can be used with any family of smoothing measures and any family of f divergences such that an upper bound on max ν∈Dx, D f (ν ρ) can be estimated efficiently. 
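For the Gaussian smoothing measure µ(x) = N(x, σ²I), the divergences above have closed forms that depend only on the ℓ2 distance between the centres: KL(N(x', σ²I) || N(x, σ²I)) = ||x − x'||²/(2σ²) and R_α = α||x − x'||²/(2σ²), so an ℓ2 ball of radius ε maps onto divergence balls with these radii. The helpers below sketch these bounds; the function names are our own and no claim is made about the paper's implementation.

```python
def gaussian_kl_bound(epsilon, sigma):
    """max over ||x - x'||_2 <= epsilon of KL( N(x', sigma^2 I) || N(x, sigma^2 I) )."""
    return epsilon ** 2 / (2.0 * sigma ** 2)

def gaussian_renyi_bound(epsilon, sigma, alpha):
    """Closed-form Renyi divergence bound for the same pair of isotropic Gaussians:
    R_alpha = alpha * ||x - x'||^2 / (2 sigma^2)."""
    return alpha * epsilon ** 2 / (2.0 * sigma ** 2)
```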
We describe how f -divergence bounds can be obtained for several classes of smoothing measures: Product measures Product measures are of the form µ(X i and µ i is a smoothing measure on X i . We note that the discrete smoothing measure used in , the Gaussian measure used in and the Laplacian measure used in are all of this form. For such measures, one can construct bounds on Rényi-divergences subject to any p norm constraint using a Lagrangian relaxation of the optimization problem max x: x−x p ≤ R α (µ(x) µ(x)) (see Appendix A.3 for details). Norm-based smoothing measures Appendix A.9.1 also shows how we can obtain bounds on the infinite-order Rényi divergence R ∞, as well as on several classes of f -divergences, for norm-based smoothing measures of the form µ(x)[X] ∝ exp(− X − x). We now show how to reduce the problems of full-information and information-limited robust blackbox certification to simple convex optimization problems for general constraint sets D defined in terms of f -divergences. This allows us, by extension, to solve the problem for related divergences like Rényi divergences. The following two theorems provide the main foundation for the verification procedures in the paper. Theorem 1 (Verifying full-information robust certification). Let D F be the constraint set defined by and denote its convex conjugate 2 by f * λ. The specification φ is robustly certified at ρ with respect to D F (cf. Definition 2.2) if and only if the optimal value of the following convex optimization problem is non-negative: The proof of Theorem 1, given in Appendix A.4, uses standard duality to show that the dual of the verification optimization problem has the desired form. We note that the special case where M = 1 reduces to Proposition 1 of , although the is used in a completely different context in that work. To build a practical certification algorithm from Theorem 1, we must do two things: 1) compute the optimal values of λ and κ; and 2) estimate the expectation in Eq. 6. Since the estimation of Algorithm 1 Full information certification (see appendix A.9 for details of subroutines) Inputs: Query access to specification φ: X → [a, b], sampling access to reference distribution ρ, divergences f i and bounds i, sample sizes N,Ñ, confidence level ζ. the expectation cannot be done in closed form (due to the black-box nature of φ), we must rely on sampling. In step 1 of Algorithm 1, we use N samples taken independently from ρ to estimate the expectation and solve the "sampled" optimization problem using an off-the-shelf solver . This gives us κ *, λ *, the estimated optimal values of κ and λ, respectively. Then we take these values and compute a high-confidence lower bound on the objective function of Eq. 6, which is then used to verify robustness. In particular, in step 2, we compute a high-confidence upper bound E ub on the expectation term in the objective such that ] with probability at least ζ; this computation involves takingÑ independent samples from ρ and finding a confidence interval around the ing empirical estimate of the expectation (for details, see Eq. 25 in Appendix A.9.1). Plugging in this estimate back into Eq. 6 gives the desired high-confidence lower bound in step 3. Details of both subroutines ESTIMATEOPT and UPPERCONFIDENCEBOUND used in Algorithm 1 are given in Algorithm 3 in Appendix A.9.2. Our next theorem concerns the specialization of this verification procedure to the information-limited setting. Theorem 2 (Verifying information-limited robust certification). 
Let D F be as in Theorem 1, and S and θ a, θ b be as in Definition 2.1. The class of specifications S is information-limited robustly certified at ρ with respect to D F (cf. Definition 2.1) if and only if the optimal value of the following convex optimization problem is non-negative: where θ = (θ a, θ b, 1 − θ a − θ b) and ζ = (ζ a, ζ b, ζ c) are interpreted as probability distributions. The proof of Theorem 2 is presented in Appendix A.5. It is based on the fact that in the informationlimited setting, it is possible to directly compute the expectation in Eq. 6, and in fact this expectation only depends on φ via the probabilities θ a and θ b. Theorem 2 naturally leads to a certification algorithm, presented in Algorithm 2. It simply uses the same procedure as to compute a high-confidence lower bound θ a on the probability of the correct class under randomized smoothing and then solves the convex optimization problem Eq. 7. Again, we can use an off-the-shelf solver CVXPY in step 2 for the general M > 1 case, but closed-form solutions are also available for M = 1; these are given in Table 4 in Appendix A.6. Inputs: Query access to classifier h, correct label y, sampling access to reference distribution ρ, divergences f i and bounds i, sample sizes N,Ñ, confidence level ζ. 1: Use the method in Section 3.2 of to determine whether y is the most likely label produced for h(X) with X ∼ ρ (using N samples) and if so, obtain a lower bound θ a on the probability that h outputs the correct class with confidence ζ (usingÑ samples). 2: Obtain o * by solving Eq. 7 with θ a ← θ a and We now present theoretical characterizing our certification methods and show the following: 1 For smoothed probabilistic classifiers, the full-information certificate dominates the information-limited one. 2 In the information-limited setting, if we define the f -divergence relaxation D F using HockeyStick divergences with specific parameters, then the computed certificate is provably tight. Consider a soft binary classifier H: X → that outputs the probability of label +1 and consider a point x ∈ X with H(x) > 1/2. We define the specification φ(x) = H(x) − 1 2. Then, the smoothed classifier H s (x) = E X∼µ(x) [H(X)] predicts label +1 for all x with x − x ≤ if and only if φ is full-information robustly certified at µ(x) with respect to D x, = {µ(x): x − x ≤ }. Note that the optimization in Theorem 1 depends on the full distribution of On the other hand, to certify this robustness in the information-limited setting is equivalent to taking the specification φ(To compare the two approaches, consider the objective of Eq. 6 with a single f -divergence constraint where the third line follows from Jensen's inequality. The proof of Theorem 2 shows that maximizing the final expression above with respect to κ, λ is equivalent to the dual of the information-limited certification problem Eq. 7. Thus, the information-limited setting computes a weaker certificate than the full-information setting for soft classifiers: Corollary 3. The optimization problem of Eq. 6 with the specification φ defined above has an optimal value that is greater than or equal to that of the optimization problem defined in Eq. 7. 
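Returning to step 1 of Algorithm 2: it requires a high-confidence lower bound θ_a on the probability of the top class. Following Cohen et al., a standard choice is the Clopper-Pearson binomial lower bound, sketched below; the exact routine used in the paper's implementation is not reproduced in this excerpt, so this is only an illustration of the idea.

```python
from scipy.stats import beta

def clopper_pearson_lower(successes, trials, confidence=0.99):
    """One-sided lower confidence bound on a binomial proportion:
    holds with probability at least `confidence` over the sampling."""
    if successes == 0:
        return 0.0
    return float(beta.ppf(1.0 - confidence, successes, trials - successes + 1))
```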
Ideally, we would like to certify robustness of specifications with respect to sets of the form The following shows that the gap between the ideal D x, and the tractable constraint sets D F can be closed in the context of information-limited robust certification provided that we can measure hockey-stick divergences of every non-negative order β ≥ 0. The proof is given in Appendix A.7. Define the constraint set Then, S is information-limited robustly certified at ρ with respect to D if and only if S is informationlimited robustly certified at ρ with respect to D HS . Thus, the optimal information-limited certificate in this case can be obtained by applying theorem 2 to D HS . Table 1 summarizes the differences between our work and prior work in terms of the set of smoothing measures admitted, the offline computation cost of the certification procedure (which needs to be performed once for every possible perturbation size and choice of smoothing measure), the perturbations considered, whether they can use information beyond θ a, θ b to improve the certificates and whether they compute optimal certificates for a given smoothing measure in the informationlimited setting. study the problem of verifying hard classifiers smoothed by Gaussian noise, and derive optimal certificates with respect to 2 perturbations of the input. Their can be recovered as a special case of our framework when applied to sets defined via constraints on hockey-stick divergences. Theorem 4 shows that the optimal certificate in the information-limited setting can be computed by applying theorem 2 to a constraint set with two hockey-stick divergences. For the Gaussian measure µ(x) = N x, σ 2 I, the HS divergence D HS,β (µ(x) µ(x)) can be computed in closed form and is purely a function of the 2 distance x − x 2. This enables us to efficienctly compute the β * a, β * b in theorem 4. Thus, we obtain the following (see Appendix A.7.2 for a proof): Let D HS be defined as in theorem 4. Then, applying theorem 2 to the constraint set D HS gives the following condition for robust certification: where Ψ g is the CDF of a standard normal random variable N. With straightforward algebra (worked out in appendix A.7.2), this can be shown to be equivalent to which is the certificate from Theorem 1 of. derive optimal certificates in the information-limited setting under the assumption that the likelihood ratio between measures can only take values from a finite set. This is a restrictive assumption that prevents the authors from accommodating natural smoothing measures like Gaussian or Laplacian measures. Further, the complexity of computing the certificates in their framework is significant: O(d 3) computation (where d is the input dimension) is needed to certify smoothness to 0 perturbations. The authors also derive tighter certificates for the special case of certain classes of decision trees by exploiting the tree structure. In contrast, our framework can derive tighter certificates in the full-information setting for arbitrary classifiers. use properties of Rényi divergences to derive robustness certificates for classifiers smoothed by Gaussian (resp. Laplacian) noise under 2 (resp. 1) perturbations. Their can be obtained as special cases of ours; in particular, the Rényi divergence certificates in Table 4 (in Appendix A.6) recover the of Lemma 1 of , but the latter are only applicable for Gaussian and Laplacian smoothing measures. 
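The Gaussian ℓ2 certificate recovered earlier in this section is, rearranged as a certified radius, the familiar expression σ(Φ⁻¹(θ_a) − Φ⁻¹(θ_b))/2 from Theorem 1 of Cohen et al. The sketch below (our own helper, assuming the two-sided form with bounds on both the top-class and runner-up probabilities) computes that radius.

```python
from scipy.stats import norm

def gaussian_l2_certified_radius(theta_a, theta_b, sigma):
    """Largest l2 perturbation radius certified for a Gaussian-smoothed classifier,
    given a lower bound theta_a on the top-class probability and an upper bound
    theta_b on the runner-up probability."""
    if theta_a <= theta_b:
        return 0.0
    return 0.5 * sigma * (norm.ppf(theta_a) - norm.ppf(theta_b))
```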
introduce the notion of pixel differential privacy (pixelDP) and show that smoothing measures µ satisfying pixelDP with respect to a certain type of perturbations lead to adversarially robust classifiers. We can show that pixelDP can be viewed as a special instance of our certification framework with two specific hockey-stick divergences, and that the certificates derived from the pixelDP are provably dominated by the certificates from our framework (Theorem 1) with the same choice of divergences (see Corollary 7 in Appendix A.7.3). To compare full-information certificates with limited-information certificates, we trained a ResNet-152 model on ImageNet with data augmentation by adding noise via sampling from a zero-mean Gaussian with variance 0.5 for each coordinate; during certification we sample from the same distribution to estimate lower bounds on the probability of the top predicted class. For the fullinformation certificate, we use two hockey-stick divergences for the certificate and tune the parameters β to obtain the highest value in the optimization problem in step 2 of Algorithm 1. For the infromationlimited certificate, our approach reduces to that of and we follow the same certification procedure. We use N = 1000,Ñ = 1000000, ζ =.99 for both certification procedures. The dashed line represents equal certificates and every point below the dashed line has a stronger certificate from the full information verification setting. We run the comparison on 50 randomly selected examples from the validation set. Each blue dot in Figure 2 corresponds to one test point, with its x coordinate representing the radius for full information certificate (from Algorithm 1) and y coordinate the information-limited certificate (which is equivalent to the certification procedure of). The running time of the full-information certification procedure is.2s per example (excluding the sampling cost) while the limited-information certification takes.002s per example. Both procedures incur the same sampling cost as they use the same number of samples. Figure 2 shows the difference between the two certificates. The certificate provided by the fullinformation method is always stronger than the one given by the information-limited method. The difference is often substantia -for one of the test samples, the full-information setting can certify robustness to 2 perturbations of radius = 9.42 in the full-information case while the limitedinformation certificate can only be provided for perturbation radius = 2.69. In this section we consider 0 perturbations for both ImageNet and Binary MNIST (that is, we consider the number of pixels that can be perturbed without changing the prediction). To test for scalability and tightness trade-offs of our framework, we compare our methodology to that of , as their work obtains the optimal bound for 0. We computed certificates for a single model for each classification task; for Binary MNIST we used the same model and training procedure as and for ImageNet, we used the model released in the Github code accompanying the paper of. We use the discrete smoothing measure (appendix A.10) with parameter p = 0.8 for Binary MNIST certification, and p = 0.2 for ImageNet certification. In our experiments we ran the certification procedures on all test examples from the Binary MNIST dataset, while for ImageNet, following prior work , on every 100th example from validation set. The proportion of the examples for which accuracy can be certified are reported in Table 2, 2019). 
However, building audio classifiers that are provably robust to adversarial attacks has been hard due to the complexity of audio processing architectures. We take a step towards provably robust audio classifiers by showing that our approach can certify robustness of a classifier trained for speaker recognition on a state-of-the-art model for this task. We focus on 0 perturbations that zero out a fraction of the audio sample, as they correspond to missing data in an audio signal. Missing data can occur due to errors in recording audio or packets dropped while transmitting an audio signal over a network and is a common issue . In principle, the method of is applicable to compute robustness certificates, but at an impractically large computational cost, since the computation needs to be repeated whenever an input of a new length (for which a certificate has not previously been computed) arrives. Concretely, this constitutes an O(d 3) computation for the length d ranging from 38 to 522,320 (the set of audio sequence lengths observed in the Librispeech test dataset ). The are shown in Table 3. To the best of our knowledge, these are the first showing certified robustness of an audio classifier. We believe this is a significant advance towards certification of classifiers in audio and classifiers operating on variable-length inputs more generally. . From the Librispeech dataset, we created a corpus of sentence utterances from ten different speakers. The classification task is, given an audio sample, to predict whom is speaking. The test set consisted of 30 audio samples for each of the ten speakers. We use a DeepSpeaker architecture , trained with the Adam optimizer (β 1 = 0.9, β 2 = 0.5) for 50,000 steps with a learning rate of 0.0001. The architecture is the same as that of , except for changing the number of neurons in the final layer for speaker identification with ten classes. Three models were trained with smoothing values of p = 0.5, p = 0.7, and p = 0.9, respectively, and we used the same values for certification. Certification was performed using N = 1000,Ñ = 1000000, ζ =.99 using M = 1 Rényi divergence, with α tuned to obtain the best certificate. The proportion of samples with certified robustness for different accuracy values are reported, computed on 300 test set samples. We have introduced a general framework for black-box verification using f -divergence constraints. The framework improves upon state-of-the-art on both image classification and audio tasks by a significant margin in terms of robustness certificates or computation time. We believe that our framework can potentially enable scalable computation of robustness verification for more complex predictors and structured perturbations that can be modeled using f-divergence constraints. Therefore, E X∼ν [φ c,c (X)] ≥ 0 for all c ∈ Y \ {c} is equivalent to c ∈ arg max y∈Y P X∼ν [h(X) = y]. For ν = µ(x), this means that h s (x) = c (assuming the argmax is unique). In other words, E X∼ν [φ c,c (X)] ≥ 0 for all c ∈ Y \ {c} and all µ(x) ∈ D x, if and only if h s (x) = c for all x such that d(x, x) ≤, proving the required robustness certificate. Consider a soft classifier H: X → P(Y) that for each input x returns a probability distribution H(x) over the set of potential labels Y (e.g. H might represent the outputs of the soft-max layer of a neural network). 
As in the case of hard classifiers, our methodology can be used to provide robustness guarantees for smoothed soft classifiers obtained by applying a smoothing measure µ(x) to the input. In this case, the smoothed classifier is again a soft classifier given by Let x be a fixed input point and write p = H s (x) ∈ P(Y) to denote the distribution over labels. A number of robustness properties about the soft classifier H s at x can be phrased in terms of Definition 2.2. For example, let Y = {1, . . ., K} and suppose that p 1 ≥ p 2 ≥ · · · ≥ p K so that {1, . . ., k} are the top k labels at x. Then we can verify that the set of top k labels will not change when moving the input from x to x with x − x ≤ by defining the specifications and showing that all of these φ i,j are robustly certified at µ(x) with respect to the set D x, defined above. The case k = 1 corresponds to robustness of the standard classification rule outputting the label with the largest score. Another example is robustness of classifiers which are allowed to abstain. For example, suppose we build a hard classifierh out of H s which returns the label with the maximum score as long as the gap between this score and the score of any other label is at least γ; otherwise it produces no output. Then we can certify thath will not abstain and return the label c = arg max y∈Y p y at any point close to x by showing that every φ c (z) = H(z) c − H(z) c − γ, c = c, is robustly certified at µ(x) with respect to D x,. A number of well-known properties about f -divergences are used throughout the paper, both explicitly and implicitly. Here we review such properties for the readers' convenience. Proofs and further details can be found in, e.g., (Csiszár et al., 2004;). Recall that the f -divergences can be defined for any convex function f: R + → R such that f = 0. We note that this requirement holds without loss of generality as the map x → f (x) − f is convex whenever f is convex. Any f -divergence D f satisfies the following: 2. D f (ρ ρ) = 0, and D f (ν ρ) = 0 implies ν = ρ whenever f is strictly convex at 1. for any function F, where F * (ρ) is the push-forward of ρ. u is again convex withf = 0. We being with the optimization problem The constraint can be rewritten as Forming the Lagrangian relaxation, we obtain where the constraint |x i − x i | ≤ is implied by x − x p ≤. We can maximize separately over each x i to obtain By weak duality, for any γ ≥ 0, this is an upper bound on Eq. 10. We can minimize this bound over γ ≥ 0 to obtain the tightest bound. The minimization over x i for each i can be solved in closed-form or via as simple 1-dimensional minimization problem for most smoothing measures. For simplicity of exposition (and to avoid measure theoretic issues), we focus on the case where ν, ρ have well defined densities ν(x), ρ(x) such that ρ(x) > 0 whenever ν(x) > 0. We begin by rewriting the optimization problem in terms of the likelihood ratio r(X) = where the first two equalities follow directly by plugging in ν(X) = ρ(X)r(X) and the third is obtained using the fact that ν is a probability measure. Using these relations, the optimization over ν can be rewritten as where r ≥ 0 denotes that r(x) ≥ 0 ∀x ∈ X. The optimization over r is a convex optimization problem and can be solved using Lagrangian duality as follows -we first dualize the constraints on r to obtain By strong duality, it holds that maximizing the final expression with respect to λ ≥ 0, κ achieves the optimal value in Eq. 11. 
Thus, if the optimal value is smaller than 0, the specification is not robustly certified and if it is larger than 0, the specification is robustly certified. Finally, since we are ultimately interested in proving that the objective is non-negative, we can restrict ourselves to λ ≥ 0 such that i λ i = 1 (since if the optimal λ added up to something larger, we could simply rescale the values to add up to 1 and multiply κ by the same scaling factor without changing the sign of the objective function). This concludes the proof of correctness of the certificate Eq. 6. For the next , we observe that when φ is ternary valued, the optimization over κ, λ above can be written as max where Writing out the expression for f *, we obtain where the second inequality follows from strong duality. The inner maximization is unbounded unless y∈{a,b,c} One thing to note is that, we can rewrite these constraints in terms of ζ = θ γ, i.e. ζ y = θ y γ y for y ∈ {a, b, c}. These constraints ensure that ζ is a probability distribution over {+1, 0, −1} and furthermore Thus, the second constraint above is equivalent to D fi (ζ θ) ≤ i. Writing the optimization problem in terms of ζ, we obtain min In this section we present closed-form certificates for the information-limited setting which can be derived from Theorem 2 for M = 1. The are summarized in Table 4. In the next subsections we present the derivation of the certificates for Hockey-Stick and Rényi divergences. The certificates for the KL and infinite Rényi divergence can be derived by taking limits of the Rényi certificate (as α → 1, ∞ respectively). The function f (u) = max(u − β, 0) − max(1 − β, 0) is a convex function with f = 0. Then, we have Table 4: Certificates for various f -divergences for the information-limited setting. Note that the Rényi divergences are not proper f -divergences, but are defined as R α (ν ρ) = 1 α−1 log(1 + D f (ν ρ)). The infinite Rényi divergence, defined as sup x log(ν(x)/ρ(x)), is obtained by taking the limit α → ∞. All certificates depend on the gap between θ a and θ b. Notation: The certificate given by Eq. 6 in Theorem 1 for this divergence in the case of a smoothed hard classifier takes the form where the specification takes the values Plugging in the expression for f * the objective function above takes the form where we use the notation [u] + = max(u, 0) and assumed the constraints κ ≤ λ − 1 since the objective is −∞ otherwise. If β ≤ 1, the objective is increasing monotonically in κ, so the optimal value is to set κ to its upper bound λ − 1. Plugging this in, the possible values of the derivative with respect to λ are Thus, if ≤ βθ a, the maximum is attained at 2, if βθ a ≤ ≤ β(1 − θ b), the maximum is attained at 1, else the maximum is attained at 0, leading to the certificate: Thus, the certificate is non-negative only if The case β ≥ 1 can be worked out similarly, leading to The two cases can be combined as We consider the cases α ≥ 1 and α ≤ 1 separately. Suppose we have a bound on the Rényi divergence Then the certificate Eq. 6 simplifies to (after some algebra) Setting the derivative with respect to λ to 0 and solving for λ, we obtain, and the optimal certificate reduces to For this number to be positive, we need that κ ≥ 0 and The LHS above evaluates to where γ = 1 κ ≥ 0. Maximizing this expression with respect to γ, we obtain, so that the certificate reduces to Taking logarithms now gives the . is a convex function with f = 0. Then, we have Then the certificate from Eq. 
6 reduces to with the constraint κ ≤ −1 (otherwise the certificate is −∞). Setting the derivative with respect to λ to 0 and solving for λ, we obtain Plugging this back into the certificate and setting β = α 1−α, we obtain For this number to be positive, we require that The LHS of the above expression evaluates to where γ = − 1 κ. Maximizing this expression over γ ∈, we obtain the final certificate to be Taking logarithms, we obtain A.7.1 PROOF OF THEOREM 4 At a high level, the proof shows that, in the information-limited case, to achieve robust certification under an arbitrary set of constraints D it suffices to know the "envelope" of D with respect to all hockey-stick divergences of order β ≥ 0, i.e. the function β → max ν∈D D HS,β (ν ρ) captures all the necessary information to provide information-limited robust certification with respect to D. We start by considering the following optimization problem: In the information-limited setting, this problem attains the minimum expected value over φ ∈ S. Here 1[φ(X) = 1] denotes the indicator function. It will be convenient to write this in a slightly different form: Rather than looking at the outputs of Ψ as the +1, 0, −1, we look at them as vectors in R 3: Then, we can write the optimization problem Eq. 12 equivalently as We first consider the minimization over Ψ for a fixed value of ν. We begin by observing that since the objective is linear, the optimization over Ψ can be replaced with the optimization over the convex hull of the set of Ψ that satisfy the constraints . Since each input x ∈ X can be mapped independently of the rest, the convex hull is simply the cross product of the convex hull at every x, to obtain the constraint set Therefore, the optimization problem reduces to This is a convex optimization problem in Ψ. Denote Considering the dual of this optimization problem with respect to the optimization variable Ψ, we obtain Since we can choose Ψ(x) independently for each x ∈ X, we can minimize each term in the expectation independently to obtain This implies that the Lagrangian evaluates to We now consider two cases: Case 1 (λ a ≥ λ b ≥ 0) In this case, we can see that Then, the Lagrangian reduces to Case 2 (λ b ≥ λ a ≥ 0) In this case, we can see that Then, the Lagrangian reduces to We know that 1 − θ a ≥ θ b and λ b ≥ λ a. If λ b > λ a, by choosing λ a = λ a + κ and λ b = λ b − κ for some small κ > 0, we know that the the sum of the first three terms would reduce while the final term would remain unchanged. Thus, at the the optimum in this case, we can assume λ a = λ b and we obtain Final analysis of the Lagrangian Combining the two cases we can write the dual problem as By strong duality, the optimal value of the above problem precisely matches the optimal value of Eq. 14 (and hence Eq. 12). Thus, information limited robust certification with respect to D holds if and only if Eq. 15 has a non-negative optimal value for each ν ∈ D. Since we have that information-limited robust certification holds if and only if the optimal value of is non-negative. Further, since the optimal value only depended on the value of D HS,β (ν ρ) for β ≥ 0, it is equivalent to information-limited robust certification with respect to D HS. The above argument also shows that in this case, information-limited robust certification with respect to D is equivalent to requiring that the following convex optimization problem has a non-negative optimal value: Let λ * a, λ * b be the optimal values attained. 
Since this certificate depends only on the value of two Hockey-stick divergences at λ * a, λ * b, it must coincide with the application of theorem 2 to the constraint set D HS defined by constraints on these hockey-stick divergences (as we know that 2 computes the optimal certificate for any constraint set defined only by a set of f-divergences). This observation completes the proof. Theorem 4 gives us the optimal limited-information certificate problem provided that we can compute for each β ≥ 0. In particular, when µ is a Gaussian measure µ(x) = N x, σ 2 I, we can leverage the following from. Lemma 6. Let Ψ g be the CDF of a standard normal random variable N. For any β ≥ 0 and x ∈ R d we have Applying Eq. 17 to the expression in Lemma 6 proves Corollary 5. Proof of Corollary 5. With the notation from Theorem 4 we have, for β ≥ 0, Plugging this expression into Eq. 17 allows us to verify information-limited robust certification of N x, σ 2 I with respect to D x, = {N x, σ 2 I : x − x 2 ≤} by solving Eq. 9 then follows from setting the derivatives of this expression to 0 with respect to λ a, λ b and imposing the condition that the optimal solution is non-negative. To check that Corollary 5 is equivalent to the optimal certification in (, Theorem 1) we first recall that, in our notation, their can be stated as: the class of specifications S in Definition 2.1 is information-limited robustly certified at ρ = N x, σ 2 I with respect to The equivalence between Eq. 9 and Eq. 18 now follows from the identity 1 − Ψ g (θ) = Ψ g (−θ) and the monotonicity of Ψ g: Pixel differential privacy (pixelDP) was introduced in using the same similarity measure between distributions used in differential privacy: a distribution-valued function G: R d → P(Z) satisfies (ε, τ)-pixelDP with respect to p perturbations if for any x − x p ≤ 1 it holds that D DP,e ε (G(x) G(x)) ≤ τ, where and the supremum is over all (measurable) subsets E of Z. In particular, Lecuyer et al. show that using a smoothing measure µ satisfying pixelDP with respect to p leads to adversarially robust classifiers against p perturbations. To show that their fits as a particular instance of our framework, take ρ = µ(x) and fix ε ≥ 0 and τ ∈. Due to the symmetry of the constraint x − x p ≤ 1, if µ satisfies (ε, τ)-pixelDP with respect to p perturbations, then we have the relaxation condition {µ(x): Now we recall that noticed that D DP,e ε is equivalent to the hockey-stick divergence D HS,β of order β = e ε. Thus, since f -divergences are closed under reversal (property 4 in Appendix A.2), we see that the constraint set D ε,τ can be directly written in the form D F (cf. Section 2.2). The main in is a limited-information black-box certification method for smoothed classifiers. The ing certificate for, which provides certification with respect to D ε,τ, is given by For comparison, the certificate we obtain for the relaxation {ν : D DP,e ε (ν ρ) ≤ τ } of D ε,τ (HS certificate in Table 4) already improves on the certificate by Lecuyer et al. whenever θ a − θ b ≥ (β − 1)(1 − θ a − θ b), which, e.g., always holds in the binary classification case. Furthermore, since Theorem 2 provides optimal certificates for D, we have the following . Corollary 7. The optimal certificates for the constraint set D (cf. Eq. 20) obtained from Theorem 2 are stronger than those obtained from Eq. 21. Lemma 8. The smoothing measure µ: X → P(X) with density µ(if x is any norm. 
Further, if f is convex function with f = 0 such that f 1 u is convex and monotonically increasing in u, then Proof. By the triangle inequality, we have Similarly, for f that satisfy the conditions of the theorem, it can be shown that D f (µ(x) µ(x)) is convex in x so that its maximum over the convex set x − x ≤ is attained on the boundary. For several norms, the optimization problem in Eq. 22 can be solved in closed form. These include 1, 2, ∞ norms and the matrix spectral norm and nuclear norm (the final two are relevant when X is a space of matrices). The are documented in Table 5. Thus, every f -divergence that meets the conditions of Lemma 8 can be estimated efficiently for these norms. In particular, the divergences that are induced by the functionsf (u −α) for any monotonic convex functionf and α ≥ 0 satisfy this constraint. This gives us a very flexible class of f -divergences that can be efficiently estimated for these norm-based smoothing measures. Table 5: Bounds on f -divergences: e 0 is the vector with 1 in the first coordinate and zeros in all other coordinates and 1 is the vector with all coordinates equal to 1. µ p refers to the smoothing measure induced by the p norm, U(S) refers to the uniform measure over the set S, O is the set of orthogonal matrices and B p = {z p ≤ 1} is the unit ball in the p norm. Efficient sampling The only other requirement for obtaining a certificate computationally is to be able to sample from µ(x) to estimate θ a, θ b. Since µ(x) is log-concave, there are general purpose polynomial time algorithms for sampling from this measure. However, for most norms, more efficient methods exist, as outlined below. The random variable X ∼ µ(x) can be obtained as X = x + Z with Z ∼ µ. Thus, to sample from µ(x) for any x it is enough to be able to sample from µ. For · 1, this reduces to sampling from a Laplace distribution which can be done easily. For · ∞, give the following efficient sampling procedure: first sample r from a Gamma distribution with shape d + 1 and mean d + 1, i.e. r ∼ Γ(d + 1, 1), and then sample each Theorem 9 gives a short proof of correctness for this procedure. Theorem 10 also has a similar for the case of · 2 and Table 5 lists the sampling procedures for several norms. Proof. We first compute the normalization constant for a density of the form ∝ e − z ∞ as follows: Next we show the density of Z satisfies p Z (z) = e 2. Applying Bernstein's inequality to the sum and the sum of the squares of these random variables, we get the empirical Bernstein bound , which states that with probability at least 1 − ζ, |Z − m| ≤ 2σ 2 log(3/ζ) N + 3R log(3/ζ) t. The main benefit of the above inequality is that as long as the variance of the sample Z 1,..., Z N is small, the convergence rate becomes essentially O(1/N) instead of the standard O(1/ √ N). Also, since Eq. 23 only contains empirical quantities apart from the range R, it can be used to obtain computable bounds for the expectation µ: with probability at least 1 − ζ, Z − 2σ 2 log(3/ζ) N − 3R log(3/ζ) N ≤ m ≤Z + 2σ 2 log(3/ζ) N + 3R log(3/ζ) N. A.9.2 SUBROUTINES FOR ALGORITHM 1 The bound in Eq. 24 can be applied to approximate the expectation in Eq. 6 with high probability for given values of λ and κ. More specifically, if the function f * λ (κ − φ(·)) is bounded with range R, then taking N samples X 1,..., X N independently from ρ, and defining Z i = f * λ (κ − φ(X i)), and Z andσ 2 as above, Eq. 24 implies that with probability at least 1 − ζ, Plugging in this bound to Eq. 
6 gives a high-probability lower bound for the function to be maximized for any given λ and κ. Details of the above procedures are given in Algorithm 3. We use an off-the-shelf convex optimization solver in the ESTIMETEOPT subroutine. Algorithm 3 return κ *, λ *. end function function UPPERCONFIDENCEBOUND(ρ, φ,Ñ,, a, b, λ, κ, ζ) Sample X 1,..., XÑ ∼ ρ and compute. Set E ub ←Z + 2σ 2 log(3/ζ) N + 3R log(3/ζ) N. return E ub. end function A.10 0 SMOOTHING MEASURE We can also handle discrete perturbations in our framework. A natural case to consider is 0 perturbations. In this case, we assume that X = A d where A = {1, . . ., K} is a discrete set. Then, we can choose where p + q = 1, p ≥ q ≥ 0, and p denotes the probability that the measure retains the value of x and q K−1 denotes a uniform probability of switching it to a different value. In this case, it can be shown that for every α > 0 that This can be extended to structured discrete perturbations by introducing coupling terms between the perturbations: This would correlate perturbations between adjacent features (which for example may be useful to model correlated perturbations for time series data). Since this can be viewed as a Markov Chain, Rényi divergences between µ(x), µ(x) are still easy to compute. Here we compare our certificates to on the MNIST, CIFAR-10 and ImageNet datasets. The smoothing distribution is as described in A.8; a zero mean Laplacian distribution with smoothing value defined by the scale of the distribution. We first describe the hyperparameters used in training and certification for each of the datasets. For all datasets, images were normalized into a range. MNIST hyperparameters: We trained a standard three layer CNN ReLU classifier for 50,000 steps with a batch size of 128 and a learning rate of 0.001. The smoothing value during training was set to 1.0. For certification we use N = 1K,Ñ = 10M, ζ =.99, and sweep over a range of smoothing values between 0.5 and 1.5 and report the best certificate found. Certified accuracy is reported on 1,000 MNIST test set images. We trained a Wide ResNet classifier for 50,000 training steps with a batch size of 32 and a learning rate of 0.001. The smoothing value during training was set to 0.2. For certification we use N = 1K,Ñ = 1M, ζ =.99, and sweep over a range of smoothing values between 0.1 and 0.5 and report the best certificate found. Certified accuracy is reported on 1,000 CIFAR-10 test set images. We trained a ResNet-152 classifier for 1 million training steps with a batch size of 16 and an initial learning rate of 0.1 that is decayed by a factor of ten every 25,000 steps. The smoothing value during training was set to 0.1. For certification we use N = 1K,Ñ = 100K, ζ =.99, and sweep over a range of smoothing values between 0.05 and 0.25 and report the best certificate found. Certified accuracy is reported on 500 ImageNet validation set images. Certified Accuracy Results for MNIST can be seen in table 6, CIFAR-10 and ImageNet are shown in figure 3. We significantly outperform on all three datasets. | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJlKrkSFPH | Develop a general framework to establish certified robustness of ML models against various classes of adversarial perturbations |
The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieve higher-level cognition. Previous approaches for unsupervised object-oriented scene representation learning are either based on spatial-attention or scene-mixture approaches and limited in scalability which is a main obstacle towards modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework that combines the best of spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing segments of complex morphology. Previous models are good at either of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial-attention and thus is applicable to scenes with a large number of objects without performance degradations. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS. Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page One of the unsolved key challenges in machine learning is unsupervised learning of structured representation for a visual scene containing many objects with occlusion, partial observability, and complex . When properly decomposed into meaningful abstract entities such as objects and spaces, this structured representation brings many advantages of abstract (symbolic) representation to areas where contemporary deep learning approaches with a global continuous vector representation of a scene have not been successful. For example, a structured representation may improve sample efficiency for downstream tasks such as a deep reinforcement learning agent . It may also enable visual variable binding for reasoning and causal inference over the relationships between the objects and agents in a scene. Structured representations also provide composability and transferability for better generalization. Recent approaches to this problem of unsupervised object-oriented scene representation can be categorized into two types of models: scene-mixture models and spatial-attention models. In scenemixture models (; ;), a visual scene is explained by a mixture of a finite number of component images. This type of representation provides flexible segmentation maps that can handle objects and segments of complex morphology. However, since each component corresponds to a full-scale image, important physical features of objects like position and scale are only implicitly encoded in the scale of a full image and further disentanglement is required to extract these useful features. Also, since it does not explicitly reflect useful inductive biases like the locality of an object in the Gestalt principles , the ing component representation is not necessarily a representation of a local area. Moreover, to obtain a complete scene, a component needs to refer to other components, and thus inference is inherently performed sequentially, ing in limitations in scaling to scenes with many objects. In contrast, spatial-attention models can explicitly obtain the fully disentangled geometric representation of objects such as position and scale. 
Such features are grounded on the semantics of physics and should be useful in many ways (e.g., sample efficiency, interpretability, geometric reasoning and inference, transferability). However, these models cannot represent complex objects and segments that have too flexible morphology to be captured by spatial attention (i.e. based on rectangular bounding boxes). Similar to scene-mixture models, previous models in this class show scalability issues as objects are processed sequentially. In this paper, we propose a method, called Spatially Parallel Attention and Component Extraction (SPACE), that combines the best of both approaches. SPACE learns to process foreground objects, which can be captured efficiently by bounding boxes, by using parallel spatial-attention while decomposing the remaining area that includes both morphologically complex objects and segments by using component mixtures. Thus, SPACE provides an object-wise disentangled representation of foreground objects along with explicit properties like position and scale per object while also providing decomposed representations of complex components. Furthermore, by fully parallelizing the foreground object processing, we resolve the scalability issue of existing spatial attention methods. In experiments on 3D-room scenes and Atari game scenes, we quantitatively and qualitatively compare the representation of SPACE to other models and show that SPACE combines the benefits of both approaches in addition to significant speed-ups due to the parallel foreground processing. The contributions of the paper are as follows. First, we introduce a model that unifies the benefits of spatial-attention and scene-mixture approaches in a principled framework of probabilistic latent variable modeling. Second, we introduce a spatially parallel multi-object processing module and demonstrate that it can significantly mitigate the scalability problems of previous methods. Lastly, we provide an extensive comparison with previous models where we illustrate the capabilities and limitations of each method. In this section, we describe our proposed model, Spatially Parallel Attention and Component Extraction (SPACE). The main idea of SPACE, presented in Figure 1, is to propose a unified probabilistic generative model that combines the benefits of the spatial-attention and scene-mixture models. SPACE assumes that a scene x is decomposed into two independent latents: foreground z fg and z bg. The foreground is further decomposed into a set of independent foreground objects z fg = {z fg i} and the is also decomposed further into a sequence of segments z bg = z bg 1:K. While our choice of modeling the foreground and independently worked well empirically, for better generation, it may also be possible to condition one on the other. The image distributions of the foreground objects and the components are combined together with a pixel-wise mixture model to produce the complete image distribution: Here, the foreground mixing probability α is computed as α = f α (z fg). This way, the foreground is given precedence in assigning its own mixing weight and the remaining is apportioned to the . The mixing weight assigned to the is further sub-divided among the K components. These weights are computed as π k = f π k (z bg 1:k) and k π k = 1. With these notations, the complete generative model can be described as follows. Figure 1: Illustration of the SPACE model. SPACE consists of a foreground module and a module. 
In the foreground module, the input image is divided into a grid of H × W cells (4 × 4 in the figure). An image encoder is used to compute the z where, z depth, and z pres for each cell in parallel. z where is used to identify proposal bounding boxes and a spatial transformer is used to attend to each bounding box in parallel, computing a z what encoding for each cell. The model selects patches using the bounding boxes and reconstructs them using a VAE from all the foreground latents z fg. The module segments the scene into K components (4 in the figure) using a pixel-wise mixture model. Each component consists of a set of latents z bg = (z m, z c) where z m models the mixing probability of the component and z c models the RGB distribution of the component. The components are combined to reconstruct the using a VAE. The reconstructed and foreground are then combined using a pixel-wise mixture model to generate the full reconstructed image. We now describe the foreground and models in more detail. Foreground. SPACE implements z fg as a structured latent. In this structure, an image is treated as if it were divided into H × W cells and each cell is tasked with modeling at most one (nearby) object in the scene. This type of structuring has been used in (; ;). Similar to SPAIR, in order to model an object, each cell i is associated with a set of latents (z). In this notation, z pres is a binary random variable denoting if the cell models any object or not, z where denotes the size of the object and its location relative to the cell, z depth denotes the depth of the object to resolve occlusions and z what models the object appearance and its mask. These latents may then be used to compute the foreground image component p(x|z fg) which is modeled as a Gaussian distribution N (µ fg, σ 2 fg). In practice, we treat σ 2 fg as a hyperparameter and decode only the mean image µ fg. In this process, SPACE reconstructs the objects associated to each cell having z SPACE imposes a prior distribution on these latents as follows: Here, only z pres i is modeled using a Bernoulli distribution while the remaining are modeled as Gaussian. Since we cannot analytically evaluate the integrals in equation 2 due to the continuous latents z fg and z bg 1:K, we train the model using a variational approximation. The true posterior on these variables is approximated as follows. This is used to derive the following ELBO to train the model using the reparameterization trick and SGD . See Appendix B for the detailed decomposition of the ELBO and the related details. Parallel Inference of Cell Latents. SPACE uses mean-field approximation when inferring the cell latents, so z ) for each cell does not depend on other cells. As shown in Figure 1, this allows each cell to act as an independent object detector, spatially attending to its own local region in parallel. This is in contrast to inference in SPAIR, where each cell's latents auto-regressively depend on some or all of the previously traversed cells in a row-major order i.e., q(z However, this method becomes prohibitively expensive in practice as the number of objects increases. 
claim that these lateral connections are crucial for performance since they model dependencies between objects and thus prevent duplicate detections, we challenge this assertion by observing that 1) due to the bottom-up encoding conditioning on the input image, each cell should have information about its nearby area without explicitly communicating with other cells, and 2) in (physical) spatial space, two objects cannot exist at the same position. Thus, the relation and interference between objects should not be severe and the mean-field approximation is a good choice in our model. In our experiments, we verify empirically that this is indeed the case and observe that SPACE shows comparable detection performance to SPAIR while having significant gains in training speeds and efficiently scaling to scenes with many objects. Preventing Box-Splitting. If the prior for the bounding box size is set to be too small, then the model could split a large object by multiple bounding boxes and when the size prior is too large, the model may not capture small objects in the scene, ing in a trade-off between the prior values of the bounding box size. To alleviate this problem, we found it helpful to introduce an auxiliary loss which we call the boundary loss. In the boundary loss, we construct a boundary of thickness b pixels along the borders of each glimpse. Then, we restrict an object to be inside this boundary and penalize the model if an object's mask overlaps with the boundary area. Thus, the model is penalized if it tries to split a large object by multiple smaller bounding boxes. A detailed implementation of the boundary loss is mentioned in Appendix C. Our proposed model is inspired by several recent works in unsupervised object-oriented scene decomposition. The Attend-Infer-Repeat (AIR) framework uses a recurrent neural network to attend to different objects in a scene and each object is sequentially processed one at a time. An object-oriented latent representation is prescribed that consists of'what','where', and'presence' variables. The'what' variable stores the appearance information of the object, the'where' variable represents the location of the object in the image, and the'presence' variable controls how many steps the recurrent network runs and acts as an interruption variable when the model decides that all objects have been processed. Since the number of steps AIR runs scales with the number of objects it attends to, it does not scale well to images with many objects. Spatially Invariant Attend, Infer, Repeat (SPAIR) attempts to address this issue by replacing the recurrent network with a convolutional network. Similar to YOLO , the locations of objects are specified relative to local grid cells rather than the entire image, which allow for spatially invariant computations. In the encoder network, a convolutional neural network is first used to map the image to a feature volume with dimensions equal to a pre-specified grid size. Then, each cell of the grid is processed sequentially to produce objects. This is done sequentially because the processing of each cell takes as input feature vectors and sampled objects of nearby cells that have already been processed. SPAIR therefore scales with the pre-defined grid size which also represents the maximum number of objects that can be detected. 
Our model uses an approach similar to SPAIR to detect foreground objects, but importantly we make the foreground object processing fully parallel to scale to large number of objects without performance degradation. Works based on Neural Expectation Maximization do achieve unsupervised object detection but do not explicitly model the presence, appearance, and location of objects. These methods also suffer from the problem of scaling to images with a large number of objects. For unsupervised scene-mixture models, several recent models have shown promising . MONet leverages a deterministic recurrent attention network that outputs pixel-wise masks for the scene components. A variational autoencoder (VAE) is then used to model each component. IODINE approaches the problem from a spatial mixture model perspective and uses amortized iterative refinement of latent object representations within the variational framework. GENESIS also uses a spatial mixture model which is encoded by component-wise latent variables. Relationships between these components are captured with an autoregressive prior, allowing complete images to be modeled by a collection of components. We evaluate our model on two datasets: 1) an Atari dataset that consists of random images from a pretrained agent playing the games, and 2) a generated 3D-room dataset that consists of images of a walled enclosure with a random number of objects on the floor. In order to test the scalability of our model, we use both a small 3D-room dataset that has 4-8 objects and a large 3D-room dataset that has 18-24 objects. Each image is taken from a random camera angle and the colors of the objects, walls, floor, and sky are also chosen at random. Additional details of the datasets can be found in the Appendix E. Baselines. We compare our model against two scene-mixture models (IODINE and GENESIS) and one spatial-attention model (SPAIR). Since SPAIR does not have an explicit component, we add an additional VAE for processing the . Additionally, we test against two implementations of SPAIR: one where we train on the entire image using a 16 × 16 grid and another where we train on random 32 × 32 pixel patches using a 4 × 4 grid. We denote the former model as SPAIR and the latter as SPAIR-P. SPAIR-P is consistent with the SPAIR's alternative training regime on Space Invaders demonstrated in to address the slow training of SPAIR on the full grid size because of its sequential inference. Lastly, for performance reasons, unlike the original SPAIR implementation, we use parallel processing for rendering the objects from their respective latents onto the canvas 1 for both SPAIR and SPAIR-P. Thus, because of these improvements, our SPAIR implementation can be seen as a stronger baseline than the original SPAIR. The complete details of the architecture used is given in Appendix D. In this section, we provide a qualitative analysis of the generated representations of the different models. For each model, we performed a hyperparameter search and present the for the best settings of hyperparameters for each environment. Figure 2 shows sample scene decompositions of our baselines from the 3D-Room dataset and Figure 3 shows the on Atari. Note that SPAIR does not use component masks and IODINE and GENESIS do not separate foreground from , hence the corresponding cells are left empty. Additionally, we only show a few representative components for IODINE and GENESIS since we ran those experiments with larger K than can be displayed. 
More qualitative of SPACE can be found in Appendix A. In the 3D-Room environment, IODINE is able to segment the objects and the into separate components. However, it occasionally does not properly decompose objects (in the Large 3D-room , the orange sphere on the right is not reconstructed) and may generate blurry objects. GENESIS is able to segment the walls, floor, and sky into multiple components. It is able to capture blurry foreground objects in the Small 3D-Room, but is not able to cleanly capture foreground objects with the larger number of objects in the Large 3D-Room. In Atari, for all games, both IODINE and GENESIS fail to capture the foreground properly. We believe this is because the objects in Atari games are smaller, less regular and lack the obvious latent factors like color and shape as in the 3D dataset, which demonstrates that detection-based approaches are more appropriate in this case. SPAIR & SPAIR-P. SPAIR is able to detect tight bounding boxes in both 3D-Room and most Atari games (it does not work as well on dynamic games, which we discuss below). SPAIR-P, however, often fails to detect the foreground objects in proper bounding boxes, frequently uses multiple bounding boxes for one object and redundantly detects parts of the as foreground objects. This is a limitation of the patch training as the receptive field of each patch is limited to a 32 × 32 glimpse, hence prohibiting it to detect objects larger than that and making it difficult to distinguish the from foreground. These two properties are illustrated well in Space Invaders, where it is able to detect the small aliens, but it detects the long piece of ground on the bottom of the image as foreground objects. SPACE. In 3D-Room, SPACE is able to accurately detect almost all objects despite the large variations in object positions, colors, and shapes, while producing a clean segmentation of the walls, ground, and sky. This is in contrast to the SPAIR model, while being able to provide similar foreground detection quality, encodes the whole into a single component, which makes the representation less disentangled and the reconstruction more blurry. Similarly in Atari, SPACE consistently captures all foreground objects while producing clean segmentation across many different games. Dynamic Backgrounds. SPACE and SPAIR exhibit some very interesting behavior when trained on games with dynamic s. For the most static game -Space Invaders, both SPACE and SPAIR work well. For Air Raid, in which the building moves, SPACE captures all objects accurately while providing a two-component segmentation, whereas SPAIR and SPAIR-P produce splitting and heavy re-detections. In the most dynamic games, SPAIR completely fails because of the difficulty to model dynamic with a single VAE component, while SPACE is able to perfectly segment the blue racing track while accurately detecting all foreground objects. Foreground vs Background. Typically, foreground is the dynamic local part of the scene that we are interested in, and is the relatively static and global part. This definition, though intuitive, is ambiguous. Some objects, such as the red shields in Space Invaders and the key in Montezuma's Revenge (Figure 5) are detected as foreground objects in SPACE, but are considered in SPAIR. Though these objects are static 2, they are important elements of the games and should be considered as foreground objects. 
Similar behavior is observed in Atlantis (Figure 7), where SPACE detects some foreground objects from the middle base that is above the water. We believe this is an interesting property of SPACE and could be very important for providing useful representations for downstream tasks. By using a spatial broadcast network which is much weaker when compared to other decoders like sub-pixel convolutional nets , we limit the capacity of module, which favors modeling static objects as foreground rather than . Boundary Loss. We notice SPAIR sometimes splits objects into two whereas SPACE is able to create the correct bounding box for the objects (for example, see Air Raid). This may be attributed to the addendum of the auxiliary boundary loss in the SPACE model that would penalize splitting an object with multiple bounding boxes. In this section we compare SPACE with the baselines in several quantitative metrics 3. We first note that each of the baseline models has a different decomposition capacity (C), which we define as the capability of the model to decompose the scene into its semantic constituents such as the foreground objects and the segmented components. For SPACE, the decomposition capacity is equal to the number of grid cells H × W (which is the maximum number of foreground objects that can be detected) plus the number of components K. For SPAIR, the decomposition capacity is equal to the number of grid cells H × W plus 1 for . For IODINE and GENESIS, it is equal to the number of components K. For each experiment, we compare the metrics for each model with similar decomposition capacities. This way, each model can decompose the image into the same number of components. For a setting in SPACE with a grid size of H × W with K SPACE components, the equivalent settings in IODINE and GENESIS would be with C = (H × W) + K SPACE. The equivalent setting in SPAIR would be a grid size of H × W. Step Latency. The leftmost chart of Figure 4 shows the time taken to complete one gradient step (forward and backward propagation) for different decomposition capacities for each of the models. We see that SPAIR's latency grows with the number of cells because of the sequential nature of its latent inference step. Similarly GENESIS and IODINE's latency grows with the number of components K because each component is processed sequentially in both the models. IODINE is Training Latency Plot Figure 4: Quantitative performance comparison between SPACE, SPAIR, IODINE and GENESIS in terms of batch-processing time during training, training convergence and converged pixel MSE. Convergence plots showing pixel-MSE were computed on a held-out set during training. the slowest overall with its computationally expensive iterative inference procedure. Furthermore, both IODINE and GENESIS require storing data for each of the K components, so we were unable to run our experiments on 256 components or greater before running out of memory on our 22GB GPU. On the other hand, SPACE employs parallel processing for the foreground which makes it scalable to large grid sizes, allowing it to detect a large number of foreground objects without any significant performance degradation. Although this data was collected for gradient step latency, this comparison implies a similar relationship exists with inference time which is a main component in the gradient step. Time for Convergence. The remaining three charts in Figure 4 show the amount of time each model takes to converge in different experimental settings. 
We use the pixel-wise mean squared error (MSE) as a measurement of how close a model is to convergence. We see that not only does SPACE achieve the lowest MSE, it also converges the quickest out of all the models. Average Precision and Error Rate. In order to assess the quality of our bounding box predictions and the effectiveness of boundary loss, we measure the Average Precision and Object Count Error Rate of our predictions. Our are shown in Table 1. We only report these metrics for 3D-Room since we have access to the ground truth bounding boxes for each of the objects in the scene. All three models have very similar average precision and error rate. Despite being parallel in its inference, SPACE has a comparable count error rate to that of SPAIR. SPACE also achieves better average precision and count error rate compared to its variant without the boundary loss (SPACE-WB), which shows the efficacy of our proposed loss. From our experiments, we can assert that SPACE can produce similar quality bounding boxes as SPAIR while 1) having orders of magnitude faster inference and gradient step time, 2) converging more quickly, 3) scaling to a large number of objects without significant performance degradation, and 4) providing complex segmentation. We propose SPACE, a unified probabilistic model that combines the benefits of the object representation models based on spatial attention and the scene decomposition models based on component mixture. SPACE can explicitly provide factorized object representation per foreground object while also decomposing complex segments. SPACE also achieves a significant speed-up and thus makes the model applicable to scenes with a much larger number of objects without performance degradation. Besides, the detected objects in SPACE are also more intuitive than other methods. We show the above properties of SPACE on Atari and 3D-Rooms. Interesting future directions are to replace the sequential processing of by a parallel one and to improve the model for natural images. Our next plan is to apply SPACE for object-oriented model-based reinforcement learning. Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection.: Object detection and segmentation using SPACE on 3D-Room data set with large number of objects. In this section, we derive the ELBO for the log-likelihood log p(x). KL Divergence for the Foreground Latents Under the SPACE's approximate inference, the inside the expectation can be evaluated as follows. KL Divergence for the Background Latents Under our GENESIS-like modeling of inference for the latents, the KL term inside the expectation for the is evaluated as follows. Relaxed treatment of z pres In our implementation, we model the Bernoulli random variable z pres i using the Gumbel-Softmax distribution . We use the relaxed value of z pres in the entire training and use hard samples only for the visualizations. In this section we elaborate on the implementation details of the boundary loss. We construct a kernel of the size of the glimpse, gs × gs (we use gs = 32) with a boundary gap of b = 6 having negative uniform weights inside the boundary and a zero weight in the region between the boundary and the glimpse. This ensures that the model is penalized when the object is outside the boundary. This kernel is first mapped onto the global space via to obtain the global kernel. This is then multiplied element-wise with global object mask α to obtain the boundary loss map. 
The objective of the loss is to minimize the mean of this boundary loss map. In addition to the ELBO, this loss is also back-propagated via RMSProp (Tieleman & Hinton. ). This loss, due to the boundary constraint, enforces the bounding boxes to be less tight and in lower average precision, so we disable the loss and optimize only the ELBO after the model has converged well. D.1 ALGORITHMS Algorithm 1 and Algorithm 2 present SPACE's inference for foreground and . Algorithm 3 show the details of the generation process of the module. For foreground generation, we simply sample the latent variables from the priors instead of conditioning on the input. Note that, for convenience the algorithms for the foreground module and module are presented with for loops, but inference for all variables of the foreground module are implemented as parallel convolution operations and most operations of the module (barring the LSTM module) are parallel as well.) optimizer with a learning rate of 1 × 10 −3 for the module. We use gradient clipping with a maximum norm of 1.0. For Atari games, we find it beneficial to set α to be fixed for the first 1000-2000 steps, and vary the actual value and number of steps for different games. This allows both the foreground as well as the module to learn in the early stage of training. Atari. For each game, we sample 60,000 random images from a pretrained agent . We split the images into 50,000 for the training set, 5,000 for the validation set, and 5,000 for the testing set. Each image is preprocessed into a size of 128 × 128 pixels with BGR color channels. We present the for the following games: Space Invaders, Air Raid, River Raid, Montezuma's Revenge. We also train our model on a dataset of 10 games jointly, where we have 8,000 training images, 1,000 validation images, and 1,000 testing images for each game. We use the following games: Asterix, Atlantis, Carnival, Double Dunk, Kangaroo, Montezuma Revenge, Pacman, Pooyan, Qbert, Space Invaders. Room 3D. We use MuJoCo to generate this dataset. Each image consists of a walled enclosure with a random number of objects on the floor. The possible objects are randomly sized spheres, cubes, and cylinders. The small 3D-Room dataset has 4-8 objects and the large 3D-Room dataset has 18-24 objects. The color of the objects are randomly chosen from 8 different colors and the colors of the (wall, ground, sky) are chosen randomly from 5 different colors. The angle of the camera is also selected randomly. We use a training set of 63,000 images, a validation set of 7,000 images, and a test set of 7,000 images. We use a 2-D projection from the camera to determine the ground truth bounding boxes of the objects so that we can report the average precision of the different models. | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rkl03ySYDH | We propose a generative latent variable model for unsupervised scene decomposition that provides factorized object representation per foreground object while also decomposing background segments of complex morphology. |
We propose a single neural probabilistic model based on the variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as on feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and the diversity of the generated samples.

In past years, a number of generative probabilistic models based on neural networks have been proposed. The most popular approaches include the variational autoencoder (VAE) and generative adversarial nets (GANs). They learn a distribution over objects p(x) and allow sampling from this distribution. In many cases, we are interested in learning a conditional distribution p(x|y). For instance, if x is an image of a face, y could be the characteristics describing the face (whether glasses are present, the length of hair, etc.). Conditional variational autoencoders and conditional generative adversarial nets are popular methods for this problem. In this paper, we consider the problem of learning all conditional distributions of the form $p(x_I | x_{U \setminus I})$, where U is the set of all features and I is an arbitrary subset of U. This problem generalizes both learning the joint distribution p(x) and learning the conditional distribution p(x|y). To tackle this problem, we propose the Variational Autoencoder with Arbitrary Conditioning (VAEAC) model. It is a latent variable model similar to VAE, but it allows conditioning on an arbitrary subset of the features. The conditioning features affect the prior on the latent Gaussian variables which are used to generate the unobserved features. The model is trained using stochastic gradient variational Bayes.

We consider the two most natural applications of the proposed model. The first one is missing features imputation, where the goal is to restore the missing features given the observed ones. The imputed values may be valuable by themselves or may improve the performance of other machine learning algorithms which process the dataset. Another application is image inpainting, in which the goal is to fill in an unobserved part of an image with artificial content in a realistic way. This can be used for removing unnecessary objects from images or, vice versa, for completing a partially occluded or corrupted object.

The experimental evaluation shows that the proposed model successfully samples from the conditional distributions. The distribution over samples is close to the true conditional distribution. This property is very important when the true distribution has several modes. The model is shown to be effective in the missing features imputation problem, where it helps to increase the quality of subsequent discriminative models on different problems from the UCI datasets collection. We demonstrate that the model can generate diverse and realistic image inpaintings on the MNIST, Omniglot, and CelebA datasets, and works even better than current state-of-the-art inpainting techniques in terms of peak signal-to-noise ratio (PSNR).

The paper is organized as follows. In section 2 we review the related work. In section 3 we briefly describe variational autoencoders and conditional variational autoencoders. In section 4 we define the problem, describe the VAEAC model and its training procedure. In section 5 we evaluate VAEAC. Section 6 concludes the paper.
The appendix contains additional explanations, theoretical analysis, and experiments for VAEAC.

The Universal Marginalizer is a model based on a feed-forward neural network which approximates the marginals of unobserved features conditioned on the observed values. A related idea of an autoregressive model of the joint probability was proposed in earlier work. The description of the model and a comparison with VAEAC are given in section 5.3. GAIN is a recently proposed GANs-based model which solves the same problem as VAEAC. In contrast to VAEAC, GAIN does not use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, this becomes a disadvantage when fully-observed training data is available but the missingness rate at the testing stage is high. For example, in the inpainting setting GAIN cannot learn the conditional distribution over MNIST digits given one horizontal line of the image, while VAEAC can (see appendix D.4). The comparison of VAEAC and GAIN on the missing features imputation problem is given in section 5.1 and appendix D.2. Several works propose to fill missing data with noise and run a Markov chain with a learned transition operator. The stationary distribution of such chains approximates the true conditional distribution of the unobserved features. Another line of work considers missing features imputation in terms of a Markov decision process and proposes an LSTM-based sequential decision-making model to solve it. Nevertheless, these methods are computationally expensive at test time and require fully-observed training data.

Image inpainting is a classic computer vision problem. Most of the earlier methods rely on local and texture information or hand-crafted problem-specific features. In past years, multiple neural-network-based approaches have been proposed which use different kinds and combinations of adversarial, reconstruction, texture, and other losses. One of them focuses on face inpainting and uses two adversarial losses and one semantic parsing loss to train the generative model. In another approach, GANs are first trained on the whole training dataset; the inpainting is then an optimization procedure that finds the latent variables that explain the observed features best, and the obtained latents are passed through the generative model to restore the unobserved portion of the image. We can say that VAEAC is a similar model which uses a prior network to find proper latents instead of solving an optimization problem. All the described methods aim to produce a single realistic inpainting, while VAEAC is capable of sampling diverse inpaintings. Additionally, Yeh et al. (2017), Yang et al. (2017), and related approaches have a high test-time computational complexity of inpainting because they require an optimization problem to be solved. On the other hand, VAEAC is a "single-shot" method with a low computational cost.

The variational autoencoder (VAE) is a directed generative model with latent variables. The generative process in a variational autoencoder is as follows: first, a latent variable z is generated from the prior distribution p(z), and then the data x is generated from the generative distribution $p_\theta(x|z)$, where $\theta$ are the generative model's parameters. This process induces the distribution $p_\theta(x) = \mathbb{E}_{p(z)} p_\theta(x|z)$. The distribution $p_\theta(x|z)$ is modeled by a neural network with parameters $\theta$, and p(z) is a standard Gaussian distribution. The parameters $\theta$ are tuned by maximizing the likelihood of the training data points $\{x_i\}_{i=1}^N$ from the true data distribution $p_d(x)$.
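As a minimal illustration of this generative process (not the implementation used in the paper), the following sketch shows ancestral sampling from a trained VAE with a Gaussian decoder; the decoder architecture and the dimensionalities d and D are placeholders chosen only for the example.

```python
import torch

# Ancestral sampling from a VAE: z ~ p(z) = N(0, I), then x ~ p_theta(x|z).
# The decoder below stands in for the generative network with parameters theta;
# it maps a latent vector to the mean and log-scale of a Gaussian over the data.
d, D = 16, 64                                      # assumed latent and data dimensionalities
decoder = torch.nn.Sequential(
    torch.nn.Linear(d, 128), torch.nn.ReLU(), torch.nn.Linear(128, 2 * D)
)

z = torch.randn(32, d)                             # a batch of latents from the standard Gaussian prior
mu, log_sigma = decoder(z).chunk(2, dim=-1)        # parameters of p_theta(x|z)
x = mu + log_sigma.exp() * torch.randn_like(mu)    # x ~ N(mu_theta(z), sigma_theta(z)^2 I)
```

Training tunes theta (together with the proposal network introduced next) so that such samples match the data distribution.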
In general, this maximum-likelihood optimization problem is challenging due to intractable posterior inference. However, a variational lower bound can be optimized efficiently using backpropagation and stochastic gradient descent:

$$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z|x)} \log p_\theta(x|z) - D_{KL}\big(q_\phi(z|x) \,\|\, p(z)\big) = \mathcal{L}_{VAE}(\theta, \phi).$$

Here $q_\phi(z|x)$ is a proposal distribution parameterized by a neural network with parameters $\phi$ that approximates the posterior $p(z|x, \theta)$. Usually this distribution is Gaussian with a diagonal covariance matrix. The closer $q_\phi(z|x)$ is to $p(z|x, \theta)$, the tighter the variational lower bound $\mathcal{L}_{VAE}(\theta, \phi)$. To compute the gradient of the variational lower bound with respect to $\phi$, the reparameterization trick is used: $z = \mu_\phi(x) + \varepsilon \odot \sigma_\phi(x)$, where $\varepsilon \sim \mathcal{N}(0, I)$ and $\mu_\phi$ and $\sigma_\phi$ are deterministic functions parameterized by neural networks. So the gradient can be estimated using the Monte-Carlo method for the first term and computing the second term analytically:

$$\nabla_{\theta, \phi} \mathcal{L}_{VAE}(\theta, \phi) \approx \nabla_{\theta, \phi} \log p_\theta\big(x \,\big|\, \mu_\phi(x) + \varepsilon \odot \sigma_\phi(x)\big) - \nabla_\phi D_{KL}\big(q_\phi(z|x) \,\|\, p(z)\big), \quad \varepsilon \sim \mathcal{N}(0, I).$$

So $\mathcal{L}_{VAE}(\theta, \phi)$ can be optimized using stochastic gradient ascent with respect to $\phi$ and $\theta$.

The conditional variational autoencoder (CVAE) approximates the conditional distribution $p_d(x|y)$. It outperforms deterministic models when the distribution $p_d(x|y)$ is multi-modal (diverse x are probable for a given y). For example, assume that x is a real-valued image. Then, a deterministic regression model with a mean squared error loss would predict an averaged, blurry value for x. On the other hand, CVAE learns the distribution of x, from which one can sample diverse and realistic objects. The variational lower bound for CVAE can be derived similarly to VAE by conditioning all considered distributions on y:

$$\log p_{\theta,\psi}(x|y) \geq \mathbb{E}_{q_\phi(z|x,y)} \log p_\theta(x|z,y) - D_{KL}\big(q_\phi(z|x,y) \,\|\, p_\psi(z|y)\big) = \mathcal{L}_{CVAE}(\theta, \psi, \phi).$$

Similarly to VAE, this objective is optimized using the reparameterization trick. Note that the prior distribution $p_\psi(z|y)$ is conditioned on y and is modeled by a neural network with parameters $\psi$. Thus, CVAE uses three trainable neural networks, while VAE only uses two. The authors of CVAE also propose such modifications as the Gaussian stochastic neural network and the hybrid model. These modifications can be applied to our model as well. Nevertheless, we don't use them because of their disadvantage, which is described in appendix C.

Let a binary vector $b \in \{0, 1\}^D$ be the mask of unobserved features of the object. Then we describe the vector of unobserved features as $x_b = \{x_i : b_i = 1\}$. For example, if $b = (0, 1, 1, 0, 1)$, then $x_b = (x_2, x_3, x_5)$. Using this notation, we denote $x_{1-b}$ as the vector of observed features. Our goal is to build a model of the conditional distribution

$$p_{\psi,\theta}(x_b \,|\, x_{1-b}, b) \approx p_d(x_b \,|\, x_{1-b}, b)$$

for an arbitrary b, where $\psi$ and $\theta$ are the parameters that are used in our model at the testing stage. However, the true distribution $p_d(x_b | x_{1-b}, b)$ is intractable without strong assumptions about $p_d(x)$. Therefore, our model $p_{\psi,\theta}(x_b | x_{1-b}, b)$ has to be more precise for some b and less precise for others. To formalize our requirements about the accuracy of our model, we introduce the distribution p(b) over different unobserved feature masks. The distribution p(b) is arbitrary and may be defined by the user depending on the problem. Generally, it should have full support over $\{0, 1\}^D$ so that $p_{\psi,\theta}(x_b | x_{1-b}, b)$ can evaluate an arbitrary conditioning. Nevertheless, this is not necessary if the model is used for specific kinds of conditioning (as we do in section 5.2). Using p(b), we can introduce the following log-likelihood objective function for the model:

$$\mathbb{E}_{p_d(x)}\, \mathbb{E}_{p(b)} \log p_{\psi,\theta}(x_b \,|\, x_{1-b}, b) \;\to\; \max_{\psi, \theta}.$$

The special cases of the objective are the variational autoencoder ($b_i = 1$ for all $i \in \{1, \dots, D\}$) and the conditional variational autoencoder (b is constant).
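To make the mask notation and these special cases concrete, here is a small sketch (an illustration under assumed shapes, not the authors' code); the Bernoulli mask used here is just one possible choice of p(b).

```python
import numpy as np

D = 5
x = np.array([0.3, 1.2, -0.7, 2.1, 0.0])      # one object with D features

# One possible mask distribution p(b): each feature is unobserved independently with probability 0.5.
b = np.random.binomial(1, 0.5, size=D)        # b_i = 1 means feature i is unobserved

x_b   = x[b == 1]                             # unobserved features x_b = {x_i : b_i = 1}
x_obs = x[b == 0]                             # observed features x_{1-b}

# Special cases of the objective:
b_vae  = np.ones(D, dtype=int)                # b_i = 1 for all i: nothing is observed, i.e. a plain VAE
b_cvae = np.array([1, 1, 1, 0, 0])            # a fixed mask: a CVAE that conditions on y = (x_4, x_5)
```

A Monte-Carlo estimate of the objective is then obtained by averaging log p_{ψ,θ}(x_b | x_{1-b}, b) over objects sampled from the dataset and masks sampled from p(b).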
The generative process of our model is similar to the generative process of CVAE: for each object firstly we generate z ∼ p ψ (z|x 1−b, b) using prior network, and then sample unobserved features x b ∼ p θ (x b |z, x 1−b, b) using generative network. This process induces the following model distribution over unobserved features: DISPLAYFORM0 We use z ∈ R d, and Gaussian distribution p ψ over z, with parameters from a neural network with weights ψ: DISPLAYFORM1 is parameterized by a function w i,θ (z, x 1−b, b), whose outputs are logits of probabilities for each category: DISPLAYFORM2. Therefore the components of the latent vector z are conditionally independent given x 1−b and b, and the components of x b are conditionally independent given z, x 1−b and b. The variables x b and x 1−b have variable length that depends on b. So in order to use architectures such as multi-layer perceptron and convolutional neural network we consider x 1−b = x • (1 − b) where • is an element-wise product. So in implementation x 1−b has fixed length. The output of the generative network also has a fixed length, but we use only unobserved components to compute likelihood. The theoretical analysis of the model is available in appendix B.1. We can derive a lower bound for log p ψ,θ (x b |x 1−b, b) as for variational autoencoder: DISPLAYFORM0 Therefore we have the following variational lower bound optimization problem: DISPLAYFORM1 We use fully-factorized Gaussian proposal distribution q φ which allows us to perform reparameterization trick and compute KL divergence analytically in order to optimize. During the optimization of objective FORMULA9 the parameters µ ψ and σ ψ of the prior distribution of z may tend to infinity, since there is no penalty for large values of those parameters. We usually observe the growth of z 2 during training, though it is slow enough. To prevent potential numerical instabilities, we put a Normal-Gamma prior on the parameters of the prior distribution to prevent the divergence. Formally, we redefine p ψ (z|x 1−b, b) as follows: DISPLAYFORM0 As a , the regularizers − µ 2 ψ 2σ 2 µ and σ σ (log(σ ψ) − σ ψ ) are added to the model log-likelihood. Hyperparameter σ µ is chosen to be large (10 4) and σ σ is taken to be a small positive number (10 −4). This distribution is close to uniform near zero, so it doesn't affect the learning process significantly. The optimization objective requires all features of each object at the training stage: some of the features will be observed variables at the input of the model and other will be unobserved features used to evaluate the model. Nevertheless, in some problem settings the training data contains missing features too. We propose the following slight modification of the problem in order to cover such problems as well. The missing values cannot be observed so x i = ω ⇒ b i = 1, where ω describes the missing value in the data. In order to meet this requirement, we redefine mask distribution as conditioned on x: p(b) turns into p(b|x) in and. In the reconstruction loss we simply omit the missing features, i. e. marginalize them out: DISPLAYFORM0 The proposal network must be able to determine which features came from real object and which are just missing. So we use additional missing features mask which is fed to proposal network together with unobserved features mask b and object x. The proposed modifications are evaluated in section 5.1. In this section we validate the performance of VAEAC using several real-world datasets. 
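Before moving to the experiments, the following is a minimal sketch of a single training step for the variational lower bound above: the proposal network sees the full object and the mask, the prior network sees only the observed part, the KL between the two diagonal Gaussians is computed in closed form, and the reconstruction term is evaluated on the unobserved components only (here with a unit-variance Gaussian likelihood, as the paper does for the UCI experiments). The module names, sizes, and MLP structure are assumptions for illustration.

```python
import torch
import torch.nn as nn

def gaussian_kl(mu_q, log_sig_q, mu_p, log_sig_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ) for diagonal Gaussians."""
    var_q, var_p = (2 * log_sig_q).exp(), (2 * log_sig_p).exp()
    return (log_sig_p - log_sig_q + (var_q + (mu_q - mu_p) ** 2) / (2 * var_p) - 0.5).sum(-1)

x_dim, z_dim, h = 20, 8, 64                    # toy sizes, not the paper's
proposal = nn.Sequential(nn.Linear(2 * x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
prior    = nn.Sequential(nn.Linear(2 * x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
decoder  = nn.Sequential(nn.Linear(z_dim + 2 * x_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

def vaeac_step(x, b):
    """x: (batch, x_dim) objects; b: (batch, x_dim) binary mask of *unobserved* features."""
    x_obs = x * (1 - b)                                   # observed part, fixed length
    mu_q, log_sig_q = proposal(torch.cat([x, b], -1)).chunk(2, -1)
    mu_p, log_sig_p = prior(torch.cat([x_obs, b], -1)).chunk(2, -1)
    z = mu_q + torch.randn_like(mu_q) * log_sig_q.exp()   # reparameterized sample
    x_rec = decoder(torch.cat([z, x_obs, b], -1))
    # Unit-variance Gaussian log-likelihood (up to an additive constant),
    # evaluated on the unobserved components only.
    rec = (-0.5 * (x_rec - x) ** 2 * b).sum(-1)
    kl = gaussian_kl(mu_q, log_sig_q, mu_p, log_sig_p)
    return -(rec - kl).mean()                             # negative lower bound to minimize

x = torch.randn(16, x_dim)
b = torch.bernoulli(torch.full((16, x_dim), 0.5))
loss = vaeac_step(x, b)
loss.backward()
```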
In the first set of experiments we evaluate VAEAC missing features imputation performance using various UCI datasets . We compare imputations from our model with imputations from such classical methods as MICE and MissForest (Stekhoven & Bühlmann, 2011) and recently proposed GANs-based method GAIN . In the second set of experiments we use VAEAC to solve image inpainting problem. We show inpainitngs generated by VAEAC and compare our model with models from papers Pathak et al. FORMULA1, and in terms of peak signal-to-noise ratio (PSNR) of obtained inpaintings on CelebA dataset . And finally, we evaluate VAEAC against the competing method called Universal Marginalizer . Additional experiments can be found in appendices C and D. The code is available at https://github.com/tigvarts/ vaeac. The datasets with missing features are widespread. Consider a dataset with D-dimensional objects x where each feature may be missing (which we denote by x i = ω) and their target values y. The majority of discriminative methods do not support missing values in the objects. The procedure of filling in the missing features values is called missing features imputation. In this section we evaluate the quality of imputations produced by VAEAC. For evaluation we use datasets from UCI repository . Before training we drop randomly 50% of values both in train and test set. After that we impute missing features using MICE , MissForest (Stekhoven & Bühlmann, 2011), GAIN and VAEAC trained on the observed data. The details of GAIN implementation are described in appendix A.4.Our model learns the distribution of the imputations, so it is able to sample from this distribution. We replace each object with missing features by n = 10 objects with sampled imputations, so the size of the dataset increases by n times. This procedure is called missing features multiple imputation. MICE and GAIN are also capable of multiple imputation (we use n = 10 for them in experiments as well), but MissForest is not. For more details about the experimental setup see appendices A.1, A.2, and A.4.In table 1 we report NRMSE (i.e. RMSE normalized by the standard deviation of each feature and then averaged over all features) of imputations for continuous datasets and proportion of falsely classified (PFC) for categorical ones. For multiple imputation methods we average imputations of continuous variables and take most frequent imputation for categorical ones for each object. We also learn linear or logistic regression and report the regression or classification performance after applying imputations of different methods in table 2. For multiple imputation methods we average predictions for continuous targets and take most frequent prediction for categorical ones for each object in test set. As can be seen from the tables 1 and 2, VAEAC can learn joint data distribution and use it for missing feature imputation. The imputations are competitive with current state of the art imputation methods in terms of RMSE, PFC, post-imputation regression R2-score and classification accuracy. Nevertheless, we don't claim that our method is state of the art in missing features imputation; for some datasets MICE or MissForest outperform it. The additional experiments can be found in appendix D.2. The image inpainting problem has a number of different formulations. The formulation of our interest is as follows: some of the pixels of an image are unobserved and we want to restore them in a natural way. 
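Before turning to the inpainting experiments below, here is a small NumPy sketch of the two imputation metrics reported above: NRMSE (per-feature RMSE normalized by the feature's standard deviation and averaged over features) and PFC (proportion of falsely classified categorical values). It is an illustration of the metric definitions, not the exact evaluation code used in the paper.

```python
import numpy as np

def nrmse(x_true, x_imputed, missing_mask):
    """Per-feature RMSE normalized by the feature std, averaged over features.
    missing_mask[i, j] = 1 where feature j of object i was imputed."""
    scores = []
    for j in range(x_true.shape[1]):
        m = missing_mask[:, j].astype(bool)
        if m.any():
            rmse = np.sqrt(np.mean((x_true[m, j] - x_imputed[m, j]) ** 2))
            scores.append(rmse / (x_true[:, j].std() + 1e-12))
    return float(np.mean(scores))

def pfc(x_true, x_imputed, missing_mask):
    """Proportion of falsely classified values among the imputed categorical entries."""
    m = missing_mask.astype(bool)
    return float(np.mean(x_true[m] != x_imputed[m]))
```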
Unlike the majority of papers, we want to restore not just the single most probable inpainting, but the distribution over all possible inpaintings from which we can sample. This distribution is extremely multi-modal because there are often many different possible ways to inpaint the image. Unlike the previous subsection, here we have uncorrupted images without missing features in the training set, so p(b|x) = p(b). As we show in section 2, state-of-the-art methods use different adversarial losses to achieve sharper and more realistic samples. VAEAC can be adapted to the image inpainting problem by using a combination of those adversarial losses as a part of the reconstruction loss p θ (x b |z, x 1−b, b). Nevertheless, such a construction is out of scope for this research, so we leave it for future work. In the current work we show that the model can generate both diverse and realistic inpaintings. In figures 1, 2, 3 and 4 we visualize image inpaintings produced by VAEAC on binarized MNIST, Omniglot, and CelebA. The details of the learning procedure and descriptions of the datasets are available in appendices A.1 and A.3. To the best of our knowledge, the most recent inpainting papers don't consider the diverse inpainting problem, where the goal is to build diverse image inpaintings, so there is no straightforward way to compare with these models. Nevertheless, we compute the peak signal-to-noise ratio (PSNR) for one random inpainting from VAEAC and the best PSNR among 10 random inpaintings from VAEAC. One inpainting might not be similar to the original image, so we also measure how well the inpainting that is most similar to the original image reconstructs it. We compare these two metrics, computed for certain masks, with the PSNRs reported for the same masks on CelebA by Yeh et al. and others. The results are available in tables 3 and 4. We observe that for the majority of the proposed masks our model outperforms the competing methods in terms of PSNR even with one sample, and for the rest (where the inpaintings are significantly diverse) the best PSNR over 10 inpaintings is larger than the same PSNR of the competing models. Even though PSNR does not completely reflect the visual quality of images and tends to favour blurry VAE samples over realistic GAN samples, the results show that VAEAC is able to solve the inpainting problem comparably to the state-of-the-art methods. A limitation of our model compared with these methods is that it needs the distribution over masks at the training stage to be similar to the distribution over them at the test stage. However, it is not a very strict limitation for practical usage. Universal Marginalizer (UM) is a model which uses a single neural network to estimate the marginal distributions over the unobserved features. So it optimizes the following objective: DISPLAYFORM0 For a given mask b we fix a permutation of its unobserved components: (i 1, i 2, . . ., i |b|), where |b| is the number of unobserved components. Using the learned model and the permutation we can generate objects from the joint distribution and estimate their probability using the chain rule. DISPLAYFORM1 For example, DISPLAYFORM2 Conditional sampling or conditional likelihood estimation for one object requires |b| requests to UM to compute p θ (x i |x 1−b, b). Each request is a forward pass through the neural network. In the case of conditional sampling those requests cannot even be parallelized, because the input of the next request contains the output of the previous one.
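The snippet below sketches the sequential conditional-sampling procedure just described: the unobserved components are filled one at a time, and every step requires a separate forward pass through the marginalizer network, so the requests cannot be parallelized. The `um_model` interface (taking the partially filled object and the current mask and returning per-feature Bernoulli probabilities) is an assumption made purely for illustration; it is not the actual UM implementation.

```python
import numpy as np

def um_conditional_sample(um_model, x_obs, b, rng=np.random.default_rng()):
    """Sample unobserved components x_b given observed ones, one feature at a time.

    um_model(x_filled, mask) is assumed to return per-feature Bernoulli probabilities
    for binary features, conditioned on the currently observed components.
    x_obs: observed values (unobserved entries may hold any placeholder, e.g. 0).
    b:     binary mask, b[i] = 1 for unobserved components.
    """
    x = x_obs.copy()
    mask = b.copy()
    order = rng.permutation(np.flatnonzero(b))     # random permutation of unobserved indices
    for i in order:                                # |b| sequential forward passes
        probs = um_model(x, mask)                  # cannot be parallelized: x changes each step
        x[i] = rng.random() < probs[i]
        mask[i] = 0                                # the sampled component is now observed
    return x

# toy usage with a dummy marginalizer that ignores the context
dummy = lambda x, mask: np.full(x.shape, 0.5)
x0 = np.zeros(8)
b0 = np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=float)
print(um_conditional_sample(dummy, x0, b0))
```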
We propose a slight modification of the original UM training procedure which allows learning UM efficiently for any kind of masks including those considered in this paper. The details of the modification are described in appendix B.3. Without skip-connections all information for decoder goes through the latent variables. In image inpainting we found skip-connections very useful in both terms of log-likelihood improvement and the image realism, because latent variables are responsible for the global information only while the local information passes through skip-connections. Therefore the border between image and inpainting becomes less conspicuous. The main idea of neural networks architecture is reflected in FIG4. The number of hidden layers, their widths and structure may be different. The neural networks we used for image inpainting have He-Uniform initialization of convolutional ResNet blocks, and the skip-connections are implemented using concatenation, not addition. The proposal network structure is exactly the same as the prior network except skip-connections. Also one could use much simpler fully-connected networks with one hidden layer as a proposal, prior and generative networks in VAEAC and still obtain nice inpaintings on MNIST. We split the dataset into train and test set with size ratio 3:1. Before training we drop randomly 50% of values both in train and test set. We repeat each experiment 5 times with different train-test splits and dropped features and then average and compute their standard deviation. As we show in appendix B.2, the better can be achieved when the model learns the concatenation of objects features x and targets y. So we treat y as an additional feature that is always unobserved during the testing time. To train our model we use distribution p(b i |x) in which p(b i |x i = ω) = 1 and p(b i |x) = 0.2 otherwise. Also for VAEAC trainig we normalize real-valued features, fix σ θ = 1 in the generative model of VAEAC in order to optimize RMSE, and use 25% of training data as validation set to select the best model among all epochs of training. For the test set, the classifier or regressor is applied to each of the n imputed objects and the predictions are combined. For regression problems we report R2-score of combined predictions, so we use averaging as a combination method. For classification problem we report accuracy, and therefore choose the mode. We consider the workflow where the imputed values of y are not fed to the classifier or regressor to make a fair comparison of feature imputation quality. MNIST is a dataset of 60000 train and 10000 test grayscale images of digits from 0 to 9 of size 28x28. We binarize all images in the dataset. For MNIST we consider Bernoulli log-likelihood as the reconstruction loss: DISPLAYFORM0 is an output of the generative neural network. We use 16 latent variables. In the mask for this dataset the observed pixels form a three pixels wide horizontal line which position is distributed uniformly. Omniglot is a dataset of 19280 train and 13180 test black-and-white images of different alphabets symbols of size 105x105. As in previous section, the brightness of each pixel is treated as a Bernoulli probability of it to be 1. The mask we use is a random rectangular which is described below. We use 64 latent variables. We train model for 50 epochs and choose best model according to IWAE log-likelihood estimation on the validation set after each epoch. 
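The small sketch below illustrates how, in the multiple-imputation setup described above, predictions made on the n imputed copies of a test object are combined: averaging for regression targets and the most frequent prediction (mode) for classification. Function and variable names are illustrative only.

```python
import numpy as np

def combine_predictions(preds, task):
    """preds: array of shape (n_imputations, n_objects), one prediction per imputed copy."""
    preds = np.asarray(preds)
    if task == 'regression':
        return preds.mean(axis=0)                       # average over imputations
    # classification: take the most frequent prediction for every object
    combined = []
    for col in preds.T:
        values, counts = np.unique(col, return_counts=True)
        combined.append(values[np.argmax(counts)])
    return np.array(combined)

# toy usage: 10 imputations, 3 test objects
print(combine_predictions(np.random.randn(10, 3), 'regression'))
print(combine_predictions(np.random.randint(0, 2, size=(10, 3)), 'classification'))
```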
CelebA is a dataset of 162770 train, 19867 validation and 19962 test color images of faces of celebrities of size 178x218. Before learning we normalize the channels in dataset. We use logarithm of fully-factorized Gaussian distribution as reconstruction loss. The mask we use is a random rectangular which is describe below. We use 32 latent variables. Rectangular mask is the common shape of unobserved region in image inpainting. We use such mask for Omniglot and Celeba. We sample the corner points of rectangles uniprobably on the image, but reject those rectangles which area is less than a quarter of the image area. In Li et al. FORMULA1 six different masks O1-O6 are used on the testing stage. We reconstruct the positions of masks from the illustrations in the paper and give their coordinates in table 6. The visualizations of the masks are available in FIG0.At the training stage we used a rectangle mask with uniprobable random corners. We reject masks with width or height less than 16pt. We use 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimation. We can obtain slightly higher PSNR values than reported in table 4 if use only masks O1-O6 at the training stage. four types of masks are used. Center mask is just an unobserved 32x32 square in the center of 64x64 image. Half mask mean that one of upper, lower, left or right half of the image is unobserved. All these types of a half are equiprobable. Random mask means that we use pixelwise-independent Bernoulli distribution with probability 0.8 to form a mask of unobserved pixels. Pattern mask is proposed in. As we deduced from the code 3, the generation process is follows: firstly we generate 600x600 one-channel image with uniform distribution over pixels, then bicubically interpolate it to image of size 10000x10000, and then apply Heaviside step function H(x − 0.25) (i. e. all points with value less than 0.25 are considered as unobserved). To sample a mask we sample a random position in this 10000x10000 binary image and crop 64x64 mask. If less than 20% or more than 30% of pixel are unobserved, than the mask is rejected and the position is sampled again. In comparison with this paper in section 5.2 we use the same distribution over masks at training and testing stages. We use VAEAC with 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimation. For missing feature imputation we reimplemented GAIN in PyTorch based on the paper and the available TensorFlow source code for image inpainting 4.For categorical features we use one-hot encoding. We observe in experiments that it works better in terms of NRMSE and PFC than processing categorical features in GAIN as continuous ones and then rounding them to the nearest category. For categorical features we also use reconstruction loss L M (x i, x i) = − 1 |Xi| |Xi| j=1 x i,j log(x i,j). |X i | is the number of categories of the i-th feature, and x i,j is the j-th component of one-hot encoding of the feature x i. Such L M enforces equal contribution of each categorical feature into the whole reconstruction loss. We use one more modification of L M (x, x) for binary and categorical features. Cross-entropy loss in L M penalizes incorrect reconstructions of categorical and binary features much more than incorrect reconstructions for continuous ones. 
To avoid such imbalance we mixed L2 and cross-entropy reconstruction losses for binary and categorical features with weights 0.8 and 0.2 respectively: DISPLAYFORM0 We observe in experiments that this modification also works better in terms of NRMSE and PFC than the original model. We use validation set which contains 5% of the observed features for the best model selection (hyperparameter is the number of iterations).In the original GAIN paper authors propose to use cross-validation for hyper-parameter α ∈ {0.1, 0.5, 1, 2, 10}. We observe that using α = 10 and a hint h = b • m + 0.5(1 − b) where vector b is sampled from Bernoulli distribution with p = 0.01 provides better in terms of NRMSE and PFC than the original model with every α ∈ {0.1, 0.5, 1, 2, 10}. Such hint distribution makes model theoretically inconsistent but works well in practice (see table 7). Table 7 shows that our modifications provide consistently not worse or even better imputations than the original GAIN (in terms of NRMSE and PFC, on the considered datasets). So in this paper for the missing feature imputation problem we report the of our modification of GAIN. We can imagine 2 D CVAEs learned each for the certain mask. Because neural networks are universal approximators, VAEAC networks could model the union of CVAE networks, so that VAEAC network performs transformation defined by the same network of the corresponding to the given mask CVAE. DISPLAYFORM1 So if CVAE models any distribution p(x|y), VAEAC also do. The guarantees for CVAE in the case of continuous variables are based on the point that every smooth distribution can be approximated with a large enough mixture of Gaussians, which is a special case of CVAE's generative model. These guarantees can be extended on the case of categorical-continuous variables also. Actually, there are distributions over categorical variables which CVAE with Gaussian prior and proposal distributions cannot learn. Nevertheless, this kind of limitation is not fundamental and is caused by poor proposal distribution family. Consider a dataset with D-dimensional objects x where each feature may be missing (which we denote by x i = ω) and their target values y. In this section we show that the better are achieved when our model learns the concatenation of objects features x and targets y. The example that shows the necessity of it is following. Consider a dataset where x 1 = 1, x 2 ∼ N (x 2 |y, 1), p d (y = 0) = p(y = 5) = 0.5. In this case p d (x 2 |x 1 = 1) = 0.5N (x 2 |0, 1) + 0.5N (x 2 |5, 1). We can see that generating data from p d (x 2 |x 1) may only confuse the classifier, because with probability 0.5 it generates x 2 ∼ N for y = 5 and x 2 ∼ N for y = 0. On the other hand, p d (x 2 |x 1, y) = N (x 2 |y, 1). Filling gaps using p d (x 2 |x 1, y) may only improve classifier or regressor by giving it some information from the joint distribution p d (x, y) and thus simplifying the dependence to be learned at the training time. So we treat y as an additional feature that is always unobserved during the testing time. The problem authors did not address in the original paper is the relation between the distribution of unobserved components p(b) at the testing stage and the distribution of masks in the requests to UMp(b). The distribution over masks p(b) induces the distributionp(b), and in the most cases p(b) =p(b). The distributionp(b) also depends on the permutations (i 1, i 2, . . ., i |b|) that we use to generate objects. 
We observed in experiments, that UM must be trained using unobserved mask distributionp(b). For example, if all masks from p(b) have a fixed number of unobserved components (e. g., 2), then UM will never see an example of mask with 1, 2,..., D 2 − 1 unobserved components, which is necessary to generate a sample conditioned on D 2 components. That leads to drastically low likelihood estimate for the test set and unrealistic samples. We developed an easy generative process forp(b) for arbitrary p(b) if the permutation of unobserved components (i 1, i 2, . . ., i |b|) is chosen randomly and equiprobably: firstly we generate DISPLAYFORM0 More complicated generative process exists for a sorted permutation where i j−1 < i j ∀j: 2 ≤ j ≤ |b|.In experiments we use uniform distribution over the permutations. Gaussian stochastic neural network and hybrid model are originally proposed in the paper on Conditional VAE . The motivation authors mention in the paper is as follows. During training the proposal distribution q φ (z|x, y) is used to generate the latent variables z, while during the testing stage the prior p ψ (z|y) is used. KL divergence tries to close the gap between two distributions but, according to authors, it is not enough. To overcome the issue authors propose to use a hybrid model FORMULA4, a weighted mixture of variational lower bound and a single-sample Monte-Carlo estimation of log-likelihood. The model corresponding to the second term is called Gaussian Stochastic Neural Network, because it is a feed-forward neural network with a single Gaussian stochastic layer in the middle. Also GSNN is a special case of CVAE where q φ (z|x, y) = p ψ (z|y). DISPLAYFORM0 L(x, y; θ, ψ, φ) = αL CV AE (x, y; θ, ψ, φ) DISPLAYFORM1 Authors report that hybrid model and GSNN outperform CVAE in terms of segmentation accuracy on the majority of datasets. We can also add that this technique seems to soften the "holes problem" . authors observe that vectors z from prior distribution may be different enough from all vectors z from the proposal distribution at the training stage, so the generator network may be confused at the testing stage. Due to this problem CVAE can have good reconstructions of y given z ∼ q φ (z|x, y), while samples of y given z ∼ p ψ (z|x) are not realistic. The same trick is applicable to our model as well: DISPLAYFORM2 In order to reflect the difference between sampling z from prior and proposal distributions, authors of CVAE use two methods of log-likelihood estimation: DISPLAYFORM3 The first estimator is called Monte-Carlo estimator and the second one is called Importance Sampling estimator (also known as IWAE). They are asymptotically equivalent, but in practice the Monte-Carlo estimator requires much more samples to obtain the same accuracy of estimation. Small S leads to underestimation of the log-likelihood for both Monte-Carlo and Importance Sampling , but for Monte-Carlo the underestimation is expressed much stronger. We perform an additional study of GSNN and hybrid model and show that they have drawbacks when the target distribution p(x|y) is has multiple different local maximums. In this section we show why GSNN cannot learn distributions with several different modes and leads to a blurry image samples. For the simplicity of the notation we consider hybrid model for a standard VAE: DISPLAYFORM0 The hybrid model for VAEAC can be obtained from by replacing x with x b and conditioning all distributions on x 1−b and b. 
The validity of the further equations and remains for VAEAC after this replacement. Consider now a categorical latent variable z which can take one of K values. Let x be a random variable with true distribution p d (x) to be modeled. Consider the following true data distribution: DISPLAYFORM1.., K} and some values x 1, x 2,..., x K. So the true distribution has K different equiprobable modes. Suppose the generator network N N θ which models mapping from z to some vector of parameters v z = N N θ (z). Thus, we define generative distribution as some function of these parameters: DISPLAYFORM2. Therefore, the parameters θ are just the set of v 1, v 2,..., v K.For the simplicity of the model we assume x,vj ). Using and the above formulas for q φ, p ψ and p θ we obtain the following optimization problem: It is easy to show that FORMULA1 is equivalent to the following optimization problem: DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 It is clear from that when α = 1 the log-likelihood of the initial model is optimized. On the other hand, when α = 0 the optimal point is DISPLAYFORM6 e. z doesn't influence the generative process, and for each z generator produces the same v which maximizes likelihood estimation of the generative model f (x, v) for the given dataset of x's. For Bernoulli and Gaussian generative distributions f such v is just average of all modes x 1, x 2,..., x K. That explains why further we observe blurry images when using GSNN model. The same holds for for continuous latent variables instead of categorical. Given K different modes in true data distribution, VAE uses proposal network to separate prior distribution into K components (i. e. regions in the latent space), so that each region corresponds to one mode. On the other hand, in GSNN z is sampled independently on the mode which is to be reconstructed from it, so for each z the generator have to produce parameters suitable for all modes. From this point of view, there is no difference between VAE and VAEAC. If the true conditional distribution has several different modes, then VAEAC can fit them all, while GSNN learns their average. If true conditional distribution has one mode, GSNN and VAEAC are equal, and GSNN may even learn faster because it has less parameters. Hybrid model is a trade-off between VAEAC and GSNN: the closer α to zero, the more blurry and closer to the average is the distribution of the model. The exact dependence of the model distribution on α can be derived analytically for the simple data distributions or evaluated experimentally. We perform such experimental evaluation in the next sections. In this section we show that VAEAC is capable of learning a complex multimodal distribution of synthetic data while GSNN and hybrid model are not. Let x ∈ R 2 and p(DISPLAYFORM0 is plotted in figure 6 . The dataset contains 100000 points sampled from p d (x). We use multi-layer perceptron with four ReLU layers of size 400-200-100-50, 25-dimensional Gaussian latent variables. For different mixture coefficients α we visualize samples from the learned distributions p ψ,θ (x 1, x 2), p ψ,θ (x 1 |x 2), and p ψ,θ (x 2 |x 1). The observed features for the conditional distributions are generated from the marginal distributions p(x 2) and p(x 1) respectively. We see in table 8 and in FIG5, that even with very small weight GSNN prevents model from learning distributions with several local optimas. 
GSNN also increases Monte-Carlo log-likelihood estimation with a few samples and decreases much more precise Importance Sampling log-likelihood estimation. When α = 0.9 the whole distribution structure is lost. We see that using α = 1 ruins multimodality of the restored distribution, so we highly recommend to use α = 1 or at least α ≈ 1. Table 9: Average negative log-likelihood of inpaintings for 1000 objects. IS-S refers to Importance Sampling log-likelihood estimation with S samples for each object. MC-S refers to Monte-Carlo log-likelihood estimation with S samples for each object. Naive Bayes is a baseline method which assumes pixels and colors independence. In FIG8 we can see that the inpaintings produced by GSNN are smooth, blurry and not diverse compared with VAEAC. Table 9 shows that VAEAC learns distribution over inpaintings better than GSNN in terms of test loglikelihood. Nevertheless, Monte-Carlo estimations with a small number of samples sometimes are better for GSNN, which means less local modes in the learned distribution and more blurriness in the samples. In FIG9 one can see that VAEAC has similar convergence speed to VAE in terms of iterations on MNIST dataset. In our experiments we observed the same behaviour for other datasets. Each iteration of VAEAC is about 1.5 times slower than VAE due to usage of three networks instead of two. We see that for some datasets MICE and MissForest outperform VAEAC, GSNN and NN. The reason is that for some datasets random forest is more natural structure than neural network. The also show that VAEAC, GSNN and NN show similar imputation performance in terms of NRMSE, PFC, post-imputation R2-score and accuracy. Given the from appendix C we can take this as a weak evidence that the distribution of imputations has only one local maximum for datasets from . doesnt use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, it turns into a disadvantage when the fully-observed training data is available but the missingness rate at the testing stage is high. We consider the horizontal line mask for MNIST which is described in appendix A.3. We use the released GAIN code 5 with a different mask generator. The inpaintings from VAEAC which uses the unobserved pixels during training are available in figure 1. The inpaintings from GAIN which ignores unobserved pixels are provided in FIG0. As can be seen in FIG0, GAIN fails to learn conditional distribution for given mask distribution p(b).Nevertheless, we don't claim that GAIN is not suitable for image inpainting. As it was shown in the supplementary of and in the corresponding code, GAIN is able to learn conditional distributions when p(b) is pixel-wise independent Bernoulli distribution with probability 0.5. In FIG0 we provide samples of Universal Marginalizer (UM) and VAEAC for the same inputs. Consider the case when UM marginal distributions are parametrized with Gaussians. The most simple example of a distribution, which UM cannot learn but VAEAC can, is given in figure 13. | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SyxtJh0qYm | We propose an extension of conditional variational autoencoder that allows conditioning on an arbitrary subset of the features and sampling the remaining ones. |
We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour. Discovering high-performance neural network architectures required years of extensive research by human experts through trial and error. As far as the image classification task is concerned, state-ofthe-art convolutional neural networks are going beyond deep, chain-structured layout BID19 BID6 towards increasingly more complex, graph-structured topologies BID12 BID27 BID9. The combinatorial explosion in the design space makes handcrafted architectures not only expensive to obtain, but also likely to be suboptimal in performance. Recently, there has been a surge of interest in using algorithms to automate the manual process of architecture design. Their goal can be described as finding the optimal architecture in a given search space such that the validation accuracy is maximized on the given task. Representative architecture search algorithms can be categorized as random with weights prediction BID1, Monte Carlo Tree Search BID15, evolution BID21 BID26 BID13 BID16, and reinforcement learning BID0 BID30 BID31 BID29, among which reinforcement learning approaches have demonstrated the strongest empirical performance so far. Architecture search can be computationally very intensive as each evaluation typically requires training a neural network. Therefore, it is common to restrict the search space to reduce complexity and increase efficiency of architecture search. Various constraints that have been used include: growing a convolutional "backbone" with skip connections BID16, a linear sequence of filter banks BID1, or a directed graph where every node has exactly two predecessors BID31. In this work we constrain the search space by imposing a hierarchical network structure, while allowing flexible network topologies (directed acyclic graphs) at each level of the hierarchy. Starting from a small set of primitives such as convolutional and pooling operations at the bottom level of the hierarchy, higher-level computation graphs, or motifs, are formed by using lower-level motifs as their building blocks. The motifs at the top of the hierarchy are stacked multiple times to form the final neural network. This approach enables search algorithms to implement powerful hierarchical modules where any change in the motifs is propagated across the whole network immediately. This is analogous to the modularized design patterns used in many handcrafted architectures, e.g. VGGNet BID19, ResNet BID6, and Inception BID24 are all comprised of building blocks. In our case, a hierarchical architecture is discovered through evolutionary or random search. 
The evolution of neural architectures was studied as a sub-task of neuroevolution BID8 BID14 BID28 BID21 BID3, where the topology of a neural network is simultaneously evolved along with its weights and hyperparameters. The benefits of indirect encoding schemes, such as multi-scale representations, have historically been discussed in BID5; BID11 BID20 BID22. Despite these pioneer studies, evolutionary or random architecture search has not been investigated at larger scale on image classification benchmarks until recently BID16 BID13 BID26 BID1 BID15. Our work shows that the power of simple search methods can be substantially enhanced using well-designed search spaces. Our experimental setup resembles BID31, where an architecture found using reinforcement learning obtained the state-of-the-art performance on ImageNet. Our work reveals that random or evolutionary methods, which so far have been seen as less efficient, can scale and achieve competitive performance on this task if combined with a powerful architecture representation, whilst utilizing significantly less computational resources. To summarize, our main contributions are:1. We introduce hierarchical representations for describing neural network architectures. 2. We show that competitive architectures for image classification can be obtained even with simplistic random search, which demonstrates the importance of search space construction. 3. We present a scalable variant of evolutionary search which further improves the and achieves the best published 1 among evolutionary architecture search techniques. We first describe flat representations of neural architectures (Sect. 2.1), where each architecture is represented as a single directed acyclic graph of primitive operations. Then we move on to hierarchical representations (Sect. 2.2) where smaller graph motifs are used as building blocks to form larger motifs. Primitive operations are discussed in Sect. 2.3. We consider a family of neural network architectures represented by a single-source, single-sink computation graph that transforms the input at the source to the output at the sink. Each node of the graph corresponds to a feature map, and each directed edge is associated with some primitive operation (e.g. convolution, pooling, etc.) that transforms the feature map in the input node and passes it to the output node. Formally, an architecture is defined by the representation (G, o), consisting of two ingredients: DISPLAYFORM0 2. An adjacency matrix G specifying the neural network graph of operations, where G ij = k means that the k-th operation o k is to be placed between nodes i and j. The architecture is obtained by assembling operations o according to the adjacency matrix G: DISPLAYFORM1 DISPLAYFORM2 3 are assembled into a level-2 motif o1. The top row shows how level-2 motifs o in a way that the ing neural network sequentially computes the feature map x i of each node i from the feature maps x j of its predecessor nodes j following the topological ordering: DISPLAYFORM3 DISPLAYFORM4 Here, |G| is the number of nodes in a graph, and merge is an operation combining multiple feature maps into one, which in our experiments was implemented as depthwise concatenation. An alternative option of element-wise addition is less flexible as it requires the incoming feature maps to contain the same number of channels, and is strictly subsumed by concatenation if the ing x i is immediately followed by a 1 × 1 convolution. 
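A minimal sketch of the flat assembly procedure described above: given an adjacency matrix G (where G[i][j] = k means operation k transforms node j's output and feeds node i) and a set of operations, each node's feature map is computed from its predecessors in topological order and merged by depthwise concatenation. The operations here are placeholder callables on NumPy arrays; the real primitives are the convolutional and pooling operations described in the paper, and the 3-node toy graph is an assumption for illustration.

```python
import numpy as np

def identity(x):
    return x

def assemble_flat(G, ops, x_input):
    """Assemble a flat architecture from an adjacency matrix.

    G[i][j] = k means ops[k] is placed on the edge from node j to node i
    (k = 0 is reserved for the 'none' op, i.e. no edge).  Node 0 is the input,
    the last node is the output; merge is depthwise concatenation (channel axis).
    Every non-input node is assumed to have at least one incoming edge.
    """
    n = len(G)
    feats = [None] * n
    feats[0] = x_input
    for i in range(1, n):                                  # topological order: j < i
        incoming = [ops[G[i][j]](feats[j]) for j in range(i) if G[i][j] != 0]
        feats[i] = np.concatenate(incoming, axis=0)        # toy (channels, H, W) tensors
    return feats[-1]

# toy usage: 3-node graph with placeholder "operations"
ops = {0: None, 1: identity, 2: lambda x: np.maximum(x, 0.0)}   # none, identity, relu-like
G = [[0, 0, 0],
     [1, 0, 0],     # node 1 <- identity(node 0)
     [2, 1, 0]]     # node 2 <- relu-like(node 0) and identity(node 1), concatenated
x = np.random.randn(4, 8, 8)
print(assemble_flat(G, ops, x).shape)     # (8, 8, 8): two 4-channel maps concatenated
```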
The key idea of the hierarchical architecture representation is to have several motifs at different levels of hierarchy, where lower-level motifs are used as building blocks (operations) during the construction of higher-level motifs. Consider a hierarchy of L levels where the -th level contains M motifs. The highest-level = L contains only a single motif corresponding to the full architecture, and the lowest level = 1 is the set of primitive operations. We recursively define o m, the m-th motif in level, as the composition of lower-level motifs o DISPLAYFORM0 A hierarchical architecture representation is therefore defined by {G DISPLAYFORM1 determined by network structures of motifs at all levels and the set of bottom-level primitives. The assembly process is illustrated in FIG0 . We consider the following six primitives at the bottom level of the hierarchy ( = 1, M = 6):• 1 × 1 convolution of C channels DISPLAYFORM0 If applicable, all primitives are of stride one and the convolved feature maps are padded to preserve their spatial resolution. All convolutional operations are followed by batch normalization and ReLU activation BID10; their number of channels is fixed to a constant C. We note that convolutions with larger receptive fields and more channels can be expressed as motifs of such primitives. Indeed, large receptive fields can be obtained by stacking 3 × 3 convolutions in a chain structure BID19, and wider convolutions with more channels can be obtained by merging the outputs of multiple convolutions through depthwise concatenation. We also introduce a special none op, which indicates that there is no edge between nodes i and j. It is added to the pool of operations at each level. Evolutionary search over neural network architectures can be performed by treating the representations of Sect. 2 as genotypes. We first introduce an action space for mutating hierarchical genotypes (Sect. 3.1), as well as a diversification-based scheme to obtain the initial population (Sect. 3.2). We then describe tournament selection and random search in Sect. 3.3, and our distributed implementation in Sect. 3.4. A single mutation of a hierarchical genotype consists of the following sequence of actions:1. Sample a target non-primitive level ≥ 2.2. Sample a target motif m in the target level.3. Sample a random successor node i in the target motif.4. Sample a random predecessor node j in the target motif.5. Replace the current operation o (−1) k between j and i with a randomly sampled operation o DISPLAYFORM0 In the case of flat genotypes which consist of two levels (one of which is the fixed level of primitives), the first step is omitted and is set to 2. The mutation can be summarized as: DISPLAYFORM1 2. Alter an existing edge: DISPLAYFORM2 To initialize the population of genotypes, we use the following strategy:1. Create a "trivial" genotype where each motif is set to a chain of identity mappings.2. Diversify the genotype by applying a large number (e.g. 1000) of random mutations. In contrast to several previous works where genotypes are initialized by trivial networks BID21 BID16, the above diversification-based scheme not only offers a DISPLAYFORM0 good initial coverage of the search space with non-trivial architectures, but also helps to avoid an additional bias introduced by handcrafted initialization routines. In fact, this strategy ensures initial architectures are reasonably well-performing even without any search, as suggested by our random sample in Table 1. 
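The sketch below shows one way to represent a hierarchical genotype (one list of adjacency matrices per non-primitive level) together with the single-mutation procedure and the diversification-based initialization described above. The numbers of levels, motifs, and nodes only loosely follow the paper's setup, and the convention that operation index 1 denotes an identity-like building block is an assumption made for this toy example.

```python
import random

def trivial_genotype(num_motifs, num_nodes, num_levels):
    """A 'trivial' genotype: every motif is a chain of identity mappings.

    genotype[l][m][i][j] is the index of the lower-level operation placed between
    nodes j and i of motif m at non-primitive level l (0 = none, 1 = identity here,
    by convention of this sketch).
    """
    genotype = []
    for l in range(num_levels - 1):                       # non-primitive levels 2 .. L
        motifs = []
        for _ in range(num_motifs[l]):
            g = [[0] * num_nodes[l] for _ in range(num_nodes[l])]
            for i in range(1, num_nodes[l]):
                g[i][i - 1] = 1                           # chain of identities
            motifs.append(g)
        genotype.append(motifs)
    return genotype

def mutate(genotype, num_lower_ops):
    """One mutation: sample a level, a motif, an edge (j -> i), and resample its op."""
    l = random.randrange(len(genotype))                   # target non-primitive level
    m = random.randrange(len(genotype[l]))                # target motif
    g = genotype[l][m]
    i = random.randrange(1, len(g))                       # successor node
    j = random.randrange(i)                               # predecessor node
    g[i][j] = random.randrange(num_lower_ops[l])          # includes 0 = none
    return genotype

def diversified_init(num_motifs, num_nodes, num_levels, num_lower_ops, n_mut=1000):
    g = trivial_genotype(num_motifs, num_nodes, num_levels)
    for _ in range(n_mut):
        g = mutate(g, num_lower_ops)
    return g

# toy setup loosely following the paper: 3 levels, six 4-node level-2 motifs, one 5-node level-3 motif
geno = diversified_init(num_motifs=[6, 1], num_nodes=[4, 5], num_levels=3,
                        num_lower_ops=[7, 7])             # 6 primitives + none; 6 motifs + none
```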
Our evolutionary search algorithm is based on tournament selection BID4. Starting from an initial population of random genotypes, tournament selection provides a mechanism to pick promising genotypes from the population, and to place its mutated offspring back into the population. By repeating this process, the quality of the population keeps being refined over time. We always train a model from scratch for a fixed number of iterations, and we refer to the training and evaluation of a single model as an evolution step. The genotype with the highest fitness (validation accuracy) among the entire population is selected as the final output after a fixed amount of time. A tournament is formed by a random set of genotypes sampled from the current effective population, among which the individual with the highest fitness value wins the tournament. The selection pressure is controlled by the tournament size, which is set to 5% of the population size in our case. We do not remove any genotypes from the population, allowing it to grow with time, maintaining architecture diversity. Our evolution algorithm is similar to the binary tournament selection used in a recent large-scale evolutionary method BID16.We also investigated random search, a simpler strategy which has not been sufficiently explored in the literature, as an alternative to evolution. In this case, a population of genotypes is generated randomly, the fitness is computed for each genotype in the same way as done in evolution, and the genotype with the highest fitness is selected as the final output. The main advantage of this method is that it can be run in parallel over the entire population, substantially reducing the search time. Our distributed implementation is asynchronous, consisting of a single controller responsible for performing evolution over the genotypes, and a set of workers responsible for their evaluation. Both parties have access to a shared tabular memory M recording the population of genotypes and their fitness, as well as a data queue Q containing the genotypes with unknown fitness which should be evaluated. Specifically, the controller will perform tournament selection of a genotype from M whenever a worker becomes available, followed by the mutation of the selected genotype and its insertion into Q for fitness evaluation (Algorithm 1). A worker will pick up an unevaluated genotype from Q whenever there is one available, assemble it into an architecture, carry out training and validation, and then record the validation accuracy (fitness) in M (Algorithm 2). Architectures are trained from scratch for a fixed number of steps with random weight initialization. We do not rely on weight inheritance as in BID16, though incorporating it into our system is possible. Note that during architecture evolution no synchronization is required, and all workers are fully occupied. In our experiments, we use the proposed search framework to learn the architecture of a convolutional cell, rather than the entire model. The reason is that we would like to be able to quickly compute the fitness of the candidate architecture and then transfer it to a larger model, which is achieved by using less cells for fitness computation and more cells for full model evaluation. A similar approach has recently been used in BID31 BID29.Architecture search is carried out entirely on the CIFAR-10 training set, which we split into two sub-sets of 40K training and 10K validation images. 
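Before the fitness-evaluation details that follow, here is a simplified, synchronous sketch of the tournament-selection loop described above; the paper's actual implementation is asynchronous with a controller and many workers. The tournament size is 5% of the current population, genotypes are never removed, and `evaluate` stands for assembling a small model from the genotype, training it, and measuring validation accuracy. All function names and the toy search space are assumptions for illustration.

```python
import copy
import random

def evolve(init_population, evaluate, mutate, n_steps):
    """Simplified synchronous tournament selection.

    init_population: list of genotypes (e.g. from the diversified initialization).
    evaluate(genotype) -> fitness (validation accuracy of the assembled model).
    mutate(genotype)   -> a mutated copy of the genotype.
    """
    population = [(g, evaluate(g)) for g in init_population]
    for _ in range(n_steps):
        k = max(1, int(0.05 * len(population)))           # tournament size: 5% of population
        tournament = random.sample(population, k)
        parent, _ = max(tournament, key=lambda gf: gf[1]) # fittest genotype wins
        child = mutate(copy.deepcopy(parent))
        population.append((child, evaluate(child)))       # genotypes are never removed
    return max(population, key=lambda gf: gf[1])          # best genotype and its fitness

# toy usage with a dummy search space of real-valued "genotypes"
best = evolve(init_population=[random.random() for _ in range(20)],
              evaluate=lambda g: -(g - 0.3) ** 2,
              mutate=lambda g: g + random.gauss(0, 0.1),
              n_steps=100)
print(best)
```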
Candidate models are trained on the training subset, and evaluated on the validation subset to obtain the fitness. Once the search process is over, the selected cell is plugged into a large model which is trained on the combination of training and validation sub-sets, and the accuracy is reported on the CIFAR-10 test set. We note that the test set is never used for model selection, and it is only used for final model evaluation. We also evaluate the cells, learned on CIFAR-10, in a large-scale setting on the ImageNet challenge dataset (Sect. 4.3). For CIFAR-10 experiments we use a model which consists of 3 × 3 convolution with c 0 channels, followed by 3 groups of learned convolutional cells, each group containing N cells. After each cell (with c input channels) we insert 3 × 3 separable convolution which has stride 2 and 2c channels if it is the last cell of the group, and stride 1 and c channels otherwise. The purpose of these convolutions is to control the number of channels as well as reduce the spatial resolution. The last cell is followed by global average pooling and a linear softmax layer. For fitness computation we use a smaller model with c 0 = 16 and N = 1, shown in FIG2. It is trained using SGD with 0.9 momentum for 5000 steps, starting with the learning rate 0.1, which is reduced by 10x after 4000 and 4500 steps. The batch size is 256, and the weight decay value is 3 · 10 −4. We employ standard training data augmentation where a 24 × 24 crop is randomly sampled from a 32 × 32 image, followed by random horizontal flipping. The evaluation is performed on the full size 32 × 32 image. A note on variance. We found that the variance due to optimization was non-negligible, and we believe that reporting it is important for performing a fair comparison and assessing model capabilities. When training CIFAR models, we have observed standard deviation of up to 0.2% using the exact same setup. The solution we adopted was to compute the fitness as the average accuracy over 4 training-evaluation runs. For the evaluation of the learned cell architecture on CIFAR-10, we use a larger model with c 0 = 64 and N = 2, shown in FIG2. The larger model is trained for 80K steps, starting with a learning rate 0.1, which is reduced by 10x after 40K, 60K, and 70K steps. The rest of the training settings are the same as used for fitness computation. We report mean and standard deviation computed over 5 training-evaluation runs. For the evaluation on the ILSVRC ImageNet challenge dataset BID18, we use an architecture similar to the one used for CIFAR, with the following changes. An input 299 × 299 image is passed through two convolutional layers with 32 and 64 channels and stride 2 each. It is followed by 4 groups of convolutional cells where the first group contains a single cell (and has c 0 = 64 input channels), and the remaining three groups have N = 2 cells each FIG2. We use SGD with momentum which is run for 200K steps, starting with a learning rate of 0.1, which is reduced by 10x after 100K, 150K, and 175K steps. The batch size is 1024, and weight decay is 10 −4. We did not use auxiliary losses, weight averaging, label smoothing or path dropout empirically found effective in BID31. The training augmentation is the same as in BID24, and consists in random crops, horizontal flips and brightness and contrast changes. We report the single-crop top-1 and top-5 error on the ILSVRC validation set. Figure 3: Fitness and number of parameters vs evolution step for flat and hierarchical representations. 
Left: fitness of a genotype generated at each evolution step. Middle: maximum fitness across all genotypes generated before each evolution step. Right: number of parameters in the small CIFAR-10 model constructed using the genotype generated at each evolution step. We run the evolution on flat and hierarchical genotypes for 7000 steps using 200 GPU workers. The initial size of the randomly initialized population is 200, which later grows as a of tournament selection and mutation (Sect. 3). For the hierarchical representation, we use three levels (L = 3), with M 1 = 6, M 2 = 6, M 3 = 1. Each of the level-2 motifs is a graph with |G | = 4 nodes, and the level-3 motif is a graph with |G | = 5 nodes. Each level-2 motif is followed by a 1 × 1 convolution with the same number of channels as on the motif input to reduce the number of parameters. For the flat representation, we used a graph with 11 nodes to achieve a comparable number of edges. The evolution process is visualized in Fig. 3. The left plot shows the fitness of the genotype generated at each step of evolution: the fitness grows fast initially, and plateaus over time. The middle plot shows the best fitness observed by each evolution step. Since the first 200 steps correspond to a random initialization and mutation starts after that, the best architecture found at step 200 corresponds to the output of random search over 200 architectures. Fig. 3 (right) shows the number of parameters in the small network (used for fitness computation), constructed using the genotype produced at each step. Notably, flat genotypes achieve higher fitness, but at the cost of larger parameter count. We thus also consider a parameter-constrained variant of the flat genotype, where only the genotypes with the number of parameters under a fixed threshold are permitted; the threshold is chosen so that the flat genotype has a similar number of parameters to the hierarchical one. In this setting hierarchical and flat genotypes achieve similar fitness. To demonstrate that improvement in fitness of the hierarchical architecture is correlated with the improvement in the accuracy of the corresponding large model trained till convergence, we plot the relative accuracy improvements in Fig. 4. Figure 4: Accuracy improvement over the course of evolution, measured with respect to the first random genotype. The small model is the model used for fitness computation during evolution (its absolute fitness value is shown with the red curve in Fig. 3 (middle) ). The large model is the model where the evolved cell architecture is deployed for training and evaluation. As far as the architecture search time is concerned, it takes 1 hour to compute the fitness of one architecture on a single P100 GPU (which involves 4 rounds of training and evaluation). Using 200 GPUs, it thus takes 1 hour to perform random search over 200 architectures and 1.5 days to do the evolutionary search with 7000 steps. This is significantly faster than 11 days using 250 GPUs reported by BID16 and 4 days using 450 GPUs reported by BID31. Table 1: Classification on the CIFAR-10 test set and ILSVRC validation set obtained using the architectures found using various representations and search methods. We now turn to the evaluation of architectures found using random and evolutionary search on CIFAR-10 and ImageNet. The are presented in Table 1.First, we note that randomly sampled architectures already perform surprisingly well, which we attribute to the representation power of our architecture spaces. 
Second, random search over 200 architectures achieves very competitive on both CIFAR-10 and ImageNet, which is remarkable considering it took 1 hour to carry out. This demonstrates that well-constructed architecture repre-sentations, coupled with diversified sampling and simple search form a simple but strong baseline for architecture search. Our best are achieved using evolution over hierarchical representations: 3.75% ± 0.12% classification error on the CIFAR-10 test set (using c 0 = 64 channels), which is further improved to 3.63% ± 0.10% with more channels (c 0 = 128). On the ImageNet validation set, we achieve 20.3% top-1 classification error and 5.2% top-5 error. We put these in the context of the state of the art in Tables 2 and 3. We achieve the best published on CIFAR-10 using evolutionary architecture search, and also demonstrate competitive performance compared to the best published methods on both CIFAR-10 and ImageNet. Our ImageNet model has 64M parameters, which is comparable to Inception-ResNet-v2 (55.8M) but larger than NASNet-A (22.6M). ResNet-1001 + pre-activation BID7 4.62 Wide ResNet-40-10 + dropout 3.8 DenseNet (k=24) BID9 3.74 DenseNet-BC (k=40) BID9 3.46MetaQNN BID0 6.92 NAS v3 BID30 3.65 Block-QNN-A BID29 3.60 NASNet-A BID31 3.41Evolving DNN BID13 7.3 Genetic CNN BID26 7.10 Large-scale Evolution BID16 5.4 SMASH BID1 4.03Evolutionary search, hier. repr., c0 = 64 3.75 ± 0.12 Evolutionary search, hier. repr., c0 = 128 3.63 ± 0.10 Table 2: Classification error on the CIFAR-10 test set obtained using state-of-the-art models as well as the best-performing architecture found using the proposed architecture search framework. Existing models are grouped as (from top to bottom): handcrafted architectures, architectures found using reinforcement learning, and architectures found using random or evolutionary search. Top-1 error (%) Top-5 error (%)Inception-v3 BID24 21.2 5.6 Xception BID2 21.0 5.5 Inception-ResNet-v2 BID25 19.9 4.9 NASNet-A BID31 19.2 4.7Evolutionary search, hier. repr., c0 = 64 20.3 5.2 Table 3: Classification error on the ImageNet validation set obtained using state-of-the-art models as well as the best-performing architecture found using our framework. The evolved hierarchical cell is visualized in Appendix A, which shows that architecture search have discovered a number of skip connections. For example, the cell contains a direct skip connection between input and output: nodes 1 and 5 are connected by Motif 4, which in turn contains a direct connection between input and output. The cell also contains several internal skip connections, through Motif 5 (which again comes with an input-to-output skip connection similar to Motif 4). We have presented an efficient evolutionary method that identifies high-performing neural architectures based on a novel hierarchical representation scheme, where smaller operations are used as the building blocks to form the larger ones. Notably, we show that strong can be obtained even using simplistic search algorithms, such as evolution or random search, when coupled with a well-designed architecture representation. Our best architecture yields the state-of-the-art on A ARCHITECTURE VISUALIZATION Visualization of the learned cell and motifs of our best-performing hierarchical architecture. Note that only motifs 1,3,4,5 are used to construct the cell, among which motifs 3 and 5 are dominating. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJQRKzbA- | In this paper we propose a hierarchical architecture representation in which doing random or evolutionary architecture search yields highly competitive results using fewer computational resources than the prior art. |
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. VP algorithms essentially combine data-driven perception and planning, and are important for robotic manipulation and navigation domains, among others. A recent and promising approach to VP is the semi-parametric topological memory (SPTM) method, where image samples are treated as nodes in a graph, and the connectivity in the graph is learned using deep image classification. Thus, the learned graph represents the topological connectivity of the data, and planning can be performed using conventional graph search methods. However, training SPTM necessitates a suitable loss function for the connectivity classifier, which requires non-trivial manual tuning. More importantly, SPTM is constricted in its ability to generalize to changes in the domain, as its graph is constructed from direct observations and thus requires collecting new samples for planning. In this paper, we propose Hallucinative Topological Memory (HTM), which overcomes these shortcomings. In HTM, instead of training a discriminative classifier we train an energy function using contrastive predictive coding. In addition, we learn a conditional VAE model that generates samples given a context image of the domain, and use these hallucinated samples for building the connectivity graph, allowing for zero-shot generalization to domain changes. In simulated domains, HTM outperforms conventional SPTM and visual foresight methods in terms of both plan quality and success in long-horizon planning. For robots to operate in unstructured environments such as homes and hospitals, they need to manipulate objects and solve complex tasks as they perceive the physical world. While task planning and object manipulation have been studied in the classical AI paradigm, most successes have relied on a human-designed state representation and perception, which can be challenging to obtain in unstructured domains. While high-dimensional sensory input such as images can be easy to acquire, planning using raw percepts is challenging. This has motivated the investigation of datadriven approaches for robotic manipulation. For example, deep reinforcement learning (RL) has made impressive progress in handling high-dimensional sensory inputs and solving complex tasks in recent years. One of the main challenges in deploying deep RL methods in human-centric environment is interpretability. For example, before executing a potentially dangerous task, it would be desirable to visualize what the robot is planning to do step by step, and intervene if necessary. Addressing both data-driven modeling and interpretability, the visual planning (VP) paradigm seeks to learn a model of the environment from raw perception and then produce a visual plan of solving a task before actually executing a robot action. Recently, several studies in manipulation and navigation have investigated VP approaches that first learn what is possible to do in a particular environment by self-supervised interaction, and then use the learned model to generate a visual plan from the current state to the goal, and finally apply visual servoing to follow the plan. One particularly promising approach to VP is the semi-parametric topological memory (SPTM) method proposed by Savinov et al.. In SPTM, images collected offline are treated as nodes in a graph and represent the possible states of the system. 
To connect nodes in this graph, an image classifier is trained to predict whether pairs of images were'close' in the data or not, effectively learning which image transitions are feasible in a small number of steps. The SPTM graph can then be used to generate a visual plan -a sequence of images between a pair of start and goal images -by directly searching the graph. SPTM has several advantages, such as producing highly interpretable visual plans and the ability to plan long-horizon behavior. However, since SPTM builds the visual plan directly from images in the data, when the environment changes -for example, the lighting varies, the camera is slightly moved, or other objects are displaced -SPTM requires recollecting images in the new environment; in this sense, SPTM does not generalize in a zero-shot sense. Additionally, similar to, we find that training the graph connectivity classifier as originally proposed by requires extensive manual tuning. Figure 1: HTM illustration. Top left: data collection. In this illustration, the task is to move a green object between gray obstacles. Data consists of multiple obstacle configurations (contexts), and images of random movement of the object in each configuration. Bottom left: the elements of HTM. A CVAE is trained to hallucinate images of the object and obstacles conditioned on the obstacle image context. A connectivity energy model is trained to score pairs of images based on the feasibility of their transition. Right: HTM visual planning. Given a new context image and a pair of start and goal images, we first use the CVAE to hallucinate possible images of the object and obstacles. Then, a connectivity graph (blue dotted lines) is computed based on the connectivity energy, and we plan for the shortest path from start to goal on this graph (orange solid line). For executing the plan, a visual servoing controller is later used to track the image sequence. In this work, we propose to improve both the robustness and zero-shot generalization of SPTM. To tackle the issue of generalization, we assume that the environment is described using some context vector, which can be an image of the domain or any other observation data that contains enough information to extract a plan (see Figure 1 top left). We then train a conditional generative model that hallucinates possible states of the domain conditioned on the context vector. Thus, given an unseen context, the generative model hallucinates exploration data without requiring actual exploration. When building the connectivity graph with these hallucinated images, we replace the vanilla classifier used in SPTM with an energy-based model that employs a contrastive loss. We show that this alteration drastically improves planning robustness and quality. Finally, for planning, instead of connecting nodes in the graph according to an arbitrary threshold of the connectivity classifier, as in SPTM, we cast the planning as an inference problem, and efficiently search for the shortest path in a graph with weights proportional to the inverse of a proximity score from our energy model. Empirically, we demonstrate that this provides much smoother plans and barely requires any hyperparameter tuning. We term our approach Hallucinative Topological Memory (HTM). A visual overview of our algorithm is presented in Figure 1. We evaluate our method on a set of simulated VP problems of moving an object between obstacles, which require long-horizon planning. 
In contrast with prior work, which only focused on the success of the method in executing a task, here we also measure the interpretability of visual planning, through mean opinion scores of features such as image fidelity and feasibility of the image sequence. In both measures, HTM outperforms state-of-the-art data-driven approaches such as Visual Foresight and the original SPTM. Context-Conditional Visual Planning and Acting (VPA) Problem. We consider the following context-conditional visual planning problem. Consider deterministic and fully-observable environments E_1, ..., E_N that are sampled from an environment distribution P_E. Each environment E_i can be described by a context vector c_i that entirely defines the system dynamics. In Figure 1, for example, the context could represent an image of the obstacle positions, which is enough to predict the possible movement of objects in the domain. As is typical in VP problems, we assume our data D = {(o^i_1, ..., o^i_{T_i}, c_i)}_{i ∈ {1,...,N}} is collected in a self-supervised manner, and that in each environment E_i, the observation distribution is defined as P_o(·|c_i). At test time, we are presented with a new environment, its corresponding context vector c, and a pair of start and goal observations o_start, o_goal. Our goal is to use the training data to build a planner Q_h(o_start, o_goal, c) and an h-horizon policy π_h. The planner's task is to generate a sequence of observations between o_start and o_goal, in which any two consecutive observations are reachable within h time steps. The policy takes as input the image sequence and outputs a control policy that transitions the system from o_start to o_goal. As the problem requires a full plan given only a context image in the new environment, the planner must be capable of zero-shot generalization. Note that the planner and policy form an interpretable planning method that allows us to evaluate their performance separately. For simplicity we will omit the subscript h for the planner and the policy. Semi-Parametric Topological Memory (SPTM) is a visual planning method that can be used to solve a special case of VPA, where there is only a single training environment E and no context image. SPTM builds a memory-based planner and an inverse-model controller. During training, a classifier R is trained to map two observation images o_i, o_j to a score ∈ [0, 1] representing the feasibility of the transition, where images that are ≤ h steps apart are labeled positive and images that are ≥ l steps apart are labeled negative. The policy is trained as an inverse model L, mapping a pair of observation images o_i, o_j to an appropriate action a that transitions the system from o_i to o_j. Given an unseen environment E*, new observations are manually collected and organized as nodes in a graph G. Edges in the graph connect observations o_i, o_j if R(o_i, o_j) ≥ s_shortcut, where s_shortcut is a manually defined threshold. To plan, given start and goal observations o_start and o_goal, SPTM first uses R to localize, i.e. find the closest nodes in G to o_start and o_goal. A path is found by running Dijkstra's algorithm, and the method then selects a waypoint o_wi on the path which represents the farthest observation that is still feasible under R. Since both the current localized state o_i and its waypoint o_wi are in the observation space, we can directly apply the inverse model and take the action a_i = L(o_i, o_wi). After localizing to the new observation state reached by a_i, SPTM repeats the process until the node closest to o_goal is reached.
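To make the SPTM procedure above concrete, the following sketch builds the graph by thresholding the classifier score (the R and s_shortcut of the text) and searches it for a visual plan. It is a simplified illustration rather than the original implementation: re-localization, waypoint selection, and the inverse-model control loop are omitted, and connectivity_score is a placeholder for the trained classifier.

import numpy as np
import networkx as nx

def build_sptm_graph(observations, connectivity_score, s_shortcut=0.95):
    # Connect nodes i and j whenever the classifier deems the transition feasible.
    g = nx.Graph()
    g.add_nodes_from(range(len(observations)))
    for i in range(len(observations)):
        for j in range(i + 1, len(observations)):
            if connectivity_score(observations[i], observations[j]) >= s_shortcut:
                g.add_edge(i, j)
    return g

def visual_plan(graph, observations, connectivity_score, o_start, o_goal):
    # Localize start/goal to their most similar nodes, then search the graph.
    start = int(np.argmax([connectivity_score(o_start, o) for o in observations]))
    goal = int(np.argmax([connectivity_score(o_goal, o) for o in observations]))
    return [observations[k] for k in nx.shortest_path(graph, start, goal)]

A dummy score such as lambda a, b: float(np.random.rand()) is enough to exercise these functions; in SPTM the score comes from the trained classifier R, and in HTM it is replaced by the contrastive energy model described in the next section.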
A Conditional Variational Autoencoder (CVAE) is a deep generative model that can be used for learning a high-dimensional conditional distribution P_o(·|c). The CVAE is trained by maximizing the evidence lower bound (ELBO)
L_ELBO = E_{q_φ(z|o,c)}[ log p_θ(o|z, c) ] − D_KL( q_φ(z|o, c) || r_ψ(z|c) ),
where q_φ(z|o, c) is the encoder that maps observations and contexts to the latent distribution, p_θ(o|z, c) is the decoder that maps latents and contexts to the observation distribution, and r_ψ(z|c) is the prior that maps contexts to latent prior distributions. Together, p_θ, q_φ, and r_ψ are trained to maximize the variational lower bound above. We assume that the prior and the encoder are Gaussian, which allows the D_KL term to be computed in closed form. Monte-Carlo sampling and the reparametrization trick are used to approximate the gradient of the loss. Contrastive Predictive Coding (CPC) extracts compact representations that maximize the causal and predictive aspects of high-dimensional sequential data. A non-linear encoder g_enc encodes the observation o_t to a latent representation z_t = g_enc(o_t). We maximize the mutual information between the latent representation z_t and the future observation o_{t+k} with a log-bilinear score model f_k(o_{t+k}, z_t). This model is trained to be proportional to the density ratio p(o_{t+k}|z_t)/p(o_{t+k}) by the CPC loss function: the cross-entropy loss of correctly classifying the positive sample from a set X = {o_1, ..., o_N} of N random samples, with 1 positive sample drawn from p(o_{t+k}|z_t) and N − 1 negative samples drawn from p(o_{t+k}):
L_CPC = − E_X [ log ( f_k(o_{t+k}, z_t) / Σ_{o_j ∈ X} f_k(o_j, z_t) ) ].
SPTM has been shown to solve long-horizon planning problems such as navigation from first-person view. However, SPTM is not zero-shot: even a small change to the training environment requires collecting substantial exploration data for building the planning graph. This can be a limitation in practice, especially in robotic domains, as any interaction with the environment requires robot time, and exploring a new environment can be challenging (indeed, the original SPTM work applied manual exploration). In addition, similarly to prior observations, we found that training the connectivity classifier as originally proposed requires extensive hyperparameter tuning. In this section, we propose an extension of SPTM to overcome these two challenges by employing three ideas: using a CVAE to hallucinate samples in a zero-shot setting, using a contrastive loss for a more robust score function and planner, and planning based on an approximate maximum-likelihood formulation of the shortest path under a uniform state distribution. We call this approach Hallucinative Topological Memory (HTM), and next detail each component in our method. We propose a zero-shot learning solution for automatically building the planning graph using only a context vector of the new environment. Our idea is that, after seeing many different environments and corresponding states of the system during training, given a new environment we should be able to effectively hallucinate possible system states. We can then use these hallucinations in lieu of real samples from the system in order to build the planning graph. To generate images conditioned on a context, we implement a CVAE as depicted in Figure 1. During training, we learn the prior latent distribution r_ψ(z|c), modeled as a Gaussian with mean µ(c) and covariance matrix Σ(c), where µ(·) and Σ(·) are learned non-linear neural network transformations. During testing, when prompted with a new context vector c, we can sample latent vectors z_1, ..., z_N | c ∼ N(µ(c), Σ(c)), and pass them through the decoder p_θ(x|z, c) to hallucinate samples in replacement of exploration data.
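For concreteness, a minimal PyTorch sketch of the CPC objective above is shown below. The bilinear parameterization follows the f_ψ(u, v) = exp(u^T ψ v) form given in the appendix (the exponent is absorbed by the softmax inside the cross-entropy); how the anchor, positive, and negative embeddings are assembled is assumed to happen outside this function and is not taken from the paper's code.

import torch
import torch.nn.functional as F

def cpc_loss(z_t, z_pos, z_neg, W):
    # z_t: (B, d) anchor embeddings; z_pos: (B, d) positives; z_neg: (B, N-1, d) negatives.
    # Bilinear logit u^T W v; the loss is the cross-entropy of picking the positive (class 0).
    pos = torch.einsum('bd,de,be->b', z_pos, W, z_t).unsqueeze(1)   # (B, 1)
    neg = torch.einsum('bnd,de,be->bn', z_neg, W, z_t)              # (B, N-1)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)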
A critical component in the SPTM method is the connectivity classifier that decides which image transitions are feasible. False positives may result in impossible short-cuts in the graph, while false negatives can make the plan unnecessarily long. In SPTM, the classifier was trained discriminatively, using observations in the data that were reached within h steps as positive examples, and observations more than l steps apart as negative examples, where h and l are chosen arbitrarily. In practice, this leads to three important problems. First, this method is known to be sensitive to the choice of positive and negative labeling. Second, training data are required to be long, non-cyclic trajectories for a high likelihood of sampling 'true' negative samples. However, self-supervised interaction data often resembles random walks that repeatedly visit a similar state, leading to inconsistent estimates of what constitutes negative data. Third, since the classifier is only trained to predict positively for temporally nearby images and negatively for temporally far away images, its predictions for medium-distance images can be arbitrary. This creates both false positives and false negatives, thereby increasing shortcuts and missing edges in the graph. To solve these problems, we propose to learn a connectivity score using a contrastive predictive loss. Similar to the CVAE, we initialize a CPC encoder g_enc that takes in both observation and context, and a density-ratio model f_k that does not depend on the context. Through optimizing the CPC objective, f_k of positive pairs is encouraged to be distinguishable from that of negative pairs. Thus, it serves as a proxy for the temporal distance between two observations, leading to a connectivity score for planning. Theoretically, the CPC loss is better motivated than the classification loss in SPTM as it structures the latent space on a clear objective: maximize the mutual information between current and future observations. In practice, this results in less hyperparameter tuning and a smoother distance manifold in the representation space. Finally, instead of only sampling from the same trajectory as done in SPTM, our negative data are collected by sampling from the latent space of a trained CVAE or the replay buffer. Without this trick, we found that the SPTM classifier fails to handle self-supervised data. Given a new context, in the first step we hallucinate possible observations with the CVAE, and in the second step we score every pair of hallucinated observations with f_k to obtain the connectivity between them. This score reflects the difficulty in transitioning to the next state from the current state by self-supervised exploration. The learned connectivity graph G can be viewed as a topological memory upon which we can use conventional graph planning methods to efficiently perform visual planning. In the third step, we find the shortest path using Dijkstra's algorithm on the learned connectivity graph G between the start and end node. In the fourth step, we apply our policy to follow the visual plan, reaching the next node in our shortest path and replanning every fixed number of steps until we reach ô_goal. For the policy, we train an inverse model which predicts actions given two observations that are within h steps apart. Maximum likelihood trajectory with Dijkstra's. We show that the CPC loss can be utilized to cast the planning problem as an inference problem, and that this results in an effective planning algorithm. After training the CPC objective to convergence, we have f_k(o_{t+k}, o_t) ∝ p(o_{t+k}|o_t)/p(o_{t+k}). To estimate p(o_{t+k}|o_t)/p(o_{t+k}), we compute the normalizing factor Σ_{o'∈V} f_k(o', o_t) for each o_t by aggregating over all nodes in the graph.
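A minimal sketch of the hallucination step used above (step one of the planning procedure) is given below: latent codes are sampled from the learned conditional prior and decoded into virtual observations that replace exploration data. The module interfaces (prior_net returning a mean and log-variance, decoder taking a latent and a context) are assumptions made for illustration, not the exact implementation.

import torch

@torch.no_grad()
def hallucinate(prior_net, decoder, context, num_samples):
    # Sample z_1..z_N ~ N(mu(c), Sigma(c)) and decode them into observations.
    # Assumes prior_net(context) -> (mu, log_var), each of shape (latent_dim,).
    mu, log_var = prior_net(context)
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn(num_samples, mu.shape[-1])
    ctx = context.expand(num_samples, *context.shape)  # tile the context per sample
    return decoder(z, ctx)  # hallucinated observations used as nodes of the graph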
Let us define our non-negative weight from o_t to o_{t+k} as
ω(o_t, o_{t+k}) = − log ( f_k(o_{t+k}, o_t) / Σ_{o'∈V} f_k(o', o_t) ),
i.e. the negative log of the connectivity score normalized over all candidate nodes. A shortest-path planning algorithm finds T, o_0, ..., o_T that minimizes Σ_{t=0}^{T−1} ω(o_t, o_{t+1}). Thus, assuming that the self-supervised data distribution is approximately uniform, the shortest-path algorithm with the proposed weight ω maximizes a lower bound on the trajectory likelihood given the start and goal states. In practice, this leads to a more stable planning approach and yields more feasible plans. Reinforcement Learning. Most of the study of data-driven planning has been under the model-free RL framework. However, the need to design a reward function, and the fact that the learned policy does not generalize to tasks that are not defined by the specific reward, has motivated the study of model-based approaches. Recent work investigated model-based RL from pixels on Mujoco and Atari domains, but did not study generalization to a new environment. Other work explored model-based RL with image-based goals using visual model predictive control (visual MPC). These methods rely on video prediction, and are limited in the planning horizon due to accumulating errors. In comparison, our method does not predict full trajectories but only individual images, mitigating this problem. Our method can also use visual MPC as a replacement for the visual servoing policy. Self-supervised learning. Several studies investigated planning goal-directed behavior from data obtained offline, e.g., by self-supervised robot interaction. Nair et al. used an inverse model to reach local sub-goals, but require human demonstrations of long-horizon plans. Wang et al. solve the visual planning problem using a conditional version of Causal InfoGAN. However, as training a GAN is unstable and requires tedious model selection, we opted for the CVAE-based approach, which is much more robust. Classical planning and representation learning. In the classical planning literature, task and motion planning also separates the high-level planning and the low-level controller. In these works, domain knowledge is required to specify preconditions and effects at the task level. Our approach only requires data collected through self-supervised interaction. Other studies bridge between classical planning and representation learning; these works, however, do not consider zero-shot generalization. While Srinivas et al. and Qureshi et al. learn representations that allow goal-directed planning in unseen environments, they require expert training trajectories. Ichter and Pavone also generalize motion planning to new environments, but require a collision checker and valid samples from test environments. Recent work in visual planning focused on real robotic tasks with visual input. While impressive, such results can be difficult to reproduce or compare. For example, it is not clear whether manipulating a rope with the PR2 robot is more or less difficult than manipulating a rigid object among many visual distractors. In light of this difficulty, we propose a suite of simulated tasks with an explicit difficulty scale and clear evaluation metrics. Our domains consider moving a rigid object between obstacles using Mujoco, and by varying the obstacle positions, we can control the planning difficulty. For example, placing the object in a cul-de-sac would require non-trivial planning compared to simply moving around an obstacle along the way to the goal. We thus create two domains, as seen in Figure 2: 1. Block wall: A green block navigates around a static red obstacle, which can vary in position. 2.
Block wall with complex obstacle: Similar to the above, but here the wall is a 3-link object which can vary in position, joint angles, and length, making the task significantly harder. With these domains, we aim to asses the following attributes: • Does HTM improve visual plan quality over state-of-the-art VP methods? • How does HTM execution success rate compare to state-of-the-art VP methods? • How well does HTM generalize its planning to unseen contexts? We discuss our evaluation metrics for these attributes in Section 5.1. To fully assess success of HTM relative to other state-of-the-art VP methods, we run these evaluation metrics on SPTM and Visual Foresight. In the first baseline, since vanilla SPTM cannot plan in a new environment, we use the same samples generated by the same CVAE as HTM, and then build the graph by assigning edge weights in the graph proportional to their exponentiated SPTM classifier score. 3 We also give it the same negative sampling proceedure as HTM. The same low-level controller is also used to follow the plans. In the second baseline, Visual Foresight trains a video prediction model, and then performs model predictive control (MPC) which finds the optimal action sequence through random shooting. For the random shooting, we used 3 iterations of the cross-entropy method with 200 sample sequences. The MPC acts for 10 steps and replans, where the planning horizon T is 15. We use the state-of-the-art video predictor as proposed by Lee et al. and the public code provided by the authors. For evaluating trajectories in random shooting, we studied two cost functions that are suitable for our domains: pixel MSE loss and green pixel distance. The pixel MSE loss computes the pixel distance between the predicted observations and the goal image. This provides a sparse signal when the object pixels in the plan can overlap with those of the goal. We also investigate a cost function that uses prior knowledge about the task -the position of the moving green block, which is approximated by calculating the center of mass of the green pixels. As opposed to pixel MSE, the green pixel distance provides a smooth cost function which estimates the normalized distance between the estimated block positions of the predicted observations and the goal image. Note that this assumes additional domain knowledge compared to HTM. We design a set of tests that measure both qualitative and quantitative performance of an algorithm. To motivate the need for qualitative metrics, we reiterate the importance of planning interpretability; it is highly desirable that the generated plan visually make sense so as to allow a human to approve of the plan prior to execution. Qualitative Visual plans have the essential property of being intuitive, in that the imagined trajectory is perceptually sensible. Since these qualities are highly subjective, we devised a set of tests to evaluate plans based on human visual perception. For each domain, we asked 5 participants to visually score 5 randomly generated plans from each model by answering the following questions: Fidelity: Does the pixel quality of the images resemble the training data?; Feasibility: Is each transition in the generated plan executable by a single action step?; and Completeness: Is the goal reachable from the last image in the plan using a single action? Answers were in the range, where 0 denotes No to the proposed question and 1 means Yes. The mean opinion score were calculated for each model. 
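The green-pixel-distance cost used for the Visual Foresight baseline above can be written in a few lines. The sketch below is an illustrative version, assuming float images in [0, 1] with an H×W×3 layout and a hand-picked greenness threshold; it is not the exact cost implementation used in the experiments.

import numpy as np

def green_center(img, thresh=0.3):
    # Center of mass of "green" pixels, normalized to [0, 1] coordinates.
    h, w, _ = img.shape
    mask = (img[..., 1] - np.maximum(img[..., 0], img[..., 2])) > thresh
    if not mask.any():
        return np.array([0.5, 0.5])
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean() / h, xs.mean() / w])

def green_pixel_cost(predicted_frames, goal_img):
    # Sum of distances between predicted and goal block positions (lower is better).
    goal = green_center(goal_img)
    return sum(np.linalg.norm(green_center(f) - goal) for f in predicted_frames)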
Quantitative In addition to generating visually sensible trajectories, a planning algorithm must also be able to successfully navigate towards a predefined goal. Thus, for each domain, we selected 20 start and goal images, each with an obstacle configuration unseen during training. Success was measured by the ability to get within some L2 distance to the goal in a n steps or less, where the distance threshold and n varied on the domain but was held constant across all models. A controller specified by the algorithm executed actions given an imagined trajectory, and replanning occurred every r steps. Specific details can be found in the Appendix D. As shown in Table 5.2, HTM outperforms all baselines in both qualitative and quantitative measurements across all domains. In the simpler block wall domain, Visual Foresight with green pixel distance only succeeds under the assumption of additional state information of the object's location. the other algorithms do not have. However, in the complex obstacle domain, Visual Foresight fails to perform comparably to our algorithm, regardless of the additional assumption. We also compared our method with SPTM, using the same inverse model and CVAE to imagine testing samples. However, without a robust classification loss and improved method of weighting the graph's edges, SPTM often fails to find meaningful transitions. In regards to perceptual evaluation, Visual Foresight generates realistic transitions, as seen by the high participant scores for feasibility. However, the algorithm is limited in creating a visual plan within the optimal T = 15 timesteps. 4 Thus, when confronted with a challenging task of navigating around a convex shape where the number of timesteps required exceeds T, Visual Foresight fails to construct a reliable plan (see Figure 3), and thus lacks plan completeness. Conversely, SPTM is able to imagine some trajectory that will reach the goal state. However, as mentioned above and was confirmed in the perceptual scores, SPTM fails to select feasible transitions, such as imagining a trajectory where the block will jump across the wall or split into two blocks. Our approach, on the other hand, received the highest scores of fidelity, feasibility, and completeness. Finally, we show in Figure 3 the of our two proposed improvements to SPTM in isolation. The clearly show that a classifier using contrastive loss outperforms that which uses Binary Cross Entropy (BCE) loss, and furthermore that the inverse of the score function for edge weighting is more successful than the best tuned version of binary edge weights. Table 1: Qualitative and quantitative evaluation for the the block wall and block wall with complex obstacle domains. Qualitative data also displays the 95% confidence interval. Note HTM refers to edge weighting using the energy model, and is weighting using the density ratio, as described in 3.3. For the score function, we denote the energy model structured with contrastive loss as CPC and the classifier as proposed in with BCE loss as SPTM. For the edge weighting function, we test the binary edge weighting from the original SPTM paper, the inverse of the score function, and the inverse of the normalized score function. We propose a method that is visually interpretable and modular -we first hallucinate possible configurations, then compute a connectivity between them, and then plan. Our HTM can generalize to unseen environments and improve visual plan quality and execution success rate over state-of-the-art VP methods. 
Our suggest that combining classical planning methods with data-driven perception can be helpful for long-horizon visual planning problems, and takes another step in bridging the gap between learning and planning. In future work, we plan to combine HTM with Visual MPC for handling more complex objects, and use object-oriented planning for handling multiple objects. Another interesting aspect is to improve planning by hallucinating samples conditioned on the start and goal configurations, which can help reduce the search space during planning. In this section, we assume the dataset as described in VPA, D = {o . There are two ways of learning a model to distinguish the positive from the negative transitions. Classifier: As noted above, SPTM first trains a classifier which distinguishes between an image pair that is within h steps apart, and the images that are far apart using random sampling. The classifier is used to localize the current image and find possible next images for planning. In essence, the classifier contains the encoder g θ that embeds the observation x and the the score function f that takes the embedding of each image and output the logit for a sigmoid function. The binary cross entropy loss of the classifier can be written as follows:, where z − t is a random sample from D. Energy model: Another form of discriminating the the positive transition out of negative transitions is through an energy model. Oord et al. learn the embeddings of the current states that are predictive of the future states. Let g be an encoder of the input x and z = g θ (x) be the embedding. The loss function can be described as a cross entropy loss of predicting the correct sample from N + 1 samples which contain 1 positive sample and N negative samples:, where f ψ (u, v) = exp (u T ψv) and z Figure 6: Sample observations (top) and contexts (bottom). In this domain, an object can be translated and rotated (SE) slightly per timestep. The data are collected from 360 different object shapes with different number of building blocks between 3 to 7. Each object is randomly initialized 50 times and each episode has length 30. The goal is to plan a manipulation of an unseen object through the narrow gap between obstacles in zero-shot. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BkgF4kSFPB | We propose Hallucinative Topological Memory (HTM), a visual planning algorithm that can perform zero-shot long horizon planning in new environments. |
Deep Neural Networks (DNNs) thrive in recent years in which Batch Normalization (BN) plays an indispensable role. However, it has been observed that BN is costly due to the reduction operations. In this paper, we propose alleviating the BN’s cost by using only a small fraction of data for mean & variance estimation at each iteration. The key challenge to reach this goal is how to achieve a satisfactory balance between normalization effectiveness and execution efficiency. We identify that the effectiveness expects less data correlation while the efficiency expects regular execution pattern. To this end, we propose two categories of approach: sampling or creating few uncorrelated data for statistics’ estimation with certain strategy constraints. The former includes “Batch Sampling (BS)” that randomly selects few samples from each batch and “Feature Sampling (FS)” that randomly selects a small patch from each feature map of all samples, and the latter is “Virtual Dataset Normalization (VDN)” that generates few synthetic random samples. Accordingly, multi-way strategies are designed to reduce the data correlation for accurate estimation and optimize the execution pattern for running acceleration in the meantime. All the proposed methods are comprehensively evaluated on various DNN models, where an overall training speedup by up to 21.7% on modern GPUs can be practically achieved without the support of any specialized libraries, and the loss of model accuracy and convergence rate are negligible. Furthermore, our methods demonstrate powerful performance when solving the well-known “micro-batch normalization” problem in the case of tiny batch size. Recent years, Deep Neural Networks (DNNs) have achieved remarkable success in a wide spectrum of domains such as computer vision BID16 and language modeling BID4. The success of DNNs largely relies on the capability of presentation benefit from the deep structure BID5. However, training a deep network is so difficult to converge that batch normalization (BN) has been proposed to solve it BID14. BN leverages the statistics (mean & variance) of mini-batches to standardize the activations. It allows the network to go deeper without significant gradient explosion or vanishing BID23 BID14. Moreover, previous work has demonstrated that BN enables the use of higher learning rate and less awareness on the initialization BID14, as well as produces mutual information across samples BID21 or introduces estimation noises BID2 for better generalization. Despite BN's effectiveness, it is observed that BN introduces considerable training overhead due to the costly reduction operations. The use of BN can lower the overall training speed (mini second per image) by >45%, especially in deep models. To alleviate this problem, several methods were reported. Range Batch Normalization (RBN) BID1 accelerated the forward pass by estimating the variance according to the data range of activations within each batch. A similar approach, L 1 -norm BN (L1BN), simplified both the forward and backward passes by replacing the L 2 -norm variance with its L 1 -norm version and re-derived the gradients for backpropagation (BP) training. Different from the above two methods, Self-normalization BID15 provided another solution which totally eliminates the need of BN operation with an elaborate activation function called "scaled exponential linear unit" (SELU). SELU can automatically force the activation towards zero mean and unit variance for better convergence. 
Nevertheless, all of these methods are not sufficiently effective. The strengths of L1BN & RBN are very limited since GPU has sufficient resources to optimize the execution speed of complex arithmetic operations such as root for the vanilla calculation of L 2 -norm variance. Since the derivation of SELU is based on the plain convolutional network, currently it cannot handle other modern structures with skip paths like ResNet and DenseNet. In this paper, we propose mitigating BN's computational cost by just using few data to estimate the mean and variance at each iteration. Whereas, the key challenge of this way lies at how to preserve the normalization effectiveness of the vanilla BN and improve the execution efficiency in the meantime, i.e. balance the effectiveness-efficiency trade-off. We identify that the effectiveness preservation expects less data correlation and the efficiency improvement expects regular execution pattern. This observation motivates us to propose two categories of approach to achieve the goal of effective and efficient BN: sampling or creating few uncorrelated data for statistics' estimation with certain strategy constraints. Sampling data includes "Batch Sampling (BS)" that randomly selects few samples from each batch and "Feature Sampling (FS)" that randomly selects a small patch from each feature map (FM) of all samples; creating data means "Virtual Dataset Normalization (VDN)" that generates few synthetic random samples, inspired by BID22. Consequently, multi-way strategies including intra-layer regularity, inter-layer randomness, and static execution graph during each epoch, are designed to reduce the data correlation for accurate estimation and optimize the execution pattern for running acceleration in the meantime. All the proposed approaches with single-use or joint-use are comprehensively evaluated on various DNN models, where the loss of model accuracy and convergence rate is negligible. We practically achieve an overall training speedup by up to 21.7% on modern GPUs. Note that any support of specialized libraries is not needed in our work, which is not like the network pruning BID32 or quantization BID12 requiring extra library for sparse or low-precision computation, respectively. Most previous acceleration works targeted inference which remained the training inefficient BID26 BID20 BID19 BID31 BID9, and the rest works for training acceleration were orthogonal to our approach BID7 BID29. Additionally, our methods further shows powerful performance when solving the well-known "micro-batch normalization" problem in the case of tiny batch sizes. In summary, the major contributions of this work are summarized as follows.• We propose a new way to alleviate BN's computational cost by using few data to estimate the mean and variance, in which we identify that the key challenge is to balance the normalization effectiveness via less data correlation and execution efficiency via regular execution pattern.• We propose two categories of approach to achieve the above goal: sampling (BS/FS) or creating (VDN) few uncorrelated data for statistics' estimation, in which multi-way strategies are designed to reduce the data correlation for accurate estimation and optimize the execution pattern for running acceleration in the meantime. 
The approaches can be used alone or jointly. • Various benchmarks are evaluated, on which up to 21.7% practical acceleration is achieved for overall training on modern GPUs with negligible accuracy loss and without specialized library support. • Our methods are also extended to the micro-BN problem and achieve advanced performance. In order to make this paper easier to understand, we present the organization of the whole paper in FIG0.
The activations in one layer for normalization can be described by a d-dimensional activation feature x = (x^{(1)}, ..., x^{(d)}), where for each feature we have x^{(k)} = (x^{(k)}_1, ..., x^{(k)}_m). Note that in a convolutional (Conv) layer, d is the number of FMs and m equals the number of points in each FM across all the samples in one batch; while in a fully-connected (FC) layer, d and m are the neuron number and batch size, respectively. BN uses the statistics (mean E[x^{(k)}] & variance Var[x^{(k)}]) of the intra-batch data for each feature to normalize the activation by
x̂^{(k)} = (x^{(k)} − E[x^{(k)}]) / sqrt(Var[x^{(k)}] + ε),   y^{(k)} = γ^{(k)} x̂^{(k)} + β^{(k)},   (1)
where γ^{(k)}, β^{(k)} are trainable parameters introduced to recover the representation capability, ε is a small constant to avoid numerical error, and
E[x^{(k)}] = (1/m) Σ_{j=1}^{m} x^{(k)}_j,   Var[x^{(k)}] = (1/m) Σ_{j=1}^{m} (x^{(k)}_j − E[x^{(k)}])^2.   (2)
The detailed operations of a BN layer in the backward pass can be found in Appendix C.
FIG0: (a) iterations per second for the models in TAB3 with and without BN; (b) usual optimization of the reduction operation using an adder tree; (c) the computational graph of BN in the forward pass (upper) and backward pass (lower); (d) the computational graph of BN using few data for statistics' estimation in the forward pass (upper) and backward pass (lower). x denotes neuronal activations, µ and σ denote the mean and standard deviation of x within one batch, respectively, and Σ denotes the summation operation.
From FIG0, we can see that adding BN will significantly slow down the training speed (iterations per second) by 32%-43% on ImageNet. The reason why BN is costly is that it contains several "reduction operations", i.e. Σ_{j=1}^{m}. We offer more thorough data analysis in Appendix E. If the reduction operations are not optimized, their computational complexity is O(m). With the optimized parallel algorithm proposed in BID3, the reduction operation is transformed to cascaded adders of depth log(m), as shown in FIG0. However, the computational cost is still high since we usually have m larger than one million. As shown in FIG0, the red "Σ"s represent operations that contain summations, which cause the BN inefficiency. Motivated by the above analysis, decreasing the effective value of m at each time for statistics estimation seems a promising way to reduce the BN cost for achieving acceleration. To this end, we propose using few data to estimate the mean and variance at each iteration. For example, if m changes to a much smaller value of s, equation (2) can be modified as
E_s[x^{(k)}] = (1/s) Σ_{j=1}^{s} x^{(k)}_{s,j},   Var_s[x^{(k)}] = (1/s) Σ_{j=1}^{s} (x^{(k)}_{s,j} − E_s[x^{(k)}])^2,   (3)
where x^{(k)}_s denotes the small fraction of data, s is the actual number of data points, and we usually have s ≪ m. Here we denote s/m as the Sampling Ratio (it covers both the cases of sampling and creating few data in Section 3.1 and 3.2, respectively). Since the reduction operations in the backward pass can be parallelized, whereas in the forward pass the variance cannot be calculated until the mean is provided (which makes the forward pass nearly twice as slow as the backward pass), we just use few data in the forward pass. The computational graph of BN using few data is illustrated in FIG0. The key is how to estimate E[x^{(k)}] & Var[x^{(k)}] for each neuron or FM within one batch with much fewer data.
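As a minimal illustration of equation (3) in code (not the optimized kernels behind the reported speedups), the sketch below estimates per-channel statistics from a sampled subset of a batched activation tensor and normalizes the full tensor with them; the (N, C, H, W) layout and the index argument are assumptions made for illustration.

import torch

def sampled_bn_forward(x, sample_idx, gamma, beta, eps=1e-5):
    # x: (N, C, H, W) activations; sample_idx: indices of the few selected samples.
    # Statistics are estimated over the sampled slice only, then applied to all of x.
    xs = x[sample_idx]                                   # (s, C, H, W), s << N
    mean = xs.mean(dim=(0, 2, 3), keepdim=True)          # E_s[x] per channel
    var = xs.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)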
The influence on the backward pass is discussed in Appendix C. Although using few data can reduce the BN's cost dramatically, it will meet an intractable challenge: how to simultaneously preserve normalization effectiveness and improve the execution efficiency. On one side, using few data to estimate the statistics will increase the estimation error. By regarding the layers with high estimation error as unnormalized ones, we did contrast test in FIG1 . The mean & variance will be scaled up exponentially as network deepens, which causes the degradation of BN's effectiveness. This degradation can be recovered from two aspects.• Intra-layer. For the reason that the estimation error is not only determined by the amount of data but also the correlation between them, we can sample less correlated data within each layer to improve the estimation accuracy.• Inter-layer. As depicted in by FIG1, the intermittent BN configuration (i.e. discontinuously adding BN in different layers) can also prevent the statistics scaling up across layers. This motivates us that as long as layers with high estimation error are discontinuous, the statistics shift can still be constrained to a smaller range. Therefore, to reduce the correlation between estimation errors in different layers can also be beneficial to improve the accuracy of the entire model, which can be achieved by sampling less correlated data between layers. On the other side, less data correlation indicates more randomness which usually causes irregular memory access irregular degrading the running efficiency. In this paper, we recognize that the overhead of sampling can be well reduced by using regular and static execution patterns, which is demonstrated with ablation study at Fig In a nutshell, careful designs are needed to balance the normalization effectiveness via less data correlation and the execution efficiency via regular execution pattern. Only in this way, it is possible to achieve practical acceleration with little accuracy loss, which is our major target in this work. Based on the above analysis, we summarize the design considerations as follows.• Using few data for statistics' estimation can effectively reduce the computational cost of BN operations. Whereas, the effectiveness-efficiency trade-off should be well balanced.• Less data correlation is promising to reduce the estimation error and then guarantees the normalization effectiveness.• More regular execution pattern is expected for efficient running on practical platforms. To reach the aforementioned goal, we propose two categories of approach in this section: sampling or creating few uncorrelated data for statistics' estimation. Furthermore, multi-way strategies to balance the data correlation and execution regularity. Here "sampling" means to sample a small fraction of data from the activations at each layer for statistics' estimation. As discussed in the previous section, mining uncorrelated data within and between layers is critical to the success of sampling-based BN. However, the correlation property of activations in deep networks is complex and may vary across network structures and datasets. Instead, we apply a hypothesis-testing approach in this work. We first make two empirical assumptions:• Hypothesis 1. Within each layer, data belonging to the different samples are more likely to be uncorrelated than those within the same sample.• Hypothesis 2. Between layers, data belonging to different locations and different samples are less likely to be correlated. 
Here "location" means coordinate within FMs. These two assumptions are based on the basic nature of real-world data and the networks, thus they are likely to hold in most situations. They are further evaluated through experiments in Section 4.1.Based on above hypotheses, we propose two uncorrelated-sampling strategies: Batch Sampling (BS) and Feature Sampling (FS). The detailed Algorithms can be found in Alg. 1 and 2, respectively, Appendix A.• BS FIG2 ) randomly selects few samples from each batch for statistics' estimation. To reduce the inter-layer data correlation, it selects different samples across layers following Hypothesis 2.• FS FIG2 ) randomly selects a small patch from each FM of all samples for statistics' estimation. Since the sampled data come from all the samples thus it has lower correlation within each layer following Hypothesis 1. Furthermore, it samples different patch locations across layers to reduce the inter-layer data correlation following Hypothesis 2.A Naive Sampling (NS) is additionally proposed as a comparison baseline, as shown in FIG2. NS is similar to BS while the sampling index is fixed across layers, i.e. consistently samples first few samples within each batch. Regular and Static Sampling. In order to achieve practical acceleration on GPU, we expect more regular sampling pattern and more static sampling index. Therefore, to balance the estimation effectiveness and execution efficiency, we carefully design the following sampling rules: In BS, the selected samples are continuous and the sample index for different channels are shared, while they are independent across layers; In FS, the patch shape is rectangular and the patch location is shared by different channels and samples within each layer but variable as layer changes. Furthermore, all the random indexes are updated only once for each epoch, which guarantees a static computational graph during the entire epoch. Instead of sampling uncorrelated data, another plausible solution is to directly create uncorrelated data for statistics' estimation. We propose Virtual Dataset Normalization (VDN) 2 to implement it, as illustrated in FIG2. VDN is realized with the following three steps: calculating the statistics of the whole training dataset offline; generating s virtual samples 3 at each iteration to concatenate with the original real inputs as the final network inputs. using data from only virtual samples at each layer for statistics' estimation. Due to the independent property of the synthesized data, they are more uncorrelated than real samples thus VDN can produce much more accurate estimation. The detailed implementation algorithm can be found in Alg. 3, Appendix A. The sampling approach and the creating approach can be used either in a single way or in a joint way. Besides the single use, a joint use can be described as DISPLAYFORM0 where x s denotes the sampled real data while x v represents the created virtual data. β is a controlling variable, which indicates how large the sampled data occupy the whole data for statistics' estimation within each batch: when β = 0 or 1, the statistics come from single approach (VDN or any sampling strategy); when β ∈, the final statistics are a joint value as shown in equation 4. A comparison between different using ways is presented in TAB2, where "d.s." denotes "different samples"; "d.l." stands for "different locations", and "g.i." indicates "generated independent data". 
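To make the sampling rules above concrete, a small sketch of how the per-epoch indices could be drawn is given below: the pattern is regular within each layer (a contiguous block of samples for BS, one shared rectangular patch for FS) but varies across layers, and it is re-drawn only once per epoch. The helper name, n_samples, and patch_frac are illustrative assumptions, not the exact settings used in the paper.

import numpy as np

def make_epoch_indices(num_layers, batch_size, fmap_sizes, n_samples=4, patch_frac=1/8):
    # Drawn once per epoch and kept static, as required for efficient execution.
    rng = np.random.default_rng()
    bs_idx, fs_idx = [], []
    for l in range(num_layers):
        start = rng.integers(0, batch_size - n_samples + 1)
        bs_idx.append(slice(start, start + n_samples))            # BS: contiguous samples
        h, w = fmap_sizes[l]
        ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        fs_idx.append((slice(top, top + ph), slice(left, left + pw)))  # FS: patch location
    return bs_idx, fs_idx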
Compared to NS, BS reduces the inter-layer correlation via selecting different samples across layers; FS reduces both the intra-layer and inter-layer correlation via using data from all samples within each layer and selecting different locations across layers, respectively. Though VDN has a similar inter-layer correlation with NS, it slims the intra-layer correlation with strong data independence. A combination of BS/FS and VDN can inherit the strength of different approaches, thus achieve much lower accuracy loss. Experimental Setup. All of our proposed approaches are validated on image classification task using CIFAR-10, CIFAR-100 and ImageNet datasets from two perspectives: effectiveness evaluation and efficiency execution. To demonstrate the scalability and generality of our approaches on deep networks, we select ResNet-56 on CIFAR-10 & CIFAR-100 and select ResNet-18 and DenseNet-121 on ImageNet 4. The model configuration can be found in TAB3. The means and variances for BN are locally calculated in each GPU without inter-GPU synchronization as usual. We denote our approaches as the format of "Approach-Sampled size/Original size-sampling ratio(%)". For instance, if we assume batch size is 128, "BS-4/128-3.1%" denotes only 4 samples are sampled in BS and the sampling ratio equals to 4 128 = 3.1%. Similarly, "FS-1/32-3.1%" implies a 1 32 = 3.1% patch is sampled from each FM, and "VDN-1/128-0.8%" indicates only one virtual sample is added. The traditional BN is denoted as "BN-128/128-100.0%". Other experimental configurations can be found in Appendix B. Convergence Analysis. FIG3 shows the top-1 validation accuracy and confidential interval of ResNet-56 on CIFAR-10 and CIFAR-100. On one side, all of our approaches can well approximate the accuracy of normal BN when the sampling ratio is larger than 2%, which evidence their effectiveness. On the other side, all the proposed approaches perform better than the NS baseline. In particular, FS performs best, which is robust to the sampling ratio with negligible accuracy loss (e.g. at sampling ratio=1.6%, the accuracy degradation is -0.087% on CIFAR-10 and +0.396% on CIFAR-100). VDN outperforms BS and NS with a large margin in extremely small sampling ratio (e.g. 0.8%), whereas the increase of virtual batch size leads to little improvement on accuracy. BS is constantly better than NS. Furthermore, an interesting observation is that the BN sampling could even achieve better accuracy sometimes, such as NS-8/128(72.6±1.5%), BS-8/128(72.3±1.5%), and FS-1/64(71.2±0.96%) against the baseline (70.8±1%) on CIFAR-100. FIG4 further shows the training curves of ResNet-56 on CIFAR-10 under different approaches. It reveals that FS and VDN would not harm the convergence rate, while BS and NS begin to degrade the convergence when the sampling ratio is smaller than 1.6% and 3.1%, respectively. TAB4 shows the top-1 validation error on ImageNet under different approaches. With the same sampling ratio, all the proposed approaches significantly outperform NS, and FS surpasses VDN and BS. Under the extreme sampling ratio of 0.78%, NS and BS don't converge. Due to the limitation of FM size, the smallest sampling ratio of FS is 1.6%, which has only -0.5% accuracy loss. VDN can still achieve relatively low accuracy loss (1.4%) even if the sampling ratio decreases to 0.78%. This implies that VDN is effective for normalization. Moreover, by combining FS-1/64 and VDN-2/128, we get the lowest accuracy loss (-0.2%). 
This further indicates that VDN can be combined with other sampling strategies to achieve better . Since training DenseNet-121 is time-consuming, we just report the with FS/BS-VDN joint use. Although DenseNet-121 is more challenging than ResNet-18 due to the much deeper structure, the "FS-1/64 + VDN-2/64" can still achieve very low accuracy loss (-0.6%). "BS-1/64 + VDN-2/64" has a little higher accuracy loss, whereas it still achieves better than NS. In fact, we observed gradient explosion if we just use VDN on very deep network (i.e. DenseNet-121), which can be conquered through jointly applying VDN and other proposed sampling approach (e.g. FS+VDN). FIG5 illustrates the training curves for better visualization of the convergence. Except for the BS with extremely small sampling ratio (0.8%) and NS, other approaches and configurations can achieve satisfactory convergence. Here, we further evaluate the fully random sampling (FRS) strategy, which samples completely random points in both the batch and FM dimensions. We can see that FRS is less stable compared with our proposed approaches (except the NS baseline) and achieves much lower accuracy. One possible reason is that under low sampling ratio, the sampled data may occasionally fall into the worse points, which lead to inaccurate estimation of the statistics. Correlation Analysis. In this section, we bring more empirical analysis to the data correlation that affects the error of statistical estimation. Here we denote the estimation errors at l th layer as E DISPLAYFORM0 s are the estimated mean & variance from the sampled data (including the created data in VDN) while µ (l) & σ (l) are the ground truth from the vanilla BN for the whole batch. The analysis is conducted on ResNet-56 over CIFAR-10. The estimation errors of all layers are recorded throughout the first training epoch. FIG6 and FIG7 present the distribution of estimation errors for all layers and the inter-layer correlation between estimation errors, respectively. In FIG6, FS demonstrates the least estimation error within each layer which implies its better convergence. The estimation error of VDN seems similar to BS and NS here, but we should note that it uses much lower sampling ratio of 0.8% compared to others of 3.1%. FIG7, it can be seen that BS presents obviously less inter-layer correlation than NS, which is consistent with previous experimental that BS can converge better than NS even though they have similar estimation error as shown in FIG6. For FS and VDN, although it looks like they present averagely higher correlations, there exist negative corrections which effectively improve the model accuracy. Moreover, FS produces better accuracy than NS and BS since its selected data come from all the samples with less correlation. After the normalization effectiveness evaluation, we will evaluate the execution efficiency which is the primary motivation. FIG8 shows the BN speedup during training and overall training improvement. In general, BS can gain higher acceleration ratio because it doesn't incur the fine-grained sampling within FMs like in FS and it doesn't require the additional calculation and concatenation of the virtual samples like in VDN. As for FS, it fails to achieve speedup on CIFAR-10 due to the small image size that makes the reduction of operations unable to cover the sampling overhead. The proposed approaches can obtain up to 2x BN acceleration and 21.8% overall training acceleration. 
TAB5 gives more results on ResNet-18 using a single approach and DenseNet-121 using joint approaches. On ResNet-18, we perform much faster training compared with two recent methods for BN simplification BID1. On ResNet-18 we can achieve up to 16.5% overall training speedup under BS; on very deep networks with more BN layers, such as DenseNet-121, the speedup is more significant, reaching 23.8% under the "BS+VDN" joint approach. "FS+VDN" is a little slower than "BS+VDN" since the latter has a more regular execution pattern, as aforementioned. Nonetheless, on a very deep model, we still recommend the "FS+VDN" version because it can preserve the accuracy better. The relationship between sampling ratio and overall training speedup is presented in FIG0, which illustrates that 1) BS & FS can still achieve considerable speedup with a moderate sampling ratio; 2) BS can achieve more significant acceleration than FS, due to its more regular execution pattern. It is worth noting that our training speedup is practically obtained on modern GPUs without the support of any specialized library, which makes it easy to use. FIG8 (c) reveals that the regular execution pattern can significantly help us achieve practical speedup. BN has been applied in most state-of-the-art DNN models BID8 BID24 since it was proposed. As aforementioned, BN standardizes the activation distribution to reduce the internal covariate shift. Models with BN have been demonstrated to converge faster and generalize better BID14 BID21. Recently, a model called Decorrelated Batch Normalization (DBN) was introduced, which not only standardizes but also whitens the activations with ZCA whitening BID11. Although DBN further improves the normalization performance, it introduces significant extra computational cost. Simplifying BN has been proposed to reduce BN's computational complexity. For example, L1BN and RBN BID1 replace the original L2-norm variance with an L1-norm version and the range of activation values, respectively. From another perspective, Self-normalization uses the customized activation function (SELU) to automatically shift the activation's distribution BID15. However, as mentioned in the Introduction, all of these methods fail to obtain a satisfactory balance between effective normalization and computational cost, especially on large-scale modern models and datasets. Our work attempts to address this issue. Motivated by the importance but high cost of the BN layer, we propose using few data to estimate the mean and variance for training acceleration. The key challenge towards this goal is how to balance the normalization effectiveness with much less data for statistics' estimation and the execution efficiency with irregular memory access. To this end, we propose two categories of approach: sampling (BS/FS) or creating (VDN) few uncorrelated data, which can be used alone or jointly. Specifically, BS randomly selects few samples from each batch, FS randomly selects a small patch from each FM of all samples, and VDN generates few synthetic random samples. Then, multi-way strategies including intra-layer regularity, inter-layer randomness, and static execution graph are designed to reduce the data correlation and optimize the execution pattern in the meantime. Comprehensive experiments evidence that the proposed approaches can achieve up to 21.7% overall training acceleration with negligible accuracy loss. In addition, VDN can also be applied to the micro-BN scenario with advanced performance.
This paper preliminarily demonstrates the effectiveness and efficiency of BN using few data for statistics estimation. We emphasize that the training speedup is practically achieved on modern GPUs, and we do not need any support from specialized libraries, which makes the approach easy to use. Developing specialized kernel optimizations deserves further investigation for more aggressive execution benefits. Notations. We use the Conv layer for illustration, which occupies the major part of most modern networks BID8 BID10. The batched features can be viewed as a 4D tensor. We use "E_{0,1,2}" and "Var_{0,1,2}" to represent the operations that calculate the means and variances, respectively, where "0, 1, 2" denotes the dimensions for reduction. Algorithm 1: BS/NS Algorithm. Data: input batch at layer l. For ep ∈ all epochs, for l ∈ all layers: if BS, draw begin_l = randint(0, N − n_s); else (NS) keep begin_l fixed; the statistics are then computed over the selected n_s samples. Algorithm 2: FS Algorithm. Data: input batch at layer l. For ep ∈ all epochs, for l ∈ all layers: draw a patch location begin_l and compute the statistics over the sampled patches. All the experiments on CIFAR-10 & CIFAR-100 are conducted on a single Nvidia Titan Xp GPU. We use a weight decay of 0.0002 for all weight layers and all models are trained for 130 epochs. The initial learning rate is set to 0.1 and decreased by 10x at epochs 50, 80, and 110. During training, we adopt random left-right flipping and all the input images are randomly cropped to 32 × 32. Each model is trained from scratch 5 times in order to reduce random variation. For ImageNet, we use 2 Nvidia Tesla V100 GPUs on a DGX station for ResNet-18 and 3 for DenseNet-121. We use a weight decay of 0.0001 for all weight layers and all models are trained for 100 epochs. The initial learning rate is set to 0.1/256 × "gradient batch size" and we decrease the learning rate by 10x at epochs 30, 60, 80, and 90. During training, all the input images are augmented by random flipping and cropped to 224 × 224. We evaluate the top-1 validation error on the validation set using the centered crop of each image. To reduce random variation, we use the average of the last 5 epochs to represent the final error rate. Besides, Winograd BID17 is applied in all models to speed up training. Compute Cycle and Memory Access. Our proposed approaches can effectively speed up the forward pass. Under the condition that we use s ≪ m data to estimate the statistics for each feature, the total accumulation operations are significantly reduced from m − 1 to s − 1. If using the adder-tree optimization illustrated in Section 2.1, the tree's depth can be reduced from log(m) to log(s). Thus, the theoretical compute speedup for the forward pass can reach log_s(m) times. For instance, if the FM size is 56 × 56 with a batch size of 128 (m = 56 × 56 × 128), under a sampling ratio of 1/32, the compute speedup will be 36.7%. The total memory access is reduced by m/s times. For example, when the sampling ratio is 1/32, only 3.1% of the data need to be visited. This also contributes considerably to the overall speedup. Speedup in the Backward Pass. The BN operations in the forward pass have been shown in equations FORMULA3-FORMULA7. Based on the chain rule, we can derive the corresponding operations in the backward pass. Figure 11: Influence of the decay rate for the moving average. An analysis of the moving-average update suggests that an appropriately smaller decay rate α might scale down the estimation error and thus produce better validation accuracy. To verify this prediction, experiments are conducted on ResNet-56 over CIFAR-10 using BS-1/128 (0.78%) sampling.
As shown in FIG0, it is obvious that there exists a best decay rate setting (here, 0.7), whereas the popular decay rate is 0.9. The performance also decays when the decay rate is smaller than 0.7, because a too small α may lose the capability to record the moving estimation and thus degrade the validation accuracy. This is interesting because the decay rate is usually ignored by researchers, but the default value is probably not the best setting in our context. Figure 12: BN network for profiling. To further demonstrate that reduction operations are the major bottleneck of BN, we build a pure BN network for profiling as shown in FIG0. The network's input is a variable initialized following a standard normal distribution. The network is trained for 100 iterations and the training time is recorded. We overwrite the original BN code and remove the reduction operations in both the forward pass and backward pass as a contrast test. We use three different GPUs, K80, Titan Xp, and Tesla V100, to improve reliability. The results are shown in TAB7. We can see that on all three GPUs, reduction operations take up more than 60% of the entire operation time in BN. As a result, it is fair to argue that the reduction operations are the bottleneck of BN. Micro-BN aims to alleviate the diminishing of BN's effectiveness when the amount of data in each GPU node is too small to provide a reliable estimation of the activation statistics. Previous work can be classified into two categories: Sync-BN BID30 and Local-BN BID0 BID28 BID13. The former addresses this problem by synchronizing the estimations from different GPUs at each layer, which induces significant inter-GPU data dependency and slows down the training process. The latter solves this problem by either avoiding the use of the batch dimension in the batched activation tensor for statistics or using additional information beyond the current layer to calibrate the statistics. In terms of preserving BN's efficiency, we face a similar problem to micro-BN, and thus our framework can be further extended to the micro-BN scenario. In Sync-BN: with FS, each GPU node executes the same patch sampling as normal FS; with BS, we can randomly select the statistics from a fraction of GPUs rather than all nodes; with VDN, the virtual samples can be fed into a single GPU or a few GPUs. The first option just simplifies the computational cost within each GPU, while the last two further optimize the inter-GPU data dependency. In Local-BN, since the available data for each GPU is already tiny, the BN sampling strategy will be invalid. Fortunately, VDN can still be effective by feeding virtual samples into each node. Experiments. The normalization in Sync-BN is based on the statistics from multiple nodes through synchronization, which is equivalent to that in FIG5 with a large batch size for each node. Therefore, to avoid repetition, here we just show the results on Local-BN with the VDN optimization. We let the overall batch size of 256 break down into 64 workers (each one has only 4 samples for local normalization). We use "(gradient batch size, statistics batch size)" to denote the configuration. A baseline with BN and one previous work, Group Normalization (GN) BID28, are used for comparison. As shown in FIG0, although the reduction of batch size degrades the model accuracy, our VDN can achieve slightly better results (top-1 validation error rate: 30.88%) than GN (top-1 validation error rate: 30.96%), an advanced technique for this scenario with tiny batch size.
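For reference, the moving-statistics update whose decay rate is examined above can be written in a couple of lines; this sketch is ours and assumes the convention (implied by the discussion above) in which the decay rate weights the running estimate:

```python
def update_running_stats(running_mean, running_var, batch_mean, batch_var, decay=0.7):
    """Exponential moving average of BN statistics.

    `decay` weights the running estimate, so decay=0.9 is the common default,
    while a smaller value (e.g. 0.7) reacts faster to the noisier sampled
    per-batch estimates at the cost of a shorter memory.
    """
    new_mean = decay * running_mean + (1.0 - decay) * batch_mean
    new_var = decay * running_var + (1.0 - decay) * batch_var
    return new_mean, new_var
```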
This promises the correct training of very large models, where each single GPU node can only accommodate several samples. FIG0: Illustration of the paper's organization. The Green and Purple markers (round circles with a star in the center) represent whether the effectiveness is preserved by reducing inter-layer or intra-layer correlation (Green: inter-layer; Purple: intra-layer). Moreover, the consideration of a regular & static execution pattern is applied to all approaches. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HkGsHj05tQ | We propose accelerating Batch Normalization (BN) through sampling less correlated data for reduction operations with regular execution pattern, which achieves up to 2x and 20% speedup for BN itself and the overall training, respectively. |
Federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device, decoupling the ability to do model training from the need to store the data in the cloud. We propose the Federated Matched Averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g. convolutional neural networks (CNNs) and LSTMs. FedMA constructs the shared global model in a layer-wise manner by matching and averaging hidden elements (i.e. channels for convolution layers; hidden states for LSTMs; neurons for fully connected layers) with similar feature extraction signatures. Our experiments indicate that FedMA outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real-world datasets, while improving the communication efficiency. Edge devices such as mobile phones, sensor networks or vehicles have access to a wealth of data. However, due to concerns raised by data privacy, network bandwidth limitations, and device availability, it is impractical to gather all local data to the data center and conduct centralized training. To address these concerns, federated learning is emerging to allow local clients to collaboratively train a shared global model. The typical federated learning paradigm involves two stages: (i) clients train models over their datasets independently; (ii) the clients upload their locally trained models to the data center. The data center then aggregates the received models into a shared global model. One of the standard aggregation methods is FedAvg, where parameters of local models are averaged element-wise with weights proportional to the sizes of the client datasets. FedProx adds a proximal term to the client local cost functions, which limits the impact of local updates by restricting them to be close to the global model. Agnostic Federated Learning (AFL), as another variant of FedAvg, optimizes a centralized distribution that is formed by a mixture of the client distributions. One shortcoming of the FedAvg algorithm is that coordinate-wise averaging of weights may have a drastically detrimental effect on the performance and hence hinders the communication efficiency. This issue arises due to the permutation invariant nature of the neural network (NN) parameters, i.e. for any given NN there are many variations of it that only differ in the ordering of parameters and constitute local optima which are practically equivalent. Probabilistic Federated Neural Matching (PFNM) addresses this problem by finding permutations of the parameters of the NNs before averaging them. PFNM further utilizes Bayesian nonparametric machinery to adapt the global model size to the heterogeneity of the data. As a result, PFNM has better performance and communication efficiency; however, it was only developed for fully connected NNs and tested on simple architectures. Our contribution In this work (i) we demonstrate how PFNM can be applied to CNNs and LSTMs, however we find that it gives very minor improvement over weight averaging when applied to modern deep neural network architectures; (ii) we propose Federated Matched Averaging (FedMA), a new layer-wise federated learning algorithm for modern CNNs and LSTMs utilizing the matching and model size adaptation underpinnings of PFNM; (iii) we empirically study FedMA with real datasets under the federated learning constraints.
In this section we discuss permutation invariance classes of prominent neural network architectures and establish the appropriate notion of averaging in the parameter space of NNs. We begin with the simplest case of a single hidden layer fully connected network, moving on to deep architectures and, finally, convolutional and recurrent architectures. Permutation invariance of fully connected architectures. A basic fully connected (FC) NN can be formulated as ŷ = σ(x W_1) W_2 (without loss of generality, biases are omitted to simplify notation), where σ is the non-linearity (applied entry-wise). Expanding the preceding expression, ŷ = Σ_{i=1}^{L} W_{2, i·} σ(x W_{1, ·i}), where i· and ·i denote the i-th row and column correspondingly and L is the number of hidden units. Summation is a permutation invariant operation, hence for any {W_1, W_2} there are L! practically equivalent parametrizations of this basic NN. It is then more appropriate to write ŷ = σ(x W_1 Π) Π^T W_2 for any L × L permutation matrix Π. Recall that a permutation matrix is an orthogonal matrix that acts on rows when applied on the left and on columns when applied on the right. Suppose {W_1, W_2} are optimal weights; then the weights obtained from training on two homogeneous datasets X_j, X_{j'} are {W_1 Π_j, Π_j^T W_2} and {W_1 Π_{j'}, Π_{j'}^T W_2} for some permutations Π_j, Π_{j'}. It is now easy to see why naive averaging in the parameter space is not appropriate: with non-negligible probability Π_j ≠ Π_{j'} and (W_1 Π_j + W_1 Π_{j'})/2 ≠ W_1 Π for any Π. To meaningfully average neural networks in the weight space we should first undo the permutation. In this section we formulate a practical notion of parameter averaging under the permutation invariance. Let w_{jl} be the l-th neuron learned on dataset j (i.e. the l-th column of W_1 Π_j in the previous example), let θ_i denote the i-th neuron in the global model, and let c(·, ·) be an appropriate similarity function between a pair of neurons. Solutions to the following optimization problem are the required inverse permutations: min over {π^j_{li}} of Σ_i Σ_{j,l} π^j_{li} c(w_{jl}, θ_i), subject to Σ_i π^j_{li} = 1 for all j, l and Σ_l π^j_{li} = 1 for all i, j (equation 2). The global neurons are then obtained by averaging the local neurons aligned with the inferred permutations. Solving matched averaging. The objective function in equation 2 can be optimized using an iterative procedure: applying the Hungarian matching algorithm to find the permutation {π^j_{li}}_{l,i} corresponding to dataset j, holding the other permutations {π^{j'}_{li}}_{l,i, j' ≠ j} fixed, and iterating over the datasets. An important aspect of federated learning that we should consider here is data heterogeneity. Every client learns a collection of feature extractors, i.e. neural network weights, representing its individual data modality. As a consequence, feature extractors learned across clients may overlap only partially. To account for this we allow the size of the global model L to be an unknown variable satisfying max_j L_j ≤ L ≤ Σ_j L_j, where L_j is the number of neurons learned from dataset j. That is, the global model is at least as big as the largest of the local models and at most as big as the concatenation of all the local models. Next we show that matched averaging with adaptive global model size remains amenable to the iterative Hungarian algorithm with a special cost. At each iteration, given current estimates of {π^{j'}_{li}}_{l,i, j' ≠ j}, we find a corresponding global model (this is typically a closed-form expression or a simple optimization sub-problem, e.g. a mean if c(·, ·) is Euclidean) and then use the Hungarian algorithm to match this global model to the neurons {w_{jl}}_{l=1}^{L_j} of dataset j to obtain a new global model with L ≤ L' ≤ L + L_j neurons.
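To make the matched-averaging step concrete, the following is a minimal sketch of a single Hungarian-matching-and-averaging step for one layer of one client, using SciPy's assignment solver; it is our own illustration, assumes equal local and global layer widths with a Euclidean cost, and omits the model-size adaptation discussed next:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_average(global_w, local_w):
    """Align one client's neurons (rows) to the current global layer, then average.

    global_w, local_w: arrays of shape (L, D), assumed to have the same number
    of neurons L. Returns the element-wise average of the global weights and the
    permuted local weights.
    """
    cost = np.linalg.norm(global_w[:, None, :] - local_w[None, :, :], axis=-1)
    g_idx, l_idx = linear_sum_assignment(cost)   # minimum-cost perfect matching
    aligned = np.empty_like(global_w, dtype=float)
    aligned[g_idx] = local_w[l_idx]              # local neuron l_idx[k] aligned to global slot g_idx[k]
    return 0.5 * (global_w + aligned)
```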
Due to data heterogeneity, local model j may have neurons not present in the global model built from other local models; therefore we want to avoid "poor" matches by saying that if the optimal match has cost larger than some threshold value, instead of matching we create a new global neuron from the corresponding local one. We also want a modest-size global model and therefore penalize its size with some increasing function f(L). This intuition is formalized in an extended maximum bipartite matching formulation (equation 3). The size of the new global model is then L' = max{i : π^j_{li} = 1, l = 1, ..., L_j}. We note some technical details: after the optimization is done, each corresponding Π_j^T is of size L_j × L and is not a permutation matrix in the classical sense when L_j ≠ L. Its functionality is however similar: taking the matrix product with a weight matrix, W_j Π_j^T, implies permuting the weights to align with the weights learned on the other datasets and padding with "dummy" neurons having zero weights (alternatively we can pad the weights W_j first and complete Π_j^T with missing rows to recover a proper permutation matrix). These "dummy" neurons should also be discounted when taking the average. Without loss of generality, in the subsequent presentation we ignore these technicalities to simplify the notation. To complete the matched averaging optimization procedure it remains to specify the similarity c(·, ·), the threshold, and the model size penalty f(·). Although one can consider application-specific choices, here for simplicity we follow the setup of prior work. They arrived at a special case of equation 3 to compute the maximum a posteriori estimate (MAP) of their Bayesian nonparametric model based on the Beta-Bernoulli process (BBP), where the similarity c(w_{jl}, θ_i) is the corresponding posterior probability of the j-th client's neuron l being generated from a Gaussian with mean θ_i, and the threshold and f(·) are guided by the Indian Buffet Process prior. We refer to a procedure for solving equation 2 with this setup as BBP-MAP. We note that their Probabilistic Federated Neural Matching (PFNM) is only applicable to fully connected architectures, limiting its practicality. Our matched averaging perspective allows us to formulate averaging of widely used architectures such as CNNs and LSTMs as instances of equation 2 and to utilize BBP-MAP as a solver. Before moving on to the convolutional and recurrent architectures, we discuss permutation invariance in deep fully connected networks and the corresponding matched averaging approach. We will utilize this as a building block for handling LSTMs and CNN architectures such as VGG, widely used in practice. We extend equation 1 to recursively define a deep FC network: x_n = σ(x_{n−1} Π_{n−1}^T W_n Π_n), where n = 1, ..., N is the layer index, Π_0 is the identity, indicating non-ambiguity in the ordering of the input features x = x_0, and Π_N is the identity for the same reason in the output classes. Conventionally σ(·) is any non-linearity, except for ŷ = x_N where it is the identity function (or softmax if we want probabilities instead of logits). When N = 2, we recover the single hidden layer variant from equation 1. To perform matched averaging of deep FCs obtained from J clients we need to find inverse permutations for every layer of every client. Unfortunately, permutations within any consecutive pair of intermediate layers are coupled, leading to an NP-hard combinatorial optimization problem. Instead we consider a recursive (in layers) matched averaging formulation.
Suppose we have {Π_{j,n−1}}; then plugging {Π_{j,n−1}^T W_{j,n}} into equation 2 we find {Π_{j,n}} and move on to the next layer. The recursion base for this procedure is {Π_{j,0}}, which we know is an identity permutation for any j. Permutation invariance of CNNs. The key observation in understanding permutation invariance of CNNs is that instead of neurons, channels define the invariance. To be more concrete, let Conv(x, W) define the convolutional operation on input x with weights W ∈ R^{C_in × w × h × C_out}, where C_in, C_out are the numbers of input/output channels and w, h are the width and height of the filters. Applying any permutation to the output channel dimension of the weights and then the same permutation to the input channel dimension of the subsequent layer will not change the corresponding CNN's forward pass. Analogous to equation 4, we can write the corresponding recursion with the permutations acting on the channel dimensions of the convolution weights. Note that this formulation permits pooling operations, as those act within channels. To apply matched averaging for the n-th CNN layer we form inputs to equation 2 by flattening each output channel's filter into a vector of dimension C_in · w · h. This can alternatively be derived by taking the IM2COL perspective. Similar to FCs, we can recursively perform matched averaging on deep CNNs. The immediate consequence of our result is the extension of PFNM to CNNs. Empirically (Figure 1, see One-Shot Matching) we found that this extension performs well on MNIST with a simpler CNN architecture such as LeNet (4 layers) and significantly outperforms coordinate-wise weight averaging (1 round of FedAvg). However, it breaks down for a more complex architecture, e.g. VGG-9 (9 layers), needed to obtain good quality predictions on the more challenging CIFAR-10. Permutation invariance of LSTMs. Permutation invariance in recurrent architectures is associated with the ordering of the hidden states. At first glance it appears similar to the fully connected architecture; however, the important difference is associated with the permutation invariance of the hidden-to-hidden weights H ∈ R^{L×L}, where L is the number of hidden states. In particular, a permutation of the hidden states affects both rows and columns of H. Consider a basic RNN h_t = σ(h_{t−1} H + x_t W), where W are the input-to-hidden weights. To account for the permutation invariance of the hidden states, we notice that the dimensions of h_t should be permuted in the same way for any t, hence h_t Π = σ((h_{t−1} Π)(Π^T H Π) + x_t W Π). To match RNNs, the basic sub-problem is to align the hidden-to-hidden weights of two clients with Euclidean similarity, which requires minimizing ||Π^T H_j Π − H_{j'}||_2^2 over permutations Π. This is a quadratic assignment problem, which is NP-hard. Fortunately, the same permutation appears in the already familiar context of input-to-hidden matching of W Π. Our matched averaging RNN solution is to utilize equation 2, plugging in the input-to-hidden weights {W_j} to find {Π_j}. Then the federated hidden-to-hidden weights are computed as H = (1/J) Σ_j Π_j H_j Π_j^T, and the input-to-hidden weights are computed as before. LSTMs have multiple cell states, each having its individual hidden-to-hidden and input-to-hidden weights. In our matched averaging we stack the input-to-hidden weights into an SD × L weight matrix (S is the number of cell states; D is the input dimension and L is the number of hidden states) when computing the permutation matrices, and then average all weights as described previously. LSTMs also often have an embedding layer, which we handle like a fully connected layer. Finally, we process deep LSTMs in a recursive manner similar to deep FCs.
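The channel-permutation property described above is easy to state in code; this short PyTorch sketch is ours and shows how the same permutation must be applied to the output channels of one convolution and the input channels of the next to leave the forward pass unchanged:

```python
import torch

def permute_conv_pair(w_n, w_next, perm):
    """Apply one channel permutation consistently to a pair of adjacent conv layers.

    w_n:    (C_out, C_in, k, k) weights of layer n; `perm` permutes its output channels.
    w_next: (C_out2, C_out, k, k) weights of layer n+1; the same `perm` permutes its
            input channels, so the composed forward pass is unchanged.
    perm:   LongTensor containing a permutation of 0 .. C_out-1.
    """
    return w_n[perm], w_next[:, perm]
```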
Defining the permutation invariance classes of CNNs and LSTMs allows us to extend PFNM to these architectures; however, our empirical study in Figure 1 (see One-Shot Matching) demonstrates that such an extension fails on the deep architectures necessary to solve more complex tasks. Our results suggest that recursive handling of layers with matched averaging may entail a poor overall solution. To alleviate this problem and utilize the strength of matched averaging on "shallow" architectures, we propose the following layer-wise matching scheme. First, the data center gathers only the first layer's weights from the clients and performs the one-layer matching described previously to obtain the first-layer weights of the federated model. The data center then broadcasts these weights to the clients, which proceed to train all consecutive layers on their datasets, keeping the matched federated layers frozen. This procedure is then repeated up to the last layer, for which we conduct a weighted averaging based on the class proportions of data points per client. We summarize our Federated Matched Averaging (FedMA) in Algorithm 1. The FedMA approach requires a number of communication rounds equal to the number of layers in a network. In Figure 1 we show that with layer-wise matching FedMA performs well on the deeper VGG-9 CNN as well as on LSTMs. In the more challenging heterogeneous setting, FedMA outperforms FedAvg and FedProx trained with the same number of communication rounds (4 for LeNet and the LSTM and 9 for VGG-9) and other baselines, i.e. the clients' individual CNNs and their ensemble. Algorithm 1 proceeds as follows: if n is the last layer, the global weights are set to the weighted average Σ_j p_{jk} W_{jl,n}, where p_{jk} is the fraction of data points with label k on worker j; otherwise, for each j ∈ {1, ..., J}, set W_{j,n+1} ← Π_{j,n}^T W_{j,n+1} (permute the next-layer weights) and train {W_{j,n+1}, ..., W_{j,L}} with W_n frozen; then set n = n + 1 and repeat. FedMA with communication. We have shown that in the heterogeneous data scenario FedMA outperforms other federated learning approaches; however, it still lags in performance behind training on the entire data. Of course, entire-data training is not possible under the federated learning constraints, but it serves as a performance upper bound we should strive to achieve. To further improve the performance of our method, we propose FedMA with communication, where local clients receive the matched global model at the beginning of a new round and reconstruct their local models with the size equal to the original local models (e.g. the size of a VGG-9) based on the matching of the previous round. This procedure allows us to keep the size of the global model small, in contrast to a naive strategy of utilizing the full matched global model as a starting point across clients on every round. We present an empirical study of FedMA with communication and compare it with state-of-the-art methods, i.e. FedAvg and FedProx; analyze the performance under a growing number of clients; and visualize the matching behavior of FedMA to study its interpretability. Our experimental studies are conducted over three real-world datasets. Summary information about the datasets and associated models can be found in supplement Table 3. Experimental Setup. We implemented FedMA and the considered baseline methods in PyTorch. We deploy our empirical study under a simulated federated learning environment where we treat one centralized node in the distributed cluster as the data center and the other nodes as local clients. All nodes in our experiments are deployed on p3.2xlarge instances on Amazon EC2.
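A high-level sketch of the layer-wise FedMA pass (one communication round per layer) is given below; this is our own illustration, and the client methods `get_layer_weights`, `set_layer_weights`, and `train_upper_layers` are hypothetical names, not part of any published implementation:

```python
def fedma_layerwise(clients, num_layers, match_and_average_layer):
    """One FedMA pass: match-and-average one layer per communication round,
    broadcast it, freeze it, and let clients retrain the layers above it."""
    for n in range(num_layers - 1):
        local_layers = [c.get_layer_weights(n) for c in clients]
        global_layer = match_and_average_layer(local_layers)    # matched averaging of layer n
        for c in clients:
            c.set_layer_weights(n, global_layer, frozen=True)   # broadcast + freeze layer n
            c.train_upper_layers(first_trainable=n + 1)         # retrain layers above n
    # The last layer is handled separately by a weighted average based on
    # per-client class proportions (not shown here).
```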
We assume for simplicity that the data center samples all the clients to join the training process in every communication round. For the CIFAR-10 dataset, we use data augmentation (random crops and flips) and normalize each individual image (details provided in the Supplement). We note that we ignore all batch normalization layers in the VGG architecture and leave them for future work. For CIFAR-10, we considered two data partition strategies to simulate the federated learning scenario: (i) a homogeneous partition where each local client has approximately equal proportions of each of the classes; (ii) a heterogeneous partition for which the number of data points and class proportions are unbalanced. We simulated a heterogeneous partition into J clients by sampling p_k ∼ Dir_J(0.5) and allocating a p_{k,j} proportion of the training instances of class k to local client j. We use the original test set in CIFAR-10 as our global test set, and all test accuracies in our experiments are computed over that test set. For the Shakespeare dataset, since each speaking role in each play is considered a different client, it is inherently heterogeneous. We preprocess the Shakespeare dataset by filtering out the clients with fewer than 10k datapoints and get 132 clients in total. We choose 80% of the data as the training set. We then randomly sample J = 66 out of the 132 clients in conducting our experiments. We amalgamate all the clients' test sets as our global test set. In this experiment we study the performance of FedMA with communication. Our goal is to compare our method to FedAvg and FedProx in terms of the total message size exchanged between the data center and clients (in gigabytes) and the number of communication rounds (recall that completing one FedMA pass requires a number of rounds equal to the number of layers in the local models) needed for the global model to achieve good performance on the test data. We also compare to the performance of an ensemble method. We evaluate all methods under the heterogeneous federated learning scenario on CIFAR-10 with J = 16 clients with VGG-9 local models and on the Shakespeare dataset with J = 66 clients with a 1-layer LSTM network. We fix the total rounds of communication allowed for FedMA, FedAvg, and FedProx, i.e. 11 rounds for FedMA and 99/33 rounds for FedAvg and FedProx for the VGG-9/LSTM experiments, respectively. We notice that the number of local training epochs is a common parameter shared by the three considered methods; we thus tune the local training epochs (denoted by E; a comprehensive analysis will be presented in the next experiment) and report the convergence rate under the E that yields the best final model accuracy over the global test set. We also notice that there is another hyper-parameter in FedProx, i.e. the coefficient µ associated with the proximal term; we also tune this parameter using grid search and report the best µ we found, i.e. 0.001 for both the VGG-9 and LSTM experiments. FedMA outperforms FedAvg and FedProx in all scenarios (Figure 2), with its advantage especially pronounced when we evaluate convergence as a function of the message size in Figures 2(a) and 2(c). The final performance of all trained models is summarized in Tables 1 and 2. Effect of local training epochs. As studied in previous work, the number of local training epochs E can affect the performance of FedAvg and sometimes lead to divergence. We conduct an experimental study on the effect of E on FedAvg, FedProx, and FedMA with VGG-9 trained on CIFAR-10 under the heterogeneous setup.
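The heterogeneous Dirichlet partition described above can be implemented in a few lines; the following sketch is ours and assumes integer class labels:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, num_classes=10, seed=0):
    """Split example indices across clients with per-class proportions drawn from
    Dir(alpha), producing an unbalanced (heterogeneous) federated partition."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for k in range(num_classes):
        idx_k = np.where(np.asarray(labels) == k)[0]
        rng.shuffle(idx_k)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx_k)).astype(int)
        for j, part in enumerate(np.split(idx_k, cuts)):
            client_indices[j].extend(part.tolist())
    return client_indices
```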
The candidate local epochs we considered are E ∈ {10, 20, 50, 70, 100, 150}. For each candidate E, we run FedMA for 6 rounds, while FedAvg and FedProx run for 54 rounds, and we report the final accuracy that each method achieves. The result is shown in Figure 3. We observed that training longer favors the convergence rate of FedMA, which matches our assumption that FedMA returns a better global model when the local models have higher quality. For FedAvg, longer local training leads to deterioration of the final accuracy, which matches the observations in the previous literature. FedProx prevents the accuracy deterioration to some extent; however, the accuracy of the final model still gets reduced. The result of this experiment suggests that FedMA is the only method that allows local clients to train their models for as long as they want. Handling data bias. Real-world data often exhibit multimodality within each class, e.g. geodiversity. It has been shown that an observable amerocentric and eurocentric bias is present in the widely used ImageNet dataset. Classifiers trained on such data "learn" these biases and perform poorly on the under-represented domains (modalities), since the correlation between the corresponding dominating domain and class can prevent the classifier from learning meaningful relations between features and classes. For example, a classifier trained on amerocentric and eurocentric data may learn to associate white-colored dresses with a "bride" class, therefore underperforming on wedding images taken in countries where wedding traditions are different. The data bias scenario is an important aspect of federated learning; however, it has received little to no attention in prior federated learning works. In this study we argue that FedMA can handle this type of problem. If we view each domain, e.g. geographic region, as one client, local models will not be affected by the aggregate data biases and will learn meaningful relations between features and classes. FedMA can then be used to learn a good global model without biases. We have already demonstrated strong performance of FedMA on federated learning problems with heterogeneous data across clients, and this scenario is very similar. To verify this conjecture we conduct the following experiment. We simulate the skewed domain problem with the CIFAR-10 dataset by randomly selecting 5 classes and making 95% of the training images in those classes grayscale. For the remaining 5 classes we turn only 5% of the corresponding images into grayscale. By doing so, we create 5 grayscale-dominated classes and 5 color-dominated classes. In the test set, there are half grayscale and half colored images for each class. We anticipate that entire-data training picks up the uninformative correlations between grayscale and certain classes, leading to poor test performance when these correlations are absent. In Figure 4 we see that entire-data training performs poorly in comparison to the regular (i.e. No Bias) training and testing on the CIFAR-10 dataset without any grayscaling. This experiment was motivated by Olga Russakovsky's talk at ICML 2019. Next we compare the federated learning based approaches. We split the images from color-dominated classes and grayscale-dominated classes into 2 clients. We then conduct FedMA with communication, FedAvg, and FedProx with these 2 clients. FedMA noticeably outperforms the entire-data training and the other federated learning approaches, as shown in Figure 4.
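The skewed-domain construction described above (95% grayscale in the selected classes, 5% in the rest) can be simulated as follows; this sketch is ours and assumes images stored as (N, H, W, 3) uint8 arrays:

```python
import numpy as np

def apply_grayscale_bias(images, labels, biased_classes, p_biased=0.95, p_other=0.05, seed=0):
    """Convert a fraction of each class's images to grayscale: `p_biased` for the
    selected (grayscale-dominated) classes and `p_other` for the remaining classes."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    biased = set(int(c) for c in biased_classes)
    for i, y in enumerate(np.asarray(labels)):
        p = p_biased if int(y) in biased else p_other
        if rng.random() < p:
            gray = images[i].mean(axis=-1, keepdims=True)      # average the RGB channels
            images[i] = np.repeat(gray, 3, axis=-1).astype(images.dtype)
    return images
```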
This suggests that FedMA may be of interest beyond learning under the federated learning constraints, where entire-data training is the performance upper bound, but also as a way to eliminate data biases and outperform entire-data training. We consider two additional approaches to eliminate data bias without the federated learning constraints. One way to alleviate data bias is to selectively collect more data to debias the dataset. In the context of our experiment, this means getting more colored images for grayscale-dominated classes and more grayscale images for color-dominated classes. We simulate this scenario by simply doing full-data training where each class in both the train and test images has an equal amount of grayscale and color images. This procedure, Color Balanced, performs well, but selective collection of new data may in practice be expensive or even impossible. Instead of collecting new data, one may consider oversampling from the available data to debias. In Oversampling, we sample the under-represented domain (via sampling with replacement) to make the proportions of color and grayscale images equal for each class (oversampled images are also passed through the data augmentation pipeline, e.g. random flipping and cropping, to further enforce data diversity). Such a procedure may be prone to overfitting the oversampled images, and we see that this approach only provides a marginal improvement of the model accuracy compared to centralized training over the skewed dataset and performs noticeably worse than FedMA. Data efficiency. It is known that deep learning models perform better when more training data is available. However, under the federated learning constraints, data efficiency has not been studied to the best of our knowledge. The challenge here is that when new clients join the federated system, they each bring their own version of the data distribution, which, if not handled properly, may deteriorate the performance despite the growing data size across the clients. To simulate this scenario we first partition the entire CIFAR-10 training set into 5 homogeneous pieces. We then partition each homogeneous piece further into 5 sub-pieces heterogeneously. Using this strategy, we partition the CIFAR-10 training set into 25 heterogeneous small sub-datasets containing approximately 2k points each. We conduct a 5-step experimental study: starting from a randomly selected homogeneous piece consisting of 5 associated heterogeneous sub-pieces, we simulate a 5-client heterogeneous federated learning problem. For each consecutive step, we add one of the remaining homogeneous data pieces, consisting of 5 new clients with heterogeneous sub-datasets. Results are presented in Figure 5. The performance of FedMA (with a single pass) improves when new clients are added to the federated learning system, while FedAvg with 9 communication rounds deteriorates. Interpretability. One of the strengths of FedMA is that it utilizes communication rounds more efficiently than FedAvg. Instead of directly averaging weights element-wise, FedMA identifies matching groups of convolutional filters and then averages them into the global convolutional filters. It is natural to ask: "What do the matched filters look like?". In Figure 6 we visualize the representations generated by a pair of matched local filters, the aggregated global filter, and the filter returned by the FedAvg method over the same input image.
Matched filters and the global filter found with FedMA extract the same feature of the input image: filter 0 of client 1 and filter 23 of client 2 extract the position of the legs of the horse, and the corresponding matched global filter 0 does the same. For FedAvg, global filter 0 is the average of filter 0 of client 1 and filter 0 of client 2, which clearly tampers with the leg-extraction functionality of filter 0 of client 1. In this paper, we presented FedMA, a new layer-wise federated learning algorithm designed for modern CNN and LSTM architectures, utilizing probabilistic matching and model size adaptation. We demonstrate the convergence rate and communication efficiency of FedMA empirically. In the future, we would like to extend FedMA towards finding the optimal averaging strategy. Making FedMA support more building blocks, e.g. residual structures in CNNs and batch normalization layers, is also of interest. Table 4: Detailed information on the VGG-9 architecture used in our experiments; all non-linear activation functions in this architecture are ReLU; the shapes of the convolution layers follow (C_in, C_out, c, c). In preprocessing the images in the CIFAR-10 dataset, we follow the standard data augmentation and normalization process. For data augmentation, random cropping and random horizontal flipping are used. Each color channel is normalized with mean and standard deviation given by µ_r = 0.491372549, µ_g = 0.482352941, µ_b = 0.446666667, σ_r = 0.247058824, σ_g = 0.243529412, σ_b = 0.261568627. Each channel pixel is normalized by subtracting the mean value of its color channel and then dividing by the standard deviation of that color channel. Here we report the shapes of the final global VGG and LSTM models returned by FedMA with communication. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BkluqlSFDS | Communication efficient federated learning with layer-wise matching |
We present SOSELETO (SOurce SELEction for Target Optimization), a new method for exploiting a source dataset to solve a classification problem on a target dataset. SOSELETO is based on the following simple intuition: some source examples are more informative than others for the target problem. To capture this intuition, source samples are each given weights; these weights are solved for jointly with the source and target classification problems via a bilevel optimization scheme. The target therefore gets to choose the source samples which are most informative for its own classification task. Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting. SOSELETO may be applied to both classic transfer learning, as well as the problem of training on datasets with noisy labels; we show state-of-the-art results on both of these problems. Deep learning has demonstrated remarkable successes in tasks where large training sets are available. Yet, its usefulness is still limited in many important problems that lack such data. A natural question is then how one may apply the techniques of deep learning within these relatively data-poor regimes. A standard approach that seems to work relatively well is transfer learning. Despite its success, we claim that this approach misses an essential insight: some source examples are more informative than others for the target classification problem. Unfortunately, we don't know a priori which source examples will be important. Thus, we propose to learn this source filtering as part of an end-to-end training process. The resulting algorithm is SOSELETO: SOurce SELEction for Target Optimization. Each training sample in the source dataset is given a weight, representing its importance. A shared source/target representation is then optimized by means of a bilevel optimization. In the interior level, the source minimizes its classification loss with respect to the representation and classification layer parameters, for fixed values of the sample weights. In the exterior level, the target minimizes its classification loss with respect to both the source sample weights and its own classification layer. The sample weights implicitly control the representation through the interior level. The target therefore gets to choose the source samples which are most informative for its own classification task. Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting, as the target does not directly control the representation parameters. The entire process (training of the shared representation, the source and target classifiers, and the source weights) happens simultaneously. Related Work The most common techniques for transfer learning are feature extraction and fine-tuning, e.g. BID8. An older survey of transfer learning techniques may be found in BID20. Domain adaptation BID23 involves knowledge transfer when the source and target classes are the same. Earlier techniques aligned the source and target via matching of feature-space statistics BID3 BID15; subsequent work used adversarial methods to improve the domain adaptation performance BID6. In this paper, we are more interested in transfer learning where the source and target classes are different. BID16, BID21, and BID1 address domain adaptation that is closer to our setting.
BID2 examines "partial transfer learning" in which there is partial overlap between source and target classes (often the target classes are a subset of the source). This setting is also dealt with in BID0. Like SOSELETO, BID7 propose selecting a portion of the source dataset, however, the selection is done prior to training and is not end-to-end. In, an adversarial loss aligns the source and target representations in a few-shot setting. Instance reweighting is a well studied technique for domain adaptation, demonstrated e.g. in Covariate Shift methods BID24 BID25 BID26. While in these works, the source and target label spaces are the same, we allow them to be different -even entirely non-overlapping. Crucially, we do not make assumptions on the similarity of the distributions nor do we explicitly optimize for it. The same distinction applies for the recent work of BID9, and for the partial overlap assumption of. In addition, these two works propose an unsupervised approach, whereas our proposed method is completely supervised. Classification with noisy labels is a longstanding problem in the machine learning literature, see the review paper BID5. Within the realm of deep learning, it has been observed that with sufficiently large data, learning with label noise -without modification to the learning algorithms -actually leads to reasonably high accuracy BID10 BID28 BID22 BID4. We consider the setting where the large noisy dataset is accompanied by a small clean dataset. BID27 introduced a noise layer into the CNN that adapts the output to align with the noisy label distribution. proposed to predict simultaneously the clean label and the type of noise; consider the same setting, but with additional information in the form of a knowledge graph on labels. BID18 conditioned the gradient propagation on the agreement of two separate networks. BID14 BID7 combine ideas of learning with label noise with instance reweighting. We consider the problem of classifying a data-poor target set, by utilizing a data-rich source set. The sets and their corresponding labels are denoted by {( DISPLAYFORM0 respectively, where, n t n s . The key insight is that not all source examples contribute equally useful information in regards to the target problem. For example, suppose that the source set consists of a broad collection of natural images; whereas the target set consists exclusively of various breeds of dog. We would assume that images of dogs, as well as wolves, cats and even objects with similar textures like rugs in the source set may help in the target classification task; On the flip side, it is probably less likely that images of airplanes and beaches will be relevant (though not impossible). However, the idea is not to come with any preconceived notions (semantic or otherwise) as to which source images will help; rather, the goal is to let the algorithm choose the relevant source images, in an end-to-end fashion. We assume that the source and target networks share the same architecture, except for the last classification layers. In particular, the architecture is given by F (x; θ, φ), where φ denotes the last classification layer(s), and θ constitutes all of the "representation" layers. The source and target networks are thus F (x; θ, φ s), and F (x; θ, φ t), respectively. By assigning a weight α j ∈ to each source example, we define the weighted source loss as: DISPLAYFORM1 is a per example classification loss (e.g. cross-entropy). 
The weights α_j will allow us to decide which source images are most relevant for the target classification task. The target loss is standard: L_t(θ, φ_t) = (1/n_t) Σ_i ℓ(y_i^t, F(x_i^t; θ, φ_t)). As noted in Section 1, this formulation allows us to address both the transfer learning problem as well as learning with label noise. In the former case, the source and target may have non-overlapping label spaces; high weights will indicate which source examples have relevant knowledge for the target classification task. In the latter case, the source is the noisy dataset, the target is the clean dataset, and they share a classifier (i.e. φ_t = φ_s) as well as a label space; high weights will indicate which source examples are likely to be reliable. In either case, the target is much smaller than the source. The question now becomes: how can we combine the source and target losses into a single optimization problem? A simple idea is to create a weighted sum of the source and target losses. Unfortunately, issues are likely to arise regardless of the weight chosen. If the target is weighted equally to the source, then overfitting is likely, given the small size of the target. On the other hand, if the weights are proportional to the sizes of the two sets, then the source will simply drown out the target. We propose instead to use bilevel optimization. Specifically, in the interior level we find the optimal features and source classifier as a function of the weights α, by minimizing the source loss: θ*(α), φ_s*(α) = argmin over {θ, φ_s} of L_s(θ, φ_s, α). In the exterior level, we minimize the target loss, but only through access to the source weights; that is, we solve: min over {α, φ_t} of L_t(θ*(α), φ_t). Why might we expect this bilevel formulation to succeed? The key is that the target only has access to the features in an indirect manner, by controlling which source examples will dominate the source classification problem. This form of regularization mitigates overfitting, which is the main threat when dealing with a small set such as the target. We adopt a stochastic approach for optimizing the bilevel problem. Specifically, at each iteration, we take a gradient step in the interior-level problem: θ^{m+1} = θ^m − λ_p Q(θ^m, φ_s^m) α^m, where Q(θ, φ_s) is a matrix whose j-th column is given by (1/n_s) ∂ℓ(y_j^s, F(x_j^s; θ, φ_s))/∂θ, so that ∂L_s/∂θ = Q α. An identical descent equation exists for the classifier φ_s, which we omit for clarity. Given this iterative version of the interior level of the bilevel optimization, we may now turn to the exterior level. Plugging the interior update into the exterior problem gives: min over {α, φ_t} of L_t(θ^m − λ_p Q α, φ_t). We can then take a gradient descent step of this equation, yielding α^{m+1} = α^m + λ_α λ_p Q^T (∂L_t/∂θ), where we have made use of the fact that λ_p is small. Of course, there will also be a descent equation for the classifier φ_t. The resulting update scheme is quite intuitive: source example weights are updated according to how well their gradients align with the aggregated target gradient. Finally, we clip the weight values to be in the range [0, 1] (a detailed explanation of this step is provided in Appendix B). For completeness we have included a summary of the SOSELETO algorithm in Appendix A. Convergence properties are discussed in Appendix C. Noisy Labels: Synthetic Experiment. We illustrate how SOSELETO works on the problem of learning with noisy labels on a synthetic example, see FIG0 (a). The source set consists of 500 points in R^2, separated into two classes by the y-axis. However, 20% of the points are assigned a wrong label. The target set consists of 50 "clean" points (for clarity, the target set is not illustrated). SOSELETO is run for 100 epochs.
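The exterior-level weight update can be sketched as below; this is our own illustration, the per-example gradient matrix is assumed to be available, and the sign convention follows the intuition stated above (weights grow when a source example's gradient aligns with the aggregated target gradient):

```python
import torch

def update_alpha(alpha, per_example_grads, target_grad, lr_alpha):
    """Exterior-level step on the source weights.

    per_example_grads: (n, P) rows are per-example source-loss gradients w.r.t.
    the shared parameters (i.e. Q^T); target_grad: (P,) aggregated target gradient.
    """
    alignment = per_example_grads @ target_grad   # how much each source example helps the target
    alpha = alpha + lr_alpha * alignment
    return alpha.clamp(0.0, 1.0)                  # keep weights in [0, 1]
```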
In FIG0 (b) and 1(c), we choose a threshold of 0.1 on the weights α and colour the points accordingly. In particular, in FIG0 (b) the clean (i.e. correctly labelled) instances which are above the threshold are labelled in green, while those below the threshold are labelled in red; as can be seen, all of the clean points lie above the threshold for this choice of threshold, meaning that SOSELETO has correctly identified all of the clean points. In FIG0 (c), the noisy (i.e. incorrectly labelled) instances which are below the threshold are labelled in green, and those above the threshold are labelled in red. In this case, SOSELETO correctly identifies most of these noisy labels by assigning them small weights, while the few mislabeled points (shown in red) are all near the separator. In FIG0 (d), a plot is shown of the mean weight vs. training epoch for clean instances and noisy instances. As expected, the algorithm nicely separates the two sets. FIG0 (e) demonstrates that the good behaviour is fairly robust to the threshold chosen. Noisy Labels: CIFAR-10. We now turn to a real-world setting of the problem of learning with label noise, using a noisy version of CIFAR-10. Prior methods set aside 10K clean examples for pre-training, a necessary step in both of those algorithms. In contrast, we use half that size. We also compare with a baseline of training only on noisy labels. In all cases, the "Quick" CIFAR-10 architecture has been used. We set λ_p = 10^-4, with target and source batch sizes of 32 and 256. Performance on a 10K test set is shown in Table 1 for three noise levels. Our method surpasses previous methods on all three noise levels. We provide further analysis in Appendix D. Transfer Learning: SVHN 0-4 to MNIST 5-9. We now examine the performance of SOSELETO on a transfer learning task, using a challenging setting where: (a) the source and target sets have disjoint label sets, and (b) the target set is very small. The source dataset is chosen as digits 0-4 of SVHN BID19. The target dataset is a very small subset of digits 5-9 from MNIST (LeCun et al.). We compare our results with the following techniques: target only, which indicates training on just the target set; standard fine-tuning; an additional baseline and a fine-tuned version thereof; and two variants of the Label Efficient Learning technique. Except for the target-only and fine-tuning approaches, all other approaches rely on a huge number of unlabelled target data. By contrast, we do not make use of any of this data. For each of the above methods, the simple LeNet architecture BID12 was used. We set λ_p = 10^-2, with source and target batch sizes of 32 and 10. The performance of the various methods on MNIST's test set is shown in TAB2, split into two parts: techniques which use the 30K examples of unlabelled data, and those which do not. SOSELETO has superior performance to all techniques except Label Efficient. In Appendix E we further analyze which SVHN instances are considered more useful than others by SOSELETO in a partial class overlap setting, namely transferring SVHN 0-9 to MNIST 5-9. Although not initially designed to use unlabelled data, one may do so by using the learned SOSELETO classifier to classify the unlabelled data, generating noisy labels. SOSELETO can now be run in a label-noise mode. In the case of n_t = 25, this procedure elevates the accuracy to above 92%. We have presented SOSELETO, a technique for exploiting a source dataset to learn a target classification task.
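For completeness, a noisy source set of the kind used above can be constructed with symmetric label corruption; the paper does not spell out its corruption scheme here, so this is only one possible choice, and the sketch is ours:

```python
import numpy as np

def add_symmetric_label_noise(labels, noise_level, num_classes=10, seed=0):
    """Replace a fraction `noise_level` of labels with a uniformly drawn different class."""
    rng = np.random.default_rng(seed)
    labels = np.array(labels)
    flip = rng.random(len(labels)) < noise_level
    offsets = rng.integers(1, num_classes, size=len(labels))   # shift by 1..num_classes-1
    labels[flip] = (labels[flip] + offsets[flip]) % num_classes
    return labels
```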
This exploitation takes the form of joint training through bilevel optimization, in which the source loss, weighted by sample, is optimized with respect to the network parameters, while the target loss is optimized with respect to these weights and its own classifier. We have empirically shown the effectiveness of the algorithm on both learning with label noise and transfer learning problems. An interesting direction for future research involves incorporating an additional domain alignment term. We note that SOSELETO is architecture-agnostic, and may be extended beyond classification tasks. SOSELETO consists of alternating the interior and exterior descent operations, along with the descent equations for the source and target classifiers φ_s and φ_t. As usual, the whole operation is done on a mini-batch basis, rather than using the entire set; note that if processing is done in parallel, then source mini-batches are taken to be non-overlapping, so as to avoid conflicts in the weight updates. A summary of the SOSELETO algorithm appears in Algorithm 1. Note that the target derivatives ∂L_t/∂θ and ∂L_t/∂φ_t are evaluated over a target mini-batch; we suppress this for clarity. In terms of time complexity, we note that each iteration requires both a source batch and a target batch; assuming identical batch sizes, this means that SOSELETO requires about twice the time of the ordinary source classification problem. Regarding space complexity, in addition to the ordinary network parameters we need to store the source weights α. Thus, the additional relative space complexity required is the ratio of the source dataset size to the number of network parameters. This is obviously problem- and architecture-dependent; a typical number might be given by taking the source dataset to be ImageNet ILSVRC-2012 (size 1.2M) and the architecture to be ResNeXt-101 (44.3M parameters), yielding a relative space increase of about 3%. Recall that our goal is to explicitly require that α_j ∈ [0, 1]. We may achieve this by requiring α_j = σ(β_j), where the new variable β_j ∈ R and σ(·) is a kind of piecewise-linear sigmoid function. Now we wish to replace the update equation for α with a corresponding update equation for β. This is straightforward. Define the Jacobian ∂α/∂β, a diagonal matrix with entries σ'(β_j); then we modify the update equation for α into an update for β by multiplying with this Jacobian. The Jacobian is easy to compute analytically: since σ is piecewise linear, its derivative is 1 inside (0, 1) and 0 outside. Due to this very simple form, it is easy to see that β^m will never lie outside of [0, 1], and thus that α^m = β^m for all time. Hence, we can simply replace this equation with the clipped update α^{m+1} = CLIP[α^m + λ_α λ_p Q^T ∂L_t/∂θ], where CLIP clips the values below 0 to 0 and above 1 to 1.
However, it is clear that if we set the learning rate λ α sufficiently large, the second term will eventually dominate the first term, and the target loss will be decreased. Indeed, we can do a slightly finer analysis. Ignoring the final term (which is always negative), and setting v = Q T ∂Lt ∂θ, we have that DISPLAYFORM0 where in the second line we have used the fact that all elements of α are in; and in the third line, we have used a standard bound on the L 1 norm of a vector. Thus, a sufficient condition for the first order approximation of the target loss to decrease is if DISPLAYFORM1 If this is true at all iterations, then the target loss will continually decrease and converge to a local minimum (given that the loss is bounded from below by 0). We perform further analysis by examining the α-values that SOSELETO chooses on convergence, see FIG3. To visualize the , we imagine thresholding the training samples in the source set on the basis of their α-values; we only keep those samples with α greater than a given threshold. By increasing the threshold, we both reduce the total number of samples available, as well as change the effective noise level, which is the fraction of remaining samples which have incorrect labels. We may therefore plot these two quantities against each other, as shown in FIG3; we show three plots, one for each noise level. Looking at these plots, we see for example that for the 30% noise level, if we take the half of the training samples with the highest α-values, we are left with only about 4% which have incorrect labels. We can therefore see that SOSELETO has effectively filtered out the incorrect labels in this instance. For the 40% and 50% noise levels, the corresponding numbers are about 10% and 20% incorrect labels; while not as effective in the 30% noise level, SOSELETO is still operating as designed. Further evidence for this is provided by the large slopes of all three curves on the righthand side of the graph. APPENDIX E ANALYZING SVHN 0-9 TO MNIST 5-9 SOSELETO is capable of automatically pruning unhelpful instances at train time. The experiment presented in Section 3 demonstrates how SOSELETO can improve classification of MNIST 5-9 by making use of different digits from a different dataset (SVHN 0-4). To further reason about which instances are chosen as useful, we have conducted another experiment: SVHN 0-9 to MNIST 5-9. There is now a partial overlap in classes between source and target. Our findings are summarized in what follows. An immediate effect of increasing the source set, was a dramatic improvement in accuracy to 90.3%.Measuring the percentage of "good" instances (i.e. instances with weight above a certain threshold) didn't reveal a strong correlation with the labels. In Figure 3 we show this for a threshold of 0.8. As can be seen, labels 7-9 are slightly higher than the rest but there is no strong evidence of labels 5-9 being more useful than 0-4, as one might hope for. That said, a more careful examination of low-and high-weighted instances, revealed that the usefulness of an instance, as determined by SOSELETO, is more tied to its appearance: namely, whether the digit is centered, at a similar size as MNIST, the amount of blur, and rotation. In Figure 4 we show a random sample of some "good" and "bad" (i.e. high and low weights, respectively). A close look reveals that "good" instances often tend to be complete, centered, axis aligned, and at a good size (wrt MNIST sizes). 
Especially interesting was that, among the "bad" instances, we found that roughly 3-5% were wrongly labeled! In FIG5 we display several especially problematic instances of SVHN, all of which are labeled as "0" in the dataset. As can be seen, some examples are very clear errors. The highly weighted instances, on the other hand, had almost no such errors. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | r1lqA-OSvV | Learning with limited training data by exploiting "helpful" instances from a rich data source. |
Derivative-free optimization (DFO) using trust region methods is frequently used for machine learning applications, such as (hyper-)parameter optimization in which the derivatives of the objective functions are not known. Inspired by recent work on continuous-time minimizers, our work models common trust region methods, with their exploration-exploitation structure, as a dynamical system coupling a pair of dynamical processes. While the first, exploration, process searches for the minimum of the blackbox function by minimizing a time-evolving surrogate function, the second, exploitation, process updates the surrogate function from time to time using the points traversed by the exploration process. The efficiency of derivative-free optimization thus depends on the way the two processes couple. In this paper, we propose a novel dynamical system, namely SHE 2 (Stochastic Hamiltonian Exploration and Exploitation), that surrogates subregions of the blackbox function using a time-evolving quadratic function, and then explores and tracks the minimum of the quadratic functions using a fast-converging Hamiltonian system. The SHE 2 algorithm is later provided as a discrete-time numerical approximation to the system. To further accelerate optimization, we present P-SHE 2, which parallelizes multiple SHE 2 threads for concurrent exploration and exploitation. Experiments based on a wide range of machine learning applications show that P-SHE 2 outperforms a broad range of derivative-free optimization algorithms with faster convergence speed under the same settings. Derivative-free optimization (DFO) techniques BID31, such as Bayesian optimization algorithms BID43 BID24, non-differentiable coordinate descent BID4, natural gradient methods BID12 BID14, and natural evolution strategies BID41, have been widely used for black-box function optimization. DFO techniques have been viewed as one of the promising solutions when the first-order/higher-order derivatives of the objective functions are not available. For example, to train large-scale machine learning models, parameter tuning is sometimes required. The problem of finding the best parameters in a high-dimensional parameter space is frequently formalized as a black-box optimization problem, as the function that maps specific parameter settings to the performance of models is not known BID11 BID9 BID48 BID21. The evaluation of the black-box function is often computationally expensive, and DFO algorithms are therefore needed that converge fast with global/local minimum guarantees. Backgrounds. To ensure the performance of DFO algorithms, a series of pioneering work has been done BID5 BID36 BID16 BID2 BID11. In particular, Powell et al. (BID33) proposed Trust-Region methods that intend to "surrogate" the DFO solutions through exploring the minimum in the trust regions of the blackbox objective functions, where the trust regions are tightly approximated using model functions (e.g., quadratic functions or Gaussian processes) via interpolation. These two processes for exploration and exploitation are usually alternately iterated, so as to pursue the global/local minimum BID2. With exploration and exploitation BID7, a wide range of algorithms have been proposed using trust regions for DFO surrogation BID34 BID38 BID43 BID45 BID42 BID46 BID39 BID0 BID30 BID23 BID1. Technical Challenges.
Though trust region methods have been successfully used for derivative-free optimization for decades, the drawbacks of these methods are still significant:• The computational and storage complexity for (convex) surrogates is extremely high. To approximate the trust regions of blackbox functions, quadratic functions BID34 BID39 and Gaussian process BID43 BID46 BID23 are frequently used as (convex) surrogates. However, fitting the quadratic functions and Gaussian process through interpolation is quite time-consuming with high sample complexity. For example, using quadratic functions as surrogates (i.e., approximation to the second-order Taylor's expansion) needs to estimate the gradient and inverse Hessian matrix BID34 BID39, where a large number of samples are required to avoid ill-conditioned inverse Hessian approximation; while the surrogate function in GP is nonconvex, which is even more sophisticated to optimize.• The convergence of trust region methods cannot be guaranteed for high-dimensional nonconvex DFO. Compared to the derivative-based algorithms such as stochastic gradient descent and accelerated gradient methods BID3 BID44, the convergence of DFO algorithms usually are not theoretically guaranteed. Jamieson et al. BID16 provided the lower bound for algorithms based on boolean-based comparison of function evaluation. It shows that DFO algorithms can converge at Ω(1/ √ T) rate in the best case (T refers to the total number of iterations), without assumptions on convexity and smoothness, even when the evaluation of black-box function is noisy. Our Intuitions. To tackle the technical challenges, we are motivated to study novel trust region methods with following properties 1. Low-complexity Quadratic Surrogates with Limited Memory. To lower the computational complexity, we propose to use quadratic functions with identity Hessian matrices as surrogates. Rather than incorporating all evaluated samples in quadratic form approximation, our algorithm only works with the most-recently evaluated sample points. In this way, the memory consumption required can be further reduced. However, the use of identity Hessian matrices for quadratic form loses the information about the distribution (e.g., Fisher information or covariance BID13) of evaluated sample points. 2. Fast Quadratic Exploration with Stochastic Hamiltonian Dynamical Systems. Though it is difficult to improve the convergence rate of the DFO algorithms in general nonconvex settings with less oracle calls (i.e., times of function evaluation), one can make the exploration over the quadratic trust region even faster. Note that exploration requires to cover a trust region rather than running on the fastest path (e.g., the gradient flow BID15) towards the minimum of trust region. In this case, there needs an exploration mechanism traversing the whole quadratic trust region in a fast manner and (asymptotically) approaching to the minimum. FIG0 illustrates the examples of exploration processes over the quadratic region via its gradient flows (i.e., gradient descent) or using Hamiltonian dynamics with gradients BID25 as well as their stochastic variants with explicit perturbation, all in the same length of time. It shows that the stochastic Hamiltonian dynamics (shown in FIG0 (d)) can well balance the needs of fast-approaching the minimum while sampling the quadratic region with its trajectories. 
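To make the comparison in FIG0 concrete, the sketch below (an illustrative sketch, not the paper's code) integrates, with a plain Euler scheme, a gradient flow and a perturbed Hamiltonian dynamics with Nesterov-style 3/t damping on the quadratic f(x) = ||x||^2 / 2, and reports how broadly each trajectory covers the region while still approaching the minimum. The step size and noise scale are arbitrary choices.

```python
import numpy as np

def grad(x):                               # gradient of f(x) = 0.5 * ||x||^2
    return x

def explore(mode, x0, steps=2000, dt=0.05, eps=0.3, seed=0):
    """Euler integration of a (stochastic) gradient flow or of the (stochastic)
    Hamiltonian dynamics x'' + (3/t) x' + grad f(x) = noise."""
    rng = np.random.default_rng(seed)
    x, v = np.array(x0, dtype=float), np.zeros(2)
    path = [x.copy()]
    for k in range(1, steps + 1):
        t, noise = k * dt, eps * rng.standard_normal(2)
        if mode == "gradient_flow":
            x = x - dt * (grad(x) + noise)
        else:                              # stochastic Hamiltonian dynamics
            v = v + dt * (-(3.0 / t) * v - grad(x) + noise)
            x = x + dt * v
        path.append(x.copy())
    return np.array(path)

for mode in ("gradient_flow", "hamiltonian"):
    p = explore(mode, x0=[2.0, -1.5])
    print(f"{mode:14s} final |x| = {np.linalg.norm(p[-1]):.3f}, "
          f"region covered (per-axis range) = {np.ptp(p, axis=0).round(2)}")
```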
Compared to the (stochastic) gradient flow, which leads to the convergence to the minimum in the fast manner, the stochastic Hamiltonian system are expected to well explore the quadratic trust region with the convergence kept. Inspired by theoretical convergence consequences of Hamiltonian dynamics with Quadratic form BID44 BID25, we propose to use stochastic Hamiltonian dynamical system for exploring the quadratic surrogates. 3. Multiple Quadratic Trust Regions with Parallel Exploration-Exploitation. Instead of using one quadratic cone as the surrogate, our method constructs the trust regions using multiple quadratic surrogates, where every surrogate is centered by one sample point. In this way, the information of multiple sample points can be still preserved. Further, to enjoy the speedup of parallel computation, the proposed method can be accelerated through exploring the minimum from multiple trust regions (using multiple Hamiltonian dynamical sys- Our work is inspired by the recent progress in the continuous-time convex minimizers BID44 BID15 BID47 on convex functions, where the optimization algorithms are considered as the discrete-time numerical approximation to some (stochastic) ordinary differential equations (ODEs) or dynamics, such as Itô processes for SGD algorithms BID15 or Hamiltonian systems for Nesterov's accelerated SGD BID44. We intend to first study the new ODE and dynamical system as a continuous-time DFO minimizer that addresses above three research issues. With the new ODE, we aim at proposing the discrete-time approximation as the algorithms for black-box optimization. Our Contributions. Specifically, we make following contributions. To address the three technical challenges, a continuous-time minimizer for derivative-free optimization based on a Hamiltonian system coupling two processes for exploration and exploitation respectively. Based on the proposed dynamical system, an algorithm, namely SHE 2 -Stochastic Hamiltonian Exploration and Exploitation, as a discrete-time version of the proposed dynamical system, as well as P-SHE 2 that parallelizes SHE 2 for acceleration. With the proposed algorithms, a series of experiments to evaluate SHE 2 and P-SHE 2 using real-world applications. The two algorithms outperform a wide range of DFO algorithms with better convergence. To the best of our knowledge, this work is the first to use a Hamiltonian system with coupled process for DFO algorithm design and analysis. In this section, we first review the most relevant work of trust region methods for DFO problem, then present the preliminaries of this work. The trust region algorithms can be categorized by the model functions used for surrogates. Generally, there are two types of algorithms adopted: Gaussian Process (GP) BID43 BID45 BID46 BID23 or Quadratic functions BID34 BID39 BID30 for surrogation. Blessed by the power of Bayesian nonparameteric statistics, Gaussian process can well fit the trust regions, with confidence bounds measured, using samples evaluated by the blackbox function. However, the GP-based surrogation cannot work in high dimension and cannot scale-up with large number of samples. To solved this problem, GP-based surrogation algorithms using the kernel gradients BID46 and mini-batch BID23 have been recently studied. On the other hand, the quadratic surrogation BID34 indeed approximates the trust region through interpolating the second-order Taylor expansion of the blackbox objective. 
With incoming points evaluated, there frequently needs to numerically estimate and adapt the inverse Hessian matrix and gradient vector, which is extremely time-consuming and sample-inefficiency (with sample BID34). Following such settings, BID39 proposed to a second-order algorithm for blackbox variational inference based on quadratic surrogation, while BID30 ) leveraged a Gaussian Mixture Model (multiple quadratic surrogations) to fit the policy search space over blackbox probabilistic distribution for policy optimization. A novel convex model generalizing the quadratic surrogation has been recently proposed to characterize the loss for structured prediction BID1. DISPLAYFORM0 In addition, some evolutionary strategies, such as Covariance Matrix Adaptation Evolution Strategy (CMA-ES) BID13 BID0, indeed behave as a sort of quadratic surrogate as well. Compared to the common quadratic surrogate, CMA-ES models the energy of blackbox function using a multivariate Gaussian distribution. For every iteration, CMA-ES draws a batch of multiple samples from the distribution, then statistically updates parameters of the distribution using the samples with blackbox evaluation. CMA-ES can be further accelerated with parallel blackbox function evaluation and has been used for hyperparameter optimization of deep learning BID22. Here, we review the Nesterov's accelerated method for quadratic function minimization. We particularly are interested in the ODE of Nesterov's accelerated method and interpret behavior of the ODE as a Hamiltonian dynamical system. Corollary 1 (ODE of Nesterov's Accelerated Method). According to BID44, the discretetime numerical format of the Nesterov's accelerated method BID27 can be viewed as an ODE as follow. Z DISPLAYFORM0 where f (X) is defined as the objective function for minimization and ∇ X f (Z(t)) refers to the gradient of the function on the point Z(t). Above ODE can converge with strongly theoretical consequences if the function f (X) is convex with some smoothness assumptions BID44. Corollary 2 (Convergence of Eq 1 over Quadratic Loss). Let's set f (X) = DISPLAYFORM1 According to the ODE analysis of Nestereov's accelerated method BID44, the ODE listed in Eq 1 converges with increasing time t at the following rate: DISPLAYFORM2 where X refers to the initial status of the ODE. The proof has been given in BID44. In this section, we present the proposed Hamiltonian system for Black-Box minimization via exploration and exploitation. Then, we introduce the algorithms and analyze its approximation to the dynamical systems. Given a black-box objective function f (X) and X ∈ R d, we propose to search the minimum of f (X) in the d-dimensional vector space R d, using a novel Hamiltonian system, derived from the ODE of Nesterov's accelerated method and Eq 1, yet without the derivative of f (X) needed. Definition 1 (Quadratic Loss Function). Given two d-dimensional vectors X and Y, we characterizes the Euclid distance between the two vectors using the function as follow. DISPLAYFORM0 where the partial derivative of Q on X should be DISPLAYFORM1 Definition 2 (Stochastic Hamiltonian Exploration and Exploitation). As was shown in Eq. 4, a Hamiltonian system is designed with following two coupled processes: exploration process X(t) and exploitation process Y (t), where t refers to the searching time. These two processes are coupled withn each other. 
Specifically, the exploration process X(t) in Eq 4 uses a second order ODE to track the dynamic process Y (t), while the exploiting process Y (t) always memorizes the minimum point (i.e., X(τ)) that have been reached by X(t) from time 0 to t. DISPLAYFORM2 where DISPLAYFORM3 indicates the fastest direction to track Y (t) from X(t); and the perturbation term ζ(t) referring to an unbiased random noise with controllable bound β t ← −(α t + 3/t) and γ t ← 3/t; 5: DISPLAYFORM4 6: DISPLAYFORM5 else 12: DISPLAYFORM6 end if 14: end for 15: return Y T; ζ(t) 2 ≤ would help the system escape from an unstable stationary point in even shorter time. In the above dynamical system, we treat Y (t) as the minimizer of the black-box function f (X). 2 approximates the black-box function f (X) using a simple yet effective quadratic function, then leverages the ODE listed in Eq 1 to approximate the minimum with the quadratic function. With the new trajectories traversed by Eq 1, the quadratic function would be updated. Through repeating such surrogation-approximation-updating procedures, the ODE continuously tracks the time-dependent evolution of quadratic loss functions and finally stops at a stationary point when the quadratic loss functions is no longer updated (even with new trajectories traversed). Remark 1. We can use the analytical BID44 to interpret the dynamical system (in Eq 4) as an adaptive perturbated dynamical system that intends to minimize the Euclid distance between X(t) and Y (t) at each time t. The memory complexity of this continuous-time minimizer is O, where a Markov process Y (t) is to used to memorize the status quo of local minimum during exploration and exploitation. Theorem 1 (Convergence of SHE 2 Dynamics). Let's denote x * as a possible local minimum point of the landscape function f (x). We have as t → ∞, with high probability, that X(t) → x *, where X(t) is the solution to.Please refer to the Lemma 1 and Lemma 2 in the Appendix for the proof of above theorems. We will discuss the rate of convergence, when introducing SHE 2 algorithm as a discrete-time approximation to SHE 2.3.2 SHE 2: ALGORITHM DESIGN AND ANALYSIS Given a black-box function f (x) and a sequence of non-negative step-size α t (t=0, 1, 2, . . ., T), which is small enough, as well as the scale of perturbation ε, we propose to implement SHE 2 as Algorithm 1. The output of algorithm Y T refers to the value of Y t in the last iteration (i.e., the t th iteration). The whole algorithm only uses the evaluation of function f (x) for comparisons, without computing its derivatives. In each iteration, only the variable Y t is dedicated to memorize the local minimum in the sequence of X 1, X 2,..., X t. Thus the memory complexity of SHE 2 is O.In terms of convergence, Jamieson et al BID16 provided an universal lower bound on the convergence rate of DFO based on the "boolean-valued" comparison of (noisy) function evaluation. SHE 2 should enjoy the same convergence rate Ω(1/ √ T) without addressing any further assumptions. 
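Since several update lines of Algorithm 1 were lost in extraction, the following is a best-effort sketch of a SHE 2-style loop reconstructed from the dynamics in Eq. 4: a damped second-order update of X_t pulled towards the running minimum Y_t, with a bounded perturbation of norm ε, using only evaluations of f. The exact discretization (and the precise roles of α_t, β_t, γ_t in lines 4-7 of Algorithm 1) is therefore an assumption.

```python
import numpy as np

def she2(f, x0, T=2000, alpha=0.1, eps=0.5, seed=0):
    """Sketch of SHE2: perturbed, 3/t-damped second-order dynamics X_t tracking
    the best point Y_t found so far; only blackbox evaluations of f are used."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    y, fy = x.copy(), f(x)                        # Y_t memorizes the running minimum
    for t in range(1, T + 1):
        zeta = rng.standard_normal(x.shape)
        zeta *= eps / np.linalg.norm(zeta)        # bounded perturbation, ||zeta|| = eps
        # Discretization of  X'' + (3/t) X' + (X - Y) = zeta  (Eq. 4 with quadratic Q)
        v += alpha * (-(3.0 / t) * v - (x - y) + zeta)
        x = x + alpha * v
        fx = f(x)                                  # one blackbox evaluation per step
        if fx < fy:                                # exploitation: update the memory Y_t
            y, fy = x.copy(), fx
    return y, fy

# Example usage on a toy blackbox (a shifted quadratic):
f = lambda z: float(np.sum((z - np.array([1.0, -2.0])) ** 2))
print(she2(f, x0=[3.0, 0.0]))
```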
Here, we would demonstrate that the proposed algorithm behaves as a discrete-time approximation to the dynamical systems of X(t) and Y (t) addressed in Eq 4, while as α t → 0 the sequence of X t and Y t (for 1 ≤ t ≤ T) would converge to the behavior of continuous-time minimizer -coupled processes X(t) and Y (t).Given an appropriate constant step-size α t → 0 for t = 1, 2..., T, we can rewrite the the sequences X t described in lines 4-7 of Algorithm 1 as the following Stochastic Differential Equation (SDE) of X ω (t) with the random noise ω(t): DISPLAYFORM0 where ζ(t) refers to the continuous-time dynamics of sequence ζ 1, ζ 2,..., ζ T and |ζ(t)| 2 = ε for every time t. Through combining above two ODEs and Lemma 1, we can obtain the SDE of X(t) based on the perturbation ζ(t) as: DISPLAYFORM1 The sequence Y t (t=0, 1, 2, · · · T) always exploits the minimum point that has been already found by X t at time t. Thus, we can consider Y t is the discrete-time of Y (t) that exploits the minimum traversed by X(t). In this way, we can consider the coupled sequences of X t and Y t (for 1 ≤ t ≤ T) as the discrete-time form of the proposed dynamical system with X(t) and Y (t). To enjoy the speedup of parallel computation, we propose a new Hamiltonian dynamical system with a set of ODEs that leverage multiple pairs of coupled processes for exploration and exploitation in parallel. Then, we present the algorithm design as a discrete-time approximation to the ODEs. 2 DYNAMICAL SYSTEM Given a black-box objective function f (X) and X ∈ R d, we propose to search the minimum of f (X) in the d-dimensional vector space R d, using following systems. Definition 3 (Parallel Stochastic Hamiltonian Exploration and Exploitation). As was shown in Eq. 7, a Hamiltonian system is designed with N pairs of coupled exploration-exploitation processes: X i (t) and Y i (t) for 1 ≤ i ≤ N that explores and exploits the minimum in-parallel from N (random/unique) starting points, and an overall exploitation process Y (t) memorizing the local minimum traversed by the all N pairs of coupled processes. Specifically, for each pair of coupled processes, a new surrogation model Q δ (X i (t), Y i (t), Y (t)) has been proposed to measure the joint distance from X i (t) to Y i (t) and Y (t) respectively, where δ > 0 refers to a trade-off factor weighted-averaging the two distances. DISPLAYFORM0 where DISPLAYFORM1 indicates the fastest direction to track Y i (t) and Y (t), jointly, from X(t). In the above dynamical system, we treat Y (t) as the minimizer of the black-box function f (X). Remark 2. We understand the dynamical system listed in Eq 7 as a perturbated dynamical system with multiple state variables, where all variables are coupled to search the minimum of f (X) through X i (t) (for 1 ≤ i ≤ N). The memory complexity of this continuous-time minimizer is O(N), where every Markov process Y i (t) is to used to memorize the status quo of local minimum traversed by the corresponding processes. 6: for t = 1, 2..., T do 7:for j = 1, 2, 3,..., N in Parallel do 8:/* X(t) update for the j th SHE 2 thread*/ 9:β t ← −(α t + 3/t) and γ t ← 3/t;10: DISPLAYFORM0 11: DISPLAYFORM1 12: DISPLAYFORM2 13: DISPLAYFORM3 DISPLAYFORM4 where ζ j (t) refers to the continuous-time dynamics of sequence ζ j 1, ζ j 2,..., ζ j T and |ζ j (t)| 2 = ε for every time t. Through combining above three ODEs and Eq. 
8, we can obtain the ODE of X(t) as: DISPLAYFORM5 Using same the settings, we can conclude that X j t would have similar behavior as X i (t) (for 1 ≤ i ≤ N in Eq 7). Thus, Algorithm 2 can be viewed as a discrete-time approximation of dynamical systems in Eq 7. Since the sequence Y t always exploits the minimum point that has been found by all N threads at every time t, we can use the algorithm output Y T as the minimizer of f (x). The proposed P-SHE 2 algorithm can be viewed as a particle swarm optimizer Kennedy FORMULA1 with inverse-scale step-size settings. Compared to Particle Swarm, which usually adopts constant stepsize settings (i.e., α t, β t and γ t are fixed as a constant value), P-SHE 2 proposes to use a small α t, while setting β t = −(α t + 3/t) and γ t = 3/t for each (the t th) iteration. Such settings help the optimizer approximates to the Nesterov's scheme, so as to enjoy faster convergence speed, under certain assumption. In terms of contribution, our research made as yet an rigorous analysis for Particle Swarm through linking it to to Nesterov's scheme; BID44. We provide three sets of experiments to validate our algorithms. In the first set of experiments, we demonstrate the performance of SHE 2 and P-SHE 2 to minimize two non-convex functions through the comparisons to a set of DFO optimizers, including Gaussian Process optimization algorithms (GP-UCB) , Powell's BOBYQA methods BID35, Limited Memory-BFGS-B (L-BFGS) BID50, Covariance Matrix Adaptation Evolution Strategy (CMA-ES) BID13, and Particle Swarm optimizer (PSO) BID18. For the second set of experiments, we use the same set of algorithms to train logistic regression BID20 and support vector machine BID6 classifiers, on top of benchmark datasets, for supervised learning tasks. In the third set, we use P-SHE 2 to optimize the hyper-parameters of ResNet-50 for the performance tuning on Flower 102 and MIT Indoor 67 benchmark datasets under transfer learning settings. Figure 2 presents the performance comparison between P-SHE 2, SHE 2 and the baseline algorithms using two 2D benchmark nonconvex functions-Franke's function and Peaks function. Figure 2.a and c present the landscape of these two functions, while Figure 2.b and d present the performance evaluation of P-SHE 2, SHE 2 and baseline algorithms on these two functions. All these algorithms are tuned with best parameters and evaluated for 20 times, while averaged performance is presented. Specifically, we illustrate how these algorithms would converge with increasing number of iterations. Obviously, on Franke's function, only P-SHE 2, i.e., the P-SHE 2 algorithm with N = 10 search threads, CMA-ES, GP-UCB algorithms and BOBYQA converge to the global minimum, while the rest of algorithms, including SHE 2 and PSO, converge to the local minimum. Though P-SHE 2 needs more iterations to converge to the global minimum, its per iteration time consumption is significantly lower than the other three convergeable algorithms (shown in Figure 2.e). The same comparison can be also observed from the comparison using Peaks function (in Figures 2.b and 2 .c, only CMA-ES and P-SHE 2 converge to global minimum in the given number of iterations). Compared P-SHE 2 to PSO, they both use 10 search threads with the same computational/memory complexity, while P-SHE 2 converges much faster than PSO. The same phenomena can be also observed from the comparison between SHE 2 and PSO, both of which search with single thread. 
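Before turning to the classifier-training experiments below, here is a corresponding sketch of the parallel variant (Algorithm 2), under the same assumptions as the SHE 2 sketch above: N threads, each pulled towards a δ-weighted combination of its own best point Y^j and the global best Y as in Definition 3, with the 3/t damping discussed above. The N blackbox evaluations per iteration are the part that parallelizes.

```python
import numpy as np

def p_she2(f, x0s, T=2000, alpha=0.1, eps=0.5, delta=0.5, seed=0):
    """Sketch of P-SHE2: N coupled SHE2-style threads sharing a global best point."""
    rng = np.random.default_rng(seed)
    X = np.array(x0s, dtype=float)                 # (N, d) current positions
    V = np.zeros_like(X)
    Yi = X.copy()                                  # per-thread best points Y^j
    Fi = np.array([f(x) for x in X])               # per-thread best values
    g = int(np.argmin(Fi))
    Y, Fg = Yi[g].copy(), Fi[g]                    # global best point Y
    for t in range(1, T + 1):
        zeta = rng.standard_normal(X.shape)
        zeta *= eps / np.linalg.norm(zeta, axis=1, keepdims=True)
        anchor = delta * Yi + (1.0 - delta) * Y    # joint attraction (Definition 3)
        V += alpha * (-(3.0 / t) * V - (X - anchor) + zeta)
        X = X + alpha * V
        F = np.array([f(x) for x in X])            # N evaluations, run in parallel
        better = F < Fi
        Yi[better], Fi[better] = X[better], F[better]
        g = int(np.argmin(Fi))
        if Fi[g] < Fg:
            Y, Fg = Yi[g].copy(), Fi[g]
    return Y, Fg

f = lambda z: float(np.sum((z - 1.0) ** 2) + 0.5 * np.sin(3.0 * z).sum())  # toy nonconvex blackbox
x0s = np.random.default_rng(1).uniform(-3.0, 3.0, size=(10, 2))
print(p_she2(f, x0s))
```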
We can suggest that the adaptive step-size settings inherent from Nesterove's scheme accelerate the convergence speed. We use above algorithms to train logistic regression (LR) and SVM classifiers using Iris (4 features, 3 classes and 150 instances), Breast (32 features, 2 classes and 569 instances) and Wine (13 features, 3 classes and 178 instances) datasets. We treat the loss functions of logistic regression and SVM as black-box functions and parameters (e.g., projection vector β for logistic regression) as optimization outcomes. Note that the number of parameters for multi-class (#class ≥ 3) classification is #class × #f eatures, e.g., 39 for wine data. We don't include GP-UCB in the comparison, as it is extremely time-consuming to scale-up in high-dimensional settings. FIG2 demonstrates how loss function could be minimized by above algorithms with iterations by iterations. For both classifiers on all three datasets, P-SHE 2-the P-SHE 2 algorithms with N = 100 search threads, outperforms all rest algorithms with the most significant loss reduction and the best convergence performance. We also test the accuracy of trained classifiers using the testing datasets. Table. 1 shows that both classifiers trained by P-SHE 2 enjoys the best accuracy among all above DFO algorithms and the accuracy is comparable to those trained using gradientbased optimizers. All above experiments are carried out under 10-folder cross-validation. Note that the accuracy of the classifiers trained by P-SHE 2 is closed to/or even better than some fine-tuned gradient-based solutions BID8 BID40 ). To test the performance of P-SHE 2 for derivative-free optimization with noisy black-box function evaluation, We use P-SHE 2 to optimize the hyper-parameter of ResNet-50 networks for Flower BID29 and MIT Indoor 67 classification BID37 ) tasks. The two networks are pre-trained using ImageNet BID19 and Place365 datasets BID49, respectively. Specifically, we design a black-box function to package the training procedure of the ResNet-50, where 12 continuous parameters, including the learning rate of procedure, type of optimizers (after simple discretization), the probabilistic distribution of image pre-processing operations for randomized data augmentation and so on, are interfaced as the input of the function while the validation loss of the network is returned as the output. We aim at searching the optimal parameters with the lowest validation loss. The experiments are all based on a Xeon E5 cluster with many available TitanX, M40x8, and 1080Ti GPUs. Our experiments compare P-SHE 2 with a wide range of solvers and hyper-parameter tuning tools, including PSO, CMA-ES BID22, GP-UCB BID17 and BOBYQA under the same pre-training/computing settings. Specifically, we adopt the vanilla implementation of GP-UCB and BOBYQA (with single search thread), while P-SHE 2, PSO and CMA-ES are all with 10 search threads for parallel optimization. The experimental show that all these algorithms can well optimize the hyer-parameters of ResNet for the better performance under the same settings, while P-SHE 2 has ever searched the hyperparameters with the lowest validation loss in our experiments (shown in FIG4). Due to the restriction of PyBOBYQA API, we can only provide the function evaluation of the final solution obtained by BOBYQA as a flatline in FIG4. In fact, P-SHE 2, PSO and CMA-ES may spend more GPU hours than GP-UCB and BOBYQA due to the parallel search. 
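The next paragraph continues the hyper-parameter comparison; before that, the sketch below shows how a training loss can be wrapped as a blackbox objective in the spirit of the logistic-regression experiments above. It reuses the `p_she2` sketch from the previous section and scikit-learn's Iris data purely for illustration; the ridge constant and starting points are arbitrary choices, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_iris

X_data, y_data = load_iris(return_X_y=True)
n_features, n_classes = X_data.shape[1], 3
dim = n_features * n_classes                      # 4 features x 3 classes = 12 parameters

def softmax_loss(w_flat):
    """Blackbox objective: multinomial logistic loss of a flat parameter vector."""
    W = np.asarray(w_flat).reshape(n_features, n_classes)
    logits = X_data @ W
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(y_data)), y_data].mean()
    return nll + 1e-3 * np.sum(np.asarray(w_flat) ** 2)     # small ridge term (our choice)

# p_she2 is the sketch from the previous section; any DFO solver could be used here.
starts = np.random.default_rng(2).normal(size=(20, dim))
w_best, loss_best = p_she2(softmax_loss, starts, T=1000)
pred = np.argmax(X_data @ w_best.reshape(n_features, n_classes), axis=1)
print("loss:", loss_best, "train accuracy:", (pred == y_data).mean())
```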
For the fair comparison, we also evaluate GP-UCB and BOBYQA with more than 100 iterations til the convergence, where GP-UCB can achieve 0.099854 validation error (which is comparable to the three parallel solvers) for Flower 102 task. Note that we only claim that P-SHE 2 can be used for hyerparameter optimization with decent performance. We don't intend to state that P-SHE 2 is the best for hyerparameter tuning, as the performance of the three parallel solvers are sometimes randon and indeed close to each other. In this paper, we present SHE 2 and P-SHE 2 -two derivative-free optimization algorithms that leverage a Hamiltonian exploration and exploitation dynamical systems for black-box function optimization. Under mild condition SHE 2 algorithm behaves as a discrete-time approximation to a Nestereov's scheme ODE BID44 over the quadratic trust region of the blackbox function. Moreover, we propose P-SHE 2 to further accelerate the minimum search through parallelizing multiple SHE 2 -alike search threads with simple synchronization. Compared to the existing trust region methods, P-SHE 2 uses multiple quadratic trust regions with multiple (coupled) stochastic Hamiltonian dynamics to accelerate the exploration-exploitation processes, while avoiding the needs of Hessian matrix estimation for quadratic function approximation. Instead of interpolating sampled points in one quadratic function, P-SHE 2 defacto constructs one quadratic surrogate (with identity Hessian) for each sampled point and leverages parallel search threads with parallel black-box function evaluation to boost the performance. Experiment show that P-SHE 2 can compete a wide range of DFO algorithms to minimize nonconvex benchmark functions, train supervised learning models via parameter optimization, and fine-tune deep neural networks via hyperparameter optimization. 2 DYNAMICAL SYSTEM Our goal is to show that in the system we have X(t) → x * as t → ∞, where x * is a local minimum point of the landscape function f (x). Definition 4. We say the point x * is a local minimum point of the function f (x) if and only if f (x *) ≤ f (x) for any x ∈ U (x *), where U (x *) which is any open neighborhood around the point x *.Let us first remove the noise ζ(t) in our system. Thus we obtain the following deterministic dynamical system (X 0 (t), Y 0 (t)): DISPLAYFORM0 In the equations (a) and (b) of, the pair of processes (X 0 (t), Y 0 (t)) is a pair of coupled processes. In the next Lemma, we show that X 0 (t) converges to the minimum point of f (x) along the trajectory of X 0 (t) as t → ∞.Lemma 1. For the deterministic system, we have that DISPLAYFORM1 Proof. SetẊ 0 (t) = P (t), we can write equation (a) in in Hamiltonian form DISPLAYFORM2 Set H(X, Y, P) = P 2 2 + Q(X, Y). Then we have DISPLAYFORM3 As we have DISPLAYFORM4, we see from that we have DISPLAYFORM5 Notice that by our construction part (b) of the coupled process, we have f (Y 0 (t)) ≤ f (X 0 (s)) for 0 ≤ s ≤ t. If X 0 (t) − Y 0 (t) = 0, then (X 0 (t) − Y 0 (t)) ·Ẏ 0 (t) = 0. If X 0 (t) − Y 0 (t) = 0, then we see that Y 0 (t) = X 0 (s 0) for some 0 ≤ s 0 < t, and f (X 0 (s 0)) < f (X 0 (t)). By continuity of the trajectory of X 0 (t) as well as the function f (x), we see that in this caseẎ 0 (t) = 0, so that (X 0 (t) − Y 0 (t)) ·Ẏ 0 (t) = 0. Thus we see that actually gives d dt H(X 0 (t), Y 0 (t), P (t)) = − 3 t P (t) 2 2 ≤ 0.From here, we know that H(X 0 (t), Y 0 (t), P (t)) keeps decaying until P (t) → 0 and X 0 (t) − Y 0 (t) 2 → 0, as desired. 
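Several displays in the proof above were lost in extraction; for readability, the energy computation can be restated as follows. This only rewrites the steps already given in the text, with Q(X, Y) = (1/2)||X - Y||^2 as in Definition 1 and P the velocity of X_0:

```latex
% Deterministic dynamics (a) in Hamiltonian form, with P = \dot{X}_0:
%   \dot{X}_0 = P, \qquad \dot{P} = -\tfrac{3}{t}\,P - \nabla_X Q(X_0, Y_0).
\begin{aligned}
H(X, Y, P) &= \tfrac{1}{2}\|P\|_2^2 + Q(X, Y),
  \qquad Q(X, Y) = \tfrac{1}{2}\|X - Y\|_2^2, \\
\frac{d}{dt}\, H\big(X_0(t), Y_0(t), P(t)\big)
  &= P \cdot \dot{P} + \nabla_X Q \cdot \dot{X}_0 + \nabla_Y Q \cdot \dot{Y}_0 \\
  &= P \cdot \Big(-\tfrac{3}{t}\,P - \nabla_X Q\Big) + \nabla_X Q \cdot P
     + (Y_0 - X_0) \cdot \dot{Y}_0 \\
  &= -\tfrac{3}{t}\,\|P\|_2^2 + (Y_0 - X_0) \cdot \dot{Y}_0
   \;=\; -\tfrac{3}{t}\,\|P\|_2^2 \;\le\; 0,
\end{aligned}
```

where the last equality uses the case analysis in the proof: either X_0(t) = Y_0(t), or the time derivative of Y_0 vanishes because Y_0 is frozen at a previously visited point of strictly smaller value; hence H decays until P → 0 and ||X_0 − Y_0|| → 0.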
Since f (Y 0 (t)) ≤ min 0≤s≤t f (X 0 (s)), Lemma 1 tells us that as t → ∞, the deterministic process X 0 (t) in approaches the minimum of f along the trajectory traversed by itself. Let us now add the noise ζ(t) to part (a) of, so that we come back to our original system. We would like to argue that with the noise ζ(t), we actually have lim t→∞ min 0≤s≤t f (X(s)) = f (x *), and thus X(t) → x * when t → ∞ as desired. Lemma 2. For the process X(t) in part (a) of the system, we have X(t) → x * as t → ∞, where x * is a local minimum of the landscape function f (x).Proof. We first notice that we have ζ(t) = ε ξ t ξ t 2, where ξ t ∼ N (0, I) is a sequence of i.i.d.normal, so that ζ(t) 2 = ε, Eζ(t) = 0. Viewing as a small random perturbation (see BID10, Chapter 2, Section 1)) of the system we know that for any δ > 0 fixed, we have DISPLAYFORM6 as ε → 0. From here we know that the process X(t) behaves close to X 0 (t) with high probability, so that by Lemma 1 we know that with high probability we have lim t→∞ f (X(t)) − min 0≤s≤t f (X(s)) = 0.Our next step is to improve the above asymptotic to X(t) → x * as t → ∞. Comparing with, we see that it suffices to show DISPLAYFORM7 To demonstrate, we note that when t is large, we can ignore in the damping term 3 tẊ (t) and obtain a friction-less dynamics f (X(τ)).Combining Lemma 1, and ∂ ∂X Q(X, Y) = X − Y we further see that the term ∂ ∂X Q(X, Y) also contribute little in. Thus part (a) of reduces to a very simple equation DISPLAYFORM8 Equation FORMULA1 enables the process X(t) to explore locally in an ergodic way its neighborhood points, so that if FORMULA1 is not valid, then X(t + dt) will explore a nearby point at which f (X(t + dt)) is less that min 0≤s≤t f (X(t)), and thus will move to that point. This leads to a further decay in the value of f (X(t)), which demonstrates that in he limit t → ∞ we must have, and the Lemma concludes. Summarizing, we have the Theorem 1. Here we provide a short discussion on the convergence rate of the algorithm SHE2. In the previous appendix we have demonstrated that the system converges via two steps. Step 1 in Lemma 1 shows that the differential equation modeling Nesterov's accelerated gradient descent (see BID44) helps the process X(t) to "catch up" with the minimum point Y (t) on its path. Step 2 in Lemma 2 shows that when t → ∞ the noise term ζ(t) helps the process X(t) to reach local | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rklEUjR5tm | a new derivative-free optimization algorithms derived from Nesterov's accelerated gradient methods and Hamiltonian dynamics |
Binarized Neural Networks (BNNs) have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained. However, BNNs only binarize the model parameters and activations during propagations. Therefore, BNNs do not offer significant efficiency improvements during training, since the gradients are still propagated and used with high precision. We show there is no inherent difficulty in training BNNs using "Binarized BackPropagation" (BBP), in which we also binarize the gradients. To avoid significant degradation in test accuracy, we simply increase the number of filter maps in each convolution layer. Using BBP on dedicated hardware can potentially significantly improve the execution efficiency (e.g., reduce the dynamic memory footprint, memory bandwidth and computational energy) and speed up the training process with appropriate hardware support, even after such an increase in network size. Moreover, our method is ideal for distributed learning as it reduces the communication costs significantly (e.g., by ~32). Using this method, we demonstrate a minimal loss in classification accuracy on several datasets and topologies. | [
1,
0,
0,
0,
0,
0,
0,
0
] | ryKRRsm0Z | Binarized Back-Propagation all you need for completely binarized training is to is to inflate the size of the network |
The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure. An inappropriate selection can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during back-propagation. Understanding the theoretical properties of untrained random networks is key to identifying which deep networks may be trained successfully, as recently demonstrated by , who showed that for deep feedforward neural networks only a specific choice of hyperparameters known as the 'edge of chaos' can lead to good performance. We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information indeed propagates deeper for an initialization at the edge of chaos. By further extending this analysis, we identify a class of activation functions that improve the information propagation over ReLU-like functions. This class includes the Swish activation, $\phi_{swish}(x) = x \cdot \text{sigmoid}(x)$, used in , and . This provides a theoretical grounding for the excellent empirical performance of $\phi_{swish}$ observed in these contributions. We complement those previous results by illustrating the benefit of using a random initialization on the edge of chaos in this context. Deep neural networks have become extremely popular as they achieve state-of-the-art performance on a variety of important applications including language processing and computer vision; see, e.g., BID8. The success of these models has motivated the use of increasingly deep networks and stimulated a large body of work to understand their theoretical properties. It is impossible to provide here a comprehensive summary of the large number of contributions within this field. To cite a few relevant to our contributions, BID11 have shown that neural networks have exponential expressive power with respect to the depth while BID14 obtained similar results using a topological measure of expressiveness. We follow here the approach of BID14 and BID16 by investigating the behaviour of random networks in the infinite-width and finite-variance i.i.d. weights context where they can be approximated by a Gaussian process, as established by BID10 and BID9. In this paper, our contribution is two-fold. Firstly, we provide an analysis complementing the results of BID14 and BID16 and show that initializing a network with a specific choice of hyperparameters known as the 'edge of chaos' is linked to a deeper propagation of the information through the network. In particular, we establish that for a class of ReLU-like activation functions, the exponential depth scale introduced in BID16 is replaced by a polynomial depth scale. This implies that the information can propagate deeper when the network is initialized on the edge of chaos. Secondly, we outline the limitations of ReLU-like activation functions by showing that, even on the edge of chaos, the limiting Gaussian process admits a degenerate kernel as the number of layers goes to infinity. Our main result gives sufficient conditions for activation functions to allow a good 'information flow' through the network (Proposition 4) (in addition to being non-polynomial and not suffering from the exploding/vanishing gradient problem). These conditions are satisfied by the Swish activation φ swish (x) = x · sigmoid(x) used in BID4, BID2 and BID15.
In recent work, BID15 used automated search techniques to identify new activation functions and found experimentally that functions of the form φ(x) = x · sigmoid(βx) appear to perform indeed better than many alternative functions, including ReLU. Our paper provides a theoretical grounding for these . We also complement previous empirical by illustrating the benefits of an initialization on the edge of chaos in this context. All proofs are given in the Supplementary Material. We use similar notations to those of BID14 and BID9. Consider a fully connected random neural network of depth L, widths (N l) 1≤l≤L, weights W 2 ) denotes the normal distribution of mean µ and variance σ 2. For some input a ∈ R d, the propagation of this input through the network is given for an activation function φ: R → R byThroughout the paper we assume that for all l the processes y l i are independent (across i) centred Gaussian processes with covariance kernels κ l and write accordingly y DISPLAYFORM0. This is an idealized version of the true processes corresponding to choosing N l−1 = +∞ (which implies, using Central Limit Theorem, that y l i (a) is a Gaussian variable for any input a). The approximation of y l i by a Gaussian process was first proposed by BID12 in the single layer case and has been recently extended to the multiple layer case by BID9 and BID10. We recall here the expressions of the limiting Gaussian process kernels. For any input DISPLAYFORM1 where F φ is a function that depends only on φ. This gives a recursion to calculate the kernel κ l; see, e.g., BID9 for more details. We can also express the kernel κ l in terms of the correlation c l ab in the l th layer used in the rest of this paper DISPLAYFORM2 where q l−1 a DISPLAYFORM3 b, is the variance, resp. correlation, in the (l − 1) th layer and Z 1, Z 2 are independent standard Gaussian random variables. when it propagates through the network. q l a is updated through the layers by the recursive formula q l a = F (q l−1 a), where F is the'variance function' given by DISPLAYFORM4 Throughout the paper, Z, Z 1, Z 2 will always denote independent standard Gaussian variables. We analyze here the limiting behaviour of q L a and c L a,b as the network depth L goes to infinity under the assumption that φ has a second derivative at least in the distribution sense 1. From now onwards, we will also assume without loss of generality that c 1 ab ≥ 0 (similar can be obtained straightforwardly when c 1 ab ≤ 0). We first need to define the Domains of Convergence associated with an activation function φ. Remark: Typically, q in Definition 1 is a fixed point of the variance function defined in equation 2. Therefore, it is easy to see that for any (σ b, σ w) such that F is increasing and admits at least one fixed point, we have K φ,corr (σ b, σ w) ≥ q where q is the minimal fixed point; i.e. q:= min{x : F (x) = x}. Thus, if we re-scale the input data to have q 1 a ≤ q, the variance q l a converges to q. We can also re-scale the variance σ w of the first layer (only) to assume that q 1 a ≤ q for all inputs a. The next gives sufficient conditions on (σ b, σ w) to be in the domains of convergence of φ. DISPLAYFORM0 DISPLAYFORM1 The proof of Proposition 1 is straightforward. We prove that sup F (x) = σ 2 w M φ and then apply the Banach fixed point theorem; similar ideas are used for C φ,δ.Example: For ReLU activation function, we have M ReLU = 2 and C ReLU,δ ≤ 1 for any δ > 0. almost surely and the outputs of the network are constant functions. 
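The recursions above for the variance q^l and the correlation c^l_ab are easy to evaluate by Monte Carlo. The sketch below (an illustrative sketch; the choice of Tanh, (σ_b, σ_w) and the sample size are arbitrary) iterates the variance function F(x) = σ_b^2 + σ_w^2 E[φ(√x Z)^2] together with the corresponding correlation update across layers.

```python
import numpy as np

rng = np.random.default_rng(0)
Z1, Z2 = rng.standard_normal(200_000), rng.standard_normal(200_000)

def F(q, phi, sb2, sw2):
    """Variance map: F(q) = sigma_b^2 + sigma_w^2 * E[phi(sqrt(q) Z)^2]."""
    return sb2 + sw2 * np.mean(phi(np.sqrt(q) * Z1) ** 2)

def layer_step(q, c, phi, sb2, sw2):
    """One layer of the (variance, correlation) recursion, assuming q_a = q_b = q."""
    u1 = np.sqrt(q) * Z1
    u2 = np.sqrt(q) * (c * Z1 + np.sqrt(1.0 - c ** 2) * Z2)
    q_new = F(q, phi, sb2, sw2)
    c_new = (sb2 + sw2 * np.mean(phi(u1) * phi(u2))) / q_new
    c_new = float(np.clip(c_new, -1.0, 1.0))   # guard against Monte Carlo noise
    return q_new, c_new

phi = np.tanh
sb2, sw2 = 0.05, 1.5 ** 2                # arbitrary (sigma_b^2, sigma_w^2)
q, c = 1.0, 0.5                          # input variance and input correlation
for layer in range(1, 31):
    q, c = layer_step(q, c, phi, sb2, sw2)
    if layer % 10 == 0:
        print(f"layer {layer:2d}:  q = {q:.4f},  c = {c:.4f}")
```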
FIG3 illustrates this behaviour for d = 2 for ReLU and Tanh using a network of depth L = 10 with N l = 100 neurons per layer. The draws of outputs of these networks are indeed almost constant. To refine this convergence analysis, BID16 established the existence of q and c such that |q l a −q| ∼ e −l/ q and |c l ab −1| ∼ e −l/ c when fixed points exist. The quantities q and c are called'depth scales' since they represent the depth to which the variance and correlation can propagate without being exponentially close to their limits. More precisely, if we write DISPLAYFORM0 ] then the depth scales are given by r = − log(α) −1 and c = − log(χ 1) −1. The equation χ 1 = 1 corresponds to an infinite depth scale of the correlation. It is called the edge of chaos as it separates two phases: an ordered phase where the correlation converges to 1 if χ 1 < 1 and a chaotic phase where χ 1 > 1 and the correlations do not converge to 1. In this chaotic regime, it has been observed in BID16 that the correlations converge to some random value c < 1 when φ(x) = Tanh(x) and that c is independent of the correlation between the inputs. This means that very close inputs (in terms of correlation) lead to very different outputs. Therefore, in the chaotic phase, the output function of the neural network is non-continuous everywhere. Definition 2. For (σ b, σ w) ∈ D φ,var, let q be the limiting variance 2. The Edge of Chaos, hereafter EOC, is the set of values of (σ b, σ w) satisfying DISPLAYFORM1 To further study the EOC regime, the next lemma introduces a function f called the'correlation function' simplifying the analysis of the correlations. It states that the correlations have the same asymptotic behaviour as the time-homogeneous dynamical system c DISPLAYFORM2 The condition on φ in Lemma 1 is violated only by activation functions with exponential growth (which are not used in practice), so from now onwards, we use this approximation in our analysis. Note that being on the EOC is equivalent to (σ b, σ w) satisfying f = 1. In the next section, we analyze this phase transition carefully for a large class of activation functions. (as we will see later EOC = {(0, √ 2)} for ReLU). Unlike the output in FIG3, this output displays much more variability. However, we will prove here that the correlations still converges to 1 even in the EOC regime, albeit at a slower rate. We consider activation functions φ of the form: φ(x) = λx if x > 0 and φ(x) = βx if x ≤ 0. ReLU corresponds to λ = 1 and β = 0. For this class of activation functions, we see (Proposition 2) that the variance is unchanged (q l a = q 1 a) on the EOC, so that q does not formally exist in the sense that the limit of q l a depends on a. However, this does not impact the analysis of the correlations. Proposition 2. Let φ be a ReLU-like function with λ and β defined above. Then for any σ w < 2 λ 2 +β 2 and DISPLAYFORM0 )} and, on the EOC, F (x) = x for any x ≥ 0.This class of activation functions has the interesting property of preserving the variance across layers when the network is initialized on the EOC. However, we show in Proposition 3 below that, even in the EOC regime, the correlations converge to 1 but at a slower rate. We only present the for ReLU but the generalization to the whole class is straightforward. 
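Numerically, a point on the edge of chaos can be found by solving q = F(q) together with χ1 = σ_w^2 E[φ'(√q Z)^2] = 1. Below is a small sketch for Tanh (an illustrative sketch, using Monte Carlo expectations and a crude grid over σ_w): for a fixed σ_b it reports the σ_w at which χ1 crosses 1.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal(50_000)
phi = np.tanh
dphi = lambda x: 1.0 - np.tanh(x) ** 2

def fixed_point_q(sb2, sw2, q0=1.0, iters=60):
    """Iterate q <- F(q) = sigma_b^2 + sigma_w^2 E[phi(sqrt(q) Z)^2] to convergence."""
    q = q0
    for _ in range(iters):
        q = sb2 + sw2 * np.mean(phi(np.sqrt(q) * Z) ** 2)
    return q

def chi1(sb2, sw2):
    """chi_1 = sigma_w^2 E[phi'(sqrt(q) Z)^2] evaluated at the limiting variance q."""
    q = fixed_point_q(sb2, sw2)
    return sw2 * np.mean(dphi(np.sqrt(q) * Z) ** 2)

sigma_b = 0.3
grid = np.linspace(1.0, 2.5, 31)
chis = np.array([chi1(sigma_b ** 2, sw ** 2) for sw in grid])
i = int(np.argmin(np.abs(chis - 1.0)))
print(f"sigma_b = {sigma_b}: edge of chaos near sigma_w ~ {grid[i]:.3f} (chi1 = {chis[i]:.3f})")
```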
Example: ReLU: The EOC is reduced to the singleton (σ FORMULA4 also performed a similar analysis by using the "Scaled Exponential Linear Unit" activation (SELU) that makes it possible to center the mean and normalize the variance of the post-activation φ(y). The propagation of the correlation was not discussed therein either. In the next , we present the correlation function corresponding to ReLU networks. This was first obtained in BID0.We present an alternative derivation of this and further show that the correlations converge to 1 at a polynomial rate of 1/l 2 instead of an exponential rate. Proposition 3 (ReLU kernel). Consider a ReLU network with parameters (σ DISPLAYFORM1 We now introduce a set of sufficient conditions for activation functions which ensures that it is then possible to tune (σ b, σ w) to slow the convergence of the correlations to 1. This is achieved by making the correlation function f sufficiently close to the identity function. Proposition 4 (Main Result). Let φ be an activation function. Assume that (i) φ = 0, and φ has right and left derivatives in zero and φ (0 +) = 0 or φ (0 −) = 0, and there DISPLAYFORM0, the function F with parameters (σ b, σ w) ∈ EOC is non-decreasing and lim σ b →0 q = 0 where q is the minimal fixed point of F, q:= inf{x : DISPLAYFORM1 Note that ReLU does not satisfy the condition (ii) since the EOC in this case is the singleton (σ 2 b, σ 2 w) =. The of Proposition 4 states that we can make f (x) close to x by considering σ b → 0. However, this is under condition (iii) which states that lim σ b →0 q = 0. Therefore, practically, we cannot take σ b too small. One might wonder whether condition (iii) is necessary for this to hold. The next lemma shows that removing this condition in a useless class of activation functions. The next proposition gives sufficient conditions for bounded activation functions to satisfy all the conditions of Proposition 4. DISPLAYFORM2, xφ(x) > 0 and xφ (x) < 0 for x = 0, and φ satisfies (ii) in Proposition 4. Then, φ satisfies all the conditions of Proposition 4.The conditions in Proposition 5 are easy to verify and are, for example, satisfied by Tanh and Arctan. We can also replace the assumption "φ satisfies (ii) in Proposition 4" by a sufficient condition (see Proposition 7 in the Supplementary Material). Tanh-like activation functions provide better information flow in deep networks compared to ReLU-like functions. However, these functions suffer from the vanishing gradient problem during back-propagation; see, e.g., BID13 and BID7. Thus, an activation function that satisfies the conditions of Proposition 4 (in order to have a good 'information flow') and does not suffer from the vanishing gradient issue is expected to perform better than ReLU. Swish is a good candidate. DISPLAYFORM3 1+e −x satisfies all the conditions of Proposition 4.It is clear that Swish does not suffer from the vanishing gradient problem as it has a gradient close to 1 for large inputs like ReLU. FIG15 (a) displays f for Swish for different values of σ b. We see that f is indeed approaching the identity function when σ b is small, preventing the correlations from converging to 1. FIG15 (b) displays a draw of the output of a neural network of depth 30 and width 100 with Swish activation, and σ b = 0.2. The outputs displays much more variability than the ones of the ReLU network with the same architecture. We present in TAB0 some values of (σ b, σ w) on the EOC as well as the corresponding limiting variance for Swish. 
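For ReLU on the edge of chaos ((σ_b, σ_w) = (0, √2)), the correlation function of Proposition 3 has the closed form f(x) = (1/π)(√(1 − x²) + (π − arccos x) x), the arc-cosine kernel of BID0; the display itself was lost above, so the formula is reproduced here from that standard result. The sketch below iterates the map and checks numerically that 1 − c^l decays at the 1/l² rate stated in the proposition (the product l²(1 − c_l) levels off at a constant).

```python
import numpy as np

def f_relu(x):
    """Correlation map of a ReLU network on the edge of chaos (arc-cosine kernel)."""
    return (np.sqrt(1.0 - x ** 2) + (np.pi - np.arccos(x)) * x) / np.pi

c = 0.2
for l in range(1, 10_001):
    c = f_relu(c)
    if l in (10, 100, 1_000, 10_000):
        gap = 1.0 - c
        print(f"l = {l:5d}:   1 - c_l = {gap:.3e},   l^2 * (1 - c_l) = {l * l * gap:.2f}")
```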
As condition (iii) of Proposition 4 is satisfied, the limiting variance q decreases with σ b. Other activation functions that have been shown to outperform empirically ReLU such as ELU BID1 ), SELU BID6 ) and Softplus also satisfy the conditions of Proposition 4 (see Supplementary Material for ELU). The comparison of activation functions satisfying the conditions of Proposition 4 remains an open question. We demonstrate empirically our on the MNIST dataset. In all the figures below, we compare the learning speed (test accuracy with respect to the number of epochs/iterations) for different activation functions and initialization parameters. We use the Adam optimizer with learning rate lr = 0.001. The Python code to reproduce all the experiments will be made available on-line.. In Figure 5, we compare the learning speed of a Swish network for different choices of random initialization. Any initialization other than on the edge of chaos in the optimization algorithm being stuck eventually at a very poor test accuracy of ∼ 0.1 as the depth L increases (equivalent to selecting the output uniformly at random). To understand what is happening in this case, let us recall how the optimization algorithm works. Let is the output of the network, and is the categorical cross-entropy loss. In the ordered phase, we know that the output converges exponentially to a fixed value (same value for all X i), thus a small change in w and b will not change significantly the value of the loss function, therefore the gradient is approximately zero and the gradient descent algorithm will be stuck around the initial value. from the vanishing gradient problem. Consequently, we expect Tanh to perform better than ReLU for shallow networks as opposed to deep networks, where the problem of the vanishing gradient is not encountered. Numerical confirm this fact. FIG18 shows curves of validation accuracy with confidence interval 90% (30 simulations). For depth 5, the learning algorithm converges faster for Tanh compared to ReLu. However, for deeper networks (L ≥ 40), Tanh is stuck at a very low test accuracy, this is due to the fact that a lot of parameters remain essentially unchanged because the gradient is very small. We have complemented here the analysis of BID16 which shows that initializing networks on the EOC provides a better propagation of information across layers. In the ReLU case, such an initialization corresponds to the popular approach proposed in BID3. However, even on the EOC, the correlations still converge to 1 at a polynomial rate for ReLU networks. We have obtained a set of sufficient conditions for activation functions which further improve information propagation when the parameters (σ b, σ w) are on the EOC. The Tanh activation satisfied those conditions but, more interestingly, other functions which do not suffer from the vanishing/exploding gradient problems also verify them. This includes the Swish function used in BID4, BID2 and promoted in BID15 but also.Our have also interesting implications for Bayesian neural networks which have received renewed attention lately; see, e.g., Hernandez-Lobato & Adams FORMULA4 and BID9. They show that if one assigns i.i.d. Gaussian prior distributions to the weights and biases, the ing prior distribution will be concentrated on close to constant functions even on the EOC for ReLU-like activation functions. 
To obtain much richer priors, our indicate that we need to select not only parameters (σ b, σ w) on the EOC but also an activation function satisfying Proposition 4. We provide in the supplementary material the proofs of the propositions presented in the main document, and we give additive theoretical and experimental . For the sake of clarity we recall the propositions before giving their proofs. A.1 CONVERGENCE TO THE FIXED POINT: PROPOSITION 1 DISPLAYFORM0 Proof. To abbreviate the notation, we use q l:= q l a for some fixed input a. Convergence of the variances: We first consider the asymptotic behaviour of q l = q l a. Recall that q l = F (q l−1) where, DISPLAYFORM1 The first derivative of this function is given by: DISPLAYFORM2 where Using the condition on φ, we see that for σ DISPLAYFORM0, the function F is a contraction mapping, and the Banach fixed-point theorem guarantees the existence of a unique fixed point q of F, with lim l→+∞ q l = q. Note that this fixed point depends only on F, therefore, this is true for any input a, and K φ,var (σ b, σ w) = ∞.Convergence of the covariances: Since M φ < ∞, then for all a, b ∈ R d there exists l 0 such that, for all l > l 0, | q l a − q l b | < δ. Let l > l 0, using Gaussian integration by parts, we have DISPLAYFORM1 We cannot use the Banach fixed point theorem directly because the integrated function here depends on l through q l. For ease of notation, we write c l:= c l ab, we have DISPLAYFORM2 l is a Cauchy sequence and it converges to a limit c ∈.At the limit DISPLAYFORM3 The derivative of this function is given by DISPLAYFORM4 By assumption on φ and the choice of σ w, we have sup x |f (x)| < 1, so that f is a contraction, and has a unique fixed point. Since f = 1, c = 1. The above is true for any a, b, therefore, DISPLAYFORM5 As an illustration we plot in FIG3 the variance for three different inputs with (σ b, σ w) =, as a function of the layer l. In this example, the convergence for Tanh is faster than that of ReLU. DISPLAYFORM6 where u 2 (x):= xZ 1 + √ 1 − x 2 Z 2. The first term goes to zero uniformly in x using the condition on φ and Cauchy-Schwartz inequality. As for the second term, it can be written as DISPLAYFORM7 again, using Cauchy-Schwartz and the condition on φ, both terms can be controlled uniformly in x by an integrable upper bound. We conclude using the Dominated convergence. Proposition 2. Let φ be a ReLU-like function with λ and β defined above. Then for any σ w < 2 λ 2 +β 2 and DISPLAYFORM0 )} and, on the EOC, F (x) = x for any x ≥ 0.Proof. We write q l = q l a throughout the proof. Note first that the variance satisfies the recursion: DISPLAYFORM1 For all σ w < 2 λ 2 +β 2, q = σ Proof. In this case the correlation function f is given by DISPLAYFORM2 • Let x ∈, note that f is differentiable and satisfies, DISPLAYFORM3 which is also differentiable. Simple algebra leads to DISPLAYFORM4 Since arcsin (x) = 1 √ 1−x 2 and f = 1/2, DISPLAYFORM5 We conclude using the fact that arcsin = x arcsin + √ 1 − x 2 and f = 1.• We first derive a Taylor expansion of f near 1. Consider the change of variable x = 1 − t 2 with t close to 0, then DISPLAYFORM6 we obtain that DISPLAYFORM7 DISPLAYFORM8 l < c l+1 then by taking the image by f (which is increasing because f ≥ 0) we have that c l+1 < c l+2, and we know that c 1 = f (c 0) ≥ c 0, so by induction the sequence c l is increasing, and therefore it converges (because it is bounded) to the fixed point of f which is 1. Now let γ l:= 1 − c l ab for a, b fixed. 
We note s = 2 √ 2 3π, from the series expansion we have that γ l+1 = γ l − sγ DISPLAYFORM9 DISPLAYFORM10 Assume that (i) φ = 0, and φ has right and left derivatives in zero and at least one of them is different from zero (φ (0 +) = 0 or φ (0 −) = 0), and there exists K > 0 such that DISPLAYFORM11, the function F with parameters (σ b, σ w,EOC) is non-decreasing and lim σ b →0 q = 0 where q is the minimal fixed point of F, q:= inf{x : DISPLAYFORM12 Proof. We first prove that K φ,var (σ b, σ w) ≥ q. We assume that σ b > 0, the case σ b = 0 is trivial since in this case q = 0 (the output of the network is zero in this case).Since F is continuous and DISPLAYFORM13 Using the fact that F is non-decreasing for any input a such that q 1 a ≤ q, we have q l is increasing and converges to the fixed point q. Therefore K φ,var (σ b, σ w) ≥ q. Now we prove that on the edge of chaos, we have DISPLAYFORM14 The EOC equation is given by σ 2 w E[φ ( √ qZ) 2 ] = 1. By taking the limit σ b → 0 on the edge of chaos, and using the fact that lim σ b →0 q = 0, we have σ DISPLAYFORM15 so that by taking the limit σ b → 0, and using the dominated convergence theorem, we have that DISPLAYFORM16 q + 1 and equation 6 holds. Finally since f is strictly convex, for all DISPLAYFORM17 q, we conclude using the fact that lim σ b →0 DISPLAYFORM18 Note however that for all σ b > 0, if (σ b, σ w) ∈ EOC, for any inputs a, b, we have lim l→∞ c l a,b = 1. Indeed, since f is usually strictly convex (otherwise, f would be equal to identity on at least a segment of) and f = 1, we have that f is a contraction (because f ≥ 0), therefore the correlation converges to the unique fixed point of f which is 1. Therefore, in most of the cases, the of Proposition 4 should be seen as a way of slowing down the convergence of the correlation to 1. Proof. Using the convexity of f and the of Proposition 4, we have in the limit σ b → 0, DISPLAYFORM19 2 which implies that var(φ ( √ qZ)) = 0. Therefore there exists a constant a 1 such that φ (√ qZ) = a 1 almost surely. This implies φ = a 1 almost everywhere. Proposition 5. Let φ be a bounded function such that φ = 0, φ > 0, φ (x) ≥ 0, φ(−x) = −φ(x), xφ(x) > 0 and xφ (x) < 0 for x = 0, and φ satisfies (ii) in Proposition 4. Then, φ satisfies all the conditions of Proposition 4.Proof. Let φ be an activation function that satisfies the conditions of Proposition 5.(i) we have φ = 0 and φ > 0. Since φ is bounded and 0 < φ < ∞, then there exists K such that DISPLAYFORM20 (ii) The condition (ii) is satisfied by assumption.(iii) Let σ b > 0 and σ w > 0. Using equation 3 together with φ > 0, we have F (x) ≥ 0 so that F is non-decreasing. Moreover, we have DISPLAYFORM21 DISPLAYFORM22. Now let's prove that the function DISPLAYFORM23 is increasing near 0 which means it is an injection near 0, this is sufficient to conclude (because we take q to be the minimal fixed point). After some calculus, we have DISPLAYFORM24 Using Taylor expansion near 0, after a detailed but unenlightening calculation the numerator is equal to −2φ DISPLAYFORM25, therefore the function e is increasing near 0.(iv) Finally, using the notations U 1:= √ qZ 1 and U 2 (x) = √ q(xZ 1 + √ 1 − x 2 Z 2), the first and second derivatives of the correlation function are given by DISPLAYFORM26 where we used Gaussian integration by parts. Let x > 0, we have that DISPLAYFORM27 where we used the fact that (Z 1, Z 2) = (−Z 1, −Z 2) (in distribution) and φ (−y) = −φ (y) for any y. Using xφ (x) ≤ 0, we have 1 {u1≥0} φ (u 1) ≤ 0. 
We also have for all y > 0, E[φ (U 2 (x))|U 1 = y] < 0, this is a consequence of the fact that φ is an odd function and that for x > 0 and y > 0, the mapping z 2 → xy + √ 1 − x 2 z 2 moves the center of the Gaussian distribution to a strictly positive number, we conclude that f (x) > 0 almost everywhere and assumption (iii) of Proposition 4 is verified. Proof. To abbreviate notation, we note φ:= φ Swish = xe x /(1 + e x) and h:= e x /(1 + e x) is the Sigmoid function. This proof should be seen as a sketch of the ideas and not a rigourous proof.• we have φ = 0 and φ = • As illustrated in TAB0 in the main text, it is easy to see numerically that (ii) is satisfied. Moreover, we observe that lim σ b →0 q = 0, which proves the second part of the (iii).• Now we prove that F > 0, we note g(x):= xφ (x)φ(x). We have DISPLAYFORM28 Define G by DISPLAYFORM29 which holds true for any positive number x. We thus have g(x) > G(x) for all real numbers x. Therefore E[g( √ xZ)] > 0 almost everywhere and F > 0. The second part of (iii) was already proven above.• Let σ b > 0 and σ w > 0 such that q exists. Recall that DISPLAYFORM30 In FIG3, we show the graph of E[φ (U 1)φ (U 2 (x))] for different values of q (from 0.1 to 10, the darkest line being for q = 10). A rigorous proof can be done but is omitted here. We observe that f has very small values when q is large, this is a of the fact that φ is concentrated around 0.Remark: On the edge of chaos, we have σ DISPLAYFORM31 this yields DISPLAYFORM32 The term E[φ ( √ qZ)φ(√ qZ)] is very small compared to 1 (∼ 0.01), therefore F (q) ≈ 1.Notice also that the theoretical corresponds to the equivalent Gaussian process, which is just an approximation of the neural network. Thus, using a value of (σ b, σ w) close to the EOC should not essentially change the quality of the . We can replace the conditions "φ satisfies (ii)" in Proposition 5 by a sufficient condition. However, this condition is not satisfied by Tanh. Proposition 7. Let φ be a bounded function such that DISPLAYFORM0, xφ(x) > 0 and xφ (x) < 0 for x = 0, and |Eφ (xZ) 2 | |x| −2β for large x and some β ∈. Then, φ satisfies all the conditions of Proposition 4.Proof. Let φ be an activation function that satisfies the conditions of Proposition 7. The proof is similar to the one of 5, we only need to show that having |Eφ (xZ) 2 | |x| −2β for large x and some β ∈ implies that (ii) of 4 is verified. We have that σ −β, so that we can make the term σ 2 w |Eφ (√ qZ) 2 | take any value between 0 and ∞. Therefore, there exists σ w such that (σ b, σ w) ∈ EOC, and assumption (ii) of Proposition 4 holds. In the proof of Proposition 5, we used the condition on φ (odd function) to prove that f > 0, however, in some cases when we can explicitly calculate f, we do not need φ to be defined. This is the case for Hard-Tanh, which is a piecewise-linear version of Tanh. We give an explicit calculation of f for the Hard-Tanh activation function which we note HT in what follows. We compare the performance of HT and Tanh based on a metric which we will define later. HT is given by DISPLAYFORM0 Recall the propagation of the variance q DISPLAYFORM1 where HT is the Hard-Tanh function. We have DISPLAYFORM2 This yields DISPLAYFORM3 where DISPLAYFORM4 -EDGE OF CHAOS:To study the correlation behaviour, we will assume that the variance converges to q. We have E(HT ( √ qZ) 2 ) = E(1 − FIG3 shows the EOC curve (condition (ii) is satisfied). FIG3 shows that is non-decreasing and FIG3 illustrates the fact that lim σ b →0 q = 0. 
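As a rough numerical companion to the Hard-Tanh discussion above, the following sketch traces points on the EOC curve by combining the fixed-point condition q = σ_b^2 + σ_w^2 E[φ(√q Z)^2] with the EOC equation σ_w^2 E[φ'(√q Z)^2] = 1 quoted earlier. This is our own illustrative computation, not the authors' code; the quadrature order, the grid of q values, and the parameterisation of the curve by q are arbitrary choices.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(100)

def gauss_expect(g):
    """E[g(Z)] for standard normal Z, via Gauss-Hermite quadrature."""
    return np.sum(weights * g(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

def hard_tanh(x):
    return np.clip(x, -1.0, 1.0)

def hard_tanh_deriv(x):
    return (np.abs(x) < 1.0).astype(float)

def eoc_curve(phi, dphi, q_grid):
    """Trace the EOC curve parameterised by the fixed-point variance q.
    EOC condition:      sigma_w^2 * E[phi'(sqrt(q) Z)^2] = 1
    Fixed point of F:   q = sigma_b^2 + sigma_w^2 * E[phi(sqrt(q) Z)^2]
    """
    points = []
    for q in q_grid:
        sw2 = 1.0 / gauss_expect(lambda z: dphi(np.sqrt(q) * z) ** 2)
        sb2 = q - sw2 * gauss_expect(lambda z: phi(np.sqrt(q) * z) ** 2)
        if sb2 >= 0:  # keep only admissible points (sigma_b^2 must be non-negative)
            points.append((np.sqrt(sb2), np.sqrt(sw2), q))
    return points

if __name__ == "__main__":
    for sb, sw, q in eoc_curve(hard_tanh, hard_tanh_deriv, np.linspace(1e-3, 2.0, 9)):
        print(f"q={q:6.3f}  sigma_b={sb:6.3f}  sigma_w={sw:6.3f}")
    # As q -> 0 the admissible sigma_b also shrinks towards 0, consistent with the
    # lim_{sigma_b -> 0} q = 0 behaviour discussed above.
```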
Finally, FIG3 shows that the function f is convex. Although the figures of F and f are shown for only one value of (σ b, σ w), the results are true for any value of (σ b, σ w) on the EOC. TAB2 presents a comparative analysis of the validation accuracy of ReLU and Swish when the depth is larger than the width, in which case the approximation by a Gaussian process is not accurate (notice that in the approximation of a neural network by a Gaussian process, we first let N l → ∞, then we consider the limit of large L). ReLU tends to outperform Swish when the width is smaller than the depth and both are small; however, we still observe a clear advantage of Swish for deeper architectures. | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | H1lJws05K7 | How to effectively choose Initialization and Activation function for deep neural networks |
The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of a LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done. In comparison with simpler linear models, techniques from deep learning have achieved impressive accuracy by effectively learning non-linear interactions between features. However, due to our inability to describe the learned interactions, this improvement in accuracy has come at the cost of state of the art predictive algorithms being commonly regarded as black-boxes. In the domain of natural language processing (NLP), Long Short Term Memory networks (LSTMs) BID2 have become a basic building block, yielding excellent performance across a wide variety of tasks BID10 BID7, while remaining largely inscrutable. In this work, we introduce contextual decomposition (CD), a novel interpretation method for explaining individual predictions made by an LSTM without any modifications to the underlying model. CD extracts information about not only which words contributed to a LSTM's prediction, but also how they were combined in order to yield the final prediction. By mathematically decomposing the LSTM's output, we are able to disambiguate the contributions made at each step by different parts of the sentence. To validate the CD interpretations extracted from an LSTM, we evaluate on the problem of sentiment analysis. In particular, we demonstrate that CD is capable of identifying words and phrases of differing sentiment within a given review. CD is also used to successfully extract positive and negative negations from an LSTM, something that has not previously been done. As a consequence of this analysis, we also show that prior interpretation methods produce scores which have document-level information built into them in complex, unspecified ways. For instance, prior work often identifies strongly negative phrases contained within positive reviews as neutral, or even positive. The most relevant prior work on interpreting LSTMs has focused on approaches for computing word-level importance scores, with evaluation protocols varying greatly. BID8 introduced a decomposition of the LSTM's output embedding into a sum over word coefficients, and demonstrated that those coefficients are meaningful by using them to distill LSTMs into rules-based classifiers. BID5 took a more black box approach, called Leave One Out, by observing the change in log probability ing from replacing a given word vector with a zero vector, and relied solely on anecdotal evaluation. presents a general gradient-based technique, called Integrated Gradients, which was validated both theoretically and with empirical anecdotes. 
In contrast to our proposed method, this line of work has been limited to word-based importance scores, ignoring the interactions between variables which make LSTMs so accurate. Another line of work BID3 BID14 has focused on analysing the movement of raw gate activations over a sequence. BID3 was able to identify some co-ordinates of the cell state that correspond to semantically meaningful attributes, such as whether the text is in quotes. However, most of the cell co-ordinates were uninterpretable, and it is not clear how these co-ordinates combine to contribute to the actual prediction. Decomposition-based approaches to interpretation have also been applied to convolutional neural networks (CNNs) BID0 BID12. However, they have been limited to producing pixel-level importance scores, ignoring interactions between pixels, which are clearly quite important. Our approach is similar to these in that it computes an exact decomposition, but we leverage the unique gating structure of LSTMs in order to extract interactions. Attention based models BID1 offer another means of providing some interpretability. Such models have been successfully applied to many problems, yielding improved performance BID11 ) (. In contrast to other word importance scores, attention is limited in that it only provides an indirect indicator of importance, with no directionality, i.e. what class the word is important for. Although attention weights are often cited anecdotally, they have not been evaluated, empirically or otherwise, as an interpretation technique. As with other prior work, attention is also incapable of describing interactions between words. Given an arbitrary phrase contained within an input, we present a novel decomposition of the output of an LSTM into a sum of two contributions: those ing solely from the given phrase, and those involving other factors. The key insight behind this decomposition is that the gating dynamics unique to LSTMs are a vehicle for modeling interactions between variables. Over the past few years, LSTMs have become a core component of neural NLP systems. Given a sequence of word embeddings x 1, ..., x T ∈ R d1, a cell and state vector c t, h t ∈ R d2 are computed for each element by iteratively applying the below equations, with initialization h 0 = c 0 = 0. DISPLAYFORM0 DISPLAYFORM1 and denotes element-wise multiplication. o t, f t and i t are often referred to as output, forget and input gates, respectively, due to the fact that their values are bounded between 0 and 1, and that they are used in element-wise multiplication. After processing the full sequence, the final state h T is treated as a vector of learned features, and used as input to a multinomial logistic regression, often called SoftMax, to return a probability distribution p over C classes, with DISPLAYFORM2 3.2 CONTEXTUAL DECOMPOSITION OF LSTMWe now introduce contextual decomposition, our proposed method for interpreting LSTMs. Given an arbitrary phrase x q, ..., x r, where 1 ≤ q ≤ r ≤ T, we now decompose each output and cell state c t, h t in Equations 5 and 6 into a sum of two contributions. DISPLAYFORM3 The decomposition is constructed so that β t corresponds to contributions made solely by the given phrase to h t, and that γ t corresponds to contributions involving, at least in part, elements outside of the phrase. β DISPLAYFORM4 Here W β T provides a quantitative score for the phrase's contribution to the LSTM's prediction. 
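Because the LSTM update equations above are obscured by extraction placeholders, the sketch below writes out the standard cell step the paper builds on: input, forget and output gates, a candidate update g_t, then c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t and h_t = o_t ⊙ tanh(c_t). It is a minimal illustration of ours, not the authors' implementation; the parameter layout and toy dimensions are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step: gates i, f, o and candidate g from x_t and h_{t-1},
    then the gated cell update and the tanh-squashed hidden state."""
    Wi, Vi, bi = params["i"]; Wf, Vf, bf = params["f"]
    Wo, Vo, bo = params["o"]; Wg, Vg, bg = params["g"]
    i_t = sigmoid(Wi @ x_t + Vi @ h_prev + bi)
    f_t = sigmoid(Wf @ x_t + Vf @ h_prev + bf)
    o_t = sigmoid(Wo @ x_t + Vo @ h_prev + bo)
    g_t = np.tanh(Wg @ x_t + Vg @ h_prev + bg)
    c_t = f_t * c_prev + i_t * g_t          # gating: element-wise products of gates and updates
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d1, d2 = 4, 3                            # toy word-embedding and hidden sizes
    params = {k: (rng.normal(size=(d2, d1)), rng.normal(size=(d2, d2)), np.zeros(d2))
              for k in ["i", "f", "o", "g"]}
    h, c = np.zeros(d2), np.zeros(d2)
    for x_t in rng.normal(size=(5, d1)):     # five toy "word embeddings"
        h, c = lstm_step(x_t, h, c, params)
    print(h)                                 # h_T: the features fed to the SoftMax layer
```

Contextual decomposition tracks the β/γ split of exactly these h_t and c_t quantities through the gating products, and the phrase score W β_T described above is read off the final hidden state.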
As this score corresponds to the input to a logistic regression, it may be interpreted in the same way as a standard logistic regression coefficient. In the cell update Equation 5, neuron values in each of i t and g t are independently determined by both the contribution at that step, x t, as well as prior context provided by h t−1 = β t−1 + γ t−1 . Thus, in computing the element-wise product i t g t, often referred to as gating, contributions made by x t to i t interact with contributions made by h t to g t, and vice versa. We leverage this simple insight to construct our decomposition. First, assume that we have a way of linearizing the gates and updates in Equations 2, 3, 4 so that we can write each of them as a linear sum of contributions from each of their inputs. DISPLAYFORM0 When we use this linearization in the cell update Equation 5, the products between gates become products over linear sums of contributions from different factors. Upon expanding these products, the ing cross-terms yield a natural interpretation as being interactions between variables. In particular, cross-terms can be assigned as to whether they ed solely from the phrase, e.g. DISPLAYFORM1, from some interaction between the phrase and other factors, e.g. DISPLAYFORM2 Mirroring the recurrent nature of LSTMs, the above insights allow us to recursively compute our decomposition, with the initializations DISPLAYFORM3 We derive below the update equations for the case where q ≤ t ≤ r, so that the current time step is contained within the phrase. The other case is similar, and the general recursion formula is provided in Appendix 6.2.For clarity, we decompose the two products in the cell update Equation 5 separately. As discussed above, we simply linearize the gates involved, expand the ing product of sums, and group the cross-terms according to whether or not their contributions derive solely from the specified phrase, or otherwise. Terms are determined to derive solely from the specified phrase if they involve products from some combination of β t−1, β c t−1, x t and b i or b g (but not both). When t is not within the phrase, products involving x t are treated as not deriving from the phrase. DISPLAYFORM4 Having decomposed the two components of the cell update equation, we can attain our decomposition of c t by summing the two contributions. DISPLAYFORM5 Once we have computed the decomposition of c t, it is relatively simple to compute the ing transformation of h t by linearizing the tanh function in 6. Note that we could similarly decompose the output gate as we treated the forget gate above, but we empirically found this to not produce improved . DISPLAYFORM6 We now describe the linearizing functions L σ, L tanh used in the above decomposition. Formally, for arbitrary {y 1, ..., y N} ∈ R, where N ≤ 4, the problem is how to write DISPLAYFORM0 In the cases where there is a natural ordering to {y i}, prior work BID8 has used a telescoping sum consisting of differences of partial sums as a linearization technique, which we show below. DISPLAYFORM1 However, in our setting {y i} contains terms such as β t−1, γ t−1 and x t, which have no clear ordering. Thus, there is no natural way to order the sum in Equation 26. Instead, we compute an average over all orderings. Letting π 1,..., π M N denote the set of all permutations of 1,..., N, our score is given below. Note that when π i (j) = j, the corresponding term is equal to equation 26. DISPLAYFORM2 L σ can be analogously derived. 
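To make the permutation-averaged linearisation concrete, here is a small sketch of our own (not the authors' code) that decomposes a nonlinearity applied to a sum of terms into per-term contributions by averaging telescoping differences over all orderings. For N = 2 and tanh this reduces to assigning [tanh(y1) + (tanh(y1 + y2) − tanh(y2))]/2 to y1; the bias-first restriction mentioned in the following paragraph is omitted here for brevity.

```python
import itertools
import numpy as np

def linearize(nonlin, terms):
    """Split nonlin(sum(terms)) into one additive contribution per term by averaging
    telescoping partial-sum differences over all orderings (a Shapley-style average)."""
    n = len(terms)
    total = [np.zeros_like(t) for t in terms]
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        running = np.zeros_like(terms[0])
        for idx in perm:
            stepped = running + terms[idx]
            total[idx] += nonlin(stepped) - nonlin(running)   # difference credited to terms[idx]
            running = stepped
    return [t / len(perms) for t in total]

if __name__ == "__main__":
    y = [np.array([0.7]), np.array([-0.2]), np.array([1.1])]
    parts = linearize(np.tanh, y)
    # Because tanh(0) = 0, the contributions sum back to tanh(y1 + y2 + y3).
    print(sum(parts), np.tanh(sum(y)))
```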
When one of the terms in the decomposition is a bias, we saw improvements when restricting to permutations where the bias is the first term. As N only ranges between 2 and 4, this linearization generally takes very simple forms. For instance, when N = 2, the contribution assigned to y 1 is DISPLAYFORM3 This linearization was presented in a scalar context where y i ∈ R, but trivially generalizes to the vector setting y i ∈ R d2. It can also be viewed as an approximation to Shapely values, as discussed in BID6 and BID12. We now describe our empirical validation of CD on the task of sentiment analysis. First, we verify that, on the standard problem of word-level importance scores, CD compares favorably to prior work. Then we examine the behavior of CD for word and phrase level importance in situations involving compositionality, showing that CD is able to capture the composition of phrases of differing sentiment. Finally, we show that CD is capable of extracting instances of positive and negative negation. Code for computing CD scores is available online 1. We first describe the process for fitting models which are used to produce interpretations. As the primary intent of this paper is not predictive accuracy, we used standard best practices without much tuning. We implemented all models in Torch using default hyperparameters for weight initializations. All models were optimized using Adam BID4 with the default learning rate of 0.001 using early stopping on the validation set. For the linear model, we used a bag of vectors model, where we sum pre-trained Glove vectors BID9 and add an additional linear layer from the word embedding dimension, 300, to the number of classes, 2. We fine tuned both the word vectors and linear parameters. We will use the two data sets described below to validate our new CD method. We trained an LSTM model on the binary version of the Stanford Sentiment Treebank (SST) BID13 ), a standard NLP benchmark which consists of movie reviews ranging from 2 to 52 words long. In addition to review-level labels, it also provides labels for each phrase in the binarized constituency parse tree. Following the hyperparameter choices in , the word and hidden representations of our LSTM were set to 300 and 168, and word vectors were initialized to pretrained Glove vectors BID9. Our LSTM attains 87.2% accuracy, and we also train a logistic regression model with bag of words features, which attains 83.2% accuracy. Originally introduced in , the Yelp review polarity dataset was obtained from the Yelp Dataset Challenge and has train and test sets of sizes 560,000 and 38,000. The task is binary prediction for whether the review is positive (four or five stars) or negative (one or two stars). The reviews are relatively long, with an average length of 160.1 words. Following the guidelines from , we implement an LSTM model which attains 4.6% error, and an ngram logistic regression model, which attains 5.7% error. For computational reasons, we report interpretation on a random subset of sentences of length at most 40 words. When computing integrated gradient scores, we found that numerical issues produced unusable outputs for roughly 6% of the samples. These reviews are excluded. We compare the interpretations produced by CD against four state of the art baselines: cell decomposition BID8, integrated gradients , leave one out BID5, and gradient times input. We refer the reader to Section 2 for descriptions of these algorithms. 
For our gradient baseline, we compute the gradient of the output probability with respect to the word embeddings, and report the dot product between the word vector and its gradient. For integrated gradients, producing reasonable values required extended experimentation and communication with the creators regarding the choice of baselines and scaling issues. We ultimately used sequences of periods for our baselines, and rescaled the scores for each review by the standard deviation of the scores for that review, a trick not previously mentioned in the literature. To obtain phrase scores for word-based baselines integrated gradients, cell decomposition, and gradients, we sum the scores of the words contained within the phrase. Before examining the novel, phrase-level dynamics of CD, we first verify that it compares favorably to prior work for the standard use case of producing unigram coefficients. When sufficiently accurate in terms of prediction, logistic regression coefficients are generally treated as a gold standard for interpretability. In particular, when applied to sentiment analysis the ordering of words given by their coefficient value provides a qualitatively sensible measure of importance. Thus, when determining the validity of coefficients extracted from an LSTM, we should expect there to be a meaningful relationship between the CD scores and logistic regression coefficients. In order to evaluate the word-level coefficients extracted by the CD method, we construct scatter plots with each point consisting of a single word in the validation set. The two values plotted correspond to the coefficient from logistic regression and importance score extracted from the LSTM. For a quantitative measure of accuracy, we use pearson correlation coefficient. We report quantitative and qualitative in Appendix 6.1.3. For SST, CD and integrated gradients, with correlations of 0.76 and 0.72, respectively, are substantially better than other methods, with correlations of at most 0.51. On Yelp, the gap is not as big, but CD is still very competitive, having correlation 0.52 with other methods ranging from 0.34 to 0.56. Having verified reasonably strong in this base case, we now proceed to show the benefits of CD. We now show that, for phrases of at most five words, existing methods are unable to recognize subphrases with differing sentiments. For example, consider the phrase "used to be my favorite", which is of negative sentiment. The word "favorite", however, is strongly positive, having a logistic regression coefficient in the 93rd percentile. Nonetheless, existing methods consistently rank "favorite" as being highly negative or neutral. In contrast, as shown in TAB1, CD is able to identify "my favorite" as being strongly positive, and "used to be" as strongly negative. A similar dynamic also occurs with the phrase "not worth the time". The main justification for using LSTMs over simpler models is precisely that they are able to capture these kinds of interactions. Thus, it is important that an interpretation algorithm is able to properly uncover how the interactions are being handled. Using the above as a motivating example, we now show that a similar trend holds throughout the Yelp polarity dataset. In particular, we conduct a search for situations similar to the above, where a strongly positive/negative phrase contains a strongly dissenting subphrase. 
Phrases are scored using the logistic regression with n-gram features described in Section 4.1, and included if their absolute score is over 1.5. We then examine the distribution of scores for the dissenting subphrases, which are analogous to "favorite".For an effective interpretation algorithm, the distribution of scores for positive and negative dissenting subphrases should be significantly separate, with positive subphrases having positive scores, and vice versa. However, as can be seen in Appendix 6.1.1, for prior methods these two distributions are nearly identical. The CD distributions, on the other hand, are significantly separate, indicating that what we observed anecdotally above holds in a more general setting. We now show that prior methods struggle to identify cases where a sizable portion of a review (between one and two thirds) has polarity different from the LSTM's prediction. For instance, consider the review in Table 2, where the first phrase is clearly positive, but the second phrase causes the review to ultimately be negative. CD is the only method able to accurately capture this dynamic. By leveraging the phrase-level labels provided in SST, we can show that this pattern holds in the general case. In particular, we conduct a search for reviews similar to the above example. The search criteria are whether a review contains a phrase labeled by SST to be of opposing sentiment to the review-level SST label, and is between one and two thirds the length of the review. In Appendix 6.1.2, we show the distribution of the ing positive and negative phrases for different attribution methods. A successful interpretation method would have a sizable gap between these two distributions, with positive phrases having mostly positive scores, and negative phrases mostly negative. However, prior methods struggle to satisfy these criteria. 87% of all positive phrases are labelled as negative by integrated gradients, and cell decompositions BID8 even have the distributions flipped, with negative phrases yielding more positive scores than the positive phrases. CD, on the other hand, provides a very clear difference in distributions. To quantify this separation between positive and negative distributions, we examine a two-sample KolmogorovSmirnov one-sided test statistic, a common test for the difference of distributions with values ranging from 0 to 1. CD produces a score of 0.74, indicating a strong difference between positive and negative distributions, with other methods achieving scores of 0 (cell decomposition), 0.33 (integrated gradients), 0.58 (leave one out) and 0.61 (gradient), indicating weaker distributional differences. Given that gradient and leave one out were the weakest performers in unigram scores, this provides strong evidence for the superiority of CD. It's easy to love Robin Tunney -she's pretty and she can actbut it gets harder and harder to understand her choices. Leave one out BID5 It's easy to love Robin Tunney -she's pretty and she can actbut it gets harder and harder to understand her choices. Cell decomposition BID8 It's easy to love Robin Tunney -she's pretty and she can actbut it gets harder and harder to understand her choices. Integrated gradients It's easy to love Robin Tunney -she's pretty and she can actbut it gets harder and harder to understand her choices. It's easy to love Robin Tunney -she's pretty and she can actbut it gets harder and harder to understand her choices. 
Legend Very Negative Negative Neutral Positive Very Positive Table 2: Heat maps for portion of review from SST with different attribution techniques. Only CD captures that the first phrase is positive. In order to understand an LSTM's prediction mechanism, it is important to understand not just the contribution of a phrase, but how that contribution is computed. For phrases involving negation, we now demonstrate that we can use CD to empirically show that our LSTM learns a negation mechanism. Using the phrase labels in SST, we search over the training set for instances of negation. In particular, we search for phrases of length less than ten with the first child containing a negation phrase (such as "not" or "lacks", full list provided in Appendix 6.3) in the first two words, and the second child having positive or negative sentiment. Due to noise in the labels, we also included phrases where the entire phrase was non-neutral, and the second child contained a non-neutral phrase. We identify both positive negation, such as "isn't a bad film", and negative negation, such as "isn't very interesting", where the direction is given by the SST-provided label of the phrase. For a given negation phrase, we extract a negation interaction by computing the CD score of the entire phrase and subtracting the CD scores of the phrase being negated and the negation term itself. The ing score can be interpreted as an n-gram feature. Note that, of the methods we compare against, only leave one out is capable of producing such interaction scores. For reference, we also provide the distribution of all interactions for phrases of length less than 5.We present the distribution of extracted scores in FIG1. For CD, we can see that there is a clear distinction between positive and negative negations, and that the negation interactions are centered on the outer edges of the distribution of interactions. Leave one out is able to capture some of the interactions, but has a noticeable overlap between positive and negative negations around zero, indicating a high rate of false negatives. Another benefit of using CDs for interpretation is that, in addition to providing importance scores, it also provides dense embeddings for arbitrary phrases and interactions, in the form of β T discussed in Section 3.2. We anecdotally show that similarity in this embedding space corresponds to semantic similarity in the context of sentiment analysis. In particular, for all words and binary interactions, we compute the average embedding β T produced by CD across the training and validation sets. In Table 3, we show the nearest neighbours using a Table 3: Nearest neighbours for selected unigrams and interactions using CD embeddings cosine similarity metric. The are qualitatively sensible for three different kinds of interactions: positive negation, negative negation and modification, as well as positive and negative words. Note that we for positive and negative words, we chose the positive/negative parts of the negations, in order to emphasize that CD can disentangle this composition. In this paper, we have proposed contextual decomposition (CD), an algorithm for interpreting individual predictions made by LSTMs without modifying the underlying model. In both NLP and general applications of LSTMs, CD produces importance scores for words (single variables in general), phrases (several variables together) and word interactions (variable interactions). 
Using two sentiment analysis datasets for empirical validation, we first show that for information also produced by prior methods, such as word-level scores, our method compares favorably. More importantly, we then show that CD is capable of identifying phrases of varying sentiment and extracting meaningful word (or variable) interactions. This movement beyond word-level importance is critical for understanding a model as complex and highly non-linear as LSTMs. 6 APPENDIX. Figure 4: Logistic regression coefficients versus coefficients extracted from an LSTM on SST. We include a least squares regression line; stronger linear relationships in the plots correspond to better interpretation techniques. To search for negations, we used the following list of negation words: not, n't, lacks, nobody, nor, nothing, neither, never, none, nowhere, remotely. | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rkRwGg-0Z | We introduce contextual decompositions, an interpretation algorithm for LSTMs capable of extracting word, phrase and interaction-level importance score |
Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of spectrogram while reusing the phase from noisy speech for reconstruction. This is due to the difficulty of estimating the phase of clean speech. To improve speech enhancement performance, we tackle the phase estimation problem in three ways. First, we propose Deep Complex U-Net, an advanced U-Net structured model incorporating well-defined complex-valued building blocks to deal with complex-valued spectrograms. Second, we propose a polar coordinate-wise complex-valued masking method to reflect the distribution of complex ideal ratio masks. Third, we define a novel loss function, weighted source-to-distortion ratio (wSDR) loss, which is designed to directly correlate with a quantitative evaluation measure. Our model was evaluated on a mixture of the Voice Bank corpus and DEMAND database, which has been widely used by many deep learning models for speech enhancement. Ablation experiments were conducted on the mixed dataset showing that all three proposed approaches are empirically valid. Experimental show that the proposed method achieves state-of-the-art performance in all metrics, outperforming previous approaches by a large margin. Speech enhancement is one of the most important and challenging tasks in speech applications where the goal is to separate clean speech from noise when noisy speech is given as an input. As a fundamental component for speech-related systems, the applications of speech enhancement vary from speech recognition front-end modules to hearing aid systems for the hearing-impaired BID36 BID32.Due to recent advances in deep learning, the speech enhancement task has been able to reach high levels in performance through significant improvements. When using audio signals with deep learning models, it has been a common practice to transform a time-domain waveform to a time-frequency (TF) representation (i.e. spectrograms) via short-time-Fourier-transform (STFT). Spectrograms are represented as complex matrices, which are normally decomposed into magnitude and phase components to be used in real-valued networks. In tasks involving audio signal reconstruction, such as speech enhancement, it is ideal to perform correct estimation of both components. Unfortunately, complex-valued phase has been often neglected due to the difficulty of its estimation. This has led to the situation where most approaches focus only on the estimation of a magnitude spectrogram while reusing noisy phase information BID9 BID39 BID7 BID15 BID26. However, reusing phase from noisy speech has clear limitations, particularly under extremely noisy conditions, in other words, when signal-to-noise ratio (SNR) is low. This can be easily verified by simply using the magnitude spectrogram of clean speech with the phase spectrogram of noisy speech to reconstruct clean speech, as illustrated in Fig A popular approach to speech enhancement is to optimize a mask which produces a spectrogram of clean speech when applied to noisy input audio. One of the first mask-based attempts to perform the task by incorporating phase information was the proposal of the phase-sensitive mask (PSM). Since the performance of PSM was limited because of reusing noisy phase, later studies proposed using complex-valued ratio mask (cRM) to directly optimize on complex values BID37 BID3. 
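The observation above, that even a perfect magnitude estimate is limited once it is paired with the noisy phase, can be reproduced in a few lines. The sketch below is our own illustration, not code from the paper; the 64 ms Hann window and 16 ms hop at 16 kHz mirror the preprocessing described later in the experiments, and the function and variable names are placeholders.

```python
import numpy as np
from scipy.signal import stft, istft

def oracle_magnitude_noisy_phase(clean, noisy, fs=16000, nperseg=1024, noverlap=768):
    """Oracle experiment: reconstruct a waveform from the clean magnitude spectrogram
    combined with the noisy phase spectrogram. Even with a perfect magnitude estimate,
    reusing the noisy phase limits the quality of the reconstruction."""
    _, _, C = stft(clean, fs=fs, nperseg=nperseg, noverlap=noverlap)   # clean STFT
    _, _, N = stft(noisy, fs=fs, nperseg=nperseg, noverlap=noverlap)   # noisy STFT
    mixed = np.abs(C) * np.exp(1j * np.angle(N))                       # clean magnitude, noisy phase
    _, recon = istft(mixed, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return recon
```

In practice one would score this reconstruction against the clean waveform (for example with PESQ or SDR) to quantify the ceiling imposed by reusing the noisy phase, especially at low SNR.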
We found this direction promising for phase estimation because it has been shown that a complex ideal ratio mask (cIRM) is guaranteed to give the best oracle performance out of other ideal masks such as ideal binary masks, ideal ratio masks, or PSMs. Moreover, this approach jointly estimates magnitude and phase, removing the need of separate models. To estimate a complex-valued mask, a natural desire would be to use an architecture which can handle complex-domain operations. Recent work gives a solution to this by providing deep learning building blocks adapted to complex arithmetic BID28.In this paper, we build upon previous studies to design a new complex-valued masking framework, based on a proposed variant of U-Net BID19, named Deep Complex U-Net (DCUnet). In our proposed framework, DCUnet is trained to estimate a complex ratio mask represented in polar coordinates with prior knowledge observable from ideal complex-valued masks. With the complex-valued estimation of clean speech, we can use inverse short-time-Fourier-transform (ISTFT) to convert a spectrogram into a time-domain waveform. Taking this as an advantage, we introduce a novel loss function which directly optimizes source-to-distortion ratio (SDR) BID31, a quantitative evaluation measure widely used in many source separation tasks. Our contributions can be summarized as follows:1. We propose a new neural architecture, Deep Complex U-Net, which combines the advantages of both deep complex networks and U-Net, yielding state-of-the-art performance.2. While pointing out limitations of current masking strategies, we design a new complexvalued masking method based on polar coordinates.3. We propose a new loss function weighted-SDR loss, which directly optimizes a well known quantitative evaluation measure. Phase estimation for audio signal reconstruction has been a recent major interest within the audio source separation community because of its importance and difficulty. While iterative methods such as the Griffin-Lim algorithm and its variants BID8 BID17 aimed to address this problem, neural network-based approaches are recently attracting attention as noniterative alternatives. One major approach is to use an end-to-end model that takes audio as raw waveform inputs without using any explicit time-frequency (TF) representation computed via STFT BID16 BID18 BID23 BID5. Since raw waveforms inherently contain phase information, it is expected to achieve phase estimation naturally. Another method is to estimate magnitude and phase using two separate neural network modules which serially estimate magnitude and phase BID0 BID25. In this framework, the phase estimation module uses noisy phase with predicted magnitude to estimate phase of clean speech. There is also a recent study which proposed to use additional layers with trainable discrete values for phase estimation.A more straightforward method would be to jointly estimate magnitude and phase by using a continuous complex-valued ratio mask (cRM). Previous studies tried this joint estimation approach bounding the range of the cRM BID37 BID3. Despite the advantages of the cRM approach, previously proposed methods had limitations with regard to the loss function and the range of the mask which we will be returning with more details in Section 3 along with our proposed methods to alleviate these issues. As a natural extension to the works above, some studies have also undergone to examine whether complex-valued networks are useful when dealing with intrinsically complex-valued data. 
In the series of two works, complex-valued networks were shown to help singing voice separation performance with both fully connected neural networks and recurrent neural networks BID12 b). However, the approaches were limited as it ended up only switching the real-valued network into a complex-valued counterpart and leaving the other deep learning building blocks such as weight initialization and normalization technique in a realvalued manner. Also, the works do not show whether the phase was actually well estimated either quantitatively or qualitatively, only ending up showing that there was a performance gain. In this section we will provide details on our approach, starting with our proposed model Deep Complex U-Net, followed by the masking framework based on the model. Finally, we will introduce a new loss function to optimize our model, which takes a critical role for proper phase estimation. Before getting into details, here are some notations used throughout the paper. The input mixture signal x(n) = y(n) + z(n) ∈ R is assumed to be a linear sum of the clean speech signal y(n) ∈ R and noise z(n) ∈ R, where estimated speech is denoted asŷ(n) ∈ R. Each of the corresponding time-frequency (t, f) representations computed by STFT is denoted as DISPLAYFORM0 The ground truth mask cIRM is denoted as M t,f ∈ C and the estimated cRM is denoted asM t,f ∈ C, where The U-Net structure is a well known architecture composed as a convolutional autoencoder with skip-connections, originally proposed for medical imaging in computer vision community BID19. Furthermore, the use of real-valued U-Net has been shown to be also effective in many recent audio source separation tasks such as music source separation BID10 BID23 BID26, and speech enhancement BID16. Deep Complex U-Net (DCUnet) is an extended U-Net, refined specifically to explicitly handle complex domain operations. In this section, we will describe how U-Net is modified using the complex building blocks originally proposed by BID28. DISPLAYFORM1 Complex-valued Building Blocks. Given a complex-valued convolutional filter W = A + iB with real-valued matrices A and B, the complex convolution operation on complex vector h = x + iy with W is done by W * h = (A * x − B * y) + i(B * x + A * y). In practice, complex convolutions can be implemented as two different real-valued convolution operations with shared real-valued convolution filters. Details are illustrated in Appendix A. Activation functions like ReLU were also adapted to the complex domain. In previous work, CReLU, an activation function which applies ReLU on both real and imaginary values, was shown to produce the best out of many suggestions. Details on batch normalization and weight initialization for complex networks can be found in BID28.Modifying U-Net. The proposed Deep Complex U-Net is a refined U-Net architecture applied in STFT-domain. Modifications done to the original U-Net are as follows. Convolutional layers of UNet are all replaced to complex convolutional layers, initialized to meet the Glorot's criteria BID6. Here, the convolution kernels are set to be independent to each other by initializing the weight tensors as unitary matrices for better generalization and fast learning BID2. Complex batch normalization is implemented on every convolutional layer except the last layer of the network. In the encoding stage, max pooling operations are replaced with strided complex convolutional layers to prevent spatial information loss. 
In the decoding stage, strided complex deconvolutional operations are used to restore the size of input. For the activation function, we modified the previously suggested CReLU into leaky CReLU, where we simply replace ReLU into leaky ReLU BID14, making training more stable. Note that all experiments performed in Section 4 are done with these modifications. As our proposed model can handle complex values, we aim to estimate cRM for speech enhancement. Although it is possible to directly estimate the spectrogram of a clean source signal, it has been shown that better performance can be achieved by applying a weighting mask to the mixture spectrogram BID33. One thing to note is that real-valued ratio masks (RM) only change the scale of the magnitude without changing phase, ing in irreducible errors as illustrated in Appendix D. On the other hand, cRM also perform a rotation on the polar coordinates, allowing to correct phase errors. In other words, the estimated speech spectrogramŶ t,f is computed by multiplying the estimated maskM t,f on the input spectrogram X t,f as follows:Published as a conference paper at ICLR 2019 DISPLAYFORM0 In this state, the real and imaginary values of the estimated cRM is unbounded. Although estimating an unbounded mask makes the problem well-posed (see Appendix D for more information), we can imagine the difficulty of optimizing from an infinite search space compared to a bounded one. Therefore, a few techniques have been tried to bound the range of cRM. For example, Williamson et al. tried to directly optimize a complex mask into a cIRM compressed to a heuristic bound BID37. However, this method was limited since it was only able to succeed in training the model by computing the error between cIRM and the predicted cRM which often leads to a degradation of performance BID33 BID41. More recently, Ephrat et al.proposed a rectangular coordinate-wise cRM made with sigmoid compressions onto each of the real and imaginary parts of the output of the model BID3. After then MSE between clean source Y and estimated sourceŶ was computed in STFT-domain to train the model. However, the proposed masking method has two main problems regarding phase estimation. First, it suffers from the inherent problem of not being able to reflect the distribution of cIRM as shown in FIG2 and Appendix E. Second, this approach in a cRM with a restricted rotation range of 0 • to 90• (only clock-wise), which makes it hard to correct noisy phase. To alleviate these problems, we propose a polar coordinate-wise cRM method that imposes nonlinearity only on the magnitude part. More specifically, we use a hyperbolic tangent non-linearity to bound the range of magnitude part of the cRM be which makes the mask bounded in an unit-circle in complex space. The corresponding phase mask is naturally obtained by dividing the output of the model with the magnitude of it. More formally, let g(·) be our neural network and the output of it be O t,f = g(X t,f). The proposed complex-valued maskM t,f is estimated as follows: DISPLAYFORM1 A summarized illustration of cRM methods is depicted in FIG2. A popular loss function for audio source separation is mean squared error (MSE) between clean source Y and estimated sourceŶ on the STFT-domain. However, it has been reported that optimizing the model with MSE in complex STFT-domain fails in phase estimation due to the randomness in phase structure BID37. 
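Before turning to the loss function, here is a minimal sketch of the polar-coordinate masking described above: the magnitude of the complex network output is squashed with tanh while its phase is kept by normalising to unit modulus, and the resulting complex ratio mask is applied to the mixture spectrogram by complex multiplication. Real and imaginary parts are carried as separate tensors, and the small eps term is our own numerical-stability addition, not part of the paper's formulation.

```python
import torch

def bounded_complex_mask(out_re, out_im, eps=1e-8):
    """Polar-coordinate bounded cRM: tanh-compressed magnitude, phase taken from the
    (unit-normalised) complex network output O_{t,f}."""
    mag = torch.sqrt(out_re ** 2 + out_im ** 2 + eps)
    mask_mag = torch.tanh(mag)
    phase_re, phase_im = out_re / mag, out_im / mag
    return mask_mag * phase_re, mask_mag * phase_im

def apply_mask(x_re, x_im, m_re, m_im):
    """Estimated spectrogram: Y_hat_{t,f} = M_hat_{t,f} * X_{t,f} (complex multiplication)."""
    y_re = m_re * x_re - m_im * x_im
    y_im = m_re * x_im + m_im * x_re
    return y_re, y_im
```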
As an alternative, it is possible to use a loss function defined in the time-domain instead, as raw waveforms contain inherent phase information. While MSE on waveforms can be an easy solution, we can expect it to be more effective if the loss function is directly correlated with well-known evaluation measures defined in the time-domain. Here, we propose an improved loss function weighted-SDR loss by building upon a previous work which attempts to optimize a standard quality measure, source-to-distortion ratio (SDR) BID30. The original loss function loss V en suggested by Venkataramani et al. is formulated upon the observation from Equation 4, where y is the clean source signal andŷ is the estimated source signal. In practice, the negative reciprocal is optimized as in Equation FORMULA4. DISPLAYFORM0 Although using Equation 5 works as a loss function, there are a few critical flaws in the design. First, the lower bound becomes − y 2, which depends on the value of y causing fluctuation in the loss values when training. Second, when the target y is empty (i.e., y = 0) the loss becomes zero, preventing the model to learn from noisy-only data due to zero gradients. Finally, the loss function is not scale sensitive, meaning that the loss value is the same forŷ and cŷ, where c ∈ R.To resolve these issues, we redesigned the loss function by giving several modifications to Equation 5. First, we made the lower bound of the loss function independent to the source y by restoring back the term y 2 and applying square root as in Equation FORMULA5. This makes the loss function bounded within the range [-1, 1] and also be more phase sensitive, as inverted phase gets penalized as well. DISPLAYFORM1 Expecting to be complementary to source prediction and to propagate errors for noise-only samples, we also added a noise prediction term loss SDR (z,ẑ). To properly balance the contributions of each loss term and solve the scale insensitivity problem, we weighted each term proportional to the energy of each signal. The final form of the suggested weighted-SDR loss is as follows: DISPLAYFORM2 where,ẑ = x −ŷ is estimated noise and α = ||y|| 2 /(||y|| 2 + ||z|| 2) is the energy ratio between clean speech y and noise z. Note that although weighted SDR loss is a time-domain loss function, it can be backpropagated through our framework. Specifically, STFT and ISTFT operations are implemented as 1-D convolution and deconvolution layers consisting of fixed filters initialized with the discrete Fourier transform matrix. The detailed properties of the proposed loss function are in Appendix C. Dataset. For all experiments, we used the same experimental setups as previous works in order to perform direct performance comparison BID16 BID18 BID22 BID5. Noise and clean speech recordings were provided from the Diverse Environments Multichannel Acoustic Noise Database (DEMAND) BID27 and the Voice Bank corpus BID29, respectively, each recorded with sampling rate of 48kHz. Mixed audio inputs used for training were composed by mixing the two datasets with four signalto-noise ratio (SNR) settings (15, 10, 5, and 0 (dB)), using 10 types of noise (2 synthetic + 8 from DEMAND) and 28 speakers from the Voice Bank corpus, creating 40 conditional patterns for each speech sample. The test set inputs were made with four SNR settings different from the training set (17.5, 12.5, 7.5, and 2.5 (dB)), using the remaining 5 noise types from DEMAND and 2 speakers from the Voice Bank corpus. 
Note that the speaker and noise classes were uniquely selected for the training and test sets. Pre-processing. The original raw waveforms were first downsampled from 48kHz to 16kHz. For the actual model input, complex-valued spectrograms were obtained from the downsampled waveforms via STFT with a 64ms sized Hann window and 16ms hop length. Implementation. All experiments were implemented and fine-tuned with NAVER Smart Machine Learning (NSML) platform BID24 BID11. In this subsection, we compare overall speech enhancement performance of our method with previously proposed algorithms. As a baseline approach, Wiener filtering (Wiener) with a priori noise SNR estimation was used, along with recent deep-learning based models which are briefly described as the following: SEGAN: a time-domain U-Net model optimized with generative adversarial networks. Wavenet: a time-domain non-causal dilated wavenet-based network. MMSE-GAN: a timefrequency masking-based method with modified adversarial training method. Deep Feature Loss: a time-domain dilated convolution network trained with feature loss from a classifier network. BID21 3.23 2.68 2.67 2.22 5.07 SEGAN BID16 3 For comparison, we used the configuration of using a 20-layer Deep Complex U-Net (DCUnet-20) to estimate a tanh bounded cRM, optimized with weighted-SDR loss. As a showcase for the potential of our approach, we also show from a larger DCUnet-20 (Large-DCUnet-20) which has more channels in each layer. Both architectures are specified in detail in Appendix B. Results show that our proposed method outperforms the previous state-of-the-art methods with respect to all metrics by a large margin. Additionally, we can also see that larger models yield better performance. We see the reason to this significant improvement coming from the phase estimation quality of our method, which we plan to investigate in later sections. TAB2 shows the jointly combined on varied masking strategies and loss functions, where three models (DCU-10 (1.4M), DCU-16 (2.3M), and DCU-20 (3.5M)) are investigated to see how architectural differences in the model affect quantitative . In terms of masking strategy, the proposed BDT mask mostly yields better than UBD mask in DCU-10 and DCU-16, implying the importance of limiting optimization space with prior knowledge. However, in the case of DCU-20, UBD mask was able to frequently surpass the performance of BDT mask. Intuitively, this indicates that when the number of parameter gets large enough, the model is able to fit the distribution of data well even when the optimization space is not bounded. In terms of the loss function, almost every shows that optimizing with wSDR loss gives the best . However, we found out that Spc loss often provides better PESQ than wSDR loss for DCU-10 and DCU-16 except DCU-20 case where Spc and wSDR gave similar PESQ . Validation on complex-valued network and mask. In order to show that complex neural networks are effective, we compare evaluation of DCUnet (Cn) and its corresponding real-valued UNet setting with the same parameter size (Rn). For the real-valued network, we tested two settings cRMRn and RMRn to show the effectiveness of phase estimation. The first setting takes a complexvalued spectrogram as an input, estimating a complex ratio mask (cRM) with a tanh bound. The second setting takes a magnitude spectrogram as an input, estimating a magnitude ratio mask (RM) with a sigmoid bound. 
All models were trained with weighted-SDR loss, where the ground truth phase was given while training RMRn. Additionally, all models were trained on different number of parameters (20-layer (3.5M), 16-layer (2.3M), and 10-layer (1.4M)) to show that the are consistent regardless of model capacity. Detailed network architectures for each model are illustrated in Appendix B. In TAB3, evaluation show that our approach cRMCn makes better than conventional method RMRn for all cases, showing the effectiveness of phase correction. Also, cRMCn gives better than cRMRn, which indicates that using complex-valued networks consistently improve the performance of the network. Note that these are consistent through every evaluation measure and model size. We performed qualitative evaluations by obtaining preference scores between the proposed DCUnet (Large-DCUnet-20) and baseline methods. 15 utterance samples with different noise levels were selected from the test set and used for subjective listening tests. For each noisy sample, all possible six pairs of denoised audio samples from four different algorithms were presented to the participants in a random order, ing in 90 pairwise comparisons to be made by each subject. For each comparison, participants were presented with three audio samples -original noisy speech and two denoised speech samples by two randomly selected algorithms -and instructed to choose either a preferred sample (score 1) or "can't decide" (score 0.5). A total of 30 subjects participated in the listening test, and the are presented in TAB4 and in Table 7.: Scatter plots of estimated cRMs with 9 different mask and loss function configurations for a randomly picked noisy speech signal. Each scatter plot shows the distribution of complex values from an estimated cRM. The leftmost plot is from the cIRM for the given input. We can observe that most real-values are distributed around 0 and 1, while being relatively sparse in between. The configuration that fits the most to this distribution pattern is observed in the red dotted box which is achieved by the combination of our proposed methods (Bounded (tanh) and weighted-SDR). TAB4 shows that DCUnet clearly outperforms the other methods in terms of preference scores in every SNR condition. These differences are statistically significant as confirmed by pairwise one-tailed t-tests. Furthermore, the difference becomes more obvious as the input SNR condition gets worse, which supports our motivation that accurate phase estimation is even more important under harsh noisy conditions. This is further confirmed by in-depth quantitative analysis of the phase distance as described in Section 5 and TAB6. In this section, we aim to provide constructive insights on phase estimation by analyzing how and why our proposed method is effective. We first visualized estimated complex masks with scatter plots in FIG3 for each masking method and loss function configuration from scaling the magnitude of noisy speech and fails to correct the phase of noisy speech with rotations (e.g., (X DISPLAYFORM0 . In order to demonstrate this effect in an alternate perspective, we also plotted estimated waveforms for each loss function in FIG4 . As one can notice from FIG4 To explicitly support these observations, we would need a quantitative measure for phase estimation. 
Here, we define the phase distance between target spectrogram (A) and estimated spectrogram (B) as the weighted average of angle between corresponding complex TF bins, where each bin is weighted by the magnitude of target speech (A t,f) to emphasize the relative importance of each TF bin. Phase distance is formulated as the following: DISPLAYFORM1 where, ∠(A t,f, B t,f) represents the angle between A t,f and B t,f, having a range of.The phase distance between clean and noisy speech (PhaseDist (C, N) ) and the phase distance between clean and estimated speech (PhaseDist(C, E)) are presented in TAB6. The show that the best phase improvement (Phase Improvement = PhaseDist(C, N) − PhaseDist(C, E)) is obtained with wSDR loss under every SNR condition. Also Spc loss gives the worst , again reinforcing our observation. Analysis between the phase improvement and performance improvement is further discussed in Appendix G. In this paper, we proposed Deep Complex U-Net which combines two models to deal with complexvalued spectrograms for speech enhancement. In doing so, we designed a new complex-valued masking method optimized with a novel loss function, weighted-SDR loss. Through ablation studies, we showed that the proposed approaches are effective for more precise phase estimation, ing in state-of-the-art performance for speech enhancement. Furthermore, we conducted both quantitative and qualitative studies and demonstrated that the proposed method is consistently superior to the previously proposed algorithms. n the near future, we plan to apply our system to various separation tasks such as speaker separation or music source separation. Another important direction is to extend the proposed model to deal with multichannel audio since accurate estimation of phase is even more critical in multichannel environments BID34. Apart from separation, our approach can be generalized to various audio-related tasks such as dereverberation, bandwidth extension or phase estimation networks for text-to-speech systems. Taking advantage of sequence modeling, it may also be interesting to find further extensions with complex-valued LSTMs BID1 BID38. In this section, we address the difference between the real-valued convolution and the complexvalued convolution. Given a complex-valued convolution filter W = A + iB with real-valued matrices A and B, the complex-valued convolution can be interpreted as two different real-valued convolution operations with shared parameters, as illustrated in FIG6 (b). For a fixed number of #Channel product = #Input channel(M) × #Output channel(N), the number of parameters of the complex-valued convolution becomes double of that of a real-valued convolution. Considering this fact, we built the pair of a real-valued network and a complex-valued network with the same number of parameters by reducing #Channel product of complex-valued convolution by half for a fair comparison. The detail of models reflecting this configuration is explained in Appendix B. In this section, we describe three different model architectures (DCUnet-20 (#params: 3.5M), DCUnet-16 (#params: 2.3M), and DCUnet-10 (#params: 1.4M)) each in complex-valued network setting and real-valued network setting in FIG7, 8, 9. Both complex-valued network (C) and realvalued network (R) have the same size of convolution filters with different number of channels to set the parameter equally. The largest model, Large-DCUnet-20, in TAB0 is also described in FIG12. 
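The paragraph above notes that a complex convolution W * h = (A * x − B * y) + i(B * x + A * y) can be realised with two shared real-valued convolutions. The module below is a minimal PyTorch sketch of that identity only; the complex batch normalisation, leaky CReLU activation and unitary initialisation used in the actual DCUnet blocks are deliberately left out, and the class name is our own.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution realised with two real-valued convolutions sharing one complex weight:
    for W = A + iB and h = x + iy, W * h = (A*x - B*y) + i(B*x + A*y).
    Real and imaginary parts are carried as two separate tensors."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)  # A
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)  # B

    def forward(self, x_re, x_im):
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_im(x_re) + self.conv_re(x_im)
        return out_re, out_im
```

As discussed above, halving the channel product of such a layer keeps its parameter count comparable to a real-valued convolution of the same nominal width.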
Every convolution operation is followed by batch normalization and an activation function as described in FIG0. For the complex-valued network, the complex-valued version of batch normalization and activation function was used following Deep Complex Networks BID28. Note that in the very last layer of every model the batch normalization and leaky ReLU activation was not used and non-linearity function for mask was applied instead. The real-valued network configuration was not considered in the case of largest model. In this section, we summarize the properties of the proposed weighted-SDR loss. First, we show that the range of weighted-SDR loss is bounded and explain the conditions under which the minimum value is obtained. Next, we explain the gradients in the case of noise-only input. FIG0: Description of encoder and decoder block. F f and F t denote the convolution filter size along the frequency and time axis, respectively. S f and S t denote the stride size of convolution filter along the frequency and time axis, respectively. O C and O R denote the different number of channels in complex-valued network setting and real-valued network setting, respectively. The number of channels of O R is set to be roughly √ 2 times the number of channels of O C so that the number of trainable parameters of real-valued network and complex-valued network becomes approximately the same. Let x denotes noisy speech with T time step, y denotes target source andŷ denotes estimated source. Then, loss wSDR (x, y,ŷ) is defined as follows: DISPLAYFORM0 where, α is the energy ratio between target source and noise, i.e., y 2 /(y 2 + x − y 2).Proposition 1. loss wSDR (x, y,ŷ) is bounded on [-1,1]. Moreover, for fixed y = 0 and x − y = 0, the minimum value -1 can only be attained whenŷ = y, if x = cy for ∀c ∈ R.Proof. Cauchy-Schwarz inequality states that for a ∈ R T and b ∈ R T, − a b ≤ < a, b > ≤ a b. By this inequality, [-1,1] becomes the range of loss wSDR. To attain the minimum value, the equality condition of the Cauchy-Schwarz inequality must be satisfied. This equality condition is equivalent to b = 0 or a = tb, for ∃t ∈ R. Applying the equality condition with the assumption (y = 0, x − y = 0) to Equation 9 leads toŷ = t 1 y and x −ŷ = t 2 (x − y), f or ∃t 1 ∈ R and ∃t 2 ∈ R. By adding these two equations, we can get (1 − t 1)x = (t 1 − t 2)y. By the assumption x = cy, which is generally satisfied for large T, we can conclude t 1 = 1, t 1 = t 2 must be satisfied when the minimum value is attained. The following property of the weighted-SDR loss shows that the network can also learn from noiseonly training data. In experiments, we add small number to denominators of Equation 9. Thus for the case of y = 0, Equation 9 becomes DISPLAYFORM1 Proposition 2. When we parameterizeŷ = g θ (x), the loss wSDR (x, y, g θ (x)) has a non-zero gradient with respect to θ even if the target source y is empty. Proof. We can calculate partial derivatives as follows: DISPLAYFORM2 Thus, the non-zero gradients with respect to θ can be back-propagated. In this section, we illustrate two possible irreducible errors. FIG1 (a) shows the irreducible phase error due to lack of phase estimation. FIG1 (b) shows the irreducible error induced when bounding the range of mask. Not bounding the range of the mask makes the problem well-posed but it may suffer from the wide range of optimization search space because of the lack of prior knowledge on the distribution of cIRM. The scatter plots of cIRM from training set is shown in FIG2. 
We show four different scatter plots according to their SNR values of mixture (0, 5, 10, and 15 (dB) ). Each scattered point of cIRM, M t,f, is defined as follows: DISPLAYFORM0 The scattered points near origin indicate the TF bins where the value of Y t,f is significantly small compared to X t,f. Therefore, those TF bins can be interpreted as the bins dominated with noise rather than source. On the other hand, the scattered points near indicates the TF bins where the value of Y t,f is almost the same as X t,f. In this case, those TF bins can be interpreted as the bins dominated with source rather than noise. Therefore, as SNR becomes higher, the amount of TF bins dominated with clean source becomes larger compared to the lower SNR cases, and consequently the portion of real part close to 1 becomes larger as in FIG2. In this section, we show a supplementary visualization of phase of estimated speech. Although the raw phase information itself does not show a distinctive pattern, the hidden structure can be revealed with group delay, which is the negative derivative of the phase along frequency axis BID40. With this technique, the phase information can be explicitly shown as in FIG3. FIG3 shows the group delay of clean speech and the corresponding magnitude is shown in FIG3 (a). The two representations shows that the group delay of phase has a similar structure to that of magnitude spectrogram. The estimated phase by our model is shown in in FIG3 (c).While the group delay of noisy speech FIG3 ) does not show a distinctive harmonic pattern, our estimation show the harmonic pattern similar to the group delay of clean speech, as shown in the yellow boxes in FIG3 In this section, to show the limitation of conventional approach (without phase estimation), we emphasize that the phase estimation is important, especially under low SNR condition (harsh condition).We first make an assumption that the estimation of phase information becomes more important when the given mixture has low SNR. Our reasoning behind this assumption is that if the SNR of a given mixture is low, the irreducible phase error is likely to be greater, hence a more room for improvement with phase estimation as illustrated in FIG4. This can also be verified in TAB6 columns PhaseDist(C, N) and Phase Improvement where the values of both columns increase as SNR becomes higher. FIG4: (a) The case where SNR of given mixture is high. In this case the source is likely to be dominant in the mixture. Therefore it is relatively easier to estimate ground truth source with better precision even when the phase is not estimated. (b) The case where SNR of given mixture is low. In this case the source is not dominant in the mixture. Therefore, the irreducible phase error is likely to be higher in low SNR conditions than higher SNR conditions. Under this circumstance, we assume the lack of phase estimation will in a particularly bad system performance. To empirically show the importance of phase estimation, we show correlation between phase improvement and performance difference between the conventional method (without phase estimation) and our proposed method (with phase estimation) in TAB8. The performance difference was calculated by simply subtracting the evaluation of conventional method from the evaluation of our method with phase estimation. For fair comparison, both conventional method (RMRn) and proposed method (cRMCn) were set to have the same number of parameters. Also, both models were trained with weighted-SDR loss. 
The show that when the SNR is low, both the phase improvement and the performance difference are relatively higher than the from higher SNR conditions. Furthermore, almost all show an incremental increase of phase improvement and performance difference as the SNR decreases, which agrees on our assumption. Therefore we believe that phase estimation is important especially in harsh noisy conditions (low SNR conditions). Table 7: Pairwise preference scores of four models including DCUnet. The scores are obtained by calculating the relative frequency the subjects prefer one method to the other method. Hard/Medium/Easy denote 2.5/7.5/17.5 SNR conditions in dB, respectively. Significance for each statistic is also described (n.s.: not significant, * : p<0.05, * * : p<0.01, * * * : p<0.001). | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SkeRTsAcYm | This paper proposes a novel complex masking method for speech enhancement along with a loss function for efficient phase estimation. |
All living organisms struggle against the forces of nature to carve out niches where they can maintain relative stasis. We propose that such a search for order amidst chaos might offer a unifying principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing RL (SMiRL). SMiRL trains an agent with the objective of maximizing the probability of observed states under a model trained on all previously seen states. The ing agents acquire several proactive behaviors to seek and maintain stable states such as balancing and damage avoidance, that are closely tied to the affordances of the environment and its prevailing sources of entropy, such as winds, earthquakes, and other agents. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, and control a humanoid to avoid falls, without any task-specific reward supervision. We further show that SMiRL can be used as an unsupervised pre-training objective that substantially accelerates subsequent reward-driven learning The general struggle for existence of animate beings is not a struggle for raw materials, nor for energy, but a struggle for negative entropy. (Ludwig Boltzmann, 1886) All living organisms carve out environmental niches within which they can maintain relative predictability amidst the ever-increasing entropy around them (Boltzmann, 1886; Schrödinger, 1944; ;). Humans, for example, go to great lengths to shield themselves from surprise -we band together in millions to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat and cold, wind and storm. The need to discover and maintain such surprise-free equilibria has driven great resourcefulness and skill in organisms across very diverse natural habitats. Motivated by this, we ask: could the motive of preserving order amidst chaos guide the automatic acquisition of useful behaviors in artificial agents? Our method therefore addresses the unsupervised reinforcement learning problem: how might an agent in an environment acquire complex behaviors and skills with no external supervision? This central problem in artificial intelligence has evoked several candidate solutions, largely focusing on novelty-seeking behaviors (; ; ; ; ;). In simulated worlds, such as video games, novelty-seeking intrinsic motivation can lead to interesting and meaningful behavior. However, we argue that these sterile environments are fundamentally lacking compared to the real world. In the real world, natural forces and other agents offer bountiful novelty. The second law of thermodynamics stipulates ever-increasing entropy, and therefore perpetual novelty, without even requiring any agent intervention. Instead, the challenge in natural environments is homeostasis: discovering behaviors that enable agents to maintain an equilibrium, for example to preserve their bodies, their homes, and avoid predators and hunger. Even novelty seeking behaviors may emerge naturally as a means to maintain homeostasis: an agent that is curious and forages for food in unlikely places might better satisfy its hunger. In natural environments (left), an inactive agent will experience a wide variety of states. By reasoning about future surprise, a SMiRL agent can take actions that temporarily increase surprise but reduce it in the long term. 
For example, building a house initially in novel states, but once it is built, the house allows the agent to experience a more stable and surprise-free environment. On the right we show an interpretation of the agent interaction loop using SMiRL. When the agent observes a state, it updates it belief p(s) over states. Then, the action policy π(a|s, θ) is conditioned on this belief and maximizes the expected likelihood of the next state under its belief. We formalize allostasis as an objective for reinforcement learning based on surprise minimization (SMiRL). In highly entropic and dynamic environments with undesirable forms of novelty, minimizing surprise (i.e., minimizing novelty) causes agents to naturally seek a stable equilibrium. Natural environments with winds, earthquakes, adversaries, and other disruptions already offer a steady stream of novel stimuli, and an agent that minimizes surprise in these environments will act and explore in order to find the means to maintain a stable equilibrium in the face of these disturbances. SMiRL is simple to describe and implement: it works by maintaining a density p(s) of visited states and training a policy to act such that future states have high likelihood under p(s). This interaction scheme is shown in Figure 1 (right) Across many different environments, with varied disruptive forces, and in agents with diverse embodiments and action spaces, we show that this simple approach induces useful equilibrium-seeking behaviors. We show that SMiRL agents can solve Tetris, avoid fireballs in Doom, and enable a simulated humanoid to balance and locomote, without any explicit task reward. More pragmatically, we show that SMiRL can be used together with a task reward to accelerate standard reinforcement learning in dynamic environments, and can provide a simple mechanism for imitation learning. SMiRL holds promise for a new kind of unsupervised RL method that produces behaviors that are closely tied to the prevailing disruptive forces, adversaries, and other sources of entropy in the environment. Videos of our are available at https://sites.google.com/view/surpriseminimization We propose surprise minimization as a means to operationalize the idea of learning useful behaviors by seeking to preserve order amidst chaos. In complex natural environments with disruptive forces that tend to naturally increase entropy, which we refer to as entropic environments, minimizing surprise over an agent's lifetime requires taking action to reach stable states, and often requires acting continually to maintain homeostasis and avoid surprise. The long term effects of actions on the agent's surprise can be complex and somewhat counterintuitive, especially when we consider that actions not only change the state that the agent is in, but also its beliefs about which states are more likely. The combination of these two processes induce the agent to not only seek states where p(s) is large, but to also visit states so as to alter p(s), in order to receive larger rewards in the future. This "meta" level reasoning can in behaviors where the agent might actually visit new states in order to make them more familiar. An example of this is shown in Figure 1 where in order to avoid the disruptions from the changing weather an agent needs to build a shelter or home to protect itself and decrease its observable surprise. The SMiRL formulation relies on disruptive forces in the environment to avoid collapse to degenerate solutions, such as staying in a single state s 0. 
Fortunately, natural environments typically offer no shortage of such disruption. To instantiate SMiRL, we design a reinforcement learning agent with a reward proportional to how familiar its current state is based on the history of states it has experienced during its "life," which corresponds to a single episode. Formally, we assume a fully-observed controlled Markov process (CMP), though extensions to partially observed settings can also be developed. We use s t to denote the state at time t, and a t to denote the agent's action, ρ(s 0) to denote the initial state distribution, and T (s t+1 |s t, a t) to denote the transition dynamics. The agent has access to a dataset D t = {s 1, . . ., s t} of all states experienced so far. By fitting a generative model p θt (s) with parameters θ t to this dataset, the agent obtains an estimator that can be used to evaluate the negative surprise reward, given by We denote the fitting process as θ t = U(D t). The goal of a SMiRL agent is to maximize the sum t log p θt (s t+1). Since the agent's actions affect the future D t and thus the future θ t's, the optimal policy does not simply visit states that have a high p θt (s) now, but rather those states that will change p θt (s) such that it provides high likelihood to the states that it sees in the future. We now present a practical reinforcement learning algorithm for surprise minimization. Recall that a critical component of SMiRL is reasoning about the effect of actions on future states that will be added to D, and their effect on future density estimates -e.g., to understand that visiting a state that is currently unfamiliar and staying there will make that state familiar, and therefore lead to higher rewards in the long run. This means that the agent must reason not only about the unknown MDP dynamics, but also the dynamics of the density model p θ (s) trained on D. In our algorithm, we accomplish this via an episodic training procedure, where the agent is trained over many episodes and D is reset at the beginning of each episode to simulate a new lifetime. Through this procedure, SMiRL learns the parameters φ of the agent's policy π φ for a fixed horizon. To learn this the policy must be conditioned on some sufficient statistic of D t, since the reward r t is a function of D t. Having trained parameterized generative models p θt as above on all states seen so far, we condition π on θ t and |D t |. This implies an assumption that θ t and |D t | represent the sufficient statistics necessary to summarize the contents of the dataset for the policy, and contain all information required to reason about how p θ will evolve in the future. Of course, we could also use any other summary statistic, or even read in the entirety of D t using a recurrent model. In the next section, we also describe a modification that allows us to utilize a deep density model without conditioning π on a high-dimensional parameter vector. Algorithm 1 provides the pseudocode. SMiRL can be used with any reinforcement learning algorithm, which we denote RL in the pseudocode. As is standard in reinforcement learning, we alternate between sampling episodes from the policy (lines 6-12) and updating the policy parameters (line 13). The details of the updates are left to the specific RL algorithm, which may be on or off-policy. During each episode, as shown in line 11, D 0 is initialized with the first state and grows as each state visited by the agent is added to the dataset. 
The parameters θ t of the density model are fit to D t at each timestep to both be passed to the policy and define the reward function. At the end of the episode, D T is discarded and the new D 0 is initialized. While SMiRL may in principle be used with any choice of model class for the generative model p θ (s), this choice must be carefully made in practice. As we show in our experiments, relatively simple distribution classes, such as products of independent marginals, suffice to run SMiRL in simple environments with low-dimensional state spaces. However, it may be desirable in more complex environments to use more sophisticated density estimators, especially when learning directly from high-dimensional observations such as images. In particular, we propose to use variational autoencoders (VAEs) to learn a non-linear compressed state representation and facilitate estimation of p θ (s) for SMiRL. A VAE is trained using the standard loss to reconstruct states s after encoding them into a low-dimensional normal distribution q ω (z|s) through the encoder q with parameters ω. A decoder p ψ (s|z,) with parameters ψ computes s from the encoder output z. During this training process, a KL divergence loss between the prior p(z) and q ω (z|s) is used to keep this distribution near the standard normal distribution. We described a VAE-based approach for estimating the SMiRL surprise reward. In our implementation, the VAE is trained online, with VAE updates interleaved with RL updates. Training a VAE requires more data than the simpler density models that can easily be fit to data from individual episodes. We propose to overcome this by not resetting the VAE parameters between training episodes. Instead, we train the VAE across episodes. Instead of passing all VAE parameters to the SMiRL policy, we track a separate episode-specific distribution p θt (z), distinct from the VAE prior, over the course of each episode. p θt (z) replaces p θt (s) in the SMiRL algorithm and is fit to only that episode's state history. We represent p θt (z) as a vector of independent normal distributions, and fit it to the VAE encoder outputs. This replaces the density estimate in line 10 of Algorithm 1. Specifically, the corresponding update U(D t) is performed as follows: Training the VAE online, over all previously seen data, deviates from the recipe in the previous section, where the density model was only updated within an episode. However, this does provide for a much richer state density model, and the within-episode updates to estimate p θt (z) still provide our method with meaningful surprise-seeking behavior. As we show in our experiments, this can improve the performance of SMiRL in practice. We evaluate SMiRL on a range of environments, from video game domains to simulated robotic control scenarios. These are rich, dynamic environments -the world evolves automatically even without agent intervention due to the presence of disruptive forces and adversaries. Note that SMiRL relies on such disruptions to produce meaningful emergent behavior, since mere inaction would otherwise suffice to achieve homeostasis. However, as we have argued above, such disruptions are also an important property of most real world environments. Current RL benchmarks neglect this, focusing largely on unrealistically sterile environments where the agent alone drives change . 
Therefore, our choices of environments, discussed below, are not solely motivated by suitability to SMiRL; rather, we aim to evaluate unsupervised RL approaches, ours as well as others, in these more dynamic environments. Tetris. The classic game of Tetris offers a naturally entropic environment -the world evolves according to its own rules and dynamics even in the absence of coordinated behavior of the agent, piling up pieces and filling up the board. It therefore requires active intervention to maintain homeostasis. We consider a 4 × 10 Tetris board with tromino shapes (composed of 3 squares), as shown in Figure 2a. The observation is a binary image of the current board with one pixel per square, as well as an indicator for the type of shape that will appear next. Each action denotes one of the 4 columns in which to drop the shape and one of 4 shape orientations. For evaluation, we measure how many rows the agent clears, as well as how many times the agent dies in the game by allowing the blocks to reach the top of the board, within the max episode length of 100. Since the observation is a binary image, we model p(s) as independent Bernoulli. See Appendix A for details. VizDoom. We consider two VizDoom environments from: TakeCover and DefendTheLine. TakeCover provides a dynamically evolving world, with enemies that appear over time and throw fireballs aimed at the player . The observation space consists of the 4 previous grayscale first-person image observations, and the action space consists of moving left or right. We evaluate the agent based on how many times it is hit by fireballs, which we term the "damage" taken by the agent. Images from the TakeCover environment are shown in Fig 2b and Fig??. In DefendTheLine, additional enemies can move towards the player, and the player can shoot the enemies. The agent starts with limited ammunition. This environment provides a "survival" reward function (r = 1 for each timestep alive) and performance is measured by how long the agent survives in the environment. For both environments, we model p(s) as independent Gaussian over the pixels. See Appendix A for details. miniGrid Is a navigation task where the agent has a partial observation of the environment shown by the lighter gray area around the red agent in Figure 2d. The agent needs to navigate down the hallways to escape the enemy agents (blue) to reach the safe room on the right the enemies can not enter, through a randomly placed door. Simulated Humanoid robots. In the last set of environments, a simulated planar Humanoid robot is placed in situations where it is in danger of falling. The action consists of the PD targets for each of the joints. The state space comprises the rotation of each joint and the linear velocity of each link. We evaluate several versions of this task, which are shown in Figure 2. The Cliff tasks starts the agent at the edge of a cliff, in a random pose and with a forward velocity of 1 m/s. Falling off the cliff leads to highly irregular and unpredictable configurations, so a surprise minimizing agent will want to learn to stay on the cliff. In the Treadmill environment, the robot starts on a platform that is moving at 1 m/s backwards; an agent will be carried backwards unless it learns some locomotion. The Pedestal environment is designed to show that SMiRL can learn a more active balancing policy. In this environment the agent starts out on a thin pedestal and random forces are applied to the robots links and boxes of random size are thrown at the agent. 
The Walk domain is used to evaluate the use of the SMiRL reward as a form of "stability reward" that assists the agent in learning how to walk while reducing the number of falls. This is done by initializing p(s) from example walking data and adding this to the task reward, as discussed in Section 4.2. The task reward in Walk is r walk = exp((v d * v d) * −1.5), where v d is the difference between the x velocity and the desired velocity of 1 m/s. In these environments, we measure performance as the proportion of episodes with a fall. A state is classified as a fall if either the agent's links, except for the feet, are touching the ground, or if the agent is −5 meters or more below the level of the platform or cliff. Since the state is continuous, we model p(s) as independent Gaussian; see Appendix A for details. Our experiments aim to answer the following questions: Can SMiRL learn meaningful and complex emergent behaviors in the environments described in Section 3? First, we evaluate SMiRL on the Tetris, VizDoom, Cliff, and Treadmill tasks, studying its ability to generate purposeful coordinated behaviors after training using only the surprise minimizing objective, in order to answer question. The SMiRL agent demonstrates meaningful emergent behaviors in each of these domains. In the Tetris environment, the agent is able to learn proactive behaviors to eliminate rows and properly play the game. The agent also learns emergent game playing behaviour in the VizDoom environment, acquiring an effective policy for dodging the fireballs thrown by the enemies. In both of these environments, stochastic and chaotic events force the SMiRL agent to take a coordinated course of action to avoid unusual states, such as full Tetris boards or fireball explosions. In the Cliff environment, the agent learns a policy that greatly reduces the probability of falling off of the cliff by bracing against the ground and stabilize itself at the edge, as shown in Figure 2e. In the Treadmill environment, SMiRL learns a more complex locomotion behavior, jumping forward to increase the time it stays on the treadmill, as shown in Figure 2f. A quantitative measurement of the reduction in falls is shown in Figure 4. We also study question in the TakeCover, Cliff, Treadmill and Pedestal environments, training a VAE model and estimating surprise in the latent space of the VAE. In most of these environments, the representation learned by the VAE leads to faster acquisition of the emergent behaviors in Take Comparison to intrinsic motivation. Figure 3 shows plots of the environment-specific rewards over time on Tetris, TakeCover, and the Humanoid domains Figure 4. In order to compare SMiRL to more standard intrinsic motivation methods, which seek out states that maximize surprise or novelty, we also evaluated ICM and RND (b). We also plot an oracle agent that directly optimizes the task reward. On Tetris, after training for 2000 epochs, SMiRL achieves near perfect play, on par with the oracle reward optimizing agent, with no deaths, as shown in Figure 3 (left, middle). ICM seeks novelty by creating more and more distinct patterns of blocks rather than clearing them, leading to deteriorating game scores over time. On TakeCover, SMiRL effectively learns to dodge fireballs thrown by the adversaries, as shown in 3 (right). Novelty-seeking ICM once again yields deteriorating rewards over time due to the method seeking novel events that correspond to damage. 
The baseline comparisons for the Cliff and Treadmill environments have a similar outcome. The novelty seeking behaviour of ICM causes it to learn a type of irregular behaviour that causes the agent to jump off the Cliff and roll around on the Treadmill, maximizing the variety (and quantity) of falls Figure 4. SMiRL and curiosity are not mutually exclusive. We show that these intrinsic reward functions can be combined to achieve better on the Treadmill environment Figure 4 (right). The combination of methods leads to increased initial learning speed and producing a walking-type gait on that task. To illustrate SMiRL's desire to explore we evaluate over an environment where the agent needs to produce long term planning behaviour. This environment is shown in Figure 2d, where the agent needs to navigate its way through the hallways, avoiding enemies, to reach a safe room through a randomly placed door. We found that SMiRL is able to solve this task. Results from these examples are shown on the accompanying website. While the central focus of this paper is the emergent behaviors that can be obtained via SMiRL, in this section we study more pragmatic applications. We show that SMiRL can be used for joint training to accelerate reward-driven learning of tasks, and also illustrate how SMiRL can be used to produce a rudimentary form of imitation learning. Imitation. We can easily adapt SMiRL to perform imitation by initializing the buffer D 0 with states from expert demonstrations, or even individual desired outcome states. To study this application of SMiRL, we initialize the buffer D 0 in Tetris with user-specified desired board states. An illustration of the Tetris imitation task is presented in Figure 6, showing imitation of a box pattern (top) and a checkerboard pattern (bottom), with the leftmost frame showing the user-specified example, and the other frames showing actual states reached by the SMiRL agent. While a number of prior works have studied imitation without example actions (; a; ; b; ; Lee et al.), this capability emerges automatically in SMiRL, without any further modification to the algorithm. SMiRL as a stability reward. In this next experiment, we study how SMiRL can accelerate acquisition of reward-driven behavior in environments that present a large number of possible actions leading to diverse but undesirable states. Such settings are common in real life: a car can crash in many different ways, a robot can drop a glass on the ground causing it to break in many ways, etc. While this is of course not the case for all tasks, many real-world tasks do require the agent to stabilize itself in a specific and relatively narrow set of conditions. Incorporating SMiRL into the learning objective in such settings can accelerate learning, and potentially improve safety during training, as the agent automatically learns to avoid anything that is unfamiliar. We study this application of SMiRL in the DefendTheLine task and the Walk task. In both cases, we use SMiRL to augment the task reward, such that the full reward is given by r combined (s) = r task (s) + αr SMiRL (s), where α is chosen to put the two reward terms at a similar magnitude. In the Walk task, illustrated in Figure 2g, p θ (s) is additionally initialized with 8 example walking trajectories (256 timesteps each), similarly to the imitation setting, to study how well SMiRL can incorporate prior knowledge into the stability reward (Reward + SMiRL (ours). 
We include another version that is not initialized with expert data (Reward + SMiRL (no-expert). We measure the number of falls during training, with and without the SMiRL reward term. The in Figure 5b show that adding the SMiRL reward in significantly fewer falls during training, and less when using expert data while learning to walk well, indicating that SMiRL stabilizes the agent more quickly than the task reward alone. In the DefendTheLine task, shown in Figure 2c, we compare the performance of SMiRL as a joint training objective to the more traditional novelty-driven bonus provided by ICM and RND (b). Novelty-driven bonuses are often used to accelerate learning in domains that present an exploration challenge. However, as shown in the in Figure 5a, the SMiRL reward, even without demonstration data, provides for substantially faster learning on this task than novelty-seeking intrinsic motivation. These suggest that SMiRL can be a viable method for accelerating learning and reducing the amount of unsafe behavior (e.g., falling) in dynamic environments. Prior works have sought to learn intelligent behaviors through reinforcement learning with respect to a provided reward function, such as the score of a video game or a hand-defined cost function . Such rewards are often scarce or difficult to provide in practical real world settings, motivating approaches for reward-free learning such as empowerment or intrinsic motivation (; ;). Intrinsic motivation has typically focused on encouraging novelty-seeking behaviors by maximizing model uncertainty (; ; ;), by maximizing model prediction error or improvement , through state visitation counts , via surprise maximization (; ;), and through other novelty-based reward bonuses (; a;). We do the opposite. Inspired by the free energy principle (;, we instead incentivize an agent to minimize surprise and study the ing behaviors in dynamic, entropy-increasing environments. In such environments, which we believe are more reflective of the real-world, we find that prior noveltyseeking environments perform poorly. Prior works have also studied how competitive self-play and competitive, multi-agent environments can lead to complex behaviors with minimal reward information (; ; ;). Like these works, we also consider how complex behaviors can emerge in resource constrained environments. However, our approach can also be applied in non-competitive environments. We presented an unsupervised reinforcement learning method based on minimization of surprise. We show that surprise minimization can be used to learn a variety of behaviors that maintain "homeostasis," putting the agent into stable and sustainable limit cycles in its environment. Across a range of tasks, these stable limit cycles correspond to useful, semantically meaningful, and complex behaviors: clearing rows in Tetris, avoiding fireballs in VizDoom, and learning to balance and hop forward with a bipedal robot. The key insight utilized by our method is that, in contrast to simple simulated domains, realistic environments exhibit dynamic phenomena that gradually increase entropy over time. An agent that resists this growth in entropy must take active and coordinated actions, thus learning increasingly complex behaviors. This stands in stark contrast to commonly proposed intrinsic exploration methods based on novelty, which instead seek to visit novel states and increase entropy. 
Besides fully unsupervised reinforcement learning, where we show that our method can give rise to intelligent and complex policies, we also illustrate several more pragmatic applications of our approach. We show that surprise minimization can provide a general-purpose risk aversion reward that, when combined with task rewards, can improve learning in environments where avoiding catastrophic (and surprising) outcomes is desirable. We also show that SMiRL can be adapted to perform a rudimentary form of imitation. Our investigation of surprise minimization suggests a number of directions for future work. The particular behavior of a surprise minimizing agent is strongly influenced by the particular choice of state representation: by including or excluding particular observation modalities, the agent will be more or less surprised. where s is a single state, θ i is the sample mean calculated from D t indicating the proportion of datapoints where location i has been occupied by a block, and s i is a binary variable indicating the presence of a block at location i. If the blocks stack to the top, the game board resets, but the episode continues and the dataset D t continues to accumulate states. SMiRL on VizDoom and Humanoid. In these environments the observations placed in the buffer are downsampled 10 × 13 single-frame observations for VizDoom environments and the full state for the Humanoid environments. We model p(s) as an independent Gaussian distribution for each dimension in the observation. Then, the SMiRL reward can be computed as: where s is a single state, µ i and σ i are calculated as the sample mean and standard deviation from D t and s i is the i th observation feature of s. We emphasize that the RL algorithm in SMiRL is provided with a standard stationary MDP (except in the VAE setting, more on that below), where the state is simply augmented with the parameters of the belief over states θ and the timestep t. We emphasize that this MDP is indeed Markovian, and therefore it is reasonable to expect any convergent RL algorithm to converge to a near-optimal solution. Consider the augmented state transition p(s t+1, θ t+1, t + 1|s t, a t, θ t, t). This transition model does not change over time because the updates to θ are deterministic when given s t and t. The reward function R(s t, θ t, t) is also stationary: it is in fact deterministic given s t and θ t. Because SMiRL uses RL in an MDP, we benefit from the same convergence properties as other RL methods. However, the version of SMiRL that uses a representation learned from a VAE is not Markovian because the VAE parameters are not added to the state, and thus the reward function changes over time.. We find that this does not hurt , and note that many intrinsic reward methods such as ICM and RND also lack stationary reward functions. This process is described in Algorithm 1. We do not use entropic to mean that state transition probabilities change over time. Rather, it means that for any state in the environment, random disruptive perturbations may be applied to the state. In such settings, SMiRL seeks to visit state distributions p(s) that are easy to preserve. VAE on-line training When using a VAE to model the surprise of new states, we evaluate the probability of the latent representations z, as described in Section 2.3. The VAE is trained at the end of each episode on all data seen so far across all episodes. 
This means that the encoder q ω (z|bs) is changing over the course of the SM iRL algorithm, which could lead to difficulty learning a good policy. In practice, the rich representations learned by the VAE help policy learning overall. Training parameters. For the discrete action environment (Tetris and VizDoom), the RL algorithm used is deep Q-learning with a target Q network. For the Humanoid domains, we use TRPO . For Tetris and the Humanoid domains, the policies are parameterized by fully connected neural networks, while VizDoom uses a convolutional network. The encoders and decoders of the VAEs used for VizDoom and Humanoid experiments are implemented as fully connected networks over the same buffer observations as above. The coefficient for the KL-divergence term in the VAE loss was 0.1 and 1.0 for the VizDoom and Humanoid experiments, respectively. | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | H1lDbaVYvH | Learning emergent behavior by minimizing Bayesian surprise with RL in natural environments with entropy. |
Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.

Deep neural networks that optimize for effective representations have enjoyed tremendous success over human-engineered representations. Meta-learning takes this one step further by optimizing for a learning algorithm that can effectively acquire representations. A common approach to meta-learning is to train a recurrent or memory-augmented model, such as a recurrent neural network, to take a training dataset as input and then output the parameters of a learner model. Alternatively, some approaches pass the dataset and test input into the model, which then outputs a corresponding prediction for the test example. Such recurrent models are universal learning procedure approximators, in that they have the capacity to approximately represent any mapping from dataset and test datapoint to label. However, depending on the form of the model, it may lack statistical efficiency. In contrast to the aforementioned approaches, more recent work has proposed methods that include the structure of optimization problems into the meta-learner. In particular, model-agnostic meta-learning (MAML) optimizes only for the initial parameters of the learner model, using standard gradient descent as the learner's update rule (Finn et al., 2017a). Then, at meta-test time, the learner is trained via gradient descent. By incorporating prior knowledge about gradient-based learning, MAML improves on the statistical efficiency of black-box meta-learners and has successfully been applied to a range of meta-learning problems (Finn et al., 2017a;b). But, does it do so at a cost? A natural question that arises with purely gradient-based meta-learners such as MAML is whether it is indeed sufficient to only learn an initialization, or whether representational power is in fact lost from not learning the update rule. Intuitively, we might surmise that learning an update rule is more expressive than simply learning an initialization for gradient descent. In this paper, we seek to answer the following question: does simply learning the initial parameters of a deep neural network have the same representational power as arbitrarily expressive meta-learners that directly ingest the training data at meta-test time?
Or, more concisely, does representation combined with standard gradient descent have sufficient capacity to constitute any learning algorithm? We analyze this question from the standpoint of the universal function approximation theorem. We compare the theoretical representational capacity of the two meta-learning approaches: a deep network updated with one gradient step, and a meta-learner that directly ingests a training set and test input and outputs predictions for that test input (e.g. using a recurrent neural network). In studying the universality of MAML, we find that, for a sufficiently deep learner model, MAML has the same theoretical representational power as recurrent meta-learners. We therefore conclude that, when using deep, expressive function approximators, there is no theoretical disadvantage in terms of representational power to using MAML over a black-box meta-learner represented, for example, by a recurrent network. Since MAML has the same representational power as any other universal meta-learner, the next question we might ask is: what is the benefit of using MAML over any other approach? We study this question by analyzing the effect of continuing optimization on MAML performance. Although MAML optimizes a network's parameters for maximal performance after a fixed small number of gradient steps, we analyze the effect of taking substantially more gradient steps at meta-test time. We find that initializations learned by MAML are extremely resilient to overfitting to tiny datasets, in stark contrast to more conventional network initialization, even when taking many more gradient steps than were used during meta-training. We also find that the MAML initialization is substantially better suited for extrapolation beyond the distribution of tasks seen at meta-training time, when compared to meta-learning methods based on networks that ingest the entire training set. We analyze this setting empirically and provide some intuition to explain this effect.

In this section, we review the universal function approximation theorem and its extensions that we will use when considering the universal approximation of learning algorithms. We also overview the model-agnostic meta-learning algorithm and an architectural extension that we will use in Section 4. The universal function approximation theorem states that a neural network with one hidden layer of finite width can approximate any continuous function on compact subsets of R^n up to arbitrary precision. The theorem holds for a range of activation functions, including the sigmoid and ReLU functions. A function approximator that satisfies the definition above is often referred to as a universal function approximator (UFA). Similarly, we will define a universal learning procedure approximator to be a UFA with input (D, x*) and output y*, where (D, x*) denotes the training dataset and test input, while y* denotes the desired test output. Prior work showed that a neural network with a single hidden layer can simultaneously approximate any function and its derivatives, under mild assumptions on the activation function used and the target function's domain. We will use this property in Section 4 as part of our meta-learning universality results.

Model-Agnostic Meta-Learning (MAML) is a method that proposes to learn an initial set of parameters θ such that one or a few gradient steps on θ computed using a small amount of data for one task leads to effective generalization on that task (Finn et al., 2017a).
Tasks typically correspond to supervised classification or regression problems, but can also correspond to reinforcement learning problems. The MAML objective is computed over many tasks {T_j} as follows:

min_θ Σ_{T_j} L(D'_{T_j}, θ'_{T_j}), where θ'_{T_j} = θ − α ∇_θ L(D_{T_j}, θ),

where D_{T_j} corresponds to a training set for task T_j and the outer loss evaluates generalization on test data in D'_{T_j}. The inner optimization to compute θ'_{T_j} can use multiple gradient steps; though, in this paper, we will focus on the single gradient step setting. After meta-training on a wide range of tasks, the model can quickly and efficiently learn new, held-out test tasks by running gradient descent starting from the meta-learned representation θ. While MAML is compatible with any neural network architecture and any differentiable loss function, recent work has observed that some architectural choices can improve its performance. A particularly effective modification, introduced by Finn et al. (2017b), is to concatenate a vector of parameters, θ_b, to the input. As with all other model parameters, θ_b is updated in the inner loop via gradient descent, and the initial value of θ_b is meta-learned. This modification, referred to as a bias transformation, increases the expressive power of the error gradient without changing the expressivity of the model itself. While Finn et al. (2017b) report empirical benefit from this modification, we will use this architectural design as a symmetry-breaking mechanism in our universality proof.

We can broadly classify RNN-based meta-learning methods into two categories. In the first approach, there is a meta-learner model g with parameters φ which takes as input the dataset D_T for a particular task T and a new test input x*, and outputs the estimated output ŷ* for that input:

ŷ* = g(D_T, x*; φ).

The meta-learner g is typically a recurrent model that iterates over the dataset D_T and the new input x*. For a recurrent neural network model that satisfies the UFA theorem, this approach is maximally expressive, as it can represent any function on the dataset D_T and test input x*. In the second approach, there is a meta-learner g that takes as input the dataset for a particular task D_T and the current weights θ of a learner model f, and outputs new parameters θ_T for the learner model. Then, the test input x* is fed into the learner model to produce the predicted output ŷ*. The process can be written as follows:

θ_T = g(D_T, θ; φ), ŷ* = f(x*; θ_T).

Note that, in the form written above, this approach can be as expressive as the previous approach, since the meta-learner could simply copy the dataset into some of the predicted weights, reducing to a model that takes as input the dataset and the test example. Several versions of this approach, e.g. Li & Malik (2017b), have the recurrent meta-learner operate on order-invariant features such as the gradient and objective value averaged over the datapoints in the dataset, rather than operating on the individual datapoints themselves. This induces a potentially helpful inductive bias that disallows coupling between datapoints, ignoring the ordering within the dataset. As a result, the meta-learning process can only produce permutation-invariant functions of the dataset. In model-agnostic meta-learning (MAML), instead of using an RNN to update the weights of the learner f, standard gradient descent is used.
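To make the distinction between these black-box forms and the MAML update concrete, the following is a minimal sketch (not the authors' implementation) using a toy linear learner in NumPy; the function names, the squared-error inner loss, and the step size alpha are illustrative assumptions rather than anything specified above.

```python
# Minimal sketch contrasting the meta-learner forms discussed above (illustrative only).
import numpy as np

def learner_f(x, theta):
    """Learner model f(x; theta): here simply a linear map."""
    return theta @ x

def loss_grad_theta(x, y, theta):
    """Gradient of a squared-error inner loss w.r.t. theta for the linear learner."""
    return np.outer(learner_f(x, theta) - y, x)

# Approach 1: a black-box meta-learner g(D, x*; phi) maps the dataset and the
# test input directly to a prediction (e.g., an RNN reading (x, y) pairs).
def blackbox_predict(meta_learner_g, D, x_star, phi):
    return meta_learner_g(D, x_star, phi)

# Approach 2: a black-box meta-learner outputs new learner weights,
# theta_T = g(D, theta; phi), followed by y_hat = f(x*; theta_T).
def blackbox_weight_update_predict(meta_learner_g, D, x_star, theta, phi):
    theta_T = meta_learner_g(D, theta, phi)
    return learner_f(x_star, theta_T)

# MAML: the update rule is fixed to one step of gradient descent on the
# training set; only the initialization theta is meta-learned.
def maml_predict(D, x_star, theta, alpha=0.1):
    grad = sum(loss_grad_theta(x, y, theta) for x, y in D)
    theta_prime = theta - alpha * grad
    return learner_f(x_star, theta_prime)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta0 = rng.normal(size=(1, 3))
    D = [(rng.normal(size=3), rng.normal(size=1)) for _ in range(5)]
    x_star = rng.normal(size=3)
    print(maml_predict(D, x_star, theta0))
```

The two blackbox functions only fix an interface, into which any recurrent meta-learner g with parameters φ could be plugged, whereas maml_predict fixes the update rule and leaves only the initialization θ to be meta-learned.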
Specifically, the prediction ŷ* for a test input x* is:

ŷ* = f_MAML(D_T, x*; θ) = f(x*; θ'), where θ' = θ − α ∇_θ Σ_{(x,y)∈D_T} ℓ(y, f(x; θ)),

where θ denotes the initial parameters of the model f and also corresponds to the parameters that are meta-learned, and ℓ corresponds to a loss function with respect to the label and prediction. Since the RNN approaches can approximate any update rule, they are clearly at least as expressive as gradient descent. It is less obvious whether or not the MAML update imposes any constraints on the learning procedures that can be acquired. To study this question, we define a universal learning procedure approximator to be a learner which can approximate any function of the set of training datapoints D_T and the test point x*. It is clear how f_MAML can approximate any function of x*, as per the UFA theorem; however, it is not obvious if f_MAML can represent any function of the set of input, output pairs in D_T, since the UFA theorem does not consider the gradient operator. The first goal of this paper is to show that f_MAML(D_T, x*; θ) is a universal function approximator of (D_T, x*) in the one-shot setting, where the dataset D_T consists of a single datapoint (x, y). Then, we will consider the case of K-shot learning, showing that f_MAML(D_T, x*; θ) is universal in the set of functions that are invariant to the permutation of datapoints. In both cases, we will discuss meta supervised learning problems with both discrete and continuous labels and the loss functions under which universality does or does not hold.

We first introduce a proof of the universality of gradient-based meta-learning for the special case with only one training point, corresponding to one-shot learning. We denote the training datapoint as (x, y), and the test input as x*. A universal learning algorithm approximator corresponds to the ability of a meta-learner to represent any function f_target(x, y, x*) up to arbitrary precision. We will proceed by construction, showing that there exists a neural network function f̂(·; θ) such that f̂(x*; θ') approximates f_target(x, y, x*) up to arbitrary precision, where θ' = θ − α ∇_θ ℓ(y, f̂(x; θ)) and α is the non-zero learning rate. The proof holds for a standard multi-layer ReLU network, provided that it has sufficient depth. As we discuss in Section 6, the loss function ℓ cannot be any loss function, but the standard cross-entropy and mean-squared error objectives are both suitable. In this proof, we will start by presenting the form of f̂ and deriving its value after one gradient step. Then, to show universality, we will construct a setting of the weight matrices that enables independent control of the information flow coming forward from x and x*, and backward from y. We will start by constructing f̂, which is a generic deep network with N + 2 layers and ReLU nonlinearities. Note that, for a particular weight matrix W_i at layer i, a single gradient step W_i − α ∇_{W_i} ℓ can only represent a rank-1 update to the matrix W_i. That is because the gradient of W_i is the outer product of two vectors, ∇_{W_i} ℓ = a_i b_{i−1}^T, where a_i is the error gradient with respect to the pre-synaptic activations at layer i, and b_{i−1} is the forward post-synaptic activations at layer i − 1. The expressive power of a single gradient update to a single weight matrix is therefore quite limited. However, if we sequence N weight matrices as ∏_{i=1}^{N} W_i, corresponding to multiple linear layers, it is possible to acquire a rank-N update to the linear function represented by W = ∏_{i=1}^{N} W_i.
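The rank argument above can be checked numerically. Below is a small sketch of our own (not code from the paper): it computes the rank-1 gradient of a single linear map, and then the first-order change of a product of N matrices when every factor takes one gradient step; the dimensions, random data, and alpha are arbitrary illustrative choices.

```python
# Rank of the update: single matrix vs. product of N matrices (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
d, N, alpha = 6, 3, 0.01
phi_x = rng.normal(size=d)   # forward features phi(x) of the training input
e = rng.normal(size=d)       # backward error gradient dL/dz

# Single matrix: z = W phi(x), so dL/dW = e phi(x)^T is a rank-1 outer product.
print(np.linalg.matrix_rank(np.outer(e, phi_x)))    # -> 1

# Product of N matrices: z = (W_1 W_2 ... W_N) phi(x).
Ws = [rng.normal(size=(d, d)) for _ in range(N)]

def prod(mats):
    out = np.eye(d)
    for M in mats:
        out = out @ M
    return out

left = lambda i: prod(Ws[:i])        # W_1 ... W_{i-1}
right = lambda i: prod(Ws[i + 1:])   # W_{i+1} ... W_N

# dL/dW_i = (W_1...W_{i-1})^T e phi(x)^T (W_{i+1}...W_N)^T  -- rank 1 for each i.
grads = [left(i).T @ np.outer(e, phi_x) @ right(i).T for i in range(N)]

# First-order change of the product when every factor takes one gradient step
# (higher-order terms in alpha are dropped, as in the derivation that follows).
delta = -alpha * sum(left(i) @ grads[i] @ right(i) for i in range(N))
print(np.linalg.matrix_rank(delta))                 # -> N for generic matrices
```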
Note that deep ReLU networks act like deep linear networks when the input and pre-synaptic activations are non-negative. Motivated by this reasoning, we will construct f̂(·; θ) as a deep ReLU network where a number of the intermediate layers act as linear layers, which we ensure by showing that the input and pre-synaptic activations of these layers are non-negative. This allows us to simplify the analysis. The simplified form of the model is as follows:

f̂(·; θ) = f_out((∏_{i=1}^{N} W_i) φ(·; θ_ft, θ_b); θ_out),

where φ(·; θ_ft, θ_b) represents an input feature extractor with parameters θ_ft and a scalar bias transformation variable θ_b, ∏_{i=1}^{N} W_i is a product of square linear weight matrices, f_out(·; θ_out) is a function at the output, and the learned parameters are θ := {θ_ft, θ_b, {W_i}, θ_out}. The input feature extractor and output function can be represented with fully connected neural networks with one or more hidden layers, which we know are universal function approximators, while ∏_{i=1}^{N} W_i corresponds to a set of linear layers with non-negative input and activations.

Next, we derive the form of the post-update prediction f̂(x*; θ'). Let z = (∏_{i=1}^{N} W_i) φ(x; θ_ft, θ_b), and let the error gradient be ∇_z ℓ = e(x, y). Then, the gradient with respect to each weight matrix W_i is:

∇_{W_i} ℓ(y, f̂(x; θ)) = (∏_{j=1}^{i−1} W_j)^T e(x, y) φ(x; θ_ft, θ_b)^T (∏_{j=i+1}^{N} W_j)^T.

Therefore, the post-update value of ∏_{i=1}^{N} W'_i = ∏_{i=1}^{N} (W_i − α ∇_{W_i} ℓ) is given by

∏_{i=1}^{N} W_i − α Σ_{i=1}^{N} (∏_{j=1}^{i−1} W_j)(∏_{j=1}^{i−1} W_j)^T e(x, y) φ(x; θ_ft, θ_b)^T (∏_{j=i+1}^{N} W_j)^T (∏_{j=i+1}^{N} W_j) − O(α^2),

where we will disregard the last term, assuming that α is comparatively small such that α^2 and all higher order terms vanish. In general, these terms do not necessarily need to vanish, and likely would further improve the expressiveness of the gradient update, but we disregard them here for the sake of the simplicity of the derivation. Ignoring these terms, we now note that the post-update value of z when x* is provided as input into f̂(·; θ') is given by

z' = (∏_{i=1}^{N} W_i) φ(x*; θ'_ft, θ'_b) − α Σ_{i=1}^{N} (∏_{j=1}^{i−1} W_j)(∏_{j=1}^{i−1} W_j)^T e(x, y) φ(x; θ_ft, θ_b)^T (∏_{j=i+1}^{N} W_j)^T (∏_{j=i+1}^{N} W_j) φ(x*; θ'_ft, θ'_b).   (1)

Our goal is to show that there exists a setting of W_i, f_out, and φ for which the above function, f̂(x*; θ'), can approximate any function of (x, y, x*). To show universality, we will aim to independently control information flow from x, from y, and from x*, by multiplexing forward information from x and backward information from y. We will achieve this by decomposing W_i, φ, and the error gradient into three parts, as follows:

W_i := diag(W̃_i, W̄_i, w̌_i) (block-diagonal),   φ(·; θ_ft, θ_b) := [φ̃(·; θ_ft, θ_b); 0; θ_b],   e(x, y) := [0; ē(y); ě(y)],   (2)

where the initial value of θ_b will be 0. The top components all have equal numbers of rows, as do the middle components. As a result, we can see that z will likewise be made up of three components, which we will denote as z̃, z̄, and ž. Lastly, we construct the top component of the error gradient to be 0, whereas the middle and bottom components, ē(y) and ě(y), can be set to be any linear (but not affine) function of y. We will discuss how to achieve this gradient in the latter part of this section when we define f_out and in Section 6. In Appendix A.3, we show that we can choose a particular form of W̃_i, W̄_i, and w̌_i that will simplify the products of W_j matrices in Equation 1, such that we get the following form for z̄':

z̄' = −α Σ_{i=1}^{N} A_i ē(y) φ̃(x; θ_ft, θ_b)^T B_i^T B_i φ̃(x*; θ'_ft, θ'_b),   (3)

where A_1 = I, B_N = I, A_i can be chosen to be any symmetric positive-definite matrix, and B_i can be chosen to be any positive definite matrix. In Appendix D, we further show that these definitions of the weight matrices satisfy the condition that the activations are non-negative, meaning that the model f̂ can be represented by a generic deep network with ReLU nonlinearities.

Finally, we need to define the function f_out at the output. When the training input x is passed in, we need f_out to propagate information about the label y as defined in Equation 2.
And, when the test input x is passed in, we need a different function defined only on z. Thus, we will define f out as a neural network that approximates the following multiplexer function and its derivatives (as shown possible by): DISPLAYFORM8 where g pre is a linear function with parameters θ g such that ∇ z = e(y) satisfies Equation 2 (see Section 6) and h post (·; θ h) is a neural network with one or more hidden layers. As shown in Appendix A.4, the post-update value of f out is DISPLAYFORM9 Now, combining Equations 3 and 5, we can see that the post-update value is the following: DISPLAYFORM10 In summary, so far, we have chosen a particular form of weight matrices, feature extractor, and output function to decouple forward and backward information flow and recover the post-update function above. Now, our goal is to show that the above function f (x ; θ) is a universal learning algorithm approximator, as a function of (x, y, x). For notational clarity, we will use DISPLAYFORM11 to denote the inner product in the above equation, noting that it can be viewed as a type of kernel with the RKHS defined by B i φ(x; θ ft, θ b). The connection to kernels is not in fact needed for the proof, but provides for convenient notation and an interesting observation. We then define the following lemma: Lemma 4.1 Let us assume that e(y) can be chosen to be any linear (but not affine) function of y. Then, we can choose θ ft, θ h, {A i ; i > 1}, {B i ; i < N} such that the function DISPLAYFORM12 can approximate any continuous function of (x, y, x) on compact subsets of R dim(y). Intuitively, Equation 7 can be viewed as a sum of basis vectors A i e(y) weighted by k i (x, x), which is passed into h post to produce the output. There are likely a number of ways to prove Lemma 4.1. In Appendix A.1, we provide a simple though inefficient proof, which we will briefly summarize here. We can define k i to be an indicator function, indicating when (x, x) takes on a particular value indexed by i. Then, we can define A i e(y) to be a vector containing the information of y and i. Then, the result of the summation will be a vector containing information about the label y and the value of (x, x) which is indexed by i. Finally, h post defines the output for each value of (x, y, x). The bias transformation variable θ b plays a vital role in our construction, as it breaks the symmetry within k i (x, x). Without such asymmetry, it would not be possible for our constructed function to represent any function of x and x after one gradient step. In summary, we have shown that there exists a neural network structure for which f (x ; θ) is a universal approximator of f target (x, y, x). We chose a particular form of f (·; θ) that decouples forward and backward information flow. With this choice, it is possible to impose any desired post-update function, even in the face of adversarial training datasets and loss functions, e.g. when the gradient points in the wrong direction. If we make the assumption that the inner loss function and training dataset are not chosen adversarially and the error gradient points in the direction of improvement, it is likely that a much simpler architecture will suffice that does not require multiplexing of forward and backward information in separate channels. The fact that informative loss functions and training data allow for simpler functions is indicative of the inductive bias built into gradient-based meta-learners, which is not present in recurrent meta-learners.
Our in this section implies that a sufficiently deep representation combined with just a single gradient step can approximate any one-shot learning algorithm. In the next section, we will show the universality of MAML for K-shot learning algorithms. Now, we consider the more general K-shot setting, aiming to show that MAML can approximate any permutation invariant function of a dataset and test datapoint ({(x, y) i; i ∈ 1...K}, x ) for K > 1. Note that K does not need to be small. To reduce redundancy, we will only overview the differences from the 1-shot setting in this section. We include a full proof in Appendix B.In the K-shot setting, the parameters off (·, θ) are updated according to the following rule: DISPLAYFORM0 Defining the form off to be the same as in Section 4, the post-update function is the following: DISPLAYFORM1 In Appendix C, we show one way in which this function can approximate any function of ({(x, y) k; k ∈ 1...K}, x ) that is invariant to the ordering of the training datapoints {(x, y) k; k ∈ 1... K}. We do so by showing that we can select a setting ofφ and of each A i and B i such that z is a vector containing a discretization of x and frequency counts of the discretized datapoints 4. If z is a vector that completely describes ({(x, y) i }, x ) without loss of information and because h post is a universal function approximator,f (x ; θ) can approximate any continuous function of ({(x, y) i }, x ) on compact subsets of R dim(y). It's also worth noting that the form of the above equation greatly resembles a kernel-based function approximator around the training points, and a substantially more efficient universality proof can likely be obtained starting from this premise. In the previous sections, we showed that a deep representation combined with gradient descent can approximate any learning algorithm. In this section, we will discuss the requirements that the loss function must satisfy in order for the in Sections 4 and 5 to hold. As one might expect, the main requirement will be for the label to be recoverable from the gradient of the loss. As seen in the definition of f out in Equation 4, the pre-update functionf (x, θ) is given by g pre (z; θ g), where g pre is used for back-propagating information about the label(s) to the learner. As stated in Equation 2, we require that the error gradient with respect to z to be: DISPLAYFORM0 and where e(y) andě(y) must be able to represent [at least] any linear function of the label y. We define g pre as follows: DISPLAYFORM1 To make the top term of the gradient equal to 0, we can setW g to be 0, which causes the pre-update predictionŷ =f (x, θ) to be 0. Next, note that e(y) = W T g ∇ŷ (y,ŷ) andě(y) =w T g ∇ŷ (y,ŷ). Thus, for e(y) to be any linear function of y, we require a loss function for which ∇ŷ (y, 0) is a linear function Ay, where A is invertible. Essentially, y needs to be recoverable from the loss function's gradient. In Appendix E and F, we prove the following two theorems, thus showing that the standard 2 and cross-entropy losses allow for the universality of gradient-based meta-learning. Theorem 6.1 The gradient of the standard mean-squared error objective evaluated atŷ = 0 is a linear, invertible function of y. Theorem 6.2 The gradient of the softmax cross entropy loss with respect to the pre-softmax logits is a linear, invertible function of y, when evaluated at 0. Now consider other popular loss functions whose gradients do not satisfy the label-linearity property. 
The gradients of the 1 and hinge losses are piecewise constant, and thus do not allow for universality. The Huber loss is also piecewise constant in some areas its domain. These error functions effectively lose information because simply looking at their gradient is insufficient to determine the label. Recurrent meta-learners that take the gradient as input, rather than the label, e.g. BID0, will also suffer from this loss of information when using these error functions. Now that we have shown that meta-learners that use standard gradient descent with a sufficiently deep representation can approximate any learning procedure, and are equally expressive as recurrent learners, a natural next question is -is there empirical benefit to using one meta-learning approach versus another, and in which cases? To answer this question, we next aim to empirically study the inductive bias of gradient-based and recurrent meta-learners. Then, in Section 7.2, we will investigate the role of model depth in gradient-based meta-learning, as the theory suggests that deeper networks lead to increased expressive power for representing different learning procedures. First, we aim to empirically explore the differences between gradient-based and recurrent metalearners. In particular, we aim to answer the following questions: can a learner trained with MAML further improve from additional gradient steps when learning new tasks at test time, or does it start to overfit? and does the inductive bias of gradient descent enable better few-shot learning performance on tasks outside of the training distribution, compared to learning algorithms represented as recurrent networks? Figure 4: Comparison of finetuning from a MAML-initialized network and a network initialized randomly, trained from scratch. Both methods achieve about the same training accuracy. But, MAML also attains good test accuracy, while the network trained from scratch overfits catastrophically to the 20 examples. Interestingly, the MAMLinitialized model does not begin to overfit, even though meta-training used 5 steps while the graph shows up to 100.To study both questions, we will consider two simple fewshot learning domains. The first is 5-shot regression on a family of sine curves with varying amplitude and phase. We trained all models on a uniform distribution of tasks with amplitudes A ∈ [0.1, 5.0], and phases γ ∈ [0, π]. The second domain is 1-shot character classification using the Omniglot dataset , following the training protocol introduced by. In our comparisons to recurrent meta-learners, we will use two state-of-the-art meta-learning models: SNAIL and metanetworks . In some experiments, we will also compare to a task-conditioned model, which is trained to map from both the input and the task description to the label. Like MAML, the task-conditioned model can be fine-tuned on new data using gradient descent, but is not trained for few-shot adaptation. We include more experimental details in Appendix G.To answer the first question, we fine-tuned a model trained using MAML with many more gradient steps than used during meta-training. The on the sinusoid domain, shown in FIG1, show that a MAML-learned initialization trained for fast adaption in 5 steps can further improve beyond 5 gradient steps, especially on out-of-distribution tasks. In contrast, a task-conditioned model trained without MAML can easily overfit to out-of-distribution tasks. 
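Before turning to the remaining experiments, the label-recoverability requirement discussed above (Theorems 6.1 and 6.2, and the failure of the piecewise-constant losses) can be illustrated numerically. The snippet below uses arbitrary illustrative labels, not values from the paper: the mean-squared-error gradient at ŷ = 0 returns −y exactly and the softmax cross-entropy gradient at zero logits returns c − y, both linear and invertible in y, whereas an ℓ1-style gradient retains only the sign of y and therefore loses the label information.

```python
import numpy as np

y = np.array([0.7, -1.3, 2.1])        # an arbitrary regression label
print(-(y - 0.0))                      # MSE gradient at y_hat = 0:  -y  (linear, invertible)
print(-np.sign(y - 0.0))               # l1-style gradient at y_hat = 0: only sign(y) survives

y_onehot = np.array([0.0, 1.0, 0.0])   # an arbitrary one-hot classification label
logits = np.zeros(3)
probs = np.exp(logits) / np.exp(logits).sum()
print(probs - y_onehot)                # softmax cross-entropy gradient at logits 0:  c - y, c = 1/3
```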
With the Omniglot dataset, as seen in Figure 4, a MAML model that was trained with 5 inner gradient steps can be fine-tuned for 100 gradient steps without leading to any drop in test accuracy. As expected, a model initialized randomly and trained from scratch quickly reaches perfect training accuracy, but overfits massively to the 20 examples. Next, we investigate the second question, aiming to compare MAML with state-of-the-art recurrent meta-learners on tasks that are related to, but outside of the distribution of the training tasks. All three methods achieved similar performance within the distribution of training tasks for 5-way 1-shot Omniglot classification and 5-shot sinusoid regression. In the Omniglot setting, we compare each method's ability to distinguish digits that have been sheared or scaled by varying amounts. In the sinusoid regression setting, we compare on sinusoids with extrapolated amplitudes within [5.0, 10.0] and phases within [π, 2π]. The results in FIG2 and Appendix G show a clear trend that MAML recovers more generalizable learning strategies. Combined with the theoretical universality results, these experiments indicate that deep gradient-based meta-learners are not only equivalent in representational power to recurrent meta-learners, but should also be considered a strong contender in settings that contain domain shift between meta-training and meta-testing tasks, where their strong inductive bias for reasonable learning strategies provides substantially improved performance. The proofs in Sections 4 and 5 suggest that gradient descent with deeper representations results in more expressive learning procedures. In contrast, the universal function approximation theorem only requires a single hidden layer to approximate any function. Now, we seek to empirically explore this theoretical finding, aiming to answer the question: is there a scenario for which model-agnostic meta-learning requires a deeper representation to achieve good performance, compared to the depth of the representation needed to solve the underlying tasks being learned? Figure 5: Comparison of depth while keeping the number of parameters constant. Task-conditioned models do not need more than one hidden layer, whereas meta-learning with MAML clearly benefits from additional depth. Error bars show standard deviation over three training runs. To answer this question, we will study a simple regression problem, where the meta-learning goal is to infer a polynomial function from 40 input/output datapoints. We use polynomials of degree 3 where the coefficients and bias are sampled uniformly at random within [−1, 1] and the input values range within [−3, 3]. Similar to the conditions in the proof, we meta-train and meta-test with one gradient step, use a mean-squared error objective, use ReLU nonlinearities, and use a bias transformation variable of dimension 10. To compare the relationship between depth and expressive power, we will compare models with a fixed number of parameters, approximately 40,000, and vary the network depth from 1 to 5 hidden layers. As a point of comparison to the models trained for meta-learning using MAML, we trained standard feedforward models to regress from the input and the 4-dimensional task description (the 3 coefficients of the polynomial and the scalar bias) to the output. These task-conditioned models act as an oracle and are meant to empirically determine the depth needed to represent these polynomials, independent of the meta-learning process.
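The regression tasks just described can be generated with a few lines of code; the sketch below is our own illustration of that sampling procedure (the function and variable names are not from the paper), drawing one degree-3 polynomial task with 40 input/output pairs.

```python
import numpy as np

def sample_polynomial_task(n_points=40, degree=3, rng=np.random.default_rng()):
    """Sample one meta-learning task: a random degree-3 polynomial and its datapoints."""
    coeffs = rng.uniform(-1.0, 1.0, size=degree + 1)   # bias and coefficients in [-1, 1]
    x = rng.uniform(-3.0, 3.0, size=(n_points, 1))      # input values in [-3, 3]
    y = sum(c * x ** p for p, c in enumerate(coeffs))
    return x, y, coeffs                                  # coeffs double as the 4-d task description

x, y, task = sample_polynomial_task()
print(x.shape, y.shape, task)
```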
Theoretically, we would expect the task-conditioned models to require only one hidden layer, as per the universal function approximation theorem. In contrast, we would expect the MAML model to require more depth. The results, shown in Figure 5, demonstrate that the task-conditioned model indeed does not benefit from having more than one hidden layer, whereas the MAML model clearly achieves better performance with more depth even though the model capacity, in terms of the number of parameters, is fixed. This empirical effect supports the theoretical finding that depth is important for effective meta-learning using MAML. In this paper, we show that there exists a form of deep neural network such that the initial weights combined with gradient descent can approximate any learning algorithm. Our findings suggest that, from the standpoint of expressivity, there is no theoretical disadvantage to embedding gradient descent into the meta-learning process. In fact, in all of our experiments, we found that the learning strategies acquired with MAML are more successful when faced with out-of-domain tasks compared to recurrent learners. Furthermore, we show that the representations acquired with MAML are highly resilient to overfitting. These results suggest that gradient-based meta-learning has a number of practical benefits, and no theoretical downsides in terms of expressivity when compared to alternative meta-learning models. Independent of the type of meta-learning algorithm, we formalize what it means for a meta-learner to be able to approximate any learning algorithm in terms of its ability to represent functions of the dataset and test inputs. This formalism provides a new perspective on the learning-to-learn problem, which we hope will lead to further discussion and research on the goals and methodology surrounding meta-learning. While there are likely a number of ways to prove Lemma 4.1 (copied below for convenience), here we provide a simple, though inefficient, proof of Lemma 4.1. Lemma 4.1 Let us assume that e(y) can be chosen to be any linear (but not affine) function of y. Then, we can choose θ ft, θ h, {A i ; i > 1}, {B i ; i < N} such that the function DISPLAYFORM0 can approximate any continuous function of (x, y, x) on compact subsets of R dim(y). To prove this lemma, we will proceed by showing that we can choose e, θ ft, and each A i and B i such that the summation contains a complete description of the values of x, x, and y. Then, because h post is a universal function approximator, f (x, θ) will be able to approximate any function of x, x, and y. Since A 1 = I and B N = I, we will essentially ignore the first and last elements of the sum by defining B 1 := εI and A N := εI, where ε is a small positive constant to ensure positive definiteness. Then, we can rewrite the summation, omitting the first and last terms: DISPLAYFORM0 Next, we will re-index using two indexing variables, j and l, where j will index over the discretization of x and l over the discretization of x. DISPLAYFORM1 Next, we will define our chosen form of k jl in Equation 8. We show how to acquire this form in the next section. Lemma A.1 We can choose θ ft and each B jl such that DISPLAYFORM2 where discr(·) denotes a function that produces a one-hot discretization of its input and e denotes the 0-indexed standard basis vector. Now that we have defined the function k jl, we will next define the other terms in the sum. Our goal is for the summation to contain complete information about (x, x, y).
To do so, we will chose e(y) to be the linear function that outputs J * L stacked copies of y. Then, we will define A jl to be a matrix that selects the copy of y in the position corresponding to (j, l), i.e. in the position j + J * l. This can be achieved using a diagonal A jl matrix with diagonal values of 1 + at the positions corresponding to the kth vector, and elsewhere, where k = (j + J * l) and is used to ensure that A jl is positive definite. As a , the post-update function is as follows: DISPLAYFORM3 where y is at the position j + J * l within the vector v(x, x, y), where j satisfies discr(x) = e j and where l satisfies discr(x) = e l. Note that the vector −αv(x, x, y) is a complete description of (x, x, y) in that x, x, and y can be decoded from it. Therefore, since h post is a universal function approximator and because its input contains all of the information of (x, x, y), the function f (x ; θ) ≈ h post (−αv(x, x, y); θ h ) is a universal function approximator with respect to its inputs (x, x, y).A.2 PROOF OF LEMMA A.1In this section, we show one way of proving Lemma A.1:Lemma A.1 We can choose θ ft and each B jl such that DISPLAYFORM4 where discr(·) denotes a function that produces a one-hot discretization of its input and e denotes the 0-indexed standard basis vector. DISPLAYFORM5, where θ b = 0. Since the gradient with respect to θ b can be chosen to be any linear function of the label y (see Section 6), we can assume without loss of generality that θ b = 0.We will chooseφ and B jl as follows: DISPLAYFORM6 where we use E ik to denote the matrix with a 1 at (i, k) and 0 elsewhere, and I is added to ensure the positive definiteness of B jl as required in the construction. Using the above definitions, we can see that: DISPLAYFORM7 Thus, we have proved the lemma, showing that we can choose aφ and each B jl such that: DISPLAYFORM8 The goal of this section is to show that we can choose a form ofW, W, andw such that we can simplify the form of z in Equation 1 into the following: DISPLAYFORM0 where DISPLAYFORM1 for i < N and B N = I. Recall that we decomposed W i, φ, and the error gradient into three parts, as follows: DISPLAYFORM2 where the initial value of θ b will be 0. The top components,W i andφ, have equal dimensions, as do the middle components, W i and 0. The bottom components are scalars. As a , we can see that z will likewise be made up of three components, which we will denote asz, z, andž, where, before the gradient update,z = N i=1W iφ (x; θ ft), z = 0, andž = 0. Lastly, we construct the top component of the error gradient to be 0, whereas the middle and bottom components, e(y) anď e(y), can be set to be any linear (but not affine) function of y. Using the above definitions and noting that θ ft = θ ft − α∇ θft = θ ft, we can simplify the form of z in Equation 1, such that the middle component, z, is the following: DISPLAYFORM3 We aim to independently control the backward information flow from the gradient e and the forward information flow fromφ. Thus, choosing allW i and W i to be square and full rank, we will set DISPLAYFORM4 for i ∈ {1...N} whereM N +1 = I and M 0 = I. Then we can again simplify the form of z: DISPLAYFORM5 where DISPLAYFORM6 In this appendix, we provide a full proof of the universality of gradient-based meta-learning in the general case with K > 1 datapoints. This proof will share a lot of content from the proof in the 1-shot setting, but we include it for completeness. 
We aim to show that a deep representation combined with one step of gradient descent can approximate any permutation invariant function of a dataset and test datapoint ({(x, y) i; i ∈ 1...K}, x ) for K > 1. Note that K does not need to be small. We will proceed by construction, showing that there exists a neural network function f (·; θ) such thatf (·; θ) approximates f target ({(x, y) k }, x ) up to arbitrary precision, where DISPLAYFORM0 and α is the learning rate. As we discuss in Section 6, the loss function cannot be any loss function, but the standard cross-entropy and mean-squared error objectives are both suitable. In this proof, we will start by presenting the form off and deriving its value after one gradient step. Then, to show universality, we will construct a setting of the weight matrices that enables independent control of the information flow coming forward from the inputs {x k} and x, and backward from the labels {y k}.We will start by constructingf. With the same motivation as in Section 4, we will constructf (·; θ) as the following:f DISPLAYFORM1 φ(·; θ ft, θ b) represents an input feature extractor with parameters θ ft and a scalar bias transformation variable θ b, N i=1 W i is a product of square linear weight matrices, f out (·, θ out) is a readout function at the output, and the learned parameters are θ:= {θ ft, θ b, {W i}, θ out }. The input feature extractor and readout function can be represented with fully connected neural networks with one or more hidden layers, which we know are universal function approximators, while N i=1 W i corresponds to a set of linear layers. Note that deep ReLU networks act like deep linear networks when the input and pre-synaptic activations are non-negative. We will later show that this is indeed the case within these linear layers, meaning that the neural network functionf is fully generic and can be represented by deep ReLU networks, as visualized in FIG0.Next, we will derive the form of the post-update predictionf (x ; θ). DISPLAYFORM2 and we denote its gradient with respect to the loss as ∇ z k = e(x k, y k). The gradient with respect to any of the weight matrices W i for a single datapoint (x, y) is given by DISPLAYFORM3 Therefore, the post-update value of DISPLAYFORM4 where we move the summation over k to the left and where we will disregard the last term, assuming that α is comparatively small such that α 2 and all higher order terms vanish. In general, these terms do not necessarily need to vanish, and likely would further improve the expressiveness of the gradient update, but we disregard them here for the sake of the simplicity of the derivation. Ignoring these terms, we now note that the post-update value of z when x is provided as input intof (·; θ) is given by DISPLAYFORM5 DISPLAYFORM6 Our goal is to show that that there exists a setting of W i, f out, and φ for which the above function,f (x, θ), can approximate any function of ({(x, y) k }, x ). To show universality, we will aim independently control information flow from {x k}, from {y k}, and from x by multiplexing forward information from {x k} and x and backward information from {y k}. We will achieve this by decomposing W i, φ, and the error gradient into three parts, as follows: DISPLAYFORM7 where the initial value of θ b will be 0. The top components all have equal numbers of rows, as do the middle components. As a , we can see that z k will likewise be made up of three components, which we will denote asz k, z k, andž k. 
Lastly, we construct the top component of the error gradient to be 0, whereas the middle and bottom components, e(y k) andě(y k), can be set to be any linear (but not affine) function of y k. We discuss how to achieve this gradient in the latter part of this section when we define f out and in Section 6. connection to kernels is not in fact needed for the proof, but provides for convenient notation and an interesting observation. We can now simplify the form off (x, θ) as the following equation: DISPLAYFORM8 Intuitively, Equation 20 can be viewed as a sum of basis vectors A i e(y k) weighted by k i (x k, x), which is passed into h post to produce the output. In Appendix C, we show that we can choose e, θ ft, θ h, each A i, and each B i such that Equation 20 can approximate any continuous function of ({(x, y) k }, x ) on compact subsets of R dim(y). As in the one-shot setting, the bias transformation variable θ b plays a vital role in our construction, as it breaks the symmetry within k i (x, x). Without such asymmetry, it would not be possible for our constructed function to represent any function of x and x after one gradient step. In , we have shown that there exists a neural network structure for whichf (x ; θ) is a universal approximator of f target ({(x, y) k }, x ). In Section 5 and Appendix B, we showed that the post-update functionf (x ; θ) takes the following form:f DISPLAYFORM0 In this section, we aim to show that the above form off (x ; θ) can approximate any function of {(x, y) k; k ∈ 1...K} and x that is invariant to the ordering of the training datapoints {(x, y) k; k ∈ 1... K}. The proof will be very similar to the one-shot setting proof in Appendix A.1Similar to Appendix A.1, we will ignore the first and last elements of the sum by defining B 1 to be I and A N to be I, where is a small positive constant to ensure positive definiteness. We will then re-index the first summation over i = 2... N − 1 to instead use two indexing variables j and l as follows:f DISPLAYFORM1 As in Appendix A.1, we will define the function k jl to be an indicator function over the values of x k and x. In particular, we will reuse Lemma A.1, which was proved in Appendix A.2 and is copied below:Lemma A.1 We can choose θ ft and each B jl such that DISPLAYFORM2 where discr(·) denotes a function that produces a one-hot discretization of its input and e denotes the 0-indexed standard basis vector. Likewise, we will chose e(y k) to be the linear function that outputs J * L stacked copies of y k. Then, we will define A jl to be a matrix that selects the copy of y k in the position corresponding to (j, l), i.e. in the position j + J * l. This can be achieved using a diagonal A jl matrix with diagonal values of 1 + at the positions corresponding to the nth vector, and elsewhere, where n = (j + J * l) and is used to ensure that A jl is positive definite. Published as a conference paper at ICLR 2018As a , the post-update function is as follows: DISPLAYFORM3 where y k is at the position j + J * l within the vector v(x k, x, y k), where j satisfies discr(x k) = e j and where l satisfies discr(x k) = e l.For discrete, one-shot labels y k, the summation over v amounts to frequency counts of the triplets (x k, x, y k). In the setting with continuous labels, we cannot attain frequency counts, as we do not have access to a discretized version of the label. Thus, we must make the assumption that no two datapoints share the same input value x k. 
With this assumption, the summation over v will contain the output values y k at the index corresponding to the value of (x k, x). For both discrete and continuous labels, this representation is redundant in x, but nonetheless contains sufficient information to decode the test input x and set of datapoints {(x, y) k } (but not the order of datapoints). Since h post is a universal function approximator and because its input contains all of the information DISPLAYFORM4; θ h is a universal function approximator with respect to {(x, y) k } and x. In this appendix, we show that the network architecture with linear layers analyzed in Sections 4 and 5 can be represented by a deep network with ReLU nonlinearities. We will do so by showing that the input and activations within the linear layers are all non-negative. First, consider the input φ(·; θ ft, θ b) andφ(·; θ ft, θ b). The inputφ(·; θ ft, θ b) is defined to consist of three terms. The top term,φ, is defined in Appendices A.2 and C to be a discretization (which is non-negative) both before and after the parameters are updated. The middle term is defined to be a constant 0. The bottom term, θ b, is defined to be 0 before the gradient update and is not used afterward. Next, consider the weight matrices, W i. To determine that the activations are non-negative, it is now sufficient to show that the products DISPLAYFORM0 To do so, we need to show that the products N i=jW i, N i=j W i, and N i=jw i are PSD for j = 1,..., N. In Appendix A.2, each B i =M i+1 is defined to be positive definite; and in Appendix A.3, we define the products N i=j+1W i =M j + 1. Thus, the conditions on the products ofW i are satisfied. In Appendices A.1 and C, each A i is defined to be a symmetric positive definite matrix. In Appendix A.3, we define DISPLAYFORM1 Thus, we can see that each M i is also symmetric positive definite, and therefore, each W i is positive definite. Finally, the purpose of the weightsw i is to provide nonzero gradients to the input θ b, thus a positive value for eachw i will suffice. E PROOF OF THEOREM 6.1 Here we provide a proof of Theorem 6.1: Theorem 6.1 The gradient of the standard mean-squared error objective evaluated atŷ = 0 is a linear, invertible function of y. For the standard mean-squared error objective, ℓ(y, ŷ) = ½‖y − ŷ‖², the gradient is ∇ŷ ℓ(y, 0) = −y, which satisfies the requirement, as A = −I is invertible. Figure caption: There is a clear trend that gradient descent enables better generalization on out-of-distribution tasks compared to the learning strategies acquired using recurrent meta-learners such as SNAIL. Right: Here is another example that shows the resilience of a MAML-learned initialization to overfitting. In this case, the MAML model was trained using one inner step of gradient descent on 5-way, 1-shot Omniglot classification. Both a MAML and a randomly initialized network achieve perfect training accuracy. As expected, the model trained from scratch catastrophically overfits to the 5 training examples. However, the MAML-initialized model does not begin to overfit, even after 100 gradient steps. F PROOF OF THEOREM 6.2 Here we provide a proof of Theorem 6.2: Theorem 6.2 The gradient of the softmax cross entropy loss with respect to the pre-softmax logits is a linear, invertible function of y, when evaluated at 0. For the standard softmax cross-entropy loss function with discrete, one-hot labels y, the gradient is ∇ŷ ℓ(y, 0) = c − y, where c is a constant vector of value c and where we are denotingŷ as the pre-softmax logits.
Since y is a one-hot representation, we can rewrite the gradient as ∇ŷ ℓ(y, 0) = (C − I)y, where C is a constant matrix with value c. Since A = C − I is invertible, the cross entropy loss also satisfies the above requirement. Thus, we have shown that both of the standard supervised objectives of mean-squared error and cross-entropy allow for the universality of gradient-based meta-learning. In this section, we provide two additional comparisons on an out-of-distribution task and using additional gradient steps, shown in FIG3. We also include additional experimental details. For Omniglot, all meta-learning methods were trained using code provided by the authors of the respective papers, using the default model architectures and hyperparameters. The model embedding architecture was the same across all methods, using 4 convolutional layers with 3 × 3 kernels, 64 filters, stride 2, batch normalization, and ReLU nonlinearities. The convolutional layers were followed by a single linear layer. All methods used the Adam optimizer with default hyperparameters. Other hyperparameter choices were specific to the algorithm and can be found in the respective papers. For MAML in the sinusoid domain, we used a fully-connected network with two hidden layers of size 100, ReLU nonlinearities, and a bias transformation variable of size 10 concatenated to the input. This model was trained for 70,000 meta-iterations with 5 inner gradient steps of size α = 0.001. For SNAIL in the sinusoid domain, the model consisted of 2 blocks of the following: 4 dilated convolutions with 2 × 1 kernels, 16 channels, and dilation sizes of 1, 2, 4, and 8 respectively, then an attention block with key/value dimensionality of 8. The final layer is a 1 × 1 convolution to the output. Like MAML, this model was trained to convergence for 70,000 iterations using Adam with default hyperparameters. We evaluated the MAML and SNAIL models for 1200 trials, reporting the mean and 95% confidence intervals. For computational reasons, we evaluated the MetaNet model using 600 trials, also reporting the mean and 95% confidence intervals. Following prior work, we downsampled the Omniglot images to be 28 × 28. When scaling or shearing the digits to produce out-of-domain data, we transformed the original 105 × 105 Omniglot images, and then downsampled to 28 × 28. In the depth comparison, all models were trained to convergence using 70,000 iterations. Each model was defined to have a fixed number of hidden units based on the total number of parameters (fixed at around 40,000) and the number of hidden layers. Thus, the models with 2, 3, 4, and 5 hidden layers had 200, 141, 115, and 100 units per layer respectively. For the model with 1 hidden layer, we found that using more than 20,000 hidden units, corresponding to 40,000 parameters, resulted in poor performance. Thus, the results reported in the paper used a model with 1 hidden layer with 250 units, which performed much better. We trained each model three times and report the mean and standard deviation of the three runs. The performance of an individual run was computed using the average over
| [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HyjC5yWCW | Deep representations combined with gradient descent can approximate any learning algorithm. |
Brain-Computer Interfaces (BCI) may help patients with faltering communication abilities due to neurodegenerative diseases produce text or speech by direct neural processing. However, their practical realization has proven difficult due to limitations in speed, accuracy, and generalizability of existing interfaces. To this end, we aim to create a BCI that decodes text directly from neural signals. We implement a framework that initially isolates frequency bands in the input signal encapsulating differential information regarding production of various phonemic classes. These bands form a feature set that feeds into an LSTM which discerns at each time point probability distributions across all phonemes uttered by a subject. Finally, a particle filtering algorithm temporally smooths these probabilities incorporating prior knowledge of the English language to output text corresponding to the decoded word. Further, in producing an output, we abstain from constraining the reconstructed word to be from a given bag-of-words, unlike previous studies. The empirical success of our proposed approach offers promise for the employment of such an interface by patients in unfettered, naturalistic environments. Neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS) restrict an individual's potential to fully engage with their surroundings by hindering communication abilities. Brain-Computer Interfaces (BCI) have long been envisioned to assist such patients as they bypass affected pathways and directly translate neural recordings into text or speech output. However, practical implementation of this technology has been hindered by limitations in speed and accuracy of existing systems. Many patients rely on devices that use motor imagery, or on interfaces that require them to individually identify and spell out text characters such as the "point and click" cursor method. (Figure 1 caption fragment: 3) A bLSTM creates probability distributions over phonemes at each time point. 4) Probabilities are smoothed and domain knowledge is incorporated using a probabilistic automaton traversed using a particle filtering algorithm. 5) The highest probability word is chosen as the output.) Despite significant work in system optimization, inherent limitations in their designs render them significantly slower than spoken communication. To address these shortcomings, several studies are using electrocorticography (ECoG) and local field potential (LFP) signals. These invasive approaches provide superior signal quality with high temporal and spatial accuracy. Previous work attempted translation to continuous phoneme sequences using invasive neural data; however, despite their reported higher translation speed, their applications are limited to a reduced dictionary (10-100 words). Other design choices meant to enhance phoneme classification capitalize on prior knowledge of the target words, hindering their generalization to unmodified scenarios. Additionally, a recent study synthesized speech using recordings from speech cortex. Though the authors demonstrate partial transferability of their decoder amongst patients, their accuracy is again limited to selection of the reconstructed word by a listener from a pool of 25 words and worsens as the pool size increases. Thus, establishing the capability of these approaches to generalize to unconstrained vocabularies is not obvious and has to our knowledge not yet been studied.
Here, we present the performance of a two-part decoder network comprising an LSTM and a particle filtering algorithm on data gathered from six patients. We provide empirical evidence that our interface achieves an average accuracy of 32% calculated against a full corpus, i.e. one encompassing all feasible English words that can be formulated using the entire set of phonemes uttered by a patient, thus marking an important, non-incremental step in the direction of viability of such an interface. The overall system consists of five steps as detailed in Fig. 1. During the study, subjects were asked to repeat individual words ("yes", "no"), or monophthongal vowels with or without preceding consonants. During each trial, they were instructed which word or string to repeat and were then prompted by a beep followed by a 2.25 second window during which they spoke. A trial thus consists of each such repetition. The number of these trials varied between subjects based on their comfort, ranging from 55 to 208. The number of phonemes per subject consequently fluctuated between 8 (3 consonants, 5 vowels) and 16 (11 consonants, 5 vowels). The sampling rate of these recordings was 30 kHz. Before further processing, electrodes determined to have low signal-to-noise ratio (SNR) were removed. The criterion for electrode removal was either that the time-series signal was uniformly zero, or that it contained artifacts that were at least one order of magnitude larger than the mean absolute signal amplitude. Correspondingly, ∼8-9% of channels were eliminated from further analysis. We elucidate in our previous work the localization and relevance of the remaining electrodes as pertains to the different parts of speech they encode. In order to include as input to our network classifier the differential information stored in the neural signals about production of various phonemes, an experiment was designed that mapped power in spectral bands of these recordings to the underlying phoneme pronunciation. Each recording was divided into time windows from -166.67 to 100 ms relative to onset of the speech stimuli. Labels were assigned respectively to the corresponding audio signal: [silence, consonant/vowel]. The power per band is pre-processed by z-scoring and downsampling it to 100 Hz. This then acts as an input to a linear classifier which we train using early-stopping and coordinate descent methods. To additionally ensure that the classifier can identify the silence after completion of a phoneme string, we performed training over 100 ms post speech onset, but test the features captured by the classifier over 333.33 ms, since most trials end within this time period. While previous studies have used bands up to high gamma (70-150 Hz) for all speech uniformly, our results show that for several of our subjects, vowels are delineated by high spectral bands (> 600 Hz) and consonants by low ones (< 400 Hz). For individual subjects' values we refer the reader to. The first part of our decoder is a stacked two-layer bLSTM. We use a bLSTM due to its ability to retain temporally distant dependencies when decoding a sequence. Furthermore, our analysis reveals that while a single-layer network can differentiate between phonemic classes such as nasals, semivowels, fricatives, etc., a two-layer model can distinguish between individual phonemes. We train this model with an ADAM optimizer to minimize a weighted cross-entropy error, wherein the weights are inversely proportional to phoneme frequencies.
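A minimal PyTorch sketch of this first decoding stage is given below; the layer sizes, number of phonemes, and frequency counts are placeholders rather than the subject-specific values used in the study. It shows a stacked two-layer bidirectional LSTM over 100 Hz band-power features, trained with Adam and a cross-entropy loss whose class weights are inversely proportional to phoneme frequencies.

```python
import torch
import torch.nn as nn

class PhonemeBLSTM(nn.Module):
    """Stacked two-layer bidirectional LSTM emitting per-time-step phoneme logits."""
    def __init__(self, n_features, n_phonemes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_phonemes)

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.out(h)           # logits: (batch, time, n_phonemes)

# class weights inversely proportional to phoneme frequencies (illustrative counts)
freqs = torch.tensor([120.0, 80.0, 40.0, 200.0])
weights = freqs.sum() / freqs
model = PhonemeBLSTM(n_features=32, n_phonemes=4)
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 100, 32)                     # 8 trials, 100 time steps at 100 Hz, 32 band-power features
labels = torch.randint(0, 4, (8, 100))          # per-time-step phoneme labels
loss = criterion(model(x).reshape(-1, 4), labels.reshape(-1))
loss.backward()
optimizer.step()
print(loss.item())
```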
Finally, we evaluate the decoder using leave-one-trial-out; for each time point in the test trial the recurrent network outputs probability distributions across all phonemes in the subject's dataset. The implementation was done in PyTorch. A language model is used to apply prior knowledge about the expected output given the target domain of natural language. In general, such a model creates prior probability distributions for the output based on the sequences seen in a corpus that is reflective of the target output. In this study, word frequencies were determined using the Brown corpus translated into phonemic sequences using the CMU Pronouncing Dictionary. The phoneme prior probabilities were determined by finding the relative frequency of each phoneme in the resulting corpus. To find probabilities of sequences of phonemes, these priors may be conjoined using the nth-order Markov assumption to create an n-gram model. While such models are able to capture local phonemic patterns, they allow for sequences that are not valid words in the language. A probabilistic automaton (PA) creates a stronger prior by creating states for every subsequence that starts a word in the corpus. Each state then links to every state that represents a superstring that is one character longer (Figure 2). This graphical structure accounts for the possibility of homophones by keeping a list of such words associated with each node along with their relative frequency in the text corpus. Here we would like to reiterate that our model solely derives from the Brown corpus, and hence as such is patient agnostic. Laplacian smoothing is applied to the LSTM model output so that phonemes that were not seen during training are assigned a non-zero probability. To the resulting distributions we then apply the language model using a subject-dependent temporal algorithm. Specifically in this study, we employ a particle filtering (PF) method. PF estimates the probability distribution of sequential outputs by creating a set of realities (called particles) and projecting them through the model based on observed data and the language model. Each particle contains a reference to a state in the model, a history of previous states, and an amount of time that it is going to remain in the current state. The distribution of states occupied by these particles represents an estimation of the true probability distribution. When the system begins, a set of P particles is generated and each is associated with the root node of the language model. At each time point, samples are drawn from the proposal distribution defined by the transition probabilities from the previous state. The time that the particle will stay in that state is drawn from a distribution representing how long the subject is expected to spend speaking a specific phoneme. At each time point, the probability weight is computed for each of the particles. The weights are then normalized, and the probability of possible output strings is found by summing the weights of all particles that correspond to that string. The system keeps a running account of the highest probability output at each time. The effective number of particles is then computed. If the effective number falls below a threshold, P thresh, a new set of particles is drawn from the particle distribution. This threshold was chosen empirically to be 10% of the total number of particles. Sensitivity analysis varying this value did not significantly affect the results.
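One time step of the filtering loop just described (and continued below, where the dwell-time decrement and state transition are detailed) can be sketched as follows. This is an illustrative simplification: the language-model node structure `lm`, the dwell-time distribution, and the reweighting by the bLSTM output are assumptions made for the sketch, not the exact implementation used in the study.

```python
import numpy as np

def effective_particle_count(w):
    return 1.0 / np.sum(w ** 2)

def pf_step(particles, w, phoneme_probs, lm, rng, p_thresh):
    """One time step of the decoder's particle filter (simplified sketch).

    `lm` is an assumed prefix automaton: lm[state] -> {"children": [...],
    "trans_probs": [...], "phoneme": int}; `phoneme_probs` is the smoothed
    bLSTM output for the current time point.
    """
    for i, p in enumerate(particles):
        p["dwell"] -= 1                                    # time left in the current phoneme state
        if p["dwell"] <= 0:                                # transition to a child state of the automaton
            node = lm[p["state"]]
            j = rng.choice(len(node["children"]), p=node["trans_probs"])
            p["state"] = node["children"][j]
            p["dwell"] = 1 + rng.poisson(10.0)             # assumed dwell-time distribution
        w[i] *= phoneme_probs[lm[p["state"]]["phoneme"]]   # reweight by the decoder's evidence
    w /= w.sum()
    if effective_particle_count(w) < p_thresh:             # resample when particles degenerate
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = [dict(particles[k]) for k in idx]
        w = np.full(len(particles), 1.0 / len(particles))
    return particles, w
```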
Further, at each time point, the amount of time for a given particle to remain in a state is decremented. Once that reaches zero, the particle transitions to a new state in the language model based on probability p(x t | x 0:t−1). In our evaluation, output words were only considered correct if the phoneme sequence matched the labels and each phoneme overlapped at least partially with its respective audio label. Word accuracies varied between subjects, ranging from 54.55% (subject 1) to 13.46% (subject 2) (Table 1). On average, 32.16% of trials were classified completely correctly and an additional 23.06% had at least one phoneme match. Of the incorrect classifications, 32.28% produced incorrect words either because none of the output phonemes were correct or because the sequences did not align temporally with the audio signal. In the remaining 12.49% of trials, the system did not detect speech signals, and produced an empty string as output. Each of the subjects in this study was able to communicate with significantly higher accuracy than chance. Nevertheless, the average word error rate seen in this study (67.8%) was higher than the 53% reported in prior work. There were several important differences in these studies, however. The primary difference is that their system produced an audio output that required a human listener to transcribe into a word selection. Despite advances in machine learning and natural language processing, humans have superior ability to use contextual information to find meaning in a signal. Furthermore, that study limited classifications to an output domain set of 50 words, which is generally not sufficient for a realistic communication system. While this study makes a significant addition to existing BCI literature in terms of its avoidance of the traditional bag-of-words approach, our accuracies are lower than those reported in ERP-based BCI studies. Moreover, in order for a BCI system based on translating neural signals to become a practical solution, improvements need to be made either in signal acquisition, machine learning translation, or user strategy. One approach could be to sacrifice some of the speed advantages by having users repeat words multiple times. While this would reduce communication speeds below natural speaking rates, it would still greatly exceed ERP-based methods, while increasing the signals available for classification, which could improve system accuracy. However, both this study and previous literature have primarily been concerned with decoding speech/text for patients with intact motor abilities. It is presently unclear how this would translate to intended speech. While the electrodes used in this study are not suited to answer this question, given their majority location in the speech cortical areas, we suggest a plausible new experiment: teaching those who cannot speak to rethink speech in terms of vocal tract movements. Using electrodes in the sensorimotor cortex and continuous visual feedback of ground-truth vocal tract movements for each phoneme's pronunciation, a subject's attention could be entrained to only the (intended or executed) motion of their vocal tract for covert and overt speech respectively.
One can then test the transferability of state space models - latent variables comprising different articulators and observed states corresponding to the time-varying neural signals - between the covert and overt behaviours, to better understand and harness the physiological variability between the two and eventually translate current studies into potentially viable devices. The language model used in this study was designed to be general enough for application in a realistic BCI system. This generality may have been detrimental to its performance in the current study, however, as language models based on natural language will bias towards words that are common in everyday speech. The current study design produced many words that are infrequent in the training corpus. As a result, the language model biased away from such outputs, making them almost impossible to classify correctly. Lastly, while the results presented in this study are promising, they represent offline performance, which does not include several factors such as user feedback. The proposed system serves as a step in the direction of a generalized BCI system that can directly translate neural signals into written text in naturalistic scenarios. However, communication accuracies are currently insufficient for a practical BCI device, so future work must focus on improving these and developing an interface to present feedback to users.
| [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | B1lj77F88B | We present an open-loop brain-machine interface whose performance is unconstrained to the traditionally used bag-of-words approach. |
Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very little training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes the proposed approach a competitive solution for a large set of applications. Deep learning models, and particularly convolutional neural networks (CNN), have become the standard approach for tackling image processing tasks. The key to the success of these methods lies in the rich representations deep models build, which are generated after an exhaustive and computationally expensive learning process BID16. To generate deep representations, deep learning models have strong training requirements in terms of dataset size, computational power and optimal hyper-parametrization. For any domain or application in which either of those factors is an issue, training a deep model from scratch becomes unfeasible. Within deep learning, the field of transfer learning studies how to extract and reuse pre-trained deep representations. This approach has three main applications: improving the performance of a network by initializing its training from a non-random state BID31 BID2 BID17, enabling the training of deep networks for tasks of limited dataset size BID9 BID27, and exploiting deep representations through alternative machine learning methods BID0 BID25 BID10. The first two cases, where training a deep network remains the end purpose of the transfer learning process, are commonly known as transfer learning for fine-tuning, while the third case, where the end purpose of the transfer learning does not necessarily include training a deep net, is typically referred to as transfer learning for feature extraction. Of the three limiting factors of training deep networks (i.e., dataset size, computational cost, and optimal hyper-parametrization), transfer learning for fine-tuning partly solves the first. Indeed, one can successfully train a CNN on a dataset composed of roughly a few thousand instances using a pre-trained model as a starting point, and achieve state-of-the-art results. Unfortunately, fine-tuning a model still requires a minimum dataset size, a significant amount of computational resources, and lots of time to optimize the multiple hyper-parameters involved in the process. Transfer learning for feature extraction, on the other hand, is based on processing a set of data instances through a pre-trained neural network, extracting the activation values so these can be used by another learning mechanism.
This is applicable to datasets of any size, as each data instance is processed independently. It has a relatively small computational cost, since there is no deep net training. And finally, it requires no hyper-parameter optimization, since the pre-trained model can be used out-of-the-box. Significantly, the applications of transfer learning for feature extraction are limited only by the capabilities of the methods that one can execute on top of the generated deep representations. As previously mentioned, designing and training a deep model to maximize classification performance is a time consuming task. In this paper we explore the opposite approach, minimizing the design and tuning effort using a feature extraction process. Our goal is to build an out-of-the-box classification tool (which could be used by anyone regardless of technical ) capable of defining a full-network embedding (integrating the representations built by all layers of a source CNN model). When compared to single-layer embeddings, this approach generates richer and more powerful embeddings, while also being more robust to the use of inappropriate pre-trained models. We asses the performance of such solution when compared with thoroughly designed and tuned models. Transfer learning studies how to extract and reuse deep representations learnt for a given task t0, to solve a different task t1. Fine tuning approaches require the t1 target dataset to be composed by at least a few thousands instances, to avoid overfitting during the fine-tuning process. To mitigate this limitation, it has been proposed to reuse carefully selected parts of the t0 dataset in the finetuning process alongside the t1 (i.e., selective joint fine-tuning) BID9, and also to use large amounts of noisy web imagery alongside with clean curated data BID14. In fine-tuning, choosing which layers of weights from the t0 model should be transferred, and which should be transferred and kept unchanged on the t1 training phase has a large impact on performance. Extensive research on that regard has shown that the optimal policy depends mostly on the properties of both t0 and t1 BID0 BID32 BID18. This dependency, together with the hyper-parameters inherent to deep network training, defines a large set of problem specific adaptations to be done by fine-tuning solutions. Given a pre-trained model for t0 one may use alternative machine learning methods for solving t1, instead of fine-tuning a deep model. For that purpose, one needs to generate a representation of t1 data instances as perceived by the model trained for t0. This feature extraction process is done through a forward pass of t1 data instances on the pre-trained CNN model, which defines a data embedding that can be fed to the chosen machine learning method (e.g., a Support Vector Machine, or SVM, for classification). In most cases, the embedding is defined by capturing and storing the activation values of a single layer close to the output BID0 BID25 BID10 BID6 BID19 BID23. The rest of layers (e.g., most convolutional layers) are discarded because these "are unlikely to contain a richer semantic representation than the later feature" BID6. So far, this choice has been supported by performance comparisons of single-layer embeddings, where high-level layer embeddings have been shown to consistently outperform low-level layer embeddings BID0 BID25. However, it is also known that all layers within a deep network, including low-level ones, can contribute to the characterization of the data in different ways. 
This implies that the richest and most versatile representation that can be generated by a feature extraction process must include all layers from the network, i.e., it must define a full-network embedding. However, no full-network embedding has been proposed in the literature so far, due to the difficulty of successfully integrating the features found on such an heterogeneous set of layers as the one defined by a full deep network architecture. Beyond the layers to extract, there are many other parameters that can affect the feature extraction process. Some of those are evaluated in BID0, which includes parameters related with the architecture and training of the initial CNN (e.g., network depth and width, distribution of training data, optimization parameters), and parameters related with transfer learning process (e.g., fine-tuning, spatial pooling and dimensionality reduction). Among the most well established transformations of deep embeddings are a L2 normalization BID0 BID25, and an unsupervised feature reduction like Principal Component Analysis (PCA) BID0 BID25 BID10. The quality of the ant embedding is typically evaluated by the performance of an SVM, trained using the embedding representations, to solve a classification task BID0 BID25. Recently, extracted features have also been combined with more sophisticated computer vision techniques, such as the constellation model BID27 and Fisher vectors BID5, with significant success. Transfer learning for feature extraction is used to embed a dataset t1 in the representation language learnt for a task t0. To do so, one must forward pass each data instance of t1 through the model pre-trained on t0, capturing the internal activations of the net. This is the first step of our method, but, unlike previous contributions to feature extraction, we store the activation values generated on every convolutional and fully connected layer of the model to generate a full-network embedding. Each filter within a convolutional layer generates several activations for a given input, as a of convolving the filter. This corresponds to the presence of the filter at various locations of the input. In a ant feature extracted embedding this implies a significant increase in dimensionality, which is in most cases counterproductive. At the same time, the several values generated by a given filter provide only relative spatial information, which may not be particularly relevant in a transfer learning setting (i.e., one where the problem for which the filters were learnt is not the same as the problem where the filter is applied). To tackle this issue, a recurrent solution in the field is to perform a spatial average pooling on each convolutional filter, such that a single value per filter is obtained by averaging all its spatially-depending activations BID0 BID25. After this pooling operation, each feature in the embedding corresponds to the degree with which a convolutional filter is found on average in the whole image, regardless of location. While losing spatial information, this solution maintains most of the embedding descriptive power, as all convolutional filters remain represented. A spatial average pooling on the filters of convolutional layers is the second step of our method. The values ing from the spatial pooling are concatenated with the features from the fully connected layers into a single vector, to generate a complete embedding. In the case of the well-known VGG16 architecture BID28 this embedding vector is composed by 12,416 features. 
The features composing the embedding vector so far described are obtained from neurons of different type (e.g., convolutional and fully connected layers) and location (i.e., from any layer depth). These differences account for large variations in the corresponding feature activations (e.g., distribution, magnitude, etc.). Since our method considers an heterogeneous set of features, a feature standardization is needed. Our proposed standardization computes the z-values of each feature, using the train set to compute the mean and standard deviation. This process transforms each feature value so that it indicates how separated the value is from the feature mean in terms of positive/negative standard deviations. In other words, the degree with which the feature value is atypically high (if positive) or atypically low (if negative) in the context of the dataset. A similar type of feature normalization is frequently used in deep network training (i.e., batch normalization) BID12, but this is the first time this technique has been applied to a feature extraction solution. As discussed in §2, most feature extraction approaches apply an L2 norm by data instance, thus nor- malizing by image instead of by feature. As seen in §5, this approach provides competitive , but is not appropriate when using features coming from many different layers. By using the z-values per feature, we use activations across the dataset as reference for normalization. This balances each feature individually, which allows us to successfully integrate all types of features in the embedding. Significantly, this feature standardization process generates a context dependent embedding, as the representation of each instance depends on the rest of instances being computed with it. Indeed, consider how the features relevant for characterizing a bird in a context of cars are different than the ones relevant for characterizing the same bird in a context of other birds. Such a subjective representation makes the approach more versatile, as it is inherently customized for each specific problem. After the feature standardization, the final step of the proposed pipeline is a feature discretization, which is described in §3.1. An end-to-end overview of the proposed embedding generation is shown in FIG0. The embedding vectors we generate are composed of a large number of features. Exploring a representation space of such high-dimensionality is problematic for most machine learning algorithms, as it can lead to overfitting and other issues related with the curse of dimensionality. A common solution is to use dimensionality reduction techniques like PCA BID19 BID0 BID3. We propose an alternative approach, which keeps the same number of features (and thus keeps the size of the representation language defined by the embedding) but reduces their expressiveness similarly to quantization methodology followed in BID3. In detail, we discretize each standardized feature value to represent either an atypically low value (-1), a typical value, or an atypically high value. This discretization is done by mapping feature values to the {−1, 0, 1} domain by defining two thresholds f t − and f t +.To find consistent thresholds, we consider the work of , who use a supervised statistical approach to evaluate the importance of CNN features for characterization. 
Given a feature f and a class c, this work uses an empirical statistic to measure the difference between activation values of f for instances of c and activation values of f for the rest of classes in the dataset. This allows them to quantify the relevance of feature/class pairs for class characterization. In their work, authors further separate these pairs in three different sets: characteristic by abscence, uncharacteristic and characteristic by presence. We use these three sets to find our thresholds f t − and f t +, by mapping the feature/class relevances to our corresponding feature/image activations. We do so on some datasets explored in: mit67, flowers102 and cub200, by computing the average values of the features belonging to each of the three sets. FIG1 shows the three ing distributions of values for the mit67 dataset. Clearly, a strong correlation exists between the supervised statistic feature relevance defined by and the standardized feature values generated by the full-network embedding, as features in the characteristic by absence set correspond to activations which are particularly low, while features in the characteristic by presence set correspond to activations which are particularly high. We obtain the f t − and f t + values through the Kolmogrov-Smirnov statistic, which provides the maximum gap between two empirical distributions. Vertical dashed lines of FIG1 indicate these optimal thresholds for the mit67 dataset, the rest are shown in TAB0. To obtain a parameter free methodology, and considering the stable behavior of the f t + and f t − thresholds, we chose to set f t + = 0.15 and f t − = −0.25 in all our experiments. Thus, after the step of feature standardization, we discretize the values above 0.15 to 1, the values below −0.25 to −1, and the rest to 0. One of the goals of this paper is to identify a full-network feature extraction methodology which provides competitive out-of-the-box. For that purpose, we evaluate the embedding proposed in §3.1 on a set of 9 datasets which define different image classification challenges. The list includes datasets of classic object categorization, fine-grained categorization, and scene and textures classification. The disparate type of discriminative features needed to solve each of these problems represents a challenge for any approach which tries to solve them without specific tuning of any kind. The MIT Indoor Scene Recognition dataset BID22 (mit67) BID1 ) (food101) is a large dataset of 101 food categories. Test labels are reliable but train images are noisy (e.g., occasionally mislabeled). The Describable Textures Dataset BID4 ) (textures) is a database of textures categorized according to a list of 47 terms inspired from human perception. The Oulu Knots dataset BID26 (wood) contains knot images from spruce wood, classified according to Nordic Standards. This dataset of industrial application is considered to be challenging even for human experts. Details for these datasets are provided in TAB1. This includes the train/test splits used in our experiments. In most cases we follow the train/test splits as provided by the dataset authors in order to obtain comparable . A specific case is caltech101 where, following the dataset authors in- structions BID7, we randomly choose 30 training examples per class and a maximum of 50 for test, and repeat this experiment 5 times. The other particular case is the food101 dataset. 
Due to its large size, we use only the provided test set for both training and testing, using a stratified 5-fold cross validation. The same stratified 5-fold cross validation approach is used for the wood dataset, where no split is provided by the authors. In this section we analyze the performance gap between thoroughly tuned models (those which currently provide state-of-the-art ) and the approach described in §3. To evaluate the consistency of our method out-of-the-box, we decide not to use additional data when available on the dataset (e.g., image segmentation, regions of interest or other metadata), or to perform any other type of problem specific adaptation (e.g., tuning hyper-parameters).As source model for the feature extraction process we use the classical VGG16 CNN architecture BID28 pre-trained on the Places2 scene recognition dataset for the mit67 experiments, and the same VGG16 architecture pre-trained on the ImageNet 2012 classification dataset BID24 for the rest (these define our t0 tasks). As a the proposed embedding is composed by 12,416. On top of that, we use a linear SVM with the default hyperparameter C = 1 for classification, with a one-vs-the-rest strategy. Standard data augmentation is used in the SVM training, using 5 crops per sample (4 corners + central) with horizontal mirroring (total of 10 crops per sample). At test time, all the 10 crops are classified, using a voting strategy to decide the label of each data sample. Beyond the comparison with the current state-of-the-art, we also compare our approach with the most frequently used feature extraction solution. As discussed in §2, a popular embedding is obtained by extracting the activations of one of the fully connected layers (fc6 or fc7 for the VGG16 model) and applying a L2 normalization per data instance BID0 BID25 BID6. We call this our baseline method, an overview of it is shown in FIG4. The same pre-trained model used as source for the full-network embedding is used for the baseline. For both baselines (fc6 and fc7), the final embedding is composed by 4,096 features. This is used to train the same type of SVM classifier trained with the full-network embedding. The of our classification experiments are shown in Table 3. Performance is measured with average per-class classification accuracy. For each dataset we provide the accuracy provided by the baselines, by our method, and by the best method we found in the literature (i.e., the state-of-the-art or SotA). For a proper interpretation of the performance gap between the SotA methods and ours, we further indicate if the SotA uses external data (beyond the t1 dataset and the t0 model) and if it performs fine-tuning. Overall, our method outperforms the best baseline (fc6) by 2.2% accuracy on average. This indicates that the proposed full-network embedding successfully integrates the representations generated at the various layers. The datasets where the baseline performs similarly or slightly outperforms the full-network embedding (cub200, cats-dogs and sdogs) are those where the target task t1 overlaps with the source task t0 (e.g., ImageNet 2012). The largest difference happens for the sdogs, which is explicitly a subset of ImageNet 2012. In this sort of easy transfer learning problems, the fully con- Table 3: Classification in % of average per-class accuracy for the baselines, for the fullnetwork embedding, and for the current state-of-the-art (SotA). ED: SotA uses external data, FT: SotA performs fine-tuning of the network. 
SotA citeation for each dataset: mit67 BID9, cub200 BID14, flowers102 BID9, cats-dogs BID27, sdogs BID9, caltech101 BID11, food101 BID17 and textures BID5. State-of-the-art performance is in most cases a few accuracy points above the performance of the full-network embedding (7.8% accuracy on average). These are encouraging, considering that our method uses no additional data, requires no tuning of parameters and it is computationally cheap (e.g., it does not require deep network training). The dataset where our full-network embedding is more clearly outperformed is the cub200. In this dataset BID14 achieve a remarkable state-of-the-art performance by using lots of additional data (roughly 5 million additional images of birds) to train a deep network from scratch, and then fine-tune the model using the cub200 dataset. In this case, the large gap in performance is caused by the huge disparity in the amount of training data used. A similar context happens in the evaluation of food101, where BID17 use the complete training set for fine-tuning, while we only use a subset of the test set (see §4 for details). If we consider the for the other 6 datasets, the average performance gap between the state-of-the-art and the full-network embedding is 4.2% accuracy on average. Among the methods which achieve the best performance on at least one dataset, there is one which is not based on fine tuning. The work of BID5 obtains the best for the textures dataset by using a combination of bag-of-visual-words, Fisher vectors and convolutional filters. Authors demonstrate how this approach is particularly competitive on texture based datasets. Our more generic methodology is capable of obtaining an accuracy 2.5% accuracy lower in this highly specific domain. The wood dataset is designed to be particularly challenging, even for human experts; according to the dataset authors the global accuracy of an experienced human sorter is about 75-85% BID15 BID26. There is currently no reported in average per-class accuracy for this dataset, so the corresponding values in Table 3 are left blank. Consequently, the we report represent the current state-of-the-art to the best of our knowledge (74.1%±6.9 in average per-class accuracy). The best previously reported in the literature for wood correspond to BID23, which are 94.3% in global accuracy. However, the difference between average per-class accuracy and global accuracy is particularly relevant in this dataset, given the variance in images per class (from 3 to 37). To evaluate the average per-class accuracy, we tried our best to replicate the method of BID23, which ed in 71.0%±8.2 average per-class accuracy when doing a stratified 5-fold cross validation. A performance similar to that of our baseline method. In this section we consider removing and altering some of the components of the full-network embedding to understand their impact. First we remove feature discretization, and evaluate the embeddings obtained after the feature standardization step (FS). Secondly, we consider a partial feature discretization which only maps values between f t + and f t − to zero, and evaluate an embedding which keeps the rest of the original values ({−v, 0, v}). The purpose of this second experiment is to study if the increase in performance provided by the feature discretization is caused by the noise reduction effect of mapping frequent values to 0, or if it is caused by the space simplification ant of mapping all activations to only three values. 
As shown in TAB2, the full-network embedding outperforms all the other variants, with the exceptions of flowers102 and cats-dogs where FS is slightly more competitive (0.8,0.7% accuracy) and caltech101 where the best is {−v, 0, v} by 0.5% accuracy. The noise reduction variant (i.e., {−v, 0, v}) outperforms the FS variant in 5 out of 9 datasets. The main difference between both is that the former sparsifies the embeddings by transforming typical values to zeros, with few informative data being lost in the process. The complete feature discretization done by the full-network model (i.e., {−1, 0, 1}) further boosts performance, outperforming the {−v, 0, v} embedding on 7 of 9 datasets. This shows the potential benefit of reducing the complexity of the embedding space. The feature discretization also has the desirable effect of reducing the training cost of the SVM applied on the ing embedding. Using the FS embedding as control (the slowest of all variants), the {−v, 0, v} embedding trains the SVM between 3 and 13 times faster depending on the dataset, while the full-network embedding with its complete discretization trains between 10 and 50 times faster. Significantly, all three embeddings are composed by 12,416 features. For comparison, the baseline method, which uses shorter embeddings of 4,096 features, trains the SVM between 100 and 650 times faster than the FS. For both the baseline and the full-network embeddings, training the SVM takes a few minutes on a single CPU.A different variation we consider is to an inappropriate task t0 as source for generating the baseline and full-network embeddings. This tests the robustness of each embedding when using an ill-suited pre-trained model. We use the model pre-trained on ImageNet 2012 for generating the mit67 embeddings, and the model pre-trained on Places2 for the rest of datasets. Table 5 shows that the full-network embedding is much more robust, with an average reduction in accuracy of 16.4%, against 24.6% of the baseline. This remark the limitation of the baseline caused by its own late layer dependency. Finally, we also considered using different network depths, a parameter also analyzed in BID0. We repeated the full-network experiments using the VGG19 architecture instead of the VGG16, and found performance differences to be minimal (maximum difference of 0.3%) and inconsistent. In this paper we describe a feature extraction process which leverages the information encoded in all the features of a deep CNN. The full-network embedding introduces the use of feature standardization and of a novel feature discretization methodology. The former provides context-dependent Table 5: Classification in % average per-class accuracy of the baseline and the full-network embedding when using a network pre-trained on ImageNet 2012 for mit67 and on Places2 for the rest. embeddings, which adapt the representations to the problem at hand. The later reduces noise and regularizes the embedding space while keeping the size of the original representation language (i.e., the pre-trained model used as source). Significantly, the feature discretization restricts the computational overhead ant of processing much larger embeddings when training an SVM. Our experiments also show that the full-network is more robust than single-layer embeddings when an appropriate source model is not available. The ant full-network embedding is shown to outperform single-layer embeddings in several classification tasks, and to provide the best reported on one of those tasks (wood). 
Within the state-of-the-art, the full-network embedding represents the best available solution when one of the following conditions apply: When the accessible data is scarce, or an appropriate pre-trained model is not available (e.g., specialized industrial applications), when computational resources are limited (e.g., no GPUs availability), or when development time or technical expertise is restricted or non cost-effective. Beyond classification, the full-network embedding may be of relevance for any task exploiting visual embeddings. For example, in image retrieval and image annotation tasks, the full-network embedding has been shown to provide a boost in performance when compared to one layer embeddings. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | SkAK2jg0b | We present a full-network embedding of CNN which outperforms single layer embeddings for transfer learning tasks. |
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can have fine-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models. Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence to the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm to the design of more compact network architectures. Despite the impressive success that deep neural networks have achieved in a wide range of challenging tasks, the inference in deep neural network is highly memory-intensive and computationintensive due to the over-parameterization of deep neural networks. Network pruning (; ;) has been recognized as an effective approach to improving the inference efficiency in resource-limited scenarios. Traditional pruning methods consist of dense network training followed with pruning and fine-tuning iterations. To avoid the expensive pruning and fine-tuning iterations, many sparse training methods (; ; ;) have been proposed, where the network pruning is conducted during the training process. However, all these methods suffers from following three problems: Coarse-grained predefined pruning schedule. Most of the existing pruning methods use predefined pruning schedule with many additional hyperparameters like pruning a% parameter each time and then fine-tuning for b epochs with totally c pruning steps. It is non-trivial to determine these hyperparameters for network architectures with various degree of complexity. Therefore, usually a fixed pruning schedule is adopted for all the network architectures, which means that a very simple network architecture like LeNet-300-100 will have the same pruning schedule as a far more complex network like ResNet-152. Besides, almost all the existing pruning methods conduct epoch-wise pruning, which means that the pruning is conducted between two epochs and no pruning operation happens inside each epoch. Failure to properly recover the pruned weights. Almost all the existing pruning methods conduct "hard" pruning that prunes weights by directly setting their values to 0. Many works (; ;) have argued that the importance of network weights are not fixed and will change dynamically during the pruning and training process. Previously unimportant weights may tend to be important. So the ability to recover the pruned weights is of high significance. However, directly setting the pruned weights to 0 in the loss of historical parameter importance, which makes it difficult to determine: 1) whether and when each pruned weight should be recovered, 2) what values should be assigned to the recovered weights. Therefore, existing methods that claim to be able to recover the pruned weights simply choose a predefined portion of pruned weights to recover and these recover weights are randomly initialized or initialized to the same value. Failure to properly determine layer-wise pruning rates. 
Modern neural network architectures usually contain dozens of layers with various number of parameters. Therefore, the degrees of parameter redundancy are very different among the layers. For simplicity, some methods prune the same percentage of parameter at each layer, which is not optimal. To obtain dynamic layer-wise pruning rates, a single global pruning threshold or layer-wise greedy algorithms are applied. Using a single global pruning threshold is exceedingly difficult to assess the local parameter importance of the individual layer, since each layer has a significantly different amount of parameter and contribution to the model performance. This makes pruning algorithms based on a single global threshold inconsistent and non-robust. The problem of layer-by-layer greedy pruning methods is that the unimportant neurons in an early layer may have a significant influence on the responses in later layers, which may in propagation and amplification of the reconstruction error . We propose a novel end-to-end sparse training algorithm that properly solves the above problems. With only one additional hyperparameter used to set the final model sparsity, our method can achieve dynamic fine-grained pruning and recovery during the whole training process. Meanwhile, the layerwise pruning rates will be adjusted automatically with respect to the change of parameter importance during the training and pruning process. Our method achieves state-of-the-art performance compared with other sparse training algorithms. The proposed algorithm has following promising properties: • Step-wise pruning and recovery. A training epoch usually will have tens of thousands of training steps, which is the feed-forward and back-propagation pass for a single mini-batch. Instead of pruning between two training epochs with predefined pruning schedule, our method prunes and recovers the network parameter at each training step, which is far more fine-grained than existing methods. • Neuron-wise or filter-wise trainable thresholds. All the existing methods adopt a single pruning threshold for each layer or the whole architecture. Our method defines a threshold vector for each layers. Therefore, our method adopts neuron-wise pruning thresholds for fully connected and recurrent layer and filter-wise pruning thresholds for convolutional layer. Additionally, all these pruning thresholds are trainable and will be updated automatically via back-propagation. • Dynamic pruning schedule. The training process of deep neural network consists of many hyperparameters. The learning rate is perhaps the most important hyperparameter. Usually, the learning rate will decay during the training process. Our method can automatic adjust the layer-wise pruning rates under different learning rate to get the optimal sparse network structure. • Consistent sparse pattern. Our algorithm can get consistent layer-wise sparse pattern under different model sparsities, which indicates that our method can automatic determine the optimal layer-wise pruning rates given the target model sparsity. Traditional Pruning Methods: presented the early work about network pruning using second-order derivatives as the pruning criterion. The effective and popular training, pruning and fine-tuning pipeline was proposed by , which used the parameter magnitude as the pruning criterion. extended this pipeline to prune the recurrent neural networks with a complicated pruning strategy. introduced first-order Taylor term as the pruning criterion and conduct global pruning. 
used 1 regularization to force the unimportant parameters to zero. Sparse Neural Network Training: Recently, some works attempt to find the sparse network directly during the training process without the pruning and fine-tuning stage. Inspired by the growth and extinction of neural cell in biological neural networks, proposed a pruneregrowth procedure called Sparse Evolutionary Training (SET) that allows the pruned neurons and connections to revive randomly. However, the sparsity level need to be set manually and the random recover of network connections may provoke unexpected effects to the network. DEEP-R proposed by used Bayesian sampling to decide the pruning and regrowth configuration, which is computationally expensive. Dynamic Sparse Reparameterization used dynamic parameter reallocation to find the sparse structure. However, the pruning threshold can only get halved if the percentage of parameter pruned is too high or get doubled if that percentage is too low for a certain layer. This coarse-grained adjustment of pruning threshold significantly limits the ability of Dynamic Sparse Reparameterization. Additionally, a predefined pruning ratio and fractional tolerance are required. Dynamic Network Surgery proposed pruning and splicing procedure that can prune or recover network connections according to the parameter magnitude but it requires manually determined thresholds that are fixed during the sparse learning process. This layer-wise thresholds are extremely hard to manually set. Meanwhile, Fixing the thresholds makes it hard to adapt to the rapid change of parameter importance. proposed sparse momentum that used the exponentially smoothed gradients as the criterion for pruning and regrowth. A fixed percentage of parameters are pruned at each pruning step. The pruning ratio and momentum scaling rate need to be searched from a relatively high parameter space. 3 DYNAMIC SPARSE TRAINING 3.1 NOTATION Deep neural network consists of a set of parameters {W i : 1 ≤ i ≤ C}, where W i denotes the parameter matrix at layer i and C denotes the number of layers in this network. For each fully connected layer and recurrent layer, the corresponding parameter is W i ∈ R co×ci, where c o is the output dimension and c i is the input dimension. For each convolutional layer, there exists a convolution kernel K i ∈ R co×ci×w×h, where c o is the number of output channels, c i is the number of input channels, w and h are the kernel sizes. Each filter in a convolution kernel K i can be flattened to a vector. Therefore, a corresponding parameter matrix W i ∈ R co×z can be derived from each convolution kernel K i ∈ R co×ci×w×h, where z = c i × w × h. Actually, the pruning process is equivalent to finding a binary parameter mask M i for each parameter matrix W i. Thus, a set of binary parameter masks {M i : 1 ≤ i ≤ C} will be found by network pruning. Each element for these parameter masks M i is either 1 or 0. Pruning can be regarded as applying a binary mask M to each parameter W. This binary parameter mask M preserves the information about the sparse structure. Given the parameter set {W 1, W 2, · · ·, W C}, our method will dynamically find the corresponding parameter masks {M 1, M 2, · · ·, M C}. To achieve this, for each parameter matrix W ∈ R co×ci, a trainable pruning threshold vector t ∈ R co is defined. Then we utilize a unit step function S(x) as shown in Figure 2(a) to get the masks according to the magnitude of parameters and corresponding thresholds as present below. 
With the dynamic parameter mask M, the corresponding element in mask M ij will be set to 0 if W ij needs to be pruned. This means that the weight W ij is masked out by the 0 at M ij to get a sparse parameter W M. The value of underlying weight W ij will not change, which preserves the historical information about the parameter importance. For a fully connected layer or recurrent layer with parameter W ∈ R co×ci and threshold vector t ∈ R co, each weight W ij will have a neuron-wise threshold t i, where W ij is the jth weight associated with the ith output neuron. Similarly, the thresholds are filter-wise for convolutional layer. Besides, a threshold matrix or a single scalar threshold can also be chosen. More details are present in Appendix A.2. With the threshold vector and dynamic parameter mask, the trainable masked fully connected, convolutional and recurrent layer are introduced as shown in Figure 1, where X is the input of current layer and Y is the output. For fully connected and recurrent layers, instead of the dense parameter W, the sparse product W M will be used in the batched matrix multiplication, where denote Hadamard product operator. For convolutional layers, each convolution kernel K ∈ R co×ci×w×h can be flatten to get W ∈ R co×z. Therefore, the sparse kernel can be obtained by similar process as fully connected layers. This sparse kernel will be used for subsequent convolution operation. Figure 2: The unit step function S(x) and its derivative approximations. In order to make the elements in threshold vector t trainable via back-propagation, the derivative of the binary step function S(x) is required. However, its original derivative is an impulse function whose value is zero almost everywhere and infinite at zero as shown in Figure 2 (b). Thus the original derivative of binary step function S(x) cannot be applied in back-propagation and parameter updating directly. Some previous works (; ;) demonstrated that by providing a derivative estimation, it is possible to train networks containing such binarization function. A clip function called straight through estimator (STE) was employed in these works and is illustrated in Figure 2 (c). discussed the derivative estimation in balance of tight approximation and smooth back-propagation. We adopt this long-tailed higher-order estimator H(x) in our method. As shown in Figure 2 (d), it has a wide active range between [−1, 1] with non-zero gradient to avoid gradient vanishing during training. On the other hand, the gradient value near zero is a piecewise polynomial function giving tighter approximation than STE. The estimator is represented as With this derivative estimator, the elements in the vector threshold t can be trained via backpropagation. Meanwhile, in trainable masked layers, the network parameter W can receive two branches of gradient, namely the performance gradient for better model performance and the structure gradient for better sparse structure, which helps to properly update the network parameter under sparse network connectivity. The structure gradient enables the pruned (masked) weights to be updated via back-propagation. The details about the feed-forward and back-propagation of the trainable masked layer is present in Appendix A.3. Therefore, the pruned (masked) weights, the unpruned (unmasked) weights and the elements in the threshold vector can all be updated via back-propagation at each training step. The proposed method will conduct fine-grained step-wise pruning and recovery automatically. 
Now that the thresholds of each layer is trainable, higher percentage of pruned parameter is desired. To get the parameter masks M with high sparsity, higher pruning thresholds are needed. To achieve this, we add a sparse regularization term L s to the training loss that penalizes the low threshold value. For a trainable masked layer with threshold t ∈ R co, the corresponding regularization term is R = co i=1 exp(−t i). Thus, the sparse regularization term L s for the a deep neural network with C trainable masked layers is: We use exp(−x) as the regularization function since it is asymptotical to zero as x increases. Consequently, it penalizes low thresholds without encouraging them to become extremely large. The traditional fully connected, convolutional and recurrent layers can be replaced with the corresponding trainable masked layers in deep neural networks. Then we can train a sparse neural network directly with back-propagation algorithm given the training dataset D = {(x 1, y 1), (x 2, y 2), · · ·, (x N, y N)}, the network parameter W and the layer-wise threshold t as follows: where L(·) is the loss function, e.g. cross-entropy loss for classification and α is the scaling coefficient for the sparse regularization term, which is able to control the percentage of parameter remaining. The sparse regularization term L s tends to increase the threshold t for each layer thus getting higher model sparsity. However, higher sparsity tends to increase the loss function, which reversely tends to reduce the threshold and level of sparsity. Consequently, the training process of the thresholds can be regarded as a contest between the sparse regularization term and the loss function in the sense of game theory. Therefore, our method is able to dynamically find the sparse structure that properly balances the model sparsity and performance. The proposed method is evaluated on MNIST, CIFAR-10 and ImageNet with various modern network architectures including fully connected, convolutional and recurrent neural networks. To quantify the pruning performance, the layer remaining ratio is defined to be k l = n/m, where n is the number of element equal to 1 in the mask M and m is the total number of element in M. The model remaining ratio k m is the overall ratio of non-zero element in the parameter masks for all trainable masked layers. The model remaining percentage is defined as k m × 100%. For all trainable masked layers, the trainable thresholds are initialized to zero since it is assumed that originally the network is dense. The detailed experimental setup is present in Appendix A.1. MNIST. Table 1 presents the pruning of proposed method for Lenet-300-100 , Lenet-5-Caffe and LSTM . Both LSTM models have two LSTM layers with hidden size 128 for LSTM-a and 256 for LSTM-b. Our method can prune almost 98% parameter with little loss of performance on Lenet-300-100 and Lenet-5-Caffe. For the LSTM models adapted for the sequential MNIST classification, our method can find sparse models with better performance than dense baseline with over 99% parameter pruned. CIFAR-10. The pruning performance of our method on CIFAR-10 is tested on VGG and WideResNet (with other sparse learning algorithms on CIFAR-10 as presented in Table 2 . The state-of-the-art algorithms, DSR and Sparse momentum , are selected for comparison. 
Dynamic Sparse Training (DST) outperforms them in almost all the settings as present in By varying the scaling coefficient α for sparse regularization term, we can control the model remaining ratios of sparse models generated by dynamic sparse training. The relationships between α, model remaining ratio and sparse model accuracy of VGG, WideResNet-16-8 and WideResNet-28-8 on CIFAR-10 are presented in Figure 3. As demonstrated, the model remaining ratio keeps decreasing with increasing α. With a moderate α value, it is easy to obtain a sparse model with comparable or even higher accuracy than the dense counterpart. On the other side, if the α value is too large that makes the model remaining percentage less than 5%, there will be a noticeable accuracy drop. As demonstrated in Figure 3, the choice of α ranges from 10 −9 to 10 −4. Depending on the application scenarios, we can either get models with similar or better performance as dense counterparts by a relatively small α or get a highly sparse model with little performance loss by a larger α. Figure 4 (a) demonstrates the change of layer remaining ratios for Lenet-300-100 trained with DST at each training step during the first training epoch. And Figure 4 (b) presents the change of layer remaining ratios during the whole training process (20 epochs). As present in these two figures, instead of decreasing by manually set fixed step size as in other pruning methods, our method makes the layer remaining ratios change smoothly and continuously at each training step. Meanwhile, as shown in Figure 4 (a), the layer remaining ratios fluctuate dynamically within the first 100 training steps, which indicates that DST can achieve step-wise fine-grained pruning and recovery. Meanwhile, for multilayer neural networks, the parameters in different layers will have different relative importance. For example, Lenet-300-100 has three fully connected layers. Changing the parameter of the last layer (layer 3) can directly affect the model output. Thus, the parameter of layer 3 should be pruned more carefully. The input layer (layer 1) has the largest amount of parameters and takes the raw image as input. Since the images in MNIST dataset consist of many unchanged pixels (the ) that have no contribution to the classification process, it is expected that the parameters that take these invariant as input can be pruned safely. Therefore, the remaining ratio should be the highest for layer 3 and the lowest for layer 1 if a Lenet-300-100 model is pruned. To check the pruning effect of our method on these three layers, Lenet-300-100 model is sparsely trained by the default hyperparameters setting present in Appendix A.1. The pruning trends of these three layers during dynamic sparse training is present in Figure 4 (b). During the whole sparse training process, the remaining ratios of layer 1 and layer 2 keep decreasing and the remaining ratio of layer 3 maintains to be 1 after the fluctuation during the first epoch. The remaining ratio of layer 1 is the lowest and decrease to less that 10% quickly, which is consistent with the expectation. In the meantime, the test accuracy of the sparse model is almost the same as the test accuracy on dense model in the whole training process. This indicates that our method can properly balance the model remaining ratio and the model performance by continuous fine-grained pruning throughout the entire sparse training procedure. Similar training tendency can be observed in other network architectures. 
The detailed for other architectures are present in Appendix A.4. Dynamic adjustment regarding learning rate decay. To achieve better performance, it is a common practice to decay the learning rate for several times during the training process of deep neural network. Usually, the test accuracy will decrease immediately just after the learning rate decay and then tend toward flatness. Similar phenomenon is observed for VGG-16 on CIFAR-10 trained by DST as present in Figure 5(b), where the learning rate decay from 0.1 to 0.01 at 80 epoch. Like the layer 3 of Lenet-300-100, the second fully connected layer (FC 2) of VGG-16 is the output layer, hence its remaining ratio is expected to be relatively high. But a surprising observation is that the remaining ratio of FC 2 is quite low at around 0.1 for the first 80 epochs and increases almost immediately just after the 80 epoch as present in Figure 5 (a). We suppose that this is caused by the decay of learning rate from 0.1 to 0.01 at 80 epoch. Before the learning rate decay, the layers preceding to the output layer fail to extract enough useful features for classification due to the relatively coarse-grained parameter adjustment incurred by high learning rate. This means that the corresponding parameters that take those useless features as input can be pruned, which leads to the low remaining ratio of the output layer. The decayed learning rate enables fine-grained parameter update that makes the neural network model converge to the good local minima quickly , where most of the features extracted by the preceding layers turn to be helpful for the classification. This makes previously unimportant network connections in the output layer become important thus the remaining ratio of this layer get abrupt increase. There are two facts that support our assumptions for this phenomenon. The first fact is the sudden increase of the test accuracy from around 85% to over 92% just after the learning rate decay. Secondly, the remaining ratios of the preceding convolutional layers are almost unchanged after the remaining ratio of the output layer increases up to 1, which means that the remaining parameters in these layers are necessary to find the critical features. The abrupt change of the remaining ratio of the output layer indicates that our method can dynamically adjust the pruning schedule regarding the change of hyperparameters during the training process. Dynamic adjustment regarding different initial learning rate. To further investigate the effect of learning rate, VGG-16 model is trained by different initial learning rate (0.01) with other hyperparameters unchanged. The corresponding training curves are presented in Figure 6. As shown in Figure 6 (a), the remaining ratio of FC 2 increases up to 1 after around 40 epochs when the test accuracy reaches over 90%. This means that with proper update of network parameters due to a smaller initial learning rate, the layers preceding to the output layer (FC 2) tend to extract useful features before the learning rate decay. It can be observed in Figure 6 (b) that the test accuracy only increase about 2% from 90% to 92% roughly after the learning rate decay at the 80 epoch. Meanwhile, one can see that our method adopts different pruning schedule regarding different initial choices of training hyperparameters. Model performance under dynamic schedule. 
Figure 5 (b) and Figure 6(Considering the model performance during the whole training process, when trained with initial learning rate 0.01, the sparse accuracy is consistent higher than dense accuracy during the whole training process as present in Figure 6 (b). Meanwhile, one can see from Figure 5 (b) that the running average of sparse accuracy is also higher than that of dense accuracy during the whole training process when the initial learning rate is 0.1. Many works have tried to design more compact model with mimic performance to overparameterized models; ). Network architecture search has been viewed as the future paradigm of deep neural network design. However, it is extremely difficult to determine whether the designed layer consists of redundancy. Therefore, typical network architecture search methods rely on evolutionary algorithms or reinforcement learning , which is extremely time-consuming and computationally expensive. Network pruning can actually be reviewed as a kind of architecture search process thus the sparse structure revealed from network pruning may provide some guidance to the network architecture design. However, The layer-wise equal remaining ratio generated by the unified pruning strategy fails to indicate the different degree of redundancy for each layer. And the global pruning algorithm is non-robust, which fails to offer consistent guidance. Here we demonstrate another interesting observation called consistent sparse pattern during dynamic sparse training that provides useful information about the redundancy of individual layers as the guidance for compact network architecture design. For the same architecture trained by our method with various α values, the relative relationship of sparsity among different layers keeps consistent. The sparse patterns of VGG-16 on CIFAR-10 are present in Figure 7 with three different α. In all configurations, the last four convolutional layers (Conv 10-13) and the first fully connected layers (FC 1) are highly sparse. Meanwhile, some layers (Conv 3-7, FC 2) keep a high amount of parameters after pruning. This consistent sparse pattern indicates that these heavily pruned layers consist of high redundancy and a large number of parameters can be reduced in these layers to get more compact models. This phenomenon also exists in other network architectures, which is present in details in the appendix A.5. Therefore, with this consistent sparse pattern, after designing a new network architecture, our method can be applied to get useful information about the layer-wise redundancy in this new architecture. LSTM models are trained using Adam optimization scheme with default Adam hyperparameter setting for 20 epochs. The batch size is 100 and the default learning rate is 0.001. Meanwhile, the default scaling factor α for sparse regularization term is 0.001. Models on CIFAR-10 are trained using SGD with momentum 0.9 and batch size of 64 with 160 training epochs. The initial learning rate is 0.1 and decayed by 0.1 at 80 and 120 epoch. The default scaling factor α for sparse regularization term is 5 × 10 −6 for all tested architectures. ImageNet. ResNet-50 models are trained using SGD with momentum 0.9 and batch size of 1024 with 90 training epochs. The default initial learning rate is 0.1 and decayed by 10 at 30, 60, 80 epochs. For all trainable masked layers, the trainable thresholds are initialized to zero. 
Additionally, we find that extremely high scaling coefficient α will make the sparse regularization term dominates the training loss thus making the mask M to be all zero in certain layer, which may impede the training process. To prevent this, the pruning threshold t will be reset to zero if more than 99% elements in the mask are zero in a certain layer despite the LSTM models. This mechanism makes the training process smoother and enables a wider range choice of α. Beside a threshold vector, a threshold scalar t or threshold matrix T ∈ R co×ci can also be adopted for each trainable masked layer with parameter W ∈ R co×ci. All the three choices of trainable thresholds are tested on various architectures. Considering the effects on network pruning, the matrix threshold has almost the same model remaining ratio compared with the vector threshold. The scalar threshold is less robust than the vector and matrix threshold. In terms of the storage overhead, the matrix threshold almost doubles the amount of parameter during the training process in each architectures. Meanwhile, the vector threshold only adds less than 1% additional network parameter. The extra trainable threshold will also bring additional computation. In both feed forward and backpropagation phase, the additional computations are matrix-element-wise operations (O(n 2)), that is apparently light-weighted compared to the original batched matrix multiplication (O(n 3)). For practical deployment, only the masked parameter W M need to be stored, thus no overhead will be introduced. Therefore, considering the balance between the overhead and the benefit incurred by these three choices, the vector threshold is chosen for trainable masked layers. Feed forward process. Considering a single layer in deep neural network with input x and dense parameter W. The normal layer will conduct matrix-vector multiplication as W x or convolution as Conv(W, x). In trainable masked layers, since a sparse mask M is obtained for each layer. The sparse parameter W M will be adopted in the corresponding matrix-vector multiplication or convolution as (W M)x and Conv(W M, x). Back-propagation process. Referring to Figure 1, we denotes P = W M and the gradient received by P in back-propagation as dP. Considering the gradients from right to left: • The performance gradient is dP M • The gradient received by M is dP W • The gradient received by Q is dP W H(Q), where H(Q) is the of H(x) applied to Q elementwisely. • The structure gradient is dP W H(Q) sgn(W), where sgn(W) is the of sign function applied to W elementwisely. • The gradient received by the vector threshold t is dt ∈ R co. We denote dT = −dP W H(Q), then dT ∈ R co×ci. And we will have dt i = ci j=1 T ij for 1 ≤ i ≤ c o. • The gradient received by the parameter W is dW = dP M + dP W H(Q) sgn(W) Since we add 2 regularization in the training process, all the elements in W are distributed within [−1, 1]. Meanwhile, almost all the elements in the vector threshold are distributed within. The exceptions are the situation as shown in Figure 3 (a) and Figure 4 (a) where the last layer get no weight pruned (masked). Regarding the process of getting Q, all the elements in Q are within [−1, 1]. Therefore H(Q) is a dense matrix. Then W, H(Q) and sgn(W) are all dense matrices and the pruned (masked) weights can receive the structure gradient dP W H(Q) sgn(W) Here we present the change of remaining ratios during dynamic sparse training for the other tested architectures. 
Since WideResNet models only have one fully connected layer at the end as the output layer, the first five convolutional layers and the last fully connected layer are present. Figure 8 demonstrates the corresponding for WideResNet-16-8. Figure 9 demonstrates the corresponding for WideResNet-16-10. And Figure 10 demonstrates the corresponding for WideResNet-28-8. The similar phenomenon can be observed in all these three network architectures for various α. Here we present the consistent sparse pattern for the other tested architectures. Figure 11 demonstrates the corresponding for WideResNet-16-8. And Figure 12 demonstrates the corresponding for WideResNet-16-10. | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJlbGJrtDB | We present a novel network pruning method that can find the optimal sparse structure during the training process with trainable pruning threshold |
To what extent can successful machine learning inform our understanding of biological learning? One popular avenue of inquiry in recent years has been to directly map such algorithms into a realistic circuit implementation. Here we focus on learning in recurrent networks and investigate a range of learning algorithms. Our approach decomposes them into their computational building blocks and discusses their abstract potential as biological operations. This alternative strategy provides a "lazy" but principled way of evaluating ML ideas in terms of their biological plausibility.

One could take each algorithm individually and try to model in detail a biophysical implementation, à la. However, it's unlikely that any single ML solution maps one-to-one onto neural circuitry. Instead, a more useful exercise would be to identify core computational building blocks that are strictly necessary for solving temporal credit assignment, which are more likely to have a direct biological analogue. To this end, we put forward a principled framework for evaluating biological plausibility in terms of the mathematical operations required, hence our "lazy" analysis. We examine several online algorithms within this framework, identifying potential issues common across algorithms, for example the need to physically represent the Jacobian of the network dynamics. We propose some novel solutions to this and other issues and in the process articulate biological mechanisms that could facilitate these solutions. Finally, we empirically validate that these biologically realistic approximations still solve temporal credit assignment, in two simple synthetic tasks.

Plausibility criteria for recurrent learning. Consider a recurrent network of n units, with voltages v^(t) = W r̂^(t−1), where r̂^(t) is the concatenation of the recurrent firing rates and the external inputs, with an additional constant input for the bias term. For a closer match to neural circuits, the firing rates update in continuous time, via r^(t) = (1 − α) r^(t−1) + α φ(v^(t)), using a point-wise neural activation function φ: R^n → R^n (e.g. tanh) and the network's inverse time constant α ∈ (0, 1]. The network output y^(t) = softmax(W_out r^(t) + b_out) ∈ R^(n_out) is computed by output weights/bias W_out ∈ R^(n_out × n), b_out ∈ R^(n_out) and compared with the training label y*^(t) to produce an instantaneous loss L^(t). BPTT and RTRL each provide a method for calculating the gradient of each instantaneous loss, ∂L^(t)/∂W_ij, to be used for gradient descent.

BPTT unrolls the network over time and performs backpropagation as if on a feedforward network:

∂L^(t)/∂W_ij = Σ_{s ≤ t} (∂L^(t)/∂v_i^(s)) r̂_j^(s−1),

where the factors ∂L^(t)/∂v^(s) are obtained by backpropagating the immediate credit assignment vector c^(t) = ∂L^(t)/∂r^(t) through the chain of network Jacobians J^(t) = ∂r^(t)/∂r^(t−1), whose entries under the dynamics above are J^(t)_kl = (1 − α) δ_kl + α φ'(v_k^(t)) W_kl. While this expression explicitly references activity at all time points, RTRL instead recursively updates the "influence tensor" M^(t), with entries M^(t)_kij = ∂r_k^(t)/∂W_ij, preserving the first-order long-term dependencies in the network as it runs forward. The actual gradient is then calculated as

∂L^(t)/∂W_ij = Σ_k c_k^(t) M^(t)_kij.

Unlike BPTT, every computation in RTRL involves only current time t or t − 1.

Table 1: A summary of several new online algorithms' tensor structure and update equations.

In general, an online algorithm has some tensor structure for summarizing the inter-temporal dependencies in the network, to avoid having to explicitly unroll the network. These tensor(s) must update at each time step as new data come in.
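To make the quantities above concrete, the following sketch runs one step of the stated dynamics together with the RTRL influence-tensor recursion and the gradient contraction ∂L^(t)/∂W_ij = Σ_k c_k^(t) M^(t)_kij. It is a minimal illustration under assumptions not fixed by the text: tanh for φ, a softmax output trained with cross-entropy, the ordering of r̂ as [rates, external input, bias], and the particular dimensions and weight scales.

```python
import numpy as np

# Minimal RTRL sketch for the dynamics above (all specific choices are assumptions).
n, n_in, n_out = 32, 2, 2
alpha = 0.5                                    # inverse time constant
rng = np.random.default_rng(0)
W = rng.normal(0, 1 / np.sqrt(n), (n, n + n_in + 1))    # columns: [recurrent | input | bias]
W_out = rng.normal(0, 0.1, (n_out, n))
b_out = np.zeros(n_out)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rtrl_step(x, y_star, r, M):
    """One step of the dynamics plus the RTRL update of M[k, i, j] = dr_k / dW_ij."""
    r_hat = np.concatenate([r, x, [1.0]])      # recurrent rates, external input, constant bias
    v = W @ r_hat                              # voltages v^(t) = W r_hat^(t-1)
    r_new = (1 - alpha) * r + alpha * np.tanh(v)
    D = alpha * (1 - np.tanh(v) ** 2)          # alpha * phi'(v)
    J = (1 - alpha) * np.eye(n) + D[:, None] * W[:, :n]   # J_kl = (1-a) d_kl + a phi'(v_k) W_kl
    M_imm = np.zeros_like(M)                   # immediate influence: dr_i/dW_ij = a phi'(v_i) r_hat_j
    M_imm[np.arange(n), np.arange(n), :] = D[:, None] * r_hat[None, :]
    M = np.einsum('kl,lij->kij', J, M) + M_imm # M^(t) = J^(t) M^(t-1) + immediate term
    y = softmax(W_out @ r_new + b_out)
    c = W_out.T @ (y - y_star)                 # c^(t) = dL/dr for softmax + cross-entropy
    grad_W = np.einsum('k,kij->ij', c, M)      # dL^(t)/dW_ij = sum_k c_k M_kij
    return r_new, M, grad_W

r = np.zeros(n)
M = np.zeros((n, n, n + n_in + 1))
x_t, y_t = np.array([1.0, 0.0]), np.array([1.0, 0.0])   # dummy input and one-hot label
r, M, g = rtrl_step(x_t, y_t, r, M)
```

Storing and updating M costs O(n^3), which is precisely the burden that the online algorithms summarized in Table 1 are designed to avoid.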
RTRL uses an order-3 tensor, resulting in an O(n^3) memory requirement that is neither efficient nor biologically plausible. However, all of the new online algorithms we discuss are only O(n^2) in memory. In Table 1 we show the tensor structure and update equations for each of these algorithms in order to discuss the mathematical operations needed for each and whether a neural circuit could implement them. How these updates lead to sensible learning is outside our scope, and we refer the reader to either the original papers or the review. In a purely artificial setting, these tensor updates from Table 1 are straightforward to implement, but biologically, one has to consider how these tensors are physically represented and the mechanism for performing the updates. We present a list of mathematical operations and comment on how a biological neural network might or might not be able to perform each of them in parallel with the forward pass: (i) A vector can be encoded as a firing rate, voltage, or any other intracellular variable. (ii) A matrix must be encoded as the strengths of a set of synapses; if individual entries change, they must do so time-continuously and via a (local) synaptic plasticity rule. (iii) Matrix-vector multiplication can be implemented by neural transmission, but input vectors must represent firing rates, as opposed to voltages or other intracellular variables. (iv) Matrix-matrix multiplication is at face value not possible, as it requires O(n^3) multiplications, and there is no biological structure to support this. (v) Independent additive noise is feasible; biological neural networks are naturally noisy in ways that can be leveraged for computation. (vi) At face value, it is not possible to maintain a "noisy" copy of the network to estimate perturbation effects, e.g. KeRNL (Table 1). However, there may be workarounds. How do different algorithms do? RFLO is sufficiently simple to pass all of these tests, but it arguably doesn't actually solve temporal credit assignment and merely regresses natural memory traces to task labels (see Section 5.5 of ), which limits its performance ceiling. Every other algorithm fails at least one of our criteria, at least at first glance. KF-RTRL and R-KF are out because of the matrix-matrix products in their updates. Although the eligibility-trace-like update in KeRNL for B^(t)_ij is straightforward, learning the A^(t)_ki matrix requires a perturbed network, which is on the surface biologically unlikely (vi). While UORO uses only matrix-vector products, the time-continuity requirement (ii) is awkward, because if we choose the constants ρ_0, ρ_1 to make one update equation smooth in time (e.g. ρ_0 = 1 − ε, ρ_1 = ε, for 0 < ε ≪ 1), the other update becomes unstable due to the appearance of the inverse factors ρ_0^{-1}, ρ_1^{-1}. Can we fix any of these issues? While each algorithm poses its own challenges, the Jacobian is a recurring problem for anything that meaningfully solves credit assignment. Therefore we propose a general solution, to instead use an approximate Jacobian, whose entries we refer to as J_ij, and which updates at each step according to a perceptron-like learning rule. Biologically, this would correspond to having an additional set of synapses (possibly spatially segregated from W) with their own plasticity rules. Computationally, this approximation brings no traditional speed benefits, but it offers a plausible mechanism by which a neural circuit can access its own Jacobian for learning purposes.
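The approximate-Jacobian proposal can be prototyped in a few lines: a second matrix of "synapses" is nudged by a local delta rule so that it predicts the next hidden state from the previous one. This is only one plausible instantiation of the perceptron-like rule; the noisy drive, the learning rate, and the final comparison against the instantaneous Jacobian are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, eta = 16, 0.5, 0.05                   # size, inverse time constant, learning rate (all assumed)
W = rng.normal(0, 1 / np.sqrt(n), (n, n))

J_hat = np.zeros((n, n))                        # approximate Jacobian, stored as a second set of synapses
r = rng.uniform(-0.1, 0.1, n)

for _ in range(5000):
    r_prev = r
    drive = 0.2 * rng.normal(size=n)            # weak noisy drive keeps the state exploring
    r = (1 - alpha) * r_prev + alpha * np.tanh(W @ r_prev + drive)
    # delta rule: nudge J_hat so that J_hat @ r_prev predicts the actual next state (local pre/post terms)
    err = r - J_hat @ r_prev
    J_hat += eta * np.outer(err, r_prev)

# rough comparison with the instantaneous Jacobian of the update at the last visited state
phi_prime = 1 - np.tanh(W @ r_prev + drive) ** 2
J_true = (1 - alpha) * np.eye(n) + alpha * phi_prime[:, None] * W
print("relative error:", np.linalg.norm(J_hat - J_true) / np.linalg.norm(J_true))
```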
As for other challenges, the matrix-matrix-vector product appearing in DNI can be implemented by the circuit itself in a phase of computation separate from the forward pass. For the intermediate to pass through the second matrix, it must be represented as a firing rate (iii), which already requires altering the original equations to m φ l r A l m is a voltage. This would naively interfere with the forward pass, since v (t) = Wr (t−1) already uses the network firing rates and somatic voltages. However, we could imagine the A synapses feeding into an electrically isolated neural compartment (say the apical dendrites) to define a separate voltage u (t+1) m, which is allowed to drive neural firing to φ(u (t+1) m ) in specific "update" phases. We already know that branch-specific gating (by interneurons) can filter which information makes it to the soma to drive spiking. Do these fixes work empirically? Given our criteria and novel workarounds, RFLO and DNI(b), our altered version of DNI (with the approximate Jacobian), remain as viable candidates for neural learning. To ensure our additional approximations do not ruin performance, we empirically evaluate DNI(b), along with the original DNI and RFLO. As upper and lower bounds on performance, respectively, we also include exact credit assignment methods (BPTT and RTRL) and a "fixed-W" algorithm that only trains the output weights. We use two synthetic tasks, each of which requires solving temporal credit assignment and has clear markers for success. One task ("Add") requires mapping a stream of i.i.d. Bernoulli inputs x (t) to an output y * (t) = 0.5 + 0.5x (t−t1) − 0.25x (t−t2), with time rescaled to match α. The label depends on the inputs via lags t 1, t 2 that can be adjusted to modulate task difficulty. The other task ("Mimic") requires reproducing the response of a separate RNN with the same architecture and fixed weights to a shared Bernoulli input stream. We find that training loss for RFLO and DNI is worse than the optimal solutions (BPTT and RTRL), but both beat the fixed-W performance lower bound. DNI(b) performs worse than original DNI, unsurprising because it involves further approximations, but still much better than the fixed-W baseline. This demonstrates that solving temporal credit assignment is possible within biological constraints. It is still unclear how neural circuits achieve sophisticated learning, in particular solving temporal credit assignment. Here we approached the problem by looking for biologically sensible approximations to RTRL and BPTT. Although we have empirical to prove that our solutions can solve temporal credit assignment for simple tasks, the substance of our contribution is conceptual, in articulating what computations are abstractly feasible and which are not. In particular, we have shown that accessing the Jacobian for learning is possible by using a set of synapses trained to linearly approximate the network's own dynamics. Along the way, we have identified some key lessons. The main one is that neural circuits need additional infrastructure specifically to support learning. This could be extra neurons, extra compartments within neurons, separate coordinated phases of computation, input gating by inhibition, etc. While we all know that biology is a lot more complicated than traditional models of circuit learning would suggest, it has proved difficult to identify the functional role of these details in a bottom-up way. 
On the other hand, drawing a link between ML algorithms and biology can hint at precise computational roles for circuit features that are not yet well understood. Another lesson is that implementing even fairly simple learning equations in parallel to the forward pass is nontrivial, since the forward pass already uses up so much neural hardware. Even a simple matrix-vector product requires an entirely separate phase of network dynamics in order not to interfere with the forward pass of computation. While it may be tempting to outsource some of these update equations to separate neurons, the results would not be locally available to drive synaptic plasticity. Of course, we acknowledge that any particular solution, whether RFLO or DNI, is a highly contrived, specific, and likely incorrect guess at how neural circuits learn, but we believe the exercise has big-picture implications for how to think about biological learning. Beyond the particular topic of online learning in recurrent networks, our work provides a general blueprint for abstractly evaluating computational models as mechanistic explanations for biological neural networks. Knowing what computational building blocks are at our disposal and what biological details are needed to implement them is an important foundation for studying ML algorithms in a biological context. | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJgPEXtIUS | We evaluate new ML learning algorithms' biological plausibility in the abstract based on mathematical operations needed |
In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma are provable robustness guarantees. While provably robust models for specific $l_p$-perturbation models have been developed, we show that they do not come with any guarantee against other $l_q$-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $l_1$- \textit{and} $l_\infty$-perturbations and show how that leads to the first provably robust models wrt any $l_p$-norm for $p\geq 1$. The vulnerability of neural networks against adversarial manipulations is a problem for their deployment in safety critical systems such as autonomous driving and medical applications. In fact, small perturbations of the input which appear irrelevant or are even imperceivable to humans change the decisions of neural networks. This questions their reliability and makes them a target of adversarial attacks. To mitigate the non-robustness of neural networks many empirical defenses have been proposed, e.g. by;;;;; , but at the same time more sophisticated attacks have proven these defenses to be ineffective (; ;), with the exception of the adversarial training of. However, even these l ∞ -adversarially trained models are not more robust than normal ones when attacked with perturbations of small l p -norms with p = ∞ (; ; b;). The situation becomes even more complicated if one extends the attack models beyond l p -balls to other sets of perturbations (; ; ;). Another approach, which fixes the problem of overestimating the robustness of a model, is provable guarantees, which means that one certifies that the decision of the network does not change in a certain l p -ball around the target point. Along this line, current state-of-theart methods compute either the norm of the minimal perturbation changing the decision at a point (e.g. ;) or lower bounds on it (; ;). Several new training schemes like (; ; ; ; a; ;) aim at both enhancing the robustness of networks and producing models more amenable to verification techniques. However, all of them are only able to prove robustness against a single kind of perturbations, typically either l 2 -or l ∞ -bounded, and not wrt all the l p -norms simultaneously, as shown in Section 5. Some are also designed to work for a specific p , and it is not clear if they can be extended to other norms. The only two papers which have shown, with some limitations, non-trivial empirical robustness against multiple types of adversarial examples are and Tramèr & Boneh In this paper we aim at robustness against all the l p -bounded attacks for p ≥ 1. We study the non-trivial case where none of the l p -balls is contained in another. If p is the radius of the l p -ball for which we want to be provably robust, this requires: q > p > q for p < q and d being the input dimension. We show that, for normally trained models, for the l 1 -and l ∞ -balls we use in the experiments none of the adversarial examples constrained to be in the l 1 -ball (i.e. of an l 1 -attack) belongs to the l ∞ -ball, and vice versa. This shows that certifying the union of such balls is significantly more complicated than getting robust in only one of them, as in the case of the union the attackers have a much larger variety of manipulations available to fool the classifier. 
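A quick numerical check clarifies when the union of the two balls is non-trivial, i.e. when neither ball contains the other; the dimension and radii below are placeholder values rather than the ones used in the experiments.

```python
import numpy as np

d, eps1, epsinf = 784, 1.0, 0.1      # input dimension and radii (illustrative values only)

# B_inf(eps_inf) is contained in B_1(eps_1) iff its corner, with l_1-norm d * eps_inf, stays inside
# the l_1-ball; B_1(eps_1) is contained in B_inf(eps_inf) iff its vertex eps_1 * e_j does, i.e. eps_1 <= eps_inf.
inf_in_1 = d * epsinf <= eps1
one_in_inf = eps1 <= epsinf
print("B_inf subset of B_1:", inf_in_1)
print("B_1 subset of B_inf:", one_in_inf)
print("non-trivial union (neither contains the other):", not inf_in_1 and not one_in_inf)
```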
We propose a technique which allows to train piecewise affine models (like ReLU networks) which are simultaneously provably robust to all the l p -norms with p ∈ [1, ∞]. First, we show that having guarantees on the l 1 -and l ∞ -distance to the decision boundary and region boundaries (the borders of the polytopes where the classifier is affine) is sufficient to derive meaningful certificates on the robustness wrt all l p -norms for p ∈ (1, ∞). In particular, our guarantees are independent of the dimension of the input space and thus go beyond a naive approach where one just exploits that all l p -metrics can be upper-and lower-bounded wrt any other l q -metric. Then, we extend the regularizer introduced in Croce et al. (2019a) so that we can directly maximize these bounds at training time. Finally, we show the effectiveness of our technique with experiments on four datasets, where the networks trained with our method are the first ones having non-trivial provable robustness wrt l 1 -, l 2 -and l ∞ -perturbations. It is well known that feedforward neural networks (fully connected, CNNs, residual networks, DenseNets etc.) with piecewise affine activation functions, e.g. ReLU, leaky ReLU, yield continuous piecewise affine functions (see e.g. ;). Croce et al. (2019a) exploit this property to derive bounds on the robustness of such networks against adversarial manipulations. In the following we recall the guarantees of Croce et al. (2019a) wrt a single l p -perturbation which we extend in this paper to simultaneous guarantees wrt all the l p -perturbations for p in [1, ∞]. Let f: R d → R K be a classifier with d being the dimension of the input space and K the number of classes. The classifier decision at a point x is given by arg max r=1,...,K f r (x). In this paper we deal with ReLU networks, that is with ReLU activation function (in fact our approach can be easily extended to any piecewise affine activation function e.g. leaky ReLU or other forms of layers leading to a piecewise affine classifier as in Croce et al. (2019b) Denoting the activation function as σ (σ(t) = max{0, t} if ReLU is used) and assuming L hidden layers, we have the usual recursive definition of f as where n l is the number of units in the l-th layer (For the convenience of the reader we summarize from the description of the polytope Q(x) containing x and affine form of the classifier f when restricted to Q(x). We assume that x does not lie on the boundary between polytopes (this is almost always true as faces shared between polytopes are of lower dimension). Let This allows us to write f (l) (x) as composition of affine functions, that is which we simplify as A forward pass through the network is sufficient to compute V (l) and b (l) for every l. The polytope Q(x) is given as intersection of N = L l=1 n l half spaces defined by Let q be defined via.., K and s = c, which represent the N l p -distances of x to the hyperplanes defining the polytope Q(x) and the K − 1 l p -distances of x to the hyperplanes defining the decision boundaries in Q(x). Finally, we define as the minimum values of these two sets of distances (note that d The l p -robustness r p (x) of a classifier f at a point x, belonging to class c, wrt the l p -norm is defined as the optimal value of the following optimization problem where is S a set of constraints on the input, e.g. pixel values of images have to be in. The l p -robustness r p (x) is the smallest l p -distance to x of a point which is classified differently from c. 
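To make the polytope description concrete, the sketch below computes, for a tiny fully connected ReLU network with random weights, the affine form V^(l) x + b^(l) of each pre-activation on the linear region Q(x) and the l_p-distances of x to the hyperplanes that bound it. The architecture and weights are placeholders, and only the region-boundary distances (not the decision-boundary ones) are shown; the robustness r_p(x) defined just above is then lower-bounded in terms of such distances, as formalized next.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n1, n2, K = 10, 20, 15, 3                     # toy input size, hidden widths, classes (all placeholders)
Ws = [rng.normal(size=(n1, d)), rng.normal(size=(n2, n1)), rng.normal(size=(K, n2))]
bs = [rng.normal(size=n1), rng.normal(size=n2), rng.normal(size=K)]

def region_boundary_distances(x, p=2.0):
    """l_p-distances of x to every hyperplane <V_j, x> + b_j = 0 defining the linear region Q(x)."""
    q = np.inf if p == 1 else (1.0 if p == np.inf else p / (p - 1))   # dual exponent, 1/p + 1/q = 1
    V, b = np.eye(len(x)), np.zeros(len(x))       # running affine form of the current layer input
    dists = []
    for W, c in zip(Ws[:-1], bs[:-1]):            # hidden (ReLU) layers only
        V, b = W @ V, W @ b + c                   # pre-activations as affine functions of x
        pre = V @ x + b
        dists.extend(np.abs(pre) / np.linalg.norm(V, ord=q, axis=1))  # |<V_j, x> + b_j| / ||V_j||_q
        mask = (pre > 0).astype(float)            # fix the ReLU signs that define Q(x)
        V, b = mask[:, None] * V, mask * b
    return np.array(dists)

x = rng.normal(size=d)
print("closest region boundary (l_2):  ", region_boundary_distances(x, p=2.0).min())
print("closest region boundary (l_inf):", region_boundary_distances(x, p=np.inf).min())
```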
Thus, r p (x) = 0 for misclassified points. The following theorem from Croce et al. (2019a), rephrased to fit the current notation, provides guarantees on r p (x). Although Theorem 2.1 holds for any l p -norm with p ≥ 1, it requires to compute d B p (x) and d D p (x) for every p individually. In this paper, exploiting this and the geometrical arguments presented in Section 3, we show that it is possible to derive bounds on the robustness r p (x) for any p ∈ (1, ∞) using only information on r 1 (x) and r ∞ (x). In the next section, we show that the straightforward usage of standard l p -norms inequalities does not yield meaningful bounds on the l p -robustness inside the union of the l 1 -and l ∞ -ball, since these bounds depend on the dimension of the input space of the network. Figure 1: Visualization of the l 2 -ball contained in the union resp. the convex hull of the union of l 1 -and l ∞ -balls in R 3. First column: co-centric l 1 -ball (blue) and l ∞ -ball (black). Second: in red the largest l 2 -ball completely contained in the union of l 1 -and l ∞ -ball. Third: in green the convex hull of the union of the l 1 -and l ∞ -ball. Fourth: the largest l 2 -ball (red) contained in the convex hull. The l 2 -ball contained in the convex hull is significantly larger than that contained in the union of l 1 -and l ∞ -ball. 3 Minimal l p -norm of the complement of the union of l 1 -and l ∞ -ball and its convex hull By the standard norm inequalities it holds, for every x ∈ R d, that and thus a naive application of these inequalities yields the bound However, this naive bound does not take into account that we know that x 1 ≥ 1 and x ∞ ≥ ∞. Our first yields the exact value taking advantage of this information. Thus a guarantee both for l 1 -and l ∞ -ball yields a guarantee for all intermediate l p -norms. However, for affine classifiers a guarantee for B 1 and B ∞ implies a guarantee wrt the convex hull C of their union B 1 ∪ B ∞. This can be seen by the fact that an affine classifier generates two half-spaces, and the convex hull of a set A is the intersection of all half-spaces containing A. Thus, inside C the decision of the affine classifier cannot change if it is guaranteed not to change in B 1 and B ∞, as C is completely contained in one of the half-spaces generated by the classifier (see Figure 1 for illustrations of B 1, B ∞, their union and their convex hull). With the following theorem, we characterize, for any p ≥ 1, the minimal l p -norm over R d \ C. where α = 5) (red) and its naive lower bound (green). We fix ∞ = 1 and show the varying 1 ∈ (1, d), for d = 784 and d = 3072. We plot the value (or a lower bound in case of) of the minimal x 2, depending on 1, given by the different approaches (first and third plots). The red curves are almost completely hidden by the green ones, as they mostly overlap, but can be seen for small values of x 1. Moreover, we report (second and fourth plots) the ratios of the minimal The values provided by are much larger than those of. Note that our expression in Theorem 3.1 is exact and not just a lower bound. Moreover, the minimal l p -distance of R d \ C to the origin in Equation is independent from the dimension d, in contrast to the expression for the minimal l p -norm over R d \ U 1,∞ in and its naive lower bound in, which are both decreasing for increasing d and p > 1. In Figure 1 we compare visually the largest l 2 -balls (in red) fitting inside either U 1,∞ or the convex hull C in R 3, showing that the one in C is clearly larger. 
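To get a feel for the quantities being compared, the following Monte-Carlo sketch estimates the smallest l_2-norm attainable outside the union U_{1,∞} and contrasts it with the naive lower bound max{ε_∞, ε_1 d^{1/p−1}}. The dimension and radii are illustrative, the sampling is only approximate, and the larger value attained outside the convex hull is not reproduced here.

```python
import numpy as np

d, eps1, epsinf = 3, 2.0, 1.0                       # small-dimensional example (illustrative values)
rng = np.random.default_rng(0)
u = rng.normal(size=(200000, d))                    # random directions

# A point t*u lies outside the union of the two balls iff t > max(eps1/||u||_1, epsinf/||u||_inf),
# so the smallest l_2-norm outside the union along u is that threshold times ||u||_2.
t_exit = np.maximum(eps1 / np.abs(u).sum(1), epsinf / np.abs(u).max(1))
min_l2_outside_union = (t_exit * np.linalg.norm(u, axis=1)).min()

naive_bound = max(epsinf, eps1 * d ** (1 / 2 - 1))  # max{eps_inf, eps_1 * d^(1/p - 1)} for p = 2
print("sampled min l_2 outside the union:", min_l2_outside_union)
print("naive lower bound:               ", naive_bound)
```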
In Figure 2 we provide a quantitative comparison in high dimensions. We plot the minimal l 2 -norm over R d \ C (blue) and over R d \ U 1,∞ (red) and its naive lower bound (green). We fix x ∞ = ∞ = 1 and vary e. the dimensions of the input spaces of MNIST and CIFAR-10. One sees clearly that the blue line corresponding to is significantly higher than the other two. In the second and fourth plots of Figure 2 we show, for each 1, the ratio of the l 2 -distances given by and. The maximal ratio is about 3.8 for d = 784 and 5.3 for d = 3072, meaning that the advantage of increases with d. These two examples indicate that the l p -balls contained in C can be a few times larger than those in U 1,∞. Recall that we deal with piecewise affine networks. If we could enlarge the linear regions on which the classifier is affine so that it contains the l 1 -and l ∞ -ball of some desired radii, we would automatically get the l p -balls of radii given by Theorem 3.1 to fit in the linear regions. The next section formalizes the ing robustness guarantees. Combining the of Theorems 2.1 and 3.1, in the next theorem we derive lower bounds on the robustness of a continuous piecewise affine classifier f, e.g. a ReLU network, at a point x wrt any l p -norm with p ≥ 1 using only d). for any p ∈ (1, ∞), with α = It tries to push the k B closest hyperplanes defining Q(x) farther than γ B from x and the k D closest decision hyperplanes farther than γ D from x both wrt l p -metric. In other words, MMR-l p aims at widening the linear regions around the training points so that they contain l p -balls of radius either γ B or γ D centered in the training points. Using MMR-l p wrt a fixed l p -norm, possibly in combination with the adversarial training of , leads to classifiers which are empirically resistant wrt l p -adversarial attacks and are easily verifiable by state-of-the-art methods to provide lower bounds on the true robustness. where We stress that, even if the formulation of MMR-Universal is based on MMR-l p, it is just thanks to the novel geometrical motivation provided by Theorem 3.1 and its interpretation in terms of robustness guarantees of Theorem 4.1 that we have a theoretical justification of MMR-Universal. Moreover, we are not aware of any other approach which can enforce simultaneously l 1 -and l ∞ -guarantees, which is the key property of MMR-Universal. The loss function which is minimized while training the classifier f is then, with {( being the training set and CE the cross-entropy loss, During the optimization our regularizer aims at pushing both the polytope boundaries and the decision hyperplanes farther than γ 1 in l 1 -distance and farther than γ ∞ in l ∞ -distance from the training point x, in order to achieve robustness close or better than γ 1 and γ ∞ respectively. According to Theorem 4.1, this enhances also the l p -robustness for p ∈ (1, ∞). Note that if the projection of x on a decision hyperplane does not lie inside is just an approximation of the signed distance to the true decision surface, in which case Croce et al. (2019a) argue that it is an approximation of the local Cross-Lipschitz constant which is also associated to robustness (see). The regularization parameters λ 1 and λ ∞ are used to balance the weight of the l 1 -and l ∞ -term in the regularizer, and also wrt the cross-entropy loss. Note that the terms of MMR-Universal involving the quantities d (x) penalize misclassification, as they take negative values in this case. 
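One plausible reading of the MMR-Universal penalty is a hinge on the k closest sorted l_1- and l_∞-distances to the region (B) and decision (D) boundaries, pushing them beyond γ_1 and γ_∞ respectively. The sketch below implements that reading with placeholder values; the exact normalization and hinge form should be taken from Croce et al. (2019a) rather than from this illustration.

```python
import numpy as np

def mmr_term(dists, k, gamma):
    """Hinge penalty pushing the k smallest distances beyond gamma (one plausible reading of MMR)."""
    nearest = np.sort(np.asarray(dists))[:k]
    return float(np.mean(np.maximum(0.0, 1.0 - nearest / gamma)))

def mmr_universal(dB1, dD1, dBinf, dDinf, kB, kD, gamma1, gammainf, lam1, laminf):
    """Combine l_1- and l_inf-terms over region boundaries (B) and decision boundaries (D)."""
    term1   = mmr_term(dB1,   kB, gamma1)   + mmr_term(dD1,   kD, gamma1)
    terminf = mmr_term(dBinf, kB, gammainf) + mmr_term(dDinf, kD, gammainf)
    return lam1 * term1 + laminf * terminf

# toy usage with random stand-ins for the distances computed at one training point
rng = np.random.default_rng(0)
reg = mmr_universal(rng.uniform(0, 2, 100), rng.uniform(0, 2, 9),
                    rng.uniform(0, 0.3, 100), rng.uniform(0, 0.3, 9),
                    kB=20, kD=3, gamma1=1.0, gammainf=0.1, lam1=3.0, laminf=12.0)
print(reg)
```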
Moreover, we take into account the k B closest hyperplanes and not just the closest one as done in Theorems 2.1 and 4.1. This has two reasons: first, in this way the regularizer enlarges the size of the linear regions around the training points more quickly and effectively, given the large number of hyperplanes defining each polytope. Second, pushing many hyperplanes influences also the neighboring linear regions of Q(x). This comes into play when, in order to get better bounds on the robustness at x, one wants to explore also a portion of the input space outside of the linear region Q(x), which is where Theorem 4.1 holds. As noted in; Croce et al. (2019a); , established methods to compute lower bounds on the robustness are loose or completely fail when using normally trained models. In fact, their effectiveness is mostly related to how many ReLU units have stable sign when perturbing the input x within a given l p -ball. This is almost equivalent to having the hyperplanes far from x in l p -distance, which is what MMR-Universal tries to accomplish. This explains why in Section 5 we can certify the models trained with MMR-Universal with the methods of and. We compare the models obtained via our MMR-Universal regularizer to state-of-the-art methods for provable robustness and adversarial training. As evaluation criterion we use the robust test error, defined as the largest classification error when every image of the test set can be perturbed within a fixed set (e.g. an l p -ball of radius p). We focus on the l p -balls with p ∈ {1, 2, ∞}. Since computing the robust test error is in general an NP-hard problem, we evaluate lower and upper bounds on it. The lower bound is the fraction of points for which an attack can change the decision with perturbations in the l p -balls of radius p (adversarial samples), that is with l p -norm smaller than p. For this task we use the PGD-attack . In choosing the values of p for p ∈ {1, 2, ∞}, we try to be consistent with previous literature (e.g. ; Croce et al. (2019a) ) for the values of ∞ and 2. Equation provides, given 1 and ∞, a value at which one can expect l 2 -robustness (approximately 2 = √ 1 ∞). Then we fix 1 such that this approximation is slightly larger than the desired 2. We show in Table 1 the values chosen for p, p ∈ {1, 2, ∞}, and used to compute the robust test error in Table 2. Notice that for these values no l p -ball is contained in the others. Moreover, we compute for the plain models the percentage of adversarial examples given by an l 1 -attack (we use the PGD-attack) with budget 1 which have also l ∞ -norm smaller than or equal to ∞, and vice versa. These percentages are zero for all the datasets, meaning that being (provably) robust in the union of these l p -balls is much more difficult than in just one of them (see also C.1). We train CNNs on MNIST, Fashion-MNIST (et al. (2019a), either alone or with adversarial training (MMR+AT) and the training with our regularizer MMR-Universal. We use AT, KW, MMR and MMR+AT wrt l 2 and l ∞, as these are the norms for which such methods have been used in the original papers. More details about the architecture and models in C.3. In Table 2 we report test error (TE) computed on the whole test set and lower (LB) and upper (UB) bounds on the robust test error obtained considering the union of the three l p -balls, indicated by l 1 + l 2 + l ∞ (these statistics are on the first 1000 points of the test set). 
The lower bounds l 1 + l 2 + l ∞ -LB are given by the fraction of test points for which one of the adversarial attacks wrt l 1, l 2 and l ∞ is successful. The upper bounds l 1 + l 2 + l ∞ -UB are computed as the percentage of points for which at least one of the three l p -balls is not certified to be free of adversarial examples (lower is better). This last one is the metric of main interest, since we aim at universally provably robust models. In C.2 we report the lower and upper bounds for the individual norms for every model. MMR-Universal is the only method which can give non-trivial upper bounds on the robust test error for all datasets, while almost all other methods aiming at provable robustness have l 1 + l 2 + l ∞ -UB close to or at 100%. Notably, on GTS the upper bound on the robust test error of MMR-Universal is lower than the lower bound of all other methods except AT-(l 1, l 2, l ∞), showing that MMR-Universal provably outperforms existing methods which provide guarantees wrt individual l p -balls, either l 2 or l ∞, when certifying the union l 1 +l 2 +l ∞. The test error is slightly increased wrt the other methods giving provable robustness, but the same holds true for combined adversarial training AT-(l 1, l 2, l ∞) compared to standard adv. training AT-l 2 /l ∞. We conclude that MMR-Universal is the only method so far being able to provide non-trivial robustness guarantees for multiple l p -balls in the case that none of them contains any other. We have presented the first method providing provable robustness guarantees for the union of multiple l p -balls beyond the trivial case of the union being equal to the largest one, establishing a baseline for future works. Without loss of generality after a potential permutation of the coordinates it holds |x d | = x ∞. Then we get, which finishes the proof. Proof. We first note that the minimum of the l p -norm over R d \ C lies on the boundary of C (otherwise any point on the segment joining the origin and y and outside C would have where (1 − α) ∞ < ∞ and Thus S would not span a face as a convex combination intersects the interior of C. This implies that if 1 e j is in S then all the vertices v of B ∞ in S need to have v j = ∞, otherwise S would not define a face of C. Analogously, if − 1 e j ∈ S then any vertex v of B ∞ in S has v j = − ∞. However, we note that out of symmetry reasons we can just consider faces of C in the positive orthant and thus we consider in the following just sets S which contain vertices of "positive type" 1 e j. Let now S be a set (not necessarily defining a face of C) containing h ≤ k vertices of B 1 and d − h vertices of B ∞ and P the matrix whose columns are these points. The matrix P has the form −h is a matrix whose entries are either ∞ or − ∞. If the matrix P does not have full rank then the origin belongs to any hyperplane containing S, which means it cannot be a face of C. This also implies A has full rank if S spans a face of C. We denote by π the hyperplane generated by the affine hull of S (the columns of P) assuming that A has full rank. Every point b belonging to the hyperplane π generated by S is such that there exists a unique a ∈ R d which satisfies where 1 d1,d2 is the matrix of size d 1 × d 2 whose entries are 1. The matrix (P, b) ∈ R d+1,d+1 need not have full rank, so that and then the linear system P a = b has a unique solution. We define the vector v ∈ R d as solution of, which is unique as P has full rank. 
From their definitions we have P a = b and 1 T a = 1, so that and thus noticing that this also implies that any vector b ∈ R d such that b, v = 1 belongs to π (suppose that ∃q / ∈ π with q, v = 1, then define c as the solution of P c = q and then 1 = q, v = P c, v = c, P T v = c, 1 which contradicts that q / ∈ π). Applying Hölder inequality to we get for any b ∈ π, where 1 p + 1 q = 1. Moreover, as p ∈ (1, ∞) there exists always a point b * for which holds as equality. In the rest of the proof we compute v q for any q > 1 when S is a face of C and then yields the desired minimal value of b p over all b lying in faces of C. which implies Moreover, we have Furthermore v 2 is defined as the solution of We note that all the entries of are either 1 or −1, so that the inner product between each row of A T and v 2 is a lower bound on the l 1 -norm of v 2. Since every entry of the r.h.s. of the linear system is, which combined with leads to In order to achieve equality u, v = v 1 it has to hold u i = sgn(v i) for every v i = 0. If at least two components of v were non-zero, the corresponding columns of A T would be identical, which contradicts the fact that A T has full rank. Thus v 2 can only have one non-zero component which in absolute value is equal to Thus, after a potential reordering of the components, v has the form From the second condition in, we have This means that, in order for S to define a face of C, we need h = k if α > 0, h ∈ {k − 1, k} if α = 0 (in this case choosing h = k − 1 or h = k leads to the same v, so in practice it is possible to use simply h = k for any α). Once we have determined v, we can use again and to see that Finally, for any v there exists b * ∈ π for which equality is achieved in. Suppose that this b * does not lie in a face of C. Then one could just consider the line segment from the origin to b * and the point intersecting the boundary of C would have smaller l p -norm contradicting the just derived inequality. Thus the b * realizing equality in lies in a face of C. Restricting the analysis to p = 2 for simplicity, we get and one can check that δ * is indeed a maximizer. Moreover, at δ * we have a ratio between the two bounds We observe that the improvement of the robustness guarantee by considering the convex hull instead of the union is increasing with dimension and is ≈ 3. Therefore the interior of the l 1 -ball of radius ρ 1 (namely, B 1 (x, ρ 1)) and of the l ∞ -ball of radius ρ ∞ (B ∞ (x, ρ ∞)) centered in x does not intersect with any of those hyperplanes. This implies that {π j} j are intersecting the closure of In Table 3 we compute the percentage of adversarial perturbations given by the PGD-attack wrt l p with budget p which have l q -norm smaller than q, for q = p (the values of p and q used are those from Table 1). We used the plain model of each dataset. The most relevant statistics of Table 3 are about the relation between the l 1 -and l ∞ -perturbations (first two rows). In fact, none of the adversarial examples wrt l 1 is contained in the l ∞ -ball, and vice versa. This means that, although the volume of the l 1 -ball is much smaller, even because of the intersection with the box constraints d, than that of the l ∞ -ball in high dimension, and most of it is actually contained in the l ∞ -ball, the adversarial examples found by l 1 -attacks are anyway very different from those got by l ∞ -attacks. 
The choice of such p is then meaningful, as the adversarial perturbations we are trying to prevent wrt the various norms are non-overlapping and in practice exploit regions of the input space significantly diverse one from another. Moreover, one can see that also the adversarial manipulations wrt l 1 and l 2 do not overlap. Regarding the case of l 2 and l ∞, for MNIST and F-MNIST it happens that the adversarial examples wrt l 2 are contained in the l ∞ -ball. However, as one observes in Table 4, being able to certify the l ∞ -ball is not sufficient to get non-trivial guarantees wrt l 2. In fact, all the models trained on these datasets to be provably robust wrt the l ∞ -norm, that is KW-l ∞, MMR-l ∞ and MMR+AT-l ∞, have upper bounds on the robust test error in the l 2 -ball larger than 99%, despite the values of the lower bounds are small (which means that the attacks could not find adversarial perturbations for many points). Such analysis confirms that empirical and provable robustness are two distinct problems, and the interaction of different kinds of perturbations, as we have, changes according to which of these two scenarios one considers. In Table 4 we report, for each dataset, the test error and upper and lower bounds on the robust test error, together with the p used, for each norm individually. It is clear that training for provable l p -robustness (expressed by the upper bounds) does not, in general, yield provable l q -robustness for q = p, even in the case where the lower bounds are small for both p and q. In order to compute the upper bounds on the robust test error in Tables 2 and 4 we use the method of for all the three l p -norms and that of only for the l ∞ -norm. This second one exploits a reformulation of the problem in in terms of mixed integer programming (MIP), which is able to exactly compute the solution of for p ∈ {1, 2, ∞}. However, such technique is strongly limited by its high computational cost. The only reason why it is possible to use it in practice is the exploitation of some presolvers which are able to reduce the complexity of the MIP. Unfortunately, such presolvers are effective just wrt l ∞. On the other hand, the method of applies directly to every l p -norm. This explains why the bounds provided for l ∞ are tighter than those for l 1 and l 2. The convolutional architecture that we use is identical to , which consists of two convolutional layers with 16 and 32 filters of size 4 × 4 and stride 2, followed by a fully connected layer with 100 hidden units. The AT-l ∞, AT-l 2, KW, MMR and MMR+AT training models are those presented in Croce et al. (2019a) and available at https: //github.com/max-andr/provable-robustness-max-linear-regions. We trained the AT-(l 1, l 2, l ∞) performing for each batch of the 128 images the PGD-attack wrt the three norms (40 steps for MNIST and F-MNIST, 10 steps for GTS and CIFAR-10) and then training on the point realizing the maximal loss (the cross-entropy function is used), for 100 epochs. For all experiments with MMR-Universal we use batch size 128 and we train the models for 100 epochs. Moreover, we use Adam optimizer of with learning rate of 5 × 10 −4 for MNIST and F-MNIST, 0.001 for the other datasets. We also reduce the learning rate by a factor of 10 for the last 10 epochs. On CIFAR-10 dataset we apply random crops and random mirroring of the images as data augmentation. 
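The AT-(l_1, l_2, l_∞) training just described admits a compact sketch: run one attack per norm on each batch and back-propagate only through the perturbed batch with the largest loss. The attack callables below are crude random stand-ins (the actual baseline uses 40 or 10 PGD steps per norm), so only the max-over-norms selection is illustrated.

```python
import torch
import torch.nn.functional as F

def at_multi_norm_step(model, attacks, x, y, optimizer):
    """One AT-(l1, l2, linf) step: attack wrt each norm, then train on the candidate with maximal loss.
    `attacks` maps a norm name to a callable (model, x, y) -> perturbed x; PGD details are omitted here."""
    candidates = {p: atk(model, x, y) for p, atk in attacks.items()}
    with torch.no_grad():
        losses = {p: float(F.cross_entropy(model(xc), y)) for p, xc in candidates.items()}
    worst = max(losses, key=losses.get)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(candidates[worst].detach()), y)
    loss.backward()
    optimizer.step()
    return worst, float(loss)

# toy usage: a linear model and random-sign perturbations standing in for the real PGD attacks
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 1, 8, 8), torch.randint(0, 10, (8,))
fake_pgd = lambda eps: (lambda m, xb, yb: xb + eps * torch.randn_like(xb).sign())
attacks = {"l1": fake_pgd(0.02), "l2": fake_pgd(0.05), "linf": fake_pgd(0.1)}
print(at_multi_norm_step(model, attacks, x, y, opt))
```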
For training we use MMR-Universal as in with k B linearly (wrt the epoch) decreasing from 20% to 5% of the total number of hidden units of the network architecture. We also use a training schedule for λ p where we linearly increase it from λ p /10 to λ p during the first 10 epochs. We employ both schemes since they increase the stability of training with MMR. In order to determine the best set of hyperparameters λ 1, λ ∞, γ 1, and γ ∞ of MMR, we perform a grid search over them for every dataset. In particular, we empirically found that the optimal values of γ p are usually between 1 and 2 times the p used for the evaluation of the robust test error, while the values of λ p are more diverse across the different datasets. Specifically, for the models we reported in Table 4 the following values for the (λ 1, λ ∞) have been used: (3.0, 12.0) for MNIST, (3.0, 40.0) for F-MNIST, (3.0, 12.0) for GTS and (1.0, 6.0) for CIFAR-10. In Tables 2 and 4, while the test error which is computed on the full test set, the statistics regarding upper and lower bounds on the robust test error are computed on the first 1000 points of the respective test sets. For the lower bounds we use the FAB-attack with the Figure 3: We show, for each dataset, the evolution of the test error (red), upper bound (UB) on the robust test error wrt l 1 (black), l 2 (cyan) and l ∞ (blue) during training. Moreover, we report in green the upper bounds on the test error when the attacker is allowed to exploit the union of the three l p -balls. The statistics on the robustness are computed at epoch 1, 2, 5, 10 and then every 10 epochs on 1000 points with the method of , using the models trained with MMR-Universal. original parameters, 100 iterations and 10 restarts. For PGD we use also 100 iterations and 10 restarts: the directions for the update step are the sign of the gradient for l ∞, the normalized gradient for l 2 and the normalized sparse gradient suggested by Tramèr & with sparsity level 1% for MNIST and F-MNIST, 10% for GTS and CIFAR-10. Finally we use the Liner Region Attack as in the original code. For MIP we use a timeout of 120s, that means if no guarantee is obtained by that time, the algorithm stops verifying that point. We show in Figure 3 the clean test error (red) and the upper bounds on the robust test error wrt l 1 (black), l 2 (cyan), l ∞ (blue) and wrt the union of the three l p -balls (green), evaluated at epoch 1, 2, 5, 10 and then every 10 epochs (for each model we train for 100 epochs) for the models trained with our regularizer MMR-Universal. For each dataset used in Section 5 the test error is computed on the whole test set, while the upper bound on the robust test error is evaluated on the first 1000 points of the test set using the method introduced in (the thresholds 1, 2, ∞ are those provided in Table 1). Note that the statistics wrt l ∞ are not evaluated additionally with the MIP formulation of as the in the main paper which would improve the upper bounds wrt l ∞. For all the datasets the test error keeps decreasing across epochs. The values of all the upper bounds generally improve during training, showing the effectiveness of MMR-Universal. We here report the robustness obtained training with MMR-l p +AT-l q with p = q on MNIST. This means that MMR is used wrt l p, while adversarial training wrt l q. In particular we test p, q ∈ {1, ∞}. 
In Table 5 we report the test error (TE), lower (LB) and upper (UB) bounds on the robust test error for these models, evaluated wrt l 1, l 2, l ∞ and l 1 + l 2 + l ∞ as done in Section 5. It is clear that training with MMR wrt a single norm does not suffice to get provable guarantees in all the other norms, despite the addition of adversarial training. In fact, for both the models analysed the UB equals 100% for at least one norm. Note that the statistics wrt l ∞ in the plots do not include the results of the MIP formulation. | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rklk_ySYPB | We introduce a method to train models with provable robustness wrt all the $l_p$-norms for $p\geq 1$ simultaneously. |
In this paper, we propose a new control framework called the moving endpoint control to restore images corrupted by different degradation levels in one model. The proposed control problem contains a restoration dynamics which is modeled by an RNN. The moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network. We call the proposed model the dynamically unfolding recurrent restorer (DURR). Numerical experiments show that DURR is able to achieve state-of-the-art performances on blind image denoising and JPEG image deblocking. Furthermore, DURR can well generalize to images with higher degradation levels that are not included in the training stage. Image restoration, including image denoising, deblurring, inpainting, etc., is one of the most important areas in imaging science. Its major purpose is to obtain high quality reconstructions of images corrupted in various ways during imaging, acquisiting, and storing, and enable us to see crucial but subtle objects that reside in the images. Image restoration has been an active research area. Numerous models and algorithms have been developed for the past few decades. Before the uprise of deep learning methods, there were two classes of image restoration approaches that were widely adopted in the field: transformation based approach and PDE approach. The transformation based approach includes wavelet and wavelet frame based methods BID11 BID3, dictionary learning based methods BID0, similarity based methods BID2 BID10, low-rank models BID21 BID18, etc. The PDE approach includes variational models BID31 BID35 BID1, nonlinear diffusions BID33 BID6 BID38, nonlinear hyperbolic equations BID32, etc. More recently, deep connections between wavelet frame based methods and PDE approach were established BID4 BID12.One of the greatest challenge for image restoration is to properly handle image degradations of different levels. In the existing transformation based or PDE based methods, there is always at least one tuning parameter (e.g. the regularization parameter for variational models and terminal time for nonlinear diffusions) that needs to be manually selected. The choice of the parameter heavily relies on the degradation level. Recent years, deep learning models for image restoration tasks have significantly advanced the state-of-the-art of the field. BID20 proposed a convolutional neural network (CNN) for image denoising which has better expressive power than the MRF models by BID22. Inspired by nonlinear diffusions, BID9 designed a deep neural network for image denoising and BID40 improves the capacity by introducing a deeper neural network with residual connections. use the CNN to simulate a wide variety of image processing operators, achieving high efficiencies with little accuracy drop. However, these models cannot gracefully handle images with varied degradation levels. Although one may train different models for images with different levels, this may limit the application of these models in practice due to lack of flexibility. Taking blind image denoising for example. BID40 designed a 20-layer neural network for the task, called DnCNN-B, which had a huge number of parameters. To reduce number of parameters, BID24 proposed the UNLNet 5, by unrolling a projection gradient algorithm for a constrained optimization model. However, BID24 also observed a drop in PSNR comparing to DnCNN. 
Therefore, the design of a light-weighted and yet effective model for blind image denoising remains a challenge. Moreover, deep learning based models trained on simulated gaussian noise images usually fail to handle real world noise, as will be illustrated in later sections. Another example is JPEG image deblocking. JPEG is the most commonly used lossy image compression method. However, this method tend to introduce undesired artifacts as the compression rate increases. JPEG image deblocking aims to eliminate the artifacts and improve the image quality. Recently, deep learning based methods were proposed for JPEG deblocking BID13 BID40. However, most of their models are trained and evaluated on a given quality factor. Thus it would be hard for these methods to apply to Internet images, where the quality factors are usually unknown. In this paper, we propose a single image restoration model that can robustly restore images with varied degradation levels even when the degradation level is well outside of that of the training set. Our proposed model for image restoration is inspired by the recent development on the relation between deep learning and optimal control. The relation between supervised deep learning methods and optimal control has been discovered and exploited by BID39; BID26 BID7 BID16. The key idea is to consider the residual block x n+1 = x n + f (x n) as an approximation to the continuous dynamicsẊ = f (X). In particular, BID26 BID16 demonstrated that the training process of a class of deep models (e.g. ResNet by BID19, PolyNet by BID42, etc.) can be understood as solving the following control problem: DISPLAYFORM0 Here x 0 is the input, y is the regression target or label,Ẋ = f (X, w) is the deep neural network with parameter w(t), R is the regularization term and L can be any loss function to measure the difference between the reconstructed images and the ground truths. In the context of image restoration, the control dynamicẊ = f (X(t), ω(t)), t ∈ (0, τ) can be, for example, a diffusion process learned using a deep neural network. The terminal time τ of the diffusion corresponds to the depth of the neural network. Previous works simply fixed the depth of the network, i.e. the terminal time, as a fixed hyper-parameter. However BID30 showed that the optimal terminal time of diffusion differs from image to image. Furthermore, when an image is corrupted by higher noise levels, the optimal terminal time for a typical noise removal diffusion should be greater than when a less noisy image is being processed. This is the main reason why current deep models are not robust enough to handle images with varied noise levels. In this paper, we no longer treat the terminal time as a hyper-parameter. Instead, we design a new architecture (see Fig. 3) that contains both a deep diffusion-like network and another network that determines the optimal terminal time for each input image. We propose a novel moving endpoint control model to train the aforementioned architecture. We call the proposed architecture the dynamically unfolding recurrent restorer (DURR).We first cast the model in the continuum setting. Let x 0 be an observed degraded image and y be its corresponding damage-free counterpart. We want to learn a time-independent dynamic systeṁ X = f (X(t), w) with parameters w so that X = x and X(τ) ≈ y for some τ > 0. See Fig. 2 for an illustration of our idea. The reason that we do not require X(τ) = y is to avoid over-fitting. 
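In discrete form, the architecture of Fig. 3 amounts to repeatedly applying a residual recurrent block and letting a second network decide when to stop. The PyTorch sketch below uses placeholder layers (the actual restoration unit is U-Net-like and the actual policy unit is described later); it only illustrates the dynamically unfolded loop with a learned terminal time.

```python
import torch
import torch.nn as nn

class RestorationUnit(nn.Module):
    """Stand-in for the residual recurrent cell X_{n+1} = X_n + f(X_n, w); the real unit is U-Net-like."""
    def __init__(self, ch=1, width=16):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(width, ch, 3, padding=1))
    def forward(self, x):
        return x + self.f(x)

class PolicyUnit(nn.Module):
    """Stand-in that scores 'continue' vs 'stop' from the current restored image."""
    def __init__(self, ch=1):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(ch, 8, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
    def forward(self, x):
        return self.head(x)                      # logits for [continue, stop]

def durr_forward(x, restore, policy, max_steps=30):
    """Dynamically unfold the restoration unit until the policy unit selects the terminal time."""
    for _ in range(max_steps):
        x = restore(x)
        if policy(x).argmax(dim=1).item() == 1:  # batch of one assumed for clarity
            break
    return x

out = durr_forward(torch.rand(1, 1, 64, 64), RestorationUnit(), PolicyUnit())
print(out.shape)
```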
For varied degradation levels and different images, the optimal terminal time τ of the dynamics may vary. Therefore, we need to include the variable τ in the learning process as well. The learning of the dynamic system and the terminal time can be gracefully casted as the following moving endpoint control problem: DISPLAYFORM1 Different from the previous control problem, in our model the terminal time τ is also a parameter to be optimized and it depends on the data x. The dynamic systemẊ = f (X(t), w) is modeled by a recurrent neural network (RNN) with a residual connection, which can be understood as a residual network with shared weights BID25. We shall refer to this RNN as the restoration unit. In order to learn the terminal time of the dynamics, we adopt a policy network to adaptively determine an optimal stopping time. Our learning framework is demonstrated in Fig. 3. We note that the above moving endpoint control problem can be regarded as the penalized version of the well-known fixed endpoint control problem in optimal control BID15, where instead of penalizing the difference between X(τ) and y, the constraint X(τ) = y is strictly enforced. In short, we summarize our contribution as following:• We are the first to use convolutional RNN for image restoration with unknown degradation levels, where the unfolding time of the RNN is determined dynamically at run-time by a policy unit (could be either handcrafted or RL-based).• The proposed model achieves state-of-the-art performances with significantly less parameters and better running efficiencies than some of the state-of-the-art models.• We reveal the relationship between the generalization power and unfolding time of the RNN by extensive experiments. The proposed model, DURR, has strong generalization to images with varied degradation levels and even to the degradation level that is unseen by the model during training (Fig. 1).• The DURR is able to well handle real image denoising without further modification. Qualitative have shown that our processed images have better visual quality, especially sharper details compared to others. The proposed architecture, i.e. DURR, contains an RNN (called the restoration unit) imitating a nonlinear diffusion for image restoration, and a deep policy network (policy unit) to determine the terminal time of the RNN. In this section, we discuss the training of the two components based on our moving endpoint control formulation. As will be elaborated, we first train the restoration unit to determine ω, and then train the policy unit to estimate τ (x). If the terminal time τ for every input x i is given (i.e. given a certain policy), the restoration unit can be optimized accordingly. We would like to show in this section that the policy used during training greatly influences the performance and the generalization ability of the restoration unit. More specifically, a restoration unit can be better trained by a good policy. The simplest policy is to fix the loop time τ as a constant for every input. We name such policy as "naive policy". A more reasonable policy is to manually assign an unfolding time for each degradation level during training. We shall call this policy the "refined policy". Since we have not trained the policy unit yet, to evaluate the performance of the trained restoration units, we manually pick the output image with the highest PSNR (i.e. the peak PSNR).We take denoising as an example here. The peak PSNRs of the restoration unit trained with different policies are listed in Table. 
1. the refined policy, the noise levels and the associated loop times are,. For the naive policy, we always fix the loop times to 8. As we can see, the refined policy brings the best performance on all the noise levels including 40. The restoration unit trained for specific noise level (i.e. σ = 40) is only comparable to the one with refined policy on noise level 40. The restoration unit trained on multiple noise levels with naive policy has the worst performance. These indicate that the restoration unit has the potential to generalize on unseen degradation levels when trained with good policies. According to FIG2, the generalization reflects on the loop times of the restoration unit. It can be observed that the model with steeper slopes have stronger ability to generalize as well as better performances. According to these , the restoration unit we used in DURR is trained using the refined policy. More specifically, for image denoising, the noise level and the associated loop times are set to,,, and. For JPEG image deblocking, the quality factor (QF) and the associated loop times are set to and. We discuss two approaches that can be used as policy unit:Handcraft policy: Previous work BID30 has proposed a handcraft policy that selects a terminal time which optimizes the correlation of the signal and noise in the filtered image. This criterion can be used directly as our policy unit, but the independency of signal and noise may not hold for some restoration tasks such as real image denoising, which has higher noise level in the low-light regions, and JPEG image deblocking, in which artifacts are highly related to the original image. Another potential stopping criterion of the diffusion is no-reference image quality assessment BID28, which can provide quality assessment to a processed image without the ground truth image. However, to the best of our knowledge, the performances of these assessments are still far from satisfactory. Because of the limitations of the handcraft policies, we will not include them in our experiments. Reinforcement learning based policy: We start with a discretization of the moving endpoint problem on the dataset {(x i, y i)|i = 1, 2, · · ·, d}, where {x i} are degraded observations of the damage-free images {y i}. The discrete moving endpoint control problem is given as follows: DISPLAYFORM0 DISPLAYFORM1 Here, DISPLAYFORM2 is the forward Euler approximation of the dynamicsẊ = f (X(t), w). The terminal time {N i} is determined by a policy network P (x, θ), where x is the output of the restoration unit at each iteration and θ the set of weights. In our experiment, we simply set r = 0, i.e. doesn't introduce any regularization which might bring further benefit but is beyond this paper's scope of discussion. In other words, the role of the policy network is to stop the iteration of the restoration unit when an ideal image restoration is achieved. The reward function of the policy unit can be naturally defined by DISPLAYFORM3 In order to solve the problem (2.2), we need to optimize two networks simultaneously, i.e. the restoration unit and the policy unit. The first is an restoration unit which approximates the controlled dynamics and the other is the policy unit to give the optimized terminating conditions. The objective function we use to optimize the policy network can be written as DISPLAYFORM4 where π θ denotes the distribution of the trajectories X = {X i n, n = 1, . . ., N i, i = 1, . . ., d} under the policy network P (·, θ). 
Thus, reinforcement learning techniques can be used here to learn a neural network to work as a policy unit. We utilize Deep Q-learning BID29 as our learning strategy and denote this approach simply as DURR. However, different learning strategies can be used (e.g. the Policy Gradient). In all denoising experiments, we follow the same settings as in BID9 BID40; BID24. All models are evaluated using the mean PSNR as the quantitative metric on the BSD68 BID27 ). The training set and test set of BSD500 (400 images) are used for training. Six gaussian noise levels are evaluated, namely σ = 25, 35, 45, 55, 65 and 75. Additive noise are applied to the image on the fly during training and testing. Both the training and evaluation process are done on gray-scale images. The restoration unit is a simple U-Net BID34 style fully convolutional neural network. For the training process of the restoration unit, the noise levels of 25, 35, 45 and 55 are used. Images are cut into 64 × 64 patches, and the batch-size is set to 24. The Adam optimizer with the learning rate 1e-3 is adopted and the learning rate is scaled down by a factor of 10 on training plateaux. The policy unit is composed of two ResUnit and an LSTM cell. For the policy unit training, we utilize the reward function in Eq.4. For training the policy unit, an RMSprop optimizer with learning rate 1e-4 is adopted. We've also tested other network structures, these tests and the detailed network structures of our model are demonstrated in the supplementary materials. In all JPEG deblocking experiments, we follow the settings as in BID40. All models are evaluated using the mean PSNR as the quantitative metric on the LIVE1 dataset BID36. Both the training and evaluation processes are done on the Y channel (the luminance channel) of the YCbCr color space. The PIL module of python is applied to generate JPEG-compressed images. The module produces numerically identical images as the commonly used MATLAB JPEG encoder after setting the quantization tables manually. The images with quality factors 20 and 30 are used during training. De-blocking performances are evaluated on four quality factors, namely QF = 10, 20, 30, and 40. All other parameter settings are the same as in the denoising experiments. We select DnCNN-B BID40 and UNLNet 5 BID24 for comparisons since these models are designed for blind image denoising. Moreover, we also compare our model with non-learning-based algorithms BM3D BID10 and WNNM BID18. The noise levels are assumed known for BM3D and WNNM due to their requirements. Comparison are shown in TAB2.Despite the fact that the parameters of our model (1.8 × 105 for the restoration unit and 1.0 × 10 5 for the policy unit) is less than the DnCNN (approximately 7.0 × 10 5), one can see that DURR outperforms DnCNN on most of the noise-levels. More interestingly, DURR does not degrade too much when the the noise level goes beyond the level we used during training. The noise level σ = 65, 75 is not included in the training set of both DnCNN and DURR. DnCNN reports notable drops of PSNR when evaluated on the images with such noise levels, while DURR only reports small drops of PSNR (see the last row of TAB2 and Fig. 6). Note that the reason we do not provide the of UNLNet 5 in TAB2 is because the authors of BID24 has not released their codes yet, and they only reported the noise levels from 15 to 55 in their paper. 
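One natural instantiation of the stepwise reward used to train the policy unit is the PSNR gain of each unfolding step over the previous output; this is an assumption for illustration and not necessarily the exact reward of Eq. 4.

```python
import numpy as np

def psnr(x, y, peak=1.0):
    return 10.0 * np.log10(peak ** 2 / np.mean((x - y) ** 2))

def stepwise_rewards(outputs, clean):
    """Reward each unfolding step by its PSNR gain over the previous output (an assumed choice)."""
    scores = [psnr(o, clean) for o in outputs]
    return np.diff(scores)                       # positive while the diffusion still helps, negative after

# toy trajectory: progressively denoised versions of a clean patch, with the last step degrading the image
rng = np.random.default_rng(0)
clean = rng.uniform(0, 1, (64, 64))
outputs = [np.clip(clean + rng.normal(0, s, clean.shape), 0, 1) for s in (0.20, 0.10, 0.05, 0.08)]
print(stepwise_rewards(outputs, clean))
```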
We also want to emphasize that they trained two networks, one for the low noise level (5 ≤ σ ≤ 29) and one for higher noise level (30 ≤ σ ≤ 55). The reason is that due to the use of the constraint ||y − x|| 2 ≤ by , we should not expect the model generalizes well to the noise levels surpasses the noise level of the training set. For qualitative comparisons, some restored images of different models on the BSD68 dataset are presented in Fig. 5 and Fig. 6. As can be seen, more details are preserved in DURR than other models. It is worth noting that the noise level of the input image in Fig. 6 is 65, which is unseen by both DnCNN and DURR during training. Nonetheless, DURR achieves a significant gain of nearly 1 dB than DnCNN. Moreover, the texture on the cameo is very well restored by DURR. These clearly indicate the strong generalization ability of our model. More interestingly, due to the generalization ability in denoising, DURR is able to handle the problem of real image denoising without additional training. For testing, we test the images obtained from BID23. We present the representative in Fig. 7 and more are listed in the supplementary materials. We also train our model for blind color image denoising, please refer to the supplementary materials for more details. For deep learning based models, we select DnCNN-3 BID40 for comparisons since it is the only known deep model for multiple QFs deblocking. As the AR-CNN BID13 is a commonly used baseline, we re-train the AR-CNN on a training set with mixed QFs and denote this model as AR-CNN-B. Original AR-CNN as well as a non-learning-based method SA-DCT are also tested. The quality factors are assumed known for these models. Quantitative are shown in TAB3. Though the number of parameters of DURR is significantly less than the DnCNN-3, the proposed DURR outperforms DnCNN-3 in most cases. Specifically, considerable gains can be observed for our model on seen QFs, and the performances are comparable on unseen QFs. A representative on the LIVE1 dataset is presented in FIG4. Our model generates the most clean and accurate details. More experiment details are given in the supplementary materials. Figure 7: Denoising on a real image from BID23. Our model can be easily extended to other applications such as deraining, dehazing and deblurring. In all these applications, there are images corrupted at different levels. Rainfall intensity, haze density and different blur kernels will all effect the image quality. In this paper, we proposed a novel image restoration model based on the moving endpoint control in order to handle varied noise levels using a single model. The problem was solved by jointly optimizing two units: restoration unit and policy unit. The restoration unit used an RNN to realize the dynamics in the control problem. A policy unit was proposed for the policy unit to determine the loop times of the restoration unit for optimal . Our model achieved the state-of-the-art in blind image denoising and JPEG deblocking. Moreover, thanks to the flexibility of the given policy, DURR has shown strong abilities of generalization in our experiments. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJfZKiC5FX | We propose a novel method to handle image degradations of different levels by learning a diffusion terminal time. Our model can generalize to unseen degradation level and different noise statistic. |
The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that allows to break its inherent complexity. It relies on the search of an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows to move from the embedding space back to the original input space. Once this embedding has been found, computing optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be conducted extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method. The Wasserstein distance is a powerful tool based on the theory of optimal transport to compare data distributions with wide applications in image processing, computer vision and machine learning BID29. In a context of machine learning, it has recently found numerous applications, e.g. domain adaptation, or word embedding BID22. In the context of deep learning, the Wasserstein appeared recently to be a powerful loss in generative models BID2 and in multi-label classification BID19. Its power comes from two major reasons: i) it allows to operate on empirical data distributions in a non-parametric way ii) the geometry of the underlying space can be leveraged to compare the distributions in a geometrically sound way. The space of probability measures equipped with the Wasserstein distance can be used to construct objects of interest such as barycenters BID0 or geodesics BID34 that can be used in data analysis and mining tasks. More formally, let X be a metric space endowed with a metric d X. Let p ∈ (0, ∞) and P p (X) the space of all Borel probability measures µ on X with finite moments of order p, i.e. X d X (x, x 0) p dµ(x) < ∞ for all x 0 in X. The p-Wasserstein distance between µ and ν is defined as: DISPLAYFORM0 Here, Π(µ, ν) is the set of probabilistic couplings π on (µ, ν). As such, for every Borel subsets A ⊆ X, we have that µ(A) = π(X × A) and ν(A) = π(A × X). It is well known that W p defines a metric over P p (X) as long as p ≥ 1 (e.g. BID39), Definition 6.2).When p = 1, W 1 is also known as Earth Mover's distance (EMD) or Monge-Kantorovich distance. The geometry of (P p (X), W 1 (X)) has been thoroughly studied, and there exists several works on computing EMD for point sets in R k (e.g. BID35). However, in a number of applications the use of W 2 (a.k.a root mean square bipartite matching distance) is a more natural distance arising in computer vision BID6, computer graphics BID5 BID16 BID36 BID7 or machine learning BID14. See BID16 for a discussion on the quality comparison between W 1 and W 2.Yet, the deployment of Wasserstein distances in a wide class of applications is somehow limited, especially because of an heavy computational burden. In the discrete version of the above optimisation problem, the number of variables scale quadratically with the number of samples in the distributions, and solving the associated linear program with network flow algorithms is known to have a cubical complexity. 
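To see where the computational burden comes from, the following sketch solves the discrete optimal transport problem exactly as a linear program with `scipy.optimize.linprog`: the coupling has n^2 variables for n histogram bins, which is precisely what makes exact computation prohibitive for large histograms or large collections. The 1-D grid and Gaussian-shaped histograms are toy choices, and a recent SciPy with the "highs" solver is assumed.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein2_sq(mu, nu, support):
    """Exact squared 2-Wasserstein distance between two histograms on a common
    1-D support, solved as the linear program over couplings pi."""
    n = len(support)
    cost = (support[:, None] - support[None, :]) ** 2          # ground cost d(x_i, x_j)^2
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0                       # sum_j pi_ij = mu_i
        A_eq[n + i, i::n] = 1.0                                # sum_i pi_ij = nu_i
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun

grid = np.linspace(0.0, 1.0, 20)
mu = np.exp(-((grid - 0.3) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((grid - 0.7) ** 2) / 0.01); nu /= nu.sum()
print(f"W2^2(mu, nu) ~ {wasserstein2_sq(mu, nu, grid):.4f}")   # roughly (0.7 - 0.3)^2
```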
While recent strategies implying slicing technique BID6 BID26, entropic regularization BID13 BID3 BID37 or involving stochastic optimization BID21, have emerged, the cost of computing pairwise Wasserstein distances between a large number of distributions (like an image collection) is prohibitive. This is all the more true if one considers the problem of computing barycenters BID14 BID3 or population means. A recent attempt by Staib and colleagues BID38 use distributed computing for solving this problem in a scalable way. We propose in this work to learn an Euclidean embedding of distributions where the Euclidean norm approximates the Wasserstein distances. Finding such an embedding enables the use of standard Euclidean methods in the embedded space and significant speedup in pairwise Wasserstein distance computation, or construction of objects of interests such as barycenters. The embedding is expressed as a deep neural network, and is learnt with a strategy similar to those of Siamese networks BID11. We also show that simultaneously learning the inverse of the embedding function is possible and allows for a reconstruction of a probability distribution from the embedding. We first start by describing existing works on Wasserstein space embedding. We then proceed by presenting our learning framework and give proof of concepts and empirical on existing datasets. Metric embedding The question of metric embedding usually arises in the context of approximation algorithms. Generally speaking, one seeks a new representation (embedding) of data at hand in a new space where the distances from the original space are preserved. This new representation should, as a positive side effect, offers computational ease for time-consuming task (e.g. searching for a nearest neighbor), or interpretation facilities (e.g. visualization of high-dimensional datasets). More formally, given two metrics spaces (X, d X) and (Y, d y) and D ∈ [1, ∞), a mapping φ: X → Y is an embedding with distortion at most D if there exists a coefficient α DISPLAYFORM0 Here, the α parameter is to be understood as a global scaling coefficient. The distortion of the mapping is the infimum over all possible D such that the previous relation holds. Obviously, the lower the D, the better the quality of the embedding is. It should be noted that the existence of exact (isometric) embedding (D = 1) is not always guaranteed but sometimes possible. Finally, the embeddability of a metric space into another is possible if there exists a mapping with constant distortion. A good introduction on metric embedding can be found in BID30.Theoretical on Wasserstein space embedding Embedding Wasserstein space in normed metric space is still a theoretical and open questions BID31. Most of the theoretical guarantees were obtained with W 1. In the simple case where X = R, there exists an isometric embedding with L 1 between two absolutely continuous (wrt. the Lebesgue measure) probability measures µ and ν given by their by their cumulative distribution functions F µ and F ν, i.e. W 1 (µ, ν) = R |F µ (x) − F ν (x)|dx. This fact has been exploited in the computation of sliced Wasserstein distance BID6 BID28. Conversely, there is no known isometric embedding for pointsets in [n] k = {1, 2, . . ., n} k, i.e. regularly sampled grids in R k, but best known distortions are between O(k log n) and Ω(k + √ log n) BID10 BID23 BID24. Regarding W 2, recent BID1 have shown there does not exist meaningful embedding over R 3 with constant approximation. 
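The 1-D case mentioned above admits a closed form; as a small illustration, the sketch below evaluates W1 between two histograms on a shared grid directly from their cumulative distribution functions, without solving any optimization problem.

```python
import numpy as np

def w1_closed_form(mu, nu, grid):
    """1-D W1 between two histograms via the CDF formula: integral of |F_mu - F_nu| dx."""
    F_mu, F_nu = np.cumsum(mu), np.cumsum(nu)
    dx = np.diff(grid, prepend=grid[0])          # bin widths (first bin contributes 0)
    return np.sum(np.abs(F_mu - F_nu) * dx)

grid = np.linspace(0.0, 1.0, 200)
mu = np.exp(-((grid - 0.3) ** 2) / 0.005); mu /= mu.sum()
nu = np.exp(-((grid - 0.7) ** 2) / 0.005); nu /= nu.sum()
print(f"W1(mu, nu) ~ {w1_closed_form(mu, nu, grid):.3f}")      # roughly |0.7 - 0.3| = 0.4
```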
Their show notably that an embedding of pointsets of size n into L 1 must incur a distortion of O(√ log n). Regarding our choice of W 2 2, there does not exist embeddability up to our knowledge, but we show that, for a population of locally concentrated measures, a good approximation can be obtained with our technique. We now turn to existing methods that consider local linear approximations of the transport problem. Linearization of Wasserstein space Another line of work BID40 BID27 also considers the Riemannian structure of the Wasserstein space to provide meaningful linearization by projecting onto the tangent space. By doing so, they notably allows for faster computation of pairwise Wasserstein distances (only N transport computations instead of N (N − 1)/2 with N the number of samples in the dataset) and allow for statistical analysis of the embedded data. They proceed by specifying a template element and compute, from particle approximations of the data, linear transport plans with this template element, that allow to derive an embedding used for analysis. Seguy and Cuturi BID34 also proposed a similar pipeline, based on velocity field, but without relying on an implicit embedding. It is to be noted that for data in 2D, such as images, the use of cumulative Radon transform also allows for an embedding which can be used for interpolation or analysis BID6 BID26, by exploiting the exact solution of the optimal transport in 1D through cumulative distribution functions. Our work is the first to propose to learn a generic embedding rather than constructing it from explicit approximations/transformations of the data and analytical operators such as Riemannian Logarithm maps. As such, our formulation is generic and adapts to any type of data. Finally, since the mapping to the embedded space is constructed explicitly, handling unseen data does not require to compute new optimal transport plans or optimization, yielding extremely fast computation performances, with similar approximation performances. We discuss here how our method, coined DWE for Deep Wasserstein Embedding, learns in a supervised way a new representation of the data. To this end we need a pre-computed dataset that consists of pairs of histograms {x Wasserstein distance DISPLAYFORM0 ..,n . One immediate way to solve the problem would be to concatenate the samples x 1 and x 2 and learn a deep network that predicts y. This would work in theory but it would prevent us from interpreting the Wasserstein space and it is not by default symmetric which is a key property of the Wasserstein distance. Another way to encode this symmetry and to have a meaningful embedding that can be used more broadly is to use a Siamese neural network BID8 . Originally designed for metric learning purpose and similarity learning (based on labels), this type of architecture is usually defined by replicating a network which takes as input two samples from the same learning set, and learns a mapping to new space with a contrastive loss. It has mainly been used in computer vision, with successful applications to face recognition BID11 or one-shot learning for example BID25. Though its capacity to learn meaningful embeddings has been highlighted in BID41, it has never been used, to the best of our knowledge, for mimicking a specific distance that exhibits computation challenges. This is precisely our objective here. We propose to learn and embedding network φ that takes as input a histogram and project it in a given Euclidean space of R p. 
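A minimal sketch of the siamese fitting idea is given below in PyTorch: the same encoder phi embeds both histograms of a pair, and the squared Euclidean distance between the embeddings is regressed onto the precomputed squared Wasserstein target y. The two-layer MLP, the embedding size, and the random batch are placeholders; the actual convolutional encoder and the decoder-based regularization of the full objective are described afterwards.

```python
import torch
import torch.nn as nn

D_IN, D_EMB = 784, 50                      # histogram size and embedding size (assumed)
phi = nn.Sequential(nn.Linear(D_IN, 100), nn.ReLU(), nn.Linear(100, D_EMB))

def siamese_fit_loss(x1, x2, y):
    """Regress the embedded squared Euclidean distance onto the precomputed W2^2 target y."""
    d_pred = ((phi(x1) - phi(x2)) ** 2).sum(dim=1)
    return ((d_pred - y) ** 2).mean()

# toy batch of normalized histograms with fake distance targets
x1 = torch.rand(8, D_IN); x1 = x1 / x1.sum(dim=1, keepdim=True)
x2 = torch.rand(8, D_IN); x2 = x2 / x2.sum(dim=1, keepdim=True)
y = torch.rand(8)
loss = siamese_fit_loss(x1, x2, y)
loss.backward()
print(float(loss))
```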
In practice, this embedding should mirror the geometrical properties of the Wasserstein space. We also propose to regularize the computation of this embedding by adding a reconstruction loss based on a decoding network ψ. This has two important impacts: First, we observed empirically that it eases the learning of the embedding and improves the generalization performance of the network (see experimental results in the appendix) by forcing the embedded representation to capture sufficient information about the input data to allow a good reconstruction. This type of autoencoder regularization loss has been discussed in BID43 in the different context of embedding learning. Second, using a decoder network allows the interpretation of the results, which is of prime importance in several data-mining tasks (discussed in the next subsection). An overall picture depicting the whole process is given in FIG1. The global objective function reads $\min_{\phi,\psi} \sum_{i} \big( \|\phi(x_1^i) - \phi(x_2^i)\|^2 - y^i \big)^2 + \lambda \big( \mathrm{KL}(\psi(\phi(x_1^i)), x_1^i) + \mathrm{KL}(\psi(\phi(x_2^i)), x_2^i) \big)$, where λ > 0 weights the two data-fitting terms and KL(·,·) is the Kullback-Leibler divergence. This choice is motivated by the fact that the Wasserstein metric operates on probability distributions. Once the functions φ and ψ have been learned, several data mining tasks can be operated in the Wasserstein space. We discuss here the potential applications of our computational scheme and its wide range of applications on problems where the Wasserstein distance plays an important role. Though our method is not an exact Wasserstein estimator, we empirically show in the numerical experiments that it performs very well and competes favorably with other classical computation strategies. Wasserstein barycenters BID0 BID14 BID7. Barycenters in Wasserstein space were first discussed by BID0. Designed through an analogy with barycenters in a Euclidean space, the Wasserstein barycenters of a family of measures are defined as minimizers of a weighted sum of squared Wasserstein distances. In our framework, barycenters can be obtained as $\bar{x} = \psi\big(\sum_i \alpha_i \phi(x_i)\big)$, where $x_i$ are the data samples and the weights $\alpha_i$ obey the following constraints: $\sum_i \alpha_i = 1$ and $\alpha_i > 0$. Note that when we have only two samples, the barycenter corresponds to a Wasserstein interpolation between the two distributions with α = [1 − t, t] and 0 ≤ t ≤ 1 BID33. When the weights are uniform and the whole data collection is considered, the barycenter is the Wasserstein population mean, also known as the Fréchet mean BID4. Principal Geodesic Analysis in Wasserstein space BID34 BID4. PGA, or Principal Geodesic Analysis, was first introduced by Fletcher et al. BID18. It can be seen as a generalization of PCA to general Riemannian manifolds. Its goal is to find a set of directions, called geodesic directions or principal geodesics, that best encode the statistical variability of the data. It is possible to define PGA by making an analogy with PCA. Let $x_i \in \mathbb{R}^n$ be a set of elements; the classical PCA amounts to i) finding $\bar{x}$, the mean of the data, and subtracting it from all the samples, and ii) building recursively a subspace $V_k = \mathrm{span}(v_1, \cdots, v_k)$ by solving the following maximization problem: $v_k = \arg\max_{\|v\|=1} \sum_{i=1}^{N} \big( (v^\top x_i)^2 + \sum_{j=1}^{k-1} (v_j^\top x_i)^2 \big)$. Fletcher gives a generalization of this problem for complete geodesic spaces by extending three important concepts: variance as the expected value of the squared Riemannian distance from the mean, geodesic subspaces as a portion of the manifold generated by principal directions, and a projection operator onto that geodesic submanifold.
The space of probability distribution equipped with the Wasserstein metric (P p (X), W 2 2 (X)) defines a geodesic space with a Riemannian structure BID33, and an application of PGA is then an appealing tool for analyzing distributional data. However, as noted in BID34 BID4, a direct application of Fletcher's original algorithm is intractable because P p (X) is infinite dimensional and there is no analytical expression for the exponential or logarithmic maps allowing to travel to and from the corresponding Wasserstein tangent space. We propose a novel PGA approximation as the following procedure: i) find x the approximate Fréchet mean of the data as x = 1 N N i φ(x i) and subtract it to all the samples ii) build recursively a subspace V k = span(v 1, · · ·, v k) in the embedding space (v i being of the dimension of the embedded space) by solving the following maximization problem: DISPLAYFORM2 which is strictly equivalent to perform PCA in the embedded space. Any reconstruction from the corresponding subspace to the original space is conducted through ψ. We postpone a detailed analytical study of this approximation to subsequent works, as it is beyond the goals of this paper. Other possible methods. As a matter of facts, several other methods that operate on distributions can benefit from our approximation scheme. Most of those methods are the transposition of their Euclidian counterparts in the embedding space. Among them, clustering methods, such as Wasserstein k-means BID14, are readily adaptable to our framework. Recent works have also highlighted the success of using Wasserstein distance in dictionary learning BID32 or archetypal Analysis BID42. In this section we evaluate the performances of our method on grayscale images normalized as histograms. Images are offering a nice testbed because of their dimensionality and because large datasets are frequently available in computer vision. The framework of our approach as shown in FIG1 of an encoder φ and a decoder ψ composed as a cascade. The encoder produces the representation of input images h = φ(x). The architecture used for the embedding φ consists in 2 convolutional layers with ReLU activations: first a convolutional layer of 20 filters with a kernel of size 3 by 3, then a convolutional layer of 5 filters of size 5 by 5. The convolutional layers are followed by two linear dense layers respectively of size 100 and the final layer of size p = 50. The architecture for the reconstruction ψ consists in a dense layer of output 100 with ReLU activation, followed by a dense layer of output 5*784. We reshape the layer to map the input of a convolutional layer: the output vector is 3D-tensor. Eventually, we invert the convolutional layers of φ with two convolutional layers: first a convolutional layer of 20 filters with ReLU activation and a kernel of size 5 by 5, followed by a second layer with 1 filter, with a kernel of size 3 by 3. Eventually the decoder outputs a reconstruction image of shape 28 by 28. In this work, we only consider grayscale images, that are normalized to represent probability distributions. Hence each image is depicted as an histogram. In order to normalize the decoder reconstruction we use a softmax activation for the last layer. All the dataset considered are handwritten data and hence holds an inherent sparsity. In our case, we cannot promote the output sparsity through a convex L1 regularization because the softmax outputs positive values only and forces the sum of the output to be 1. 
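A sketch of this encoder/decoder pair in PyTorch is given below. The layer sizes follow the description above, whereas padding, strides, and the exact placement of activations are assumptions, since they are not specified in the text.

```python
import torch
import torch.nn as nn

class DWEEncoder(nn.Module):
    """Encoder phi with the described layer sizes; 'same' padding and stride 1 are assumed."""
    def __init__(self, d_emb=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 20, 3, padding=1), nn.ReLU(),
            nn.Conv2d(20, 5, 5, padding=2), nn.ReLU())
        self.fc = nn.Sequential(nn.Linear(5 * 28 * 28, 100), nn.ReLU(),
                                nn.Linear(100, d_emb))

    def forward(self, x):                        # x: (B, 1, 28, 28) normalized histograms
        return self.fc(self.conv(x).flatten(1))

class DWEDecoder(nn.Module):
    """Decoder psi: dense layers back to a 5-channel map, two conv layers, then a softmax
    so that the reconstruction is again a histogram."""
    def __init__(self, d_emb=50):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(d_emb, 100), nn.ReLU(),
                                nn.Linear(100, 5 * 28 * 28), nn.ReLU())
        self.conv = nn.Sequential(
            nn.Conv2d(5, 20, 5, padding=2), nn.ReLU(),
            nn.Conv2d(20, 1, 3, padding=1))

    def forward(self, h):
        z = self.fc(h).view(-1, 5, 28, 28)
        logits = self.conv(z).flatten(1)
        return torch.softmax(logits, dim=1)      # sums to 1 over the 784 pixels

x = torch.rand(4, 1, 28, 28); x = x / x.sum(dim=(2, 3), keepdim=True)
h = DWEEncoder()(x)
print(h.shape, DWEDecoder()(h).shape)            # (4, 50) and (4, 784)
```

Note the final softmax: because its outputs are strictly positive and sum to one, an L1 penalty on the reconstruction cannot promote sparsity, which motivates the pseudo-norm regularization discussed next.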
Instead, we apply a p p pseudo -norm regularization with p = 1/2 on the reconstructed image, which promotes sparse output and allows for a sharper reconstruction of the images BID20 and DWE given as average number of W 2 2 computation per seconds for different configurations. Dataset and training. Our first numerical experiment is performed on the well known MNIST digits dataset. This dataset contains 28×28 images from 10 digit classes In order to create the training dataset we draw randomly one million pairs of indexes from the 60 000 training samples and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox. All those pairwise distances can be computed in an embarrassingly parallel scheme (1h30 on 1 CPU). Among this million, 700 000 are used for learning the neural network, 200 000 are used for validation and 100 000 pairs are used for testing purposes. The DWE model is learnt on a GTX TitanX Maxwell 980 GPU node and takes around 1h20 with a stopping criterion computed from on a validation set. Numerical precision and computational performance The true and predicted values for the Wasserstein distances are given in FIG2. We can see that we reach a good precision with a test MSE of 0.4 and a relative MSE of 2e-3. The correlation is of 0.996 and the quantiles show that we have a very small uncertainty with only a slight bias for large values where only a small number of samples is available. This show that a good approximation of the W 2 2 can be performed by our approach (≈1e-3 relative error). Now we investigate the ability of our approach to compute W 2 2 efficiently. To this end we compute the average speed of Wasserstein distance computation on test dataset to estimate the number of W 2 2 computations per second in the Table of FIG2. Note that there are 2 ways to compute the W 2 2 with our approach denoted as Indep and Pairwise. This comes from the fact that our W 2 2 computation is basically a squared Euclidean norm in the embedding space. The first computation measures the time to compute the W 2 2 between independent samples by projecting both in the embedding and computing their distance. The second computation aims at computing all the pairwise W 2 2 between two sets of samples and this time one only needs to project the samples once and compute all the pairwise distances, making it more efficient. Note that the second approach would be the one used in a retrieval problem where one would just embed the query and then compute the distance to all or a selection of the dataset to find a Wasserstein nearest neighbor for instance. The speedup achieved by our method is very impressive even on CPU with speedup of x18 and x1000 respectively for Indep and Pairwise. But the GPU allows an even larger speedup of respectively x1000 and x500 000 with respect to a state-of-the-art C compiled Network Flow LP solver of the POT Toolbox BID5. Of course this speed-up comes at the price of a time-consuming learning phase, which makes our method better suited for mining large scale datasets and online applications. Wasserstein Barycenters Next we evaluate our embedding on the task of computing Wasserstein Barycenters for each class of the MNIST dataset. We take 1000 samples per class from the test dataset and compute their uniform weight Wasserstein Barycenter using Eq. 3. The ing barycenters and their Euclidean means are reported in FIG3. 
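The barycenter computation used here reduces to a weighted average in the embedding space followed by a decoding step. The sketch below illustrates this with random linear maps standing in for the trained φ and ψ; the weights, the sample count, and the stand-in maps are toy assumptions.

```python
import numpy as np

def embedded_barycenter(xs, alphas, phi, psi):
    """Wasserstein barycenter estimate: decode the alpha-weighted mean of the embeddings."""
    H = np.stack([phi(x) for x in xs])                     # (N, p) embedded samples
    return psi(np.average(H, axis=0, weights=alphas))

# toy usage: uniform-weight barycenter of 1000 random histograms
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 784)) / 28.0
phi = lambda x: W @ x
psi = lambda h: np.maximum(W.T @ h, 0.0)
xs = rng.random((1000, 784)); xs = xs / xs.sum(axis=1, keepdims=True)
bary = embedded_barycenter(list(xs), alphas=np.ones(1000) / 1000, phi=phi, psi=psi)
print(bary.shape)                                          # (784,), i.e. a 28 x 28 histogram
```

With α = [1 − t, t] the same routine produces the Wasserstein interpolations discussed later, and its cost grows only linearly with the number of samples.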
Note that not only those barycenters are sensible but also conserve most of their sharpness which is a problem that occurs for regularized barycenters BID37 BID3. The computation of those barycenters is also very efficient since it requires only 20ms per barycenter (for 1000 samples) and its complexity scales linearly with the number of samples. Figure 4: Principal Geodesic Analysis for classes 0,1 and 4 from the MNIST dataset for squared Euclidean distance (L2) and Deep Wasserstein Embedding (DWE). For each class and method we show the variation from the barycenter along one of the first 3 principal modes of variation. Principal Geodesic Analysis We report in Figure 4 the Principal Component Analysis (L2) and Principal Geodesic Analysis (DWE) for 3 classes of the MNIST dataset. We can see that using Wasserstein to encode the displacement of mass leads to more semantic and nonlinear subspaces such as rotation/width of the stroke and global sizes of the digits. This is well known and has been illustrated in BID34. Nevertheless our method allows for estimating the principal component even in large scale datasets and our reconstruction seems to be more detailed compared to BID34 maybe because our approach can use a very large number of samples for subspace estimation. Datasets The Google Doodle dataset is a crowd sourced dataset that is freely available from the web 1 and contains 50 million drawings. The data has been collected by asking users to hand draw with a mouse a given object or animal in less than 20 seconds. This lead to a large number of examples for each class but also a lot of noise in the sens that people often get stopped before the end of their drawing.We used the numpy bitmaps format proposed on the quick draw github account. Those are made of the simplified drawings rendered into 28x28 grayscale images. These images are aligned to the center of the drawing's bounding box. In this paper we downloaded the classes Cat, Crab and Faces and tried to learn a Wasserstein embedding for each of these classes with the same architecture as used for MNIST. In order to create the training dataset we draw randomly 1 million pairs of indexes from the training samples of each categories and compute the exact Wasserstein distance with squared Euclidean ground metric using the POT toolbox The numerical performances of the learned models on each of the doodle dataset is reported in the diagonal of Table 1. Those datasets are much more difficult than MNIST because they have not been curated and contain a very large variance due to numerous unfinished doodles. An interesting comparison is the cross comparison between datasets where we use the embedding learned on one dataset to compute the W 2 2 on another. The cross performances is given in Table 1 and shows that while there is definitively a loss in accuracy of the prediction, this loss is limited between the doodle datasets that all have an important variety. Performance loss across doodle and MNIST dataset is larger because the latter is highly structured and one needs to have a representative dataset to generalize well which is not the case with MNIST. This also clearly highlights that our method finds a data-dependent embedding that is specific to the geometry of the learning set. 
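Evaluating an embedding trained on one collection against another one, as in the cross-dataset comparison above, only requires embedding the new histograms once and then computing Euclidean distances. The sketch below shows this pairwise mode, again with a random linear map in place of the trained encoder.

```python
import numpy as np

def pairwise_w2_sq(phi, X_query, X_database):
    """All pairwise W2^2 estimates between two sets of histograms: each set is
    embedded once, then only squared Euclidean distances are computed."""
    A, B = phi(X_query), phi(X_database)                           # (n, p) and (m, p)
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.maximum(sq, 0.0)                                     # (n, m) distance estimates

# toy usage with a random linear stand-in for the trained encoder
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 784)) / 28.0
phi = lambda X: X @ W.T
queries = rng.random((10, 784)); queries /= queries.sum(1, keepdims=True)
database = rng.random((1000, 784)); database /= database.sum(1, keepdims=True)
D = pairwise_w2_sq(phi, queries, database)
print(D.shape, D[0].argmin())          # e.g. nearest-neighbour retrieval per query
```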
Wasserstein interpolation Next we qualitatively evaluate the subspace learned by DWE by comparing the Wasserstein interpolation of our approach with the true Wasserstein interpolation estimated by solving the OT linear program and by using regularized OT with Bregman projections BID3. The interpolation for all those methods and the Euclidean interpolation are available in FIG4. The LP solver takes a long time (20 sec/interp) and leads to a "noisy" interpolation as already explained in. The regularized Wasserstein barycenter is obtained more rapidly (4 sec/interp) but is also very smooth at the risk of loosing some details, despite choosing a small regularization that prevents numerical problems. Our reconstruction also looses some details due to the Auto-Encoder error but is very fast and can be done in real time (4 ms/interp). In this work we presented a computational approximation of the Wasserstein distance suitable for large scale data mining tasks. Our method finds an embedding of the samples in a space where the Euclidean distance emulates the behavior of the Wasserstein distance. Thanks to this embedding, numerous data analysis tasks can be conducted at a very cheap computational price. We forecast that this strategy can help in generalizing the use of Wasserstein distance in numerous applications. However, while our method is very appealing in practice it still raises a few questions about the theoretical guarantees and approximation quality. First it is difficult to foresee from a given network architecture if it is sufficiently (or too much) complex for finding a successful embedding. It can be conjectured that it is dependent on the complexity of the data at hand and also the locality of the manifold where the data live in. Second, the theoretical existence on such Wasserstein embedding with constant distortion are still lacking. Future works will consider these questions as well as applications of our approximation strategy on a wider range of ground loss and data mining tasks. Also, we will study the transferability of one database to another (i.e. leveraging on previously computed embedding) to diminish the computational burden of computing Wasserstein distances on numerous pairs for the learning process, by considering for instance domain adaptation strategies between embeddings. We discuss here the role of the decoder, not only as a matter of interpreting the , but rather as a regularizer. We train our DWE on MNIST with and without the decoder and compares the learning curves of the MSE on the validation set. In FIG5, DWE achieves a lower MSE with the decoder, which enforces the use of a decoder into our framework. We illustrate here the plurality of examples found in this dataset by drawing random excerpts in FIG6. There exist also a lot of outlier images (scribblings, texts, etc.). As discussed in the main text several drawings are unfinished and/or do not represent correctly the required class. We then compute the Wasserstein interpolation between four samples of each datasets in FIG7. Note that these interpolation might not be optimal w.r.t. the objects but we clearly see a continuous displacement of mass that is characteristic of optimal transport. This leads to surprising artefacts for example when the eye of a face fuse with the border while the nose turns into an eye. Also note that there is no reason for a Wasserstein barycenter to be a realistic sample. 
In FIG9 we show the quantitative evaluation for DWE on the three datasets, that correspond to Table 1 in the paper. The reported MSE performances correspond to the ones in the diagonal of Table 1. We can see that the deviation is larger for large values of W We report in FIG1 a nearest neighbor walk (sequential jumps to the nearest, in the sense of the considered metric, image that has not already been seen) on a subset of 10000 test samples starting with the same image but using either the L2 distance in the input or DWE embedded space. Note that the L2 in input space is here very sensible to outliers (black squares) that are rare in the dataset but have a L2 distance rather small to all other examples (most sequences converge to those samples). Conversely the DWE neighbors follow a smooth trajectory along the examples. This illustrates the advantage of W 2 2 for image retrieval, which is made computationally possible with DWE. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJyEH91A- | We show that it is possible to fastly approximate Wasserstein distances computation by finding an appropriate embedding where Euclidean distance emulates the Wasserstein distance |
Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing the word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Continuous Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our show that the hybrid CBOW-CMOW-model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a , the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%. Word embeddings are perceived as one of the most impactful contributions from unsupervised representation learning to natural language processing from the past few years BID10. Word embeddings are learned once on a large-scale stream of words. A key benefit is that these pre-computed vectors can be re-used almost universally in many different downstream applications. Recently, there has been increasing interest in learning universal sentence embeddings. BID24 have shown that the best encoding architectures are based on recurrent neural networks (RNNs) BID5 BID25 or the Transformer architecture BID2. These techniques are, however, substantially more expensive to train and apply than word embeddings BID14 BID2. Their usefulness is therefore limited when fast processing of large volumes of data is critical. More efficient encoding techniques are typically based on aggregated word embeddings such as Continuous Bag of Words (CBOW), which is a mere summation of the word vectors BID19. Despite CBOW's simplicity, it attains strong on many downstream tasks. Using sophisticated weighting schemes, the performance of aggregated word embeddings can be further increased BID0, coming even close to strong LSTM baselines BID26 BID13 such as InferSent BID5. This raises the question how much benefit recurrent encoders actually provide over simple word embedding based methods BID32. In their analysis, BID13 suggest that the main difference may be the ability to encode word order. In this paper, we propose an intuitive method to enhance aggregated word embeddings by word order awareness. The major drawback of these CBOW-like approaches is that they are solely based on addition. However, addition is not all you need. Since it is a commutative operation, the aforementioned methods are not able to capture any notion of word order. However, word order information is crucial for some tasks, e.g., sentiment analysis BID13. For instance, the following two sentences yield the exact same embedding in an addition-based word embedding aggregation technique: "The movie was not awful, it was rather great." and "The movie was not great, it was rather awful." 
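The point is easy to verify numerically: summing word vectors is commutative, whereas multiplying word matrices is not. The sketch below uses random toy embeddings, not trained ones, for the two example sentences above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
words = ["the", "movie", "was", "not", "awful", ",", "it", "rather", "great", "."]
vec = {w: rng.standard_normal(d) for w in words}                          # CBOW: a vector per word
mat = {w: np.eye(d) + 0.1 * rng.standard_normal((d, d)) for w in words}   # CMOW: a matrix per word

s1 = "the movie was not awful , it was rather great .".split()
s2 = "the movie was not great , it was rather awful .".split()

def cbow(s):   # order-insensitive: addition is commutative
    return sum(vec[w] for w in s)

def cmow(s):   # order-sensitive: matrix multiplication is not commutative
    out = np.eye(d)
    for w in s:
        out = out @ mat[w]
    return out

print(np.allclose(cbow(s1), cbow(s2)))   # True  -> identical CBOW embeddings
print(np.allclose(cmow(s1), cmow(s2)))   # False -> CMOW distinguishes the two sentences
```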
A classifier based on the CBOW embedding of these sentences would inevitably fail to distinguish the two different meanings .To alleviate this drawback, BID28 propose to model each word as a matrix rather than a vector, and compose multiple word embeddings via matrix multiplication rather than addition. This so-called Compositional Matrix Space Model (CMSM) of language has powerful theoretical properties that subsume properties from vector-based models and symbolic approaches. The most obvious advantage is the non-commutativity of matrix multiplication as opposed to addition, which in order-aware encodings. In contrast to vector-based word embeddings, there is so far no solution to effectively train the parameters of word matrices on large-scale unlabeled data. Training schemes from previous work were specifically designed for sentiment analysis BID34 BID1. Those require complex, multi-stage initialization, which indicates the difficulty of training CMSMs. We show that CMSMs can be trained in a similar way as the well-known CBOW model of word2vec BID19. We make two simple yet critical changes to the initialization strategy and training objective of CBOW. Hence, we present the first unsupervised training scheme for CMSMs, which we call Continual Multiplication Of Words (CMOW). We evaluate our model's capability to capture linguistic properties in the encoded text. We find that CMOW and CBOW have properties that are complementary. On the one hand, CBOW yields much stronger at the word content memorization task. CMOW, on the other hand, offers an advantage in all other linguistic probing tasks, often by a wide margin. Thus, we propose a hybrid model to jointly learn the word vectors of CBOW and the word matrices for CMOW.Our experimental confirm the effectiveness of our hybrid CBOW-CMOW approach. At comparable embedding size, CBOW-CMOW retains CBOW's ability to memorize word content while at the same time improves the performance on the linguistic probing tasks by 8%. CBOW-CMOW outperforms CBOW at 8 out of 11 supervised downstream tasks scoring only 0.6% lower on the tasks where CBOW is slightly better. On average, the hybrid model improves the performance over CBOW by 1.2% on supervised downstream tasks, and by 0.5% on the unsupervised tasks. In summary, our contributions are: For the first time, we present an unsupervised, efficient training scheme for the Compositional Matrix Space Model. Key elements of our scheme are an initialization strategy and training objective that are specifically designed for training CMSMs. We quantitatively demonstrate that the strengths of the ing embedding model are complementary to classical CBOW embeddings. We successfully combine both approaches into a hybrid model that is superior to its individual parts. After giving a brief overview of the related work, we formally introduce CBOW, CMOW, and the hybrid model in Section 3. We describe our experimental setup and present the in Section 4. The are discussed in Section 5, before we conclude. We present an algorithm for learning the weights of the Compositional Matrix Space Model BID28. To the best of our knowledge, only BID34 and BID1 have addressed this. They present complex, multi-level initialization strategies to achieve reasonable . Both papers train and evaluate their model on sentiment analysis datasets only, but they do not evaluate their CMSM as a general-purpose sentence encoder. Other works have represented words as matrices as well, but unlike our work not within the framework of the CMSM. 
BID11 represent only relational words as matrices. BID29 and BID3 argue that while CMSMs are arguably more expressive than embeddings located in a vector space, the associativeness of matrix multiplication does not reflect the hierarchical structure of language. Instead, they represent the word sequence as a tree structure. BID29 directly represent each word as a matrix (and a vector) in a recursive neural network. BID3 present a two-layer architecture. In the first layer, pre-trained word embeddings are mapped to their matrix representation. In the second layer, a non-linear function composes the constituents. Sentence embeddings have recently become an active field of research. A desirable property of the embeddings is that the encoded knowledge is useful in a variety of high-level downstream tasks. To this end, BID4 and introduced an evaluation framework for sentence encoders that tests both their performance on downstream tasks as well as their ability to capture linguistic properties. Most works focus on either i) the ability of encoders to capture appropriate semantics or on ii) training objectives that give the encoders incentive to capture those semantics. Regarding the former, large RNNs are by far the most popular BID5 BID16 BID31 BID22 BID14 BID18 BID25 BID17, followed by convolutional neural networks BID7. A third group are efficient methods that aggregate word embeddings BID33 BID0 BID23 BID26. Most of the methods in the latter group are word order agnostic. Sent2Vec BID23 is an exception in the sense that they also incorporate bigrams. Despite also employing an objective similar to CBOW, their work is very different to ours in that they still use addition as composition function. Regarding the training objectives, there is an ongoing debate whether language modeling BID25 BID27, machine translation BID18, natural language inference BID5, paraphrase identification BID33, or a mix of many tasks BID30 ) is most appropriate for incentivizing the models to learn important aspects of language. In our study, we focus on adapting the well-known objective from word2vec BID19 for the CMSM. We formally present CBOW and CMOW encoders in a unified framework. Subsequently, we discuss the training objective, the initialization strategy, and the hybrid model. We start with a lookup table for the word matrices, i.e., an embedding, E ∈ R m×d×d, where m is the vocabulary size and d is the dimensionality of the (square) matrices. We denote a specific word matrix of the embedding by E[·]. By ∆ ∈ {,} we denote the function that aggregates word embeddings into a sentence embedding. Formally, given a sequence s of arbitrary length n, the sequence is encoded as DISPLAYFORM0 For ∆ =, the model becomes CBOW. By setting ∆ = (matrix multiplication), we obtain CMOW. Because the of the aggregation for any prefix of the sequence is again a square matrix of shape d × d irrespective of the aggregation function, the model is well defined for any non-zero sequence length. Thus, it can serve as a general-purpose text encoder. Throughout the remainder of this paper, we denote the encoding step by enc DISPLAYFORM1, where flatten concatenates the columns of the matrices to obtain a vector that can be passed to the next layer. Motivated by its success, we employ a similar training objective as word2vec BID20. The objective consists of maximizing the conditional probability of a word w O in a certain context s: p(w O | s). For a word w t at position t within a sentence, we consider the window of tokens (w t−c, . . 
., w_{t+c}) around that word. From that window, a target word w_O := w_{t+i}, i ∈ {−c, . . ., +c}, is selected. The remaining 2c words in the window are used as the context s. The training itself is conducted via negative sampling NEG-k, which is an efficient approximation of the softmax BID20. For each positive example, k negative examples (noise words) are drawn from some noise distribution P_n(w). The goal is to distinguish the target word w_O from the randomly sampled noise words. Given the encoded input words enc_∆(s), a logistic regression with weights v ∈ R^{m×d^2} is conducted to predict 1 for context words and 0 for noise words. The negative sampling training objective becomes: $\max \; \log \sigma\big(v_{w_O}^\top \mathrm{enc}_\Delta^E(s)\big) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\big[\log \sigma\big(-v_{w_i}^\top \mathrm{enc}_\Delta^E(s)\big)\big]$, where σ denotes the logistic sigmoid. In the original word2vec BID19, the center word w_O := w_t is used as the target word. In our experiments, however, this objective did not yield satisfactory results. We hypothesize that this objective is too easy to solve for a word order-aware text encoder, which diminishes the incentive for the encoder to capture semantic information at the sentence level. Instead, we propose to select a random output word w_O ∼ U({w_{t−c}, . . ., w_{t+c}}) from the window. The rationale is the following: by withholding the information about the position from which the word was removed from the window, the model is forced to build a semantically rich representation of the whole sentence. For CMOW, modifying the objective leads to a large improvement on downstream tasks by 20.8% on average, while it does not make a difference for CBOW. We present details in the appendix (Section B.1). So far, only BID34 and BID1 have proposed algorithms for learning the parameters of the matrices in CMSMs. Both works devote particular attention to the initialization, noting that a standard initialization randomly sampled from N(0, 0.1) does not work well due to the optimization problem being non-convex. To alleviate this, the authors of both papers propose rather complicated initialization strategies based on a bag-of-words solution BID34 or incremental training, starting with two-word phrases BID1. We instead propose an effective yet simple strategy, in which the embedding matrices are initialized close to the identity matrix. We argue that modern optimizers based on stochastic gradient descent have proven to find good solutions to optimization problems even when those are non-convex, as in optimizing the weights of deep neural networks. CMOW is essentially a deep linear neural network with flexible layers, where each layer corresponds to a word in the sentence. The output of the final layer is then used as an embedding for the sentence. A subsequent classifier may expect that all embeddings come from the same distribution. We argue that initializing the weights randomly from N(0, 0.1) or any other distribution that has most of its mass around zero is problematic in such a setting. This includes the Glorot initialization BID8, which was designed to alleviate the problem of vanishing gradients. FIG0 illustrates the problem: with each multiplication, the values in the embedding become smaller (by about one order of magnitude). This leads to the undesirable effect that short sentences have a drastically different representation than longer ones, and that the embedding values vanish for long sequences.
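The vanishing effect is easy to reproduce. The sketch below multiplies random d x d word matrices and tracks the average entry magnitude, once for a zero-centred N(0, 0.1) initialization and once for the near-identity initialization proposed next; the dimension and sentence length are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_words = 20, 40

def entry_magnitudes(sample_matrix):
    """Average |entry| of the running product over a 'sentence' of n_words matrices."""
    out, magnitudes = np.eye(d), []
    for _ in range(n_words):
        out = out @ sample_matrix()
        magnitudes.append(np.abs(out).mean())
    return magnitudes

zero_centred  = lambda: rng.normal(0.0, 0.1, (d, d))              # mass around zero
near_identity = lambda: np.eye(d) + rng.normal(0.0, 0.1, (d, d))  # proposed initialization

print("zero-centred :", [f"{m:.1e}" for m in entry_magnitudes(zero_centred)[::10]])
print("near-identity:", [f"{m:.1e}" for m in entry_magnitudes(near_identity)[::10]])
# the zero-centred products shrink towards zero with every additional word,
# while the near-identity products stay in a moderate range of magnitudes
```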
To prevent this problem of vanishing values, we propose an initialization strategy where each word embedding matrix E[w] ∈ R^{d×d} is initialized as a random deviation from the identity matrix: $E[w] = I_d + N$, where the entries of the perturbation N are drawn independently from a zero-mean Gaussian with a small standard deviation. It is intuitive and also easy to prove that the expected value of the multiplication of any number of such word embedding matrices is again the identity matrix (see Appendix A). FIG0 shows how our initialization strategy is able to prevent vanishing values. For training CMSMs, we observe a substantial improvement over Glorot initialization of 2.8% on average. We present details in Section B.2 of the appendix. The simplest combination is to train CBOW and CMOW separately and concatenate the resulting sentence embeddings at test time. However, we did not find this approach to work well in preliminary experiments. We conjecture that there is still a considerable overlap in the features learned by each model, which hinders better performance on downstream tasks. To prevent redundancy in the learned features, we expose CBOW and CMOW to a shared learning signal by training them jointly. To this end, we modify Equation 1 as follows: the encoding $\mathrm{enc}_\Delta^E(s)$ in the training objective is replaced by the concatenation $\big[\mathrm{enc}^{E_1}_{\mathrm{CBOW}}(s);\; \mathrm{enc}^{E_2}_{\mathrm{CMOW}}(s)\big]$ of the CBOW and the CMOW encodings. Intuitively, the model uses logistic regression to predict the missing word from the concatenation of CBOW and CMOW embeddings. Again, E_i ∈ R^{m×d_i×d_i} are separate word lookup tables for CBOW and CMOW, respectively, and v ∈ R^{m×(d_1^2+d_2^2)} holds the corresponding weights of the logistic regression. We conducted experiments to evaluate the effect of using our proposed models for training CMSMs. In this section, we describe the experimental setup and present the results on linguistic probing as well as downstream tasks. In order to limit the total batch size and to avoid expensive tokenization steps as much as possible, we created each batch in the following way: 1,024 sentences from the corpus are selected at random. After tokenizing each sentence, we randomly select (without replacement) at maximum 30 words from the sentence to function as center words for a context window of size c = 5, i.e., we generate up to 30 training samples per sentence. By padding with copies of the neutral element, we also include words as center words for which there are not enough words in the left or the right context. For CBOW, the neutral element is the zero matrix. For CMOW, the neutral element is the identity matrix. We trained our models on the unlabeled UMBC news corpus BID12, which consists of about 134 million sentences and 3 billion tokens. Each sentence has 24.8 words on average with a standard deviation of 14.6. Since we only draw 30 samples per sentence to limit the batch size, not all possible training examples are used in an epoch, which may result in slightly worse generalization if the model is trained for a fixed number of epochs. We therefore use 0.1% of the 134 million sentences for validation. After 1,000 updates (i.e., approximately every millionth training sample) the validation loss is calculated, and training terminates after 10 consecutive validations of no improvement. Following BID20, we limit the vocabulary to the 30,000 most frequent words for comparing our different methods and their variants. Out-of-vocabulary words are discarded. The optimization is carried out by Adam BID15 with an initial learning rate of 0.0003 and k = 20 negative samples, as suggested by BID20 for rather small datasets. For the noise distribution P_n(w) we again follow BID20 and use U(w)^{3/4}/Z, where Z is the partition function that normalizes the distribution.
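The pieces above can be combined into a single training step. The PyTorch sketch below performs one NEG-k update of the hybrid model on one window: a random word is held out as the target, the remaining words are encoded by summation (CBOW) and ordered multiplication (CMOW), the two encodings are concatenated, and negatives are drawn from the unigram^{3/4} distribution. The tiny vocabulary, random counts, and single-sample step are toy assumptions; the actual training batches 1,024 sentences and optimizes with Adam.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
V, d, k, c = 1000, 20, 20, 5                 # toy vocabulary, matrix dim, negatives, window
E_cbow = (0.1 * torch.randn(V, d, d)).requires_grad_()                  # summed component
E_cmow = (torch.eye(d) + 0.1 * torch.randn(V, d, d)).requires_grad_()   # multiplied component
v = torch.zeros(V, 2 * d * d, requires_grad=True)                       # output weights

counts = torch.randint(1, 1000, (V,)).float()
p_noise = counts.pow(0.75); p_noise /= p_noise.sum()                    # U(w)^{3/4} / Z

def hybrid_neg_loss(window_ids):
    """NEG-k loss for one window of the hybrid CBOW-CMOW model with a random target."""
    t = torch.randint(len(window_ids), (1,)).item()
    target, context = window_ids[t], window_ids[:t] + window_ids[t + 1:]
    h_cbow = sum(E_cbow[w] for w in context)
    h_cmow = torch.eye(d)
    for w in context:
        h_cmow = h_cmow @ E_cmow[w]
    enc = torch.cat([h_cbow.reshape(-1), h_cmow.reshape(-1)])
    noise = torch.multinomial(p_noise, k, replacement=True)
    pos = F.logsigmoid(v[target] @ enc)
    neg = F.logsigmoid(-(v[noise] @ enc)).sum()
    return -(pos + neg)                                                  # minimize the negative

loss = hybrid_neg_loss(torch.randint(V, (2 * c + 1,)).tolist())
loss.backward()
print(float(loss))
```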
We have trained five different models: CBOW and CMOW with d = 20 and d = 28, which lead to 400-dimensional and 784-dimensional word embeddings, respectively. We also trained the Hybrid CBOW-CMOW model with d = 20 for each component, so that the total model has 800 parameters per word in the lookup tables. We report the of two more models: H-CBOW is the 400-dimensional CBOW component trained in Hybrid and H-CMOW is the respective CMOW component. Below, we compare the 800-dimensional Hybrid method to the 784-dimensional CBOW and CMOW models. After training, only the encoder of the model enc E ∆ is retained. We assess the capability to encode linguistic properties by evaluating on 10 linguistic probing tasks. In particular, the Word Content (WC) task tests the ability to memorize exact words in the sentence. Bigram Shift (BShift) analyzes the encoder's sensitivity to word order. The downstream performance is evaluated on 10 supervised and 6 unsupervised tasks from the SentEval framework BID4. We use the standard evaluation configuration, where a logistic regression classifier is trained on top of the embeddings. Considering the linguistic probing tasks (see TAB0), CBOW and CMOW show complementary . While CBOW yields the highest performance at word content memorization, CMOW outperforms CBOW at all other tasks. Most improvements vary between 1-3 percentage points. The difference is approximately 8 points for CoordInv and Length, and even 21 points for BShift. The hybrid model yields scores close to or even above the better model of the two on all tasks. In terms of relative numbers, the hybrid model improves upon CBOW in all probing tasks but WC and SOMO. The relative improvement averaged over all tasks is 8%. Compared to CMOW, the hybrid model shows rather small differences. The largest loss is by 4% on the CoordInv task. However, due to the large gain in WC (20.9%), the overall average gain is still 1.6%.We now compare the jointly trained H-CMOW and H-CBOW with their separately trained 400-dimensional counterparts. We observe that CMOW loses most of its ability to memorize word content, while CBOW shows a slight gain. On the other side, H-CMOW shows, among others, improvements at BShift. TAB1 shows the scores from the supervised downstream tasks. Comparing the 784-dimensional models, again, CBOW and CMOW seem to complement each other. This time, however, CBOW has the upperhand, matching or outperforming CMOW on all supervised downstream tasks except TREC by up to 4 points. On the TREC task, on the other hand, CMOW outperforms CBOW by 2.5 points. Our jointly trained model is not more than 0.8 points below the better one of CBOW and CMOW on any of the considered supervised downstream tasks. On 7 out of 11 supervised tasks, the joint model even improves upon the better model, and on SST2, SST5, and MRPC the difference is more than 1 point. The average relative improvement over all tasks is 1.2%.Regarding the unsupervised downstream tasks TAB2, CBOW is clearly superior to CMOW on all datasets by wide margins. For example, on STS13, CBOW's score is 50% higher. The hybrid model is able to repair this deficit, reducing the difference to 8%. It even outperforms CBOW on two of the tasks, and yields a slight improvement of 0.5% on average over all unsupervised downstream tasks. However, the variance in relative performance is notably larger than on the supervised downstream tasks. Our CMOW model produces sentence embeddings that are approximately at the level of fastSent BID14. 
Thus, CMOW is a reasonable choice as a sentence encoder. Essential to the success of our training schema for the CMOW model are two changes to the original word2vec training. First, our initialization strategy improved the downstream performance by 2.8% compared to Glorot initialization. Secondly, by choosing the target word of the objective at random, the performance of CMOW on downstream tasks improved by 20.8% on average. Hence, our novel training scheme is the first that provides an effective way to obtain parameters for the Compositional Matrix Space Model of language from unlabeled, large-scale datasets. Regarding the probing tasks, we observe that CMOW embeddings better encode the linguistic properties of sentences than CBOW. CMOW gets reasonably close to CBOW on some downstream tasks. However, CMOW does not in general supersede CBOW embeddings. This can be explained by the fact that CBOW is stronger at word content memorization, which is known to highly correlate with the performance on most downstream tasks ). Yet, CMOW has an increased performance on the TREC question type classification task (88.0 compared to 85.6). The rationale is that this particular TREC task belongs to a class of downstream tasks that require capturing other linguistic properties apart from Word Content.Due to joint training, our hybrid model learns to pick up the best features from CBOW and CMOW simultaneously. It enables both models to focus on their respective strengths. This can best be seen by observing that H-CMOW almost completely loses its ability to memorize word content. In return, H-CMOW has more capacity to learn other properties, as seen in the increase in performance at BShift and others. A complementary behavior can be observed for H-CBOW, whose scores on Word Content are increased. Consequently, with an 8% improvement on average, the hybrid model is substantially more linguistically informed than CBOW. This transfers to an overall performance improvement by 1.2% on average over 11 supervised downstream tasks, with large improvements on sentiment analysis tasks (SST2, SST5), question classification (TREC), and the sentence representation benchmark (STS-B). The improvements on these tasks is expected because they arguably depend on word order information. On the other tasks, the differences are small. Again, this can be explained by the fact that most tasks in the SentEval framework mainly depend on word content memorization, where the hybrid model does not improve upon CBOW.Please note, the models in our study do not represent the state-of-the-art for sentence embeddings. BID24 show that better scores are achieved by LSTMs and Transformer models, but also by averaging word embedding from fastText BID21. These embeddings were trained on the CBOW objective, and are thus very similar to our models. However, they are trained on large corpora (600B tokens vs 3B in our study), use large vocabularies (2M vs 30k in our study), and incorporate numerous tricks to further enhance the quality of their models: word subsampling, subword-information, phrase representation, n-gram representations, position-dependent weighting, and corpus de-duplication. In the present study, we focus on comparing CBOW, CMOW, and the hybrid model in a scenario where we have full control over the independent variables. To single out the effect of the independent variables better, we keep our models relatively simple. 
Our analysis yields interesting insights on what our models learn when trained separately or jointly, which we consider more valuable in the long term for the research field of text representation learning. We offer an efficient order-aware extension to embedding algorithms from the bag-of-words family. Our 784-dimensional CMOW embeddings can be computed at the same rate as CBOW embeddings. In our experiments, we empirically measured 71k sentences per second for CMOW vs. 61k for CBOW. This is because of the fast implementation of matrix multiplication on GPUs. It allows us to encode sentences approximately 5 times faster than using a simple Elman RNN of the same size (12k sentences per second). Our matrix embedding approach also offers valuable theoretical advantages over RNNs and other autoregressive models. Matrix multiplication is associative, such that only log2(n) sequential steps are necessary to encode a sequence of size n. Besides parallelization, dynamic programming techniques can also be employed to further reduce the number of matrix multiplication steps, e.g., by pre-computing frequent bigrams. We therefore expect our matrix embedding approach to be specifically well-suited for large-scale, time-sensitive text encoding applications. Our hybrid model serves as a blueprint for using CMOW in conjunction with other existing embedding techniques such as fastText BID21. We have presented the first efficient, unsupervised learning scheme for the word order aware Compositional Matrix Space Model. We showed that the resulting sentence embeddings capture linguistic features that are complementary to CBOW embeddings. We thereupon presented a hybrid model with CBOW that is able to combine the complementary strengths of both models to yield an improved downstream task performance, in particular on tasks that depend on word order information. Thus, our model narrows the gap in terms of representational power between simple word embedding based sentence encoders and highly non-linear recurrent sentence encoders. We made the code for this paper available at https://github.com/florianmai/word2mat. The statement that we formally prove is the following. For any sequence s = s_1 ... s_n: DISPLAYFORM0. The base case (n = 1) follows trivially because the expected value of each entry is the mean of the normal distribution. For the induction step, let E[DISPLAYFORM1]. In Section 3.2, we describe a more general training objective than the classical CBOW objective from BID19. The original objective always sets the center word from the window of tokens (w_{t-c}, ..., w_{t+c}) as the target word, w_O = w_t. In preliminary experiments, this did not yield satisfactory results. We believe that this objective is too simple for learning sentence embeddings that capture semantic information. Therefore, we experimented with a variant where the target word is sampled randomly from a uniform distribution, w_O ∼ U({w_{t-c}, ..., w_{t+c}}). To test the effectiveness of this modified objective, we evaluate it with the same experimental setup as described in Section 4. TAB3 lists the results on the linguistic probing tasks. CMOW-C and CBOW-C refer to the models where the center word is used as the target. CMOW-R and CBOW-R refer to the models where the target word is sampled randomly. While CMOW-R and CMOW-C perform comparably on most probing tasks, CMOW-C yields 5 points lower scores on WordContent and BigramShift.
Consequently, CMOW-R also outperforms CMOW-C on 10 out of 11 supervised downstream tasks and on all unsupervised downstream tasks, as shown in TAB4, respectively. On average over all downstream tasks, the relative improvement is 20.8%. For CBOW, the scores on downstream tasks increase on some tasks and decrease on others. The differences are minuscule. On average over all 16 downstream tasks, CBOW-R scores 0.1% lower than CBOW-C. In Section 3.3, we present a novel random initialization strategy. We argue why it is more adequate for training CMSMs than classic strategies that initialize all parameters with random values close to zero, and use it in our experiments to train CMOW. To verify the effectiveness of our initialization strategy empirically, we evaluate it with the same experimental setup as described in Section 4. The only difference is the initialization strategy, where we include Glorot initialization BID8 and the standard initialization from N(0, 0.1). TAB6 shows the results on the probing tasks. While Glorot achieves slightly better results on BShift and TopConst, CMOW's ability to memorize word content is improved by a wide margin by our initialization strategy. This again affects the downstream performance as shown in TAB7 and 9, respectively: 7 out of 11 supervised downstream tasks and 4 out of 5 unsupervised downstream tasks improve. On average, the relative improvement of our strategy compared to Glorot initialization is 2.8%. | [
36 × 0, 1, 203 × 0   (240 source_labels values in total; the single 1 is at index 36)
] | H1MgjoR9tQ | We present a novel training scheme for efficiently obtaining order-aware sentence representations. |
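The order-aware composition described in the record above (each word stored as a d x d matrix, a sentence encoded as the ordered product of its word matrices, with associativity allowing roughly log2(n) sequential steps) can be sketched in a few lines. This is a minimal illustration, not the released word2mat code: the identity-plus-small-noise initialization and the pairwise reduction schedule are assumptions made for the example.

```python
import numpy as np

def encode_cmow(word_matrices):
    """Order-aware sentence encoding: multiply d x d word matrices in sentence
    order, then flatten the product into a d^2 vector. Because matrix
    multiplication is associative, adjacent pairs can be reduced in parallel,
    so only about log2(n) sequential steps are needed for n words."""
    mats = list(word_matrices)
    while len(mats) > 1:
        reduced = [mats[i] @ mats[i + 1] for i in range(0, len(mats) - 1, 2)]
        if len(mats) % 2 == 1:          # carry the unpaired last matrix forward
            reduced.append(mats[-1])
        mats = reduced
    return mats[0].reshape(-1)

# toy usage: a 5-word sentence with d = 28 yields a 784-dimensional embedding
d = 28
rng = np.random.default_rng(0)
lookup = [np.eye(d) + 0.01 * rng.standard_normal((d, d)) for _ in range(5)]  # assumed init
print(encode_cmow(lookup).shape)  # (784,)
```

By contrast, a CBOW-style encoder would simply sum the per-word vectors, which is why the two encoders end up capturing complementary properties (word content vs. word order).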
This paper proposes Metagross (Meta Gated Recursive Controller), a new neural sequence modeling unit. Our proposed unit is characterized by recursive parameterization of its gating functions, i.e., gating mechanisms of Metagross are controlled by instances of itself, which are repeatedly called in a recursive fashion. This can be interpreted as a form of meta-gating and recursively parameterizing a recurrent model. We postulate that our proposed inductive bias provides modeling benefits pertaining to learning with inherently hierarchically-structured sequence data (e.g., language, logical or music tasks). To this end, we conduct extensive experiments on recursive logic tasks (sorting, tree traversal, logical inference), sequential pixel-by-pixel classification, semantic parsing, code generation, machine translation and polyphonic music modeling, demonstrating the widespread utility of the proposed approach, i.e., achieving state-of-the-art (or close) performance on all tasks. Sequences are fundamentally native to the world we live in, i.e., language, logic, music and time are all well expressed in sequential form. To this end, the design of effective and powerful sequential inductive biases has far-reaching benefits across many applications. Across many of these domains, e.g., natural language processing or speech, the sequence encoder lives at the heart of many powerful state-of-the-art model architectures. Models based on the notion of recurrence have enjoyed pervasive impact across many applications. In particular, the best recurrent models operate with gating functions that not only ameliorate vanishing gradient issues but also enjoy fine-grain control over temporal compositionality (; . Specifically, these gating functions are typically static and trained via an alternate transformation over the original input. In this paper, we propose a new sequence model that recursively parameterizes the recurrent unit. More concretely, the gating functions of our model are now parameterized repeatedly by instances of itself which imbues our model with the ability to reason deeply 1 and recursively about certain inputs. To achieve the latter, we propose a soft dynamic recursion mechanism, which softly learns the depth of recursive parameterization at a per-token basis. Our formulation can be interpreted as a form of meta-gating since temporal compositionality is now being meta-controlled at various levels of abstractions. Our proposed method, Meta Gated Recursive Controller Units (METAGROSS), marries the benefits of recursive reasoning with recurrent models. Notably, we postulate that this formulation brings about benefits pertaining to modeling data that is instrinsically hierarchical (recursive) in nature, e.g., natural language, music and logic, an increasingly prosperous and emerging area of research (; ;). While the notion of recursive neural networks is not new, our work is neither concerned with syntax-guided composition (; ; nor unsupervised grammar induction (; ; ;). Instead, our work is a propulsion on a different frontier, i.e., learning recursively parameterized models which bears a totally different meaning. Overall, the key contributions of this work are as follows: • We propose a new sequence model. Our model is distinctly characterized by recursive parameterization of recurrent gates, i.e., compositional flow is controlled by instances of itself,á la repeatedly and recursively. 
We propose a soft dynamic recursion mechanism that dynamically and softly learns the recursive depth of the model at a token level. • We propose a non-autoregressive parallel variation of METAGROSS that, when equipped with the standard Transformer model, leads to gains in performance. • We evaluate our proposed method on a potpourri of sequence modeling tasks, i.e., logical recursive tasks (sorting, tree traversal, logical inference), pixel-wise sequential image classification, semantic parsing, neural machine translation and polyphonic music modeling. METAGROSS achieves state-of-the-art performance (or close) on all tasks. This section introduces our proposed model. METAGROSS is fundamentally a recurrent model. Our proposed model accepts a sequence of vectors X ∈ R^{ℓ×d} as input. The main unit, h_t = Metagross_n(x_t, h_{t-1}), is defined recursively, where σ_r is a nonlinear activation such as tanh and σ_s is the sigmoid activation function. In a nutshell, the Metagross unit recursively calls itself until a max depth L is hit. When n = L, f_t and o_t, the forget and output gates of METAGROSS at time step t at the maximum depth L, are parameterized directly rather than by further recursive calls. We also include an optional residual connection h^n_t = h_t + x_t to facilitate gradient flow down the recursive parameterization of METAGROSS. We propose learning the depth of recursion in a data-driven fashion. To learn α_t, β_t, we use F_*(x_t) = W x_t + b, a simple linear transformation layer applied to the sequence X across the temporal dimension. Intuitively, α, β control the extent of recursion, enabling a soft depth pertaining to the hierarchical parameterization. Alternatively, we may also consider a static variation where the same value of α, β is computed based on global information from the entire sequence. Note that this strictly cannot be used for autoregressive decoding. Finally, we note that it is also possible to assign α ∈ R, β ∈ R to be trainable scalar parameters. Intuitively, F^n_* for * ∈ {F, O, Z} are level-wise parameters of METAGROSS. We parameterize F_n with either level-wise RNN units or simple linear transformations. We postulate that METAGROSS can also be useful as a non-autoregressive parallel model. This can be interpreted as a form of recursive feed-forward layer that is used in place of recurrent METAGROSS for speed benefits. In early experiments, we find this a useful enhancement to state-of-the-art Transformer models. In the non-autoregressive variant of METAGROSS, we dispense with the reliance on the previous hidden state. This can be used in place of any position-wise feed-forward layer. In this case, note that F^n_*(x_t) are typically position-wise functions as well. We conduct experiments on a suite of diagnostic synthetic tasks and real-world tasks. We evaluate our model on three diagnostic logical tasks as follows: • Task 1 (SORT SEQUENCES) - The input to the model is a sequence of integers. The correct output is the sorted sequence of integers. Since mapping sorted inputs to outputs can be implemented in a recursive fashion, we evaluate our model's ability to better model recursively structured sequence data. An example input-output pair would be 9, 1, 10, 5, 3 → 1, 3, 5, 9, 10. • Task 2 (TREE TRAVERSAL) - We construct a binary tree of maximum depth N. The goal is to generate the postorder tree traversal given the inorder and preorder traversal of the tree. Note that this is known to arrive at only one unique solution.
The constructed trees have random sparsity, where we assign a probability p of growing the tree up to maximum depth N. Hence, the trees can be of varying depths. This requires inferring hierarchical structure and long-term reasoning across sequences. We concatenate the postorder and inorder sequences, delimited by a special token. We evaluate on n ∈ {3, 4, 5, 8, 10}. For n = {5, 8}, we ensure that each tree traversal has at least 10 tokens. For n = 10, we ensure that each path has at least 15 tokens. An example input-output pair would be 13, 15, 4, 7, 5, X, 13, 4, 15, 5, 7 → 7, 15, 13, 4, 5. • Task 3 (LOGICAL INFERENCE) - We use the standard logical inference dataset proposed in prior work. This is a classification task in which the goal is to determine the semantic equivalence of two statements expressed with logic operators such as not, and, and or. The vocabulary consists of six words and three logic operators. As per prior work, the model is trained on sequences with 6 or fewer operations and evaluated on sequences of 6 to 12 operations. For Task 1 and Task 2, we frame these tasks as Seq2Seq tasks and evaluate models on exact match accuracy and perplexity (P) metrics. We use a standard encoder-decoder architecture with attention. We vary the encoder module with BiLSTMs, Stacked BiLSTMs (3 layers) and Ordered Neuron LSTMs. For Task 3 (logical inference), we use the common setting in other published works. Results on Sorting and Tree Traversal. Table 1 reports our results on the Sorting and Tree Traversal tasks (its columns cover Tree Traversal with n = 3, 4, 5, 8, 10 and Sort with n = 5, 10). All models solve the task with n = 3. However, the task gets increasingly harder with a greater maximum possible length and largely still remains a challenge for neural models today. The relative performance of METAGROSS is on the whole better than any of the baselines, especially pertaining to perplexity. We also found that S-BiLSTMs are always better than LSTMs on this task and Ordered LSTMs are slightly worse than vanilla BiLSTMs. However, on sorting, ON-LSTMs are much better than standard BiLSTMs and S-BiLSTMs. METAGROSS achieves state-of-the-art performance. Table 2 reports our results on the logical inference task. We compare mainly with other published work. METAGROSS is a strong and competitive model on this task, outperforming ON-LSTM by a wide margin (+12% on the longest number of operations). Performance of our model also exceeds Tree-LSTM, which has access to ground truth syntax. Our model achieves state-of-the-art performance on this dataset even when considering models with access to syntactic information. We evaluate our model on its ability to model and capture long-range dependencies. More specifically, the sequential pixel-wise image classification problem treats pixels in images as sequences. We use the well-established pixel-wise MNIST and CIFAR-10 datasets. We use a 3-layered METAGROSS with 128 hidden units each. Results on Pixel-wise Image Classification. Table 3 reports the results of METAGROSS against other published works. Our method achieves state-of-the-art performance on the CIFAR-10 dataset, outperforming the recent Trellis Network (b). On the other hand, the results on MNIST are reasonable, outperforming a wide range of other published works. On top of that, our method has 8 times fewer parameters than Trellis Network (b) while achieving similar or better performance. This ascertains that METAGROSS is a reasonably competitive long-range sequence encoder.
We run our experiments on publicly released code, replacing the recurrent decoder with our METAGROSS decoder. Hyperparameter details followed the original codebase. (Only fragments of Table 4's baseline rows survive in the extracted text: one baseline at 85.7 / 85.3, ASN+Att at 87.1 / 85.9, and TranX at 88.) Table 4 reports our experimental results on Semantic Parsing (GEO, ATIS, JOBS) and Code Generation (DJANGO). We observe that TranX + METAGROSS outperforms all competitor approaches, achieving state-of-the-art performance. More importantly, the performance gain over the base TranX method allows us to observe the ablative benefits of METAGROSS. We conduct experiments on two IWSLT datasets, which are collections derived from TED talks. Specifically, we compare on the IWSLT 2014 German-English and IWSLT 2015 English-Vietnamese datasets. We compare against a suite of published and strong baselines. For our method, we replaced the multi-head aggregation layer in the Transformer networks with a parallel non-autoregressive adaptation of METAGROSS. The base models are all linear layers. For our experiments, we use the standard implementation and hyperparameters in Tensor2Tensor, using the small (S) and base (B) settings for Transformers. Model averaging is used, and a beam size of 8/4 and a length penalty of 0.6 are adopted for De-En and En-Vi respectively. For our model, max depth is tuned amongst {1, 2, 3}. We also ensure to compare, in an ablative fashion, our own reported runs of the base Transformer models. Results on IWSLT 2014 De-En (BLEU): MIXER 21.83; AC+LL 28.53; NPMT 28.96; Dual Transfer 32.35; Transformer S 32.86; Layer-wise 35 (the remaining entries of this table are truncated in the extracted text). Table 6, experimental results on Neural Machine Translation on IWSLT 2015 En-Vi (BLEU): 23.30 (baseline name lost in extraction); Att-Seq2Seq 26.10; NPMT 27.69; NPMT + LM 28.07; Transformer B 28.43; Transformer B + METAGROSS 30.81. We evaluate METAGROSS on polyphonic music modeling. We use three well-established datasets, namely Nottingham, JSB Chorales and Piano Midi. The inputs to the model are 88-bit sequences, each corresponding to the 88 keys of the piano. The task is evaluated on Negative Log-likelihood (NLL). We compare with a wide range of published works. Table 7, experimental results (NLL) on Polyphonic Music Modeling, with columns Nott / JSB / Piano: GRU (Chung et al.) 3.13 / 8.54 / 8.82; LSTM (Song et al.) 3.25 / 8.61 / 7.99; G2-LSTM (Li et al.) 3.21 / 8.67 / 8.18; B-LSTM (Song et al.) 3.16 / 8.30 / 7.55; TCN (Bai et al.) 3.07 / 8.10 / -; TCN (our run) 2.95 / 8.13 / 7.53; METAGROSS 2.88 / 8.12 / 7.49. Table 7 reports our scores on this task. METAGROSS achieves state-of-the-art performance on the Nottingham and Piano Midi datasets, outperforming a wide range of competitive models such as Gumbel Gate LSTMs (b). This section reports some analysis and discussion regarding the proposed model. Table 9: Optimal Maximum Depth N and base unit for different tasks. Table 8 reports some ablation studies on the semantic parsing and code generation tasks. We observe that the base unit and optimal maximum depth used are task-dependent. For the ATIS dataset, using the linear transform as the base unit performs best. Conversely, the linear base unit performs worse than the recurrent base unit (LSTM) on the DJANGO dataset. On the whole, we also observed this across other tasks, i.e., the base unit and maximum depth of METAGROSS are critical choices for most tasks. Table 9 reports the optimal max depth N and best base unit for each task. 3.6.2 ANALYSIS OF SOFT DYNAMIC RECURSION. Figure 6 illustrates the depth gate values on the CIFAR and MNIST datasets. These values reflect the α and β values in METAGROSS, signifying how the parameter tree is being constructed during training.
This is reflected as L and R in the figures representing left and right gates. Firstly, we observe that our model indeed builds data-specific parameterization of the network. This is denoted by how METAGROSS builds different 6 trees for CIFAR and MNIST. Secondly, we analyze the dynamic recursion depth with respect to time steps. The key observation that all datasets have very diverse construction of recursive parameters. The recursive gates fluctuate aggressively on CI-FAR while remaining more stable on Music modeling. Moreover, we found that the recursive gates remain totally constant on MNIST. This demonstrates that our model has the ability to adjust the dynamic construction adaptively and can revert to static recursion over time if necessary. We find that compelling. The adaptive recursive depth is made more intriguing by observing how the recursive parameterization alters on CIFAR and Music datasets. From Figure 8 we observe that the structure of the network changes in a rhythmic fashion, in line with our intuition of musical data. When dealing with pixel information, the tree structure changes adaptively according to the more complex information processed by the network. The study of effective inductive biases for sequential representation learning has been a prosperous research direction. This has spurred on research across multiple fronts, starting from gated recurrent models (;, convolution (a) to the recently popular self-attention based models . The intrinsic hierarchical structure native to many forms of sequences have long fascinated and inspired many researchers (; ; . The study of recursive networks, popularized by has provided a foundation for learning syntax-guided composition in language processing research. Along the same vein, proposed Tree-LSTMs which guide LSTM composition with grammar. Recent attempts have been made to learn this process without guidance nor syntax-based supervision (; ; ;). Ordered Neuron LSTMs proposed structured gating mechanisms, imbuing the recurrent unit with a tree-structured inductive bias. shows that recurrence is important for modeling hierarchical structure. Notably, learning hierachical representations across multiple time-scales (; ; ; ;) have also demonstrated reasonable success. Learning an abstraction and controller over a base recurrent unit is also another compelling direction. First proposed by Fast Weights , several recent works explore this notion. HyperNetworks learns to generate weights for another recurrent unit, i.e., a form of relaxed weight sharing. On the other hand, RCRN explicitly parameterizes the gates of a RNN unit with other RNN units. Recent attempts to speed up the recurrent unit are also reminiscent of this particular notion . The marriage of recursive and recurrent architectures is also notable. This direction is probably the closest relevance to our proposed method, although with vast differences. proposed Recursive Recurrent Networks for machine translation which are concerned with the more traditional syntactic supervision concept of vanilla recursive nets. proposed RRNet, which learns hierarchical structures on the fly. RR-Net proposes to learn to split or merge nodes at each time step, which makes it reminiscent of . proposed doubly recurrent decoders for tree-structured decoding. The core of their method is a depth and breath-wise recurrence which is similar to our model. However, METAGROSS is concerned with learning gating controllers which is different from the objective of decoding trees. 
Our work combines the idea of external meta-controllers with recursive architectures. In particular, our recursive parameterization is also a form of dynamic memory, which gives our model improved expressiveness in a similar spirit to memory-augmented recurrent models. We proposed Meta Gated Recursive Controller Units (METAGROSS), a sequence model characterized by recursive parameterization of gating functions. Our proposed method achieves very promising and competitive results on a spectrum of benchmarks across multiple modalities (e.g., language, logic, music). We propose a non-autoregressive variation of METAGROSS, which allows simple drop-in enhancement to state-of-the-art Transformers. We study and visualise our network as it learns a dynamic recursive parameterization, shedding light on the expressiveness and flexibility to learn dynamic parameter structures depending on the data. | [
8 × 0, 1, 155 × 0   (164 source_labels values in total; the single 1 is at index 8)
] | Sygn20VtwH | Recursive Parameterization of Recurrent Models improve performance |
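The recursive gating idea in the record above can be illustrated with a small sketch. The exact cell equations are not recoverable from the extracted text, so everything below is an assumed, simplified reading: gates at level n are produced by further calls of the unit itself until a maximum depth L, where plain transforms take over, and the combination rule f * h_prev + o * tanh(z) is chosen only for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MetagrossSketch:
    """Illustrative recursive gating: the forget/output gates at level n are
    themselves produced by cells at level n + 1, until max depth L, where they
    fall back to plain linear transforms (assumed form, for illustration only)."""
    def __init__(self, d, max_depth, seed=0):
        rng = np.random.default_rng(seed)
        self.L = max_depth
        # level-wise parameters for the forget (f), output (o) and candidate (z) paths
        self.W = [{g: 0.1 * rng.standard_normal((d, 2 * d)) for g in "foz"}
                  for _ in range(max_depth + 1)]

    def cell(self, x, h_prev, n=1):
        xh = np.concatenate([x, h_prev])
        z = np.tanh(self.W[n]["z"] @ xh)
        if n == self.L:                              # deepest level: plain transforms
            f = sigmoid(self.W[n]["f"] @ xh)
            o = sigmoid(self.W[n]["o"] @ xh)
        else:                                        # gates controlled by instances of the cell itself
            f = sigmoid(self.cell(x, h_prev, n + 1))
            o = sigmoid(self.cell(x, h_prev, n + 1))
        return f * h_prev + o * z                    # assumed combination rule

d = 8
unit = MetagrossSketch(d, max_depth=3)
h = np.zeros(d)
for x in np.random.default_rng(1).standard_normal((5, d)):  # a length-5 toy sequence
    h = unit.cell(x, h)
print(h.shape)  # (8,)
```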
Which generative model is the most suitable for Continual Learning? This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We used two quantitative metrics to estimate the generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST and CIFAR10). We found that among all models, the original GAN performs best and among Continual Learning strategies, generative replay outperforms all other methods. Even if we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable, and remains a challenge. Learning in a continual fashion is a key aspect of cognitive development among biological species BID4. In Machine Learning, such a learning scenario has been formalized as a Continual Learning (CL) setting BID30 BID21 BID27 BID29 BID26. The goal of CL is to learn from a data distribution that changes over time without forgetting crucial information. Unfortunately, neural networks trained with backpropagation are unable to retain previously learned information when the data distribution changes, an infamous problem called "catastrophic forgetting" BID6. Successful attempts at CL with neural networks have to overcome the inexorable forgetting happening when tasks change. In this paper, we focus on generative models in Continual Learning scenarios. Previous work on CL has mainly focused on classification tasks BID14 BID23 BID29 BID26. Traditional approaches are regularization, rehearsal and architectural strategies, as described in Section 2. However, discriminative and generative models strongly differ in their architecture and learning objective. Several methods developed for discriminative models are thus not directly extendable to the generative setting. Moreover, successful CL strategies for generative models can be used, via sample generation as detailed in the next section, to continually train discriminative models. Hence, studying the viability and success/failure modes of CL strategies for generative models is an important step towards a better understanding of generative models and Continual Learning in general. We conduct a comparative study of generative models with different CL strategies. In our experiments, we sequentially learn generation tasks. We perform ten disjoint tasks, using commonly used benchmarks for CL: MNIST, Fashion MNIST BID34 and CIFAR10 BID15. In each task, the model gets a training set from one new class, and should learn to generate data from this class without forgetting what it learned in previous tasks; see Fig. 1 for an example with tasks on MNIST. We evaluate several generative models: Variational Auto-Encoders (VAEs), Generative Adversarial Networks (GANs), their conditional variants (CVAE and CGAN), Wasserstein GANs (WGANs) and Wasserstein GANs with Gradient Penalty (WGAN-GP). Figure 1: The disjoint setting considered. At task i the training set includes images belonging to category i, and the task is to generate samples from all previously seen categories. Here MNIST is used as a visual example, but we experiment in the same way with Fashion MNIST and CIFAR10. We compare approaches taken from CL in a classification setting: fine-tuning, rehearsal, regularization and generative replay.
Generative replay consists in using generated samples to maintain knowledge from previous tasks. All CL approaches are applicable to both variational and adversarial frameworks. We evaluate with two quantitative metrics, Fréchet Inception Distance BID10 and Fitting Capacity BID17, as well as visualization. Also, we discuss the data availability and scalability of CL strategies. • Evaluating a wide range of generative models in a Continual Learning setting.• Highlight success/failure modes of combinations of generative models and CL approaches.• Comparing, in a CL setting, two evaluation metrics of generative models. We describe related work in Section 2, and our approach in Section 3. We explain the experimental setup that implements our approach in Section 4. Finally, we present our and discussion in Section 5 and 6, before concluding in Section 7. Continual Learning has mainly been applied to discriminative tasks. On this scenario, classification tasks are learned sequentially. At the end of the sequence the discriminative model should be able to solve all tasks. The naive method of fine-tuning from one task to the next one leads to catastrophic forgetting BID6, i.e. the inability to keep initial performance on previous tasks. Previously proposed approaches can be classified into four main methods. The first method, referred to as rehearsal, is to keep samples from previous tasks. The samples may then be used in different ways to overcome forgetting. The method can not be used in a scenario where data from previous tasks is not available, but it remains a competitive baseline BID23 BID21. Furthermore, the scalability of this method can also be questioned because the memory needed to store samples grows linearly with the number of tasks. The second method employs regularization. Regularization constrains weight updates in order to maintain knowledge from previous tasks and thus avoid forgetting. Elastic Weight Consolidation (EWC) BID14 has become the standard method for this type of regularization. It estimates the weights' importance and adapt the regularization accordingly. Extensions of EWC have been proposed, such as online EWC BID26. Another well known regularization method is distillation, which transfers previously learned knowledge to a new model. Initially proposed by BID11, it has gained popularity in CL BID20 BID23 BID33 BID29 as it enables the model to learn about previous tasks and the current task at the same time. The third method is the use of a dynamic architecture to maintain past knowledge and learn new information. Remarkable approaches that implement this method are Progressive Networks BID24, Learning Without Forgetting (LWF) BID19 and PathNet BID5.The fourth and more recent method is generative replay BID29 BID31, where a generative model is used to produce samples from previous tasks. This approach has also been referred to as pseudo-rehearsal. Discriminative and generative models do not share the same learning objective and architecture. For this reason, CL strategies for discriminative models are usually not directly applicable to generative models. Continual Learning in the context of generative models remains largely unexplored compared to CL for discriminative models. Among notable previous work, BID27 successfully apply EWC on the generator of Conditional-GANs (CGANS), after observing that applying the same regularization scheme to a classic GAN leads to catastrophic forgetting. 
However, their work is based on a scenario where two classes are presented first, and then unique classes come sequentially, e.g., the first task is composed of the 0 and 1 digits of the MNIST dataset, and the model is then presented with only one digit at a time in the following tasks. This is likely due to the failure of CGANs on single digits, which we observe in our experiments. Moreover, the method is shown to work on CGANs only. Another method for generative Continual Learning is Variational Continual Learning (VCL) BID21, which adapts variational inference to a continual setting. They exploit the online update from one task to another inspired by Bayes' rule. They successfully experiment with VAEs on a single-task scenario. VCL has the advantage of being a parameter-free method; however, they experiment only on VAEs. Moreover, since they use a multi-head architecture with specific weights for each task, the task index is needed for inference. A second method experimented on VAEs is to use a student-teacher method where the student learns the current task while the teacher retains knowledge BID22. Finally, VASE BID0 is a third method, also experimented only on VAEs, which allocates spare representational capacity to new knowledge, while protecting previously learned representations from catastrophic forgetting by using snapshots (i.e. weights) of the previous model. A different approach, introduced by BID29, is an adaptation of the generative replay method mentioned in Section 2.1. It is applicable to both adversarial and variational frameworks. It uses two generative models: one which acts as a memory, capable of generating data from all past tasks, and one that learns to generate data from all past tasks and the current task. It has mainly been used as a method for Continual Learning of discriminative models BID29 BID31 BID28. Recently, BID32 have developed a similar approach called Memory Replay GANs, where they use Generative Replay combined with replay alignment, a distillation scheme that transfers previous knowledge from a conditional generator to the current one. However, they note that this method leads to mode collapse because it could favor learning to generate a few class instances rather than a wider range of class instances. Typical previous work on Continual Learning for generative models focuses on presenting a novel CL technique and comparing it to previous approaches, on one type of generative model (e.g. GAN or VAE). On the contrary, we focus on searching for the best association of generative model and CL strategy. For now, empirical evaluation remains the only way to find the best performing combinations. Hence, we compare several existing CL strategies on a wide variety of generative models with the objective of finding the most suited generative model for Continual Learning. In this process, evaluation metrics are crucial. CL approaches are usually evaluated by computing a metric at the end of each task. The method that maintains the highest performance is best. In the discriminative setting, classification accuracy is the most commonly used metric. Here, as we focus on generative models, there is no consensus on which metric should be used. Thus, we use and compare two quantitative metrics. The Fréchet Inception Distance (FID) BID10 is a commonly used metric for evaluating generative models. It is designed to improve on the Inception Score (IS) BID25, which has many intrinsic shortcomings, as well as additional problems when used on a dataset different from ImageNet BID2.
FID circumvents these issues by comparing the statistics of generated samples to real samples, instead of evaluating generated samples directly. BID10 propose using the Fréchet distance between two multivariate Gaussians: FID = ||µ_r − µ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), where the statistics (µ_r, Σ_r) and (µ_g, Σ_g) are the mean and covariance of the activations of a specific layer of a discriminative neural network trained on ImageNet, for real and generated samples respectively. A lower FID corresponds to more similar real and generated samples, as measured by the distance between their activation distributions. Originally, the activations should be taken from a given layer of a given Inception-v3 instance; however, this setting can be adapted with another classifier in order to compare a set of models with each other BID17. A different approach is to use labeled generated samples from a generator G (GAN or VAE) to train a classifier and evaluate it afterwards on real data BID17. This evaluation, called the Fitting Capacity of G, is the test accuracy of a classifier trained with G's samples. It measures the generator's ability to train a classifier that generalizes well on a test set, i.e., the generator's ability to fit the distribution of the test set. This method aims at evaluating generative models on complex characteristics of data and not only on their feature distribution. In the original paper, the authors annotated samples by generating them conditionally, either with a conditional model or by using one unconditional model for each class. In this paper, we also use an adaptation of the Fitting Capacity where data from unconditional models are labelled by an expert network trained on the dataset. We believe that using these two metrics is complementary. FID is a commonly used metric based solely on the distribution of image features. In order to have a complementary evaluation, we use the Fitting Capacity, which evaluates samples on a classification criterion rather than on feature distributions. For all the progress made in quantitative metrics for evaluating generative models BID3, qualitative evaluation remains a widely used and informative method. While visualizing samples provides an instantaneous detection of failure, it does not provide a way to compare two well-performing models. It is not a rigorous evaluation and it may be misleading when evaluating sample variability. We now describe our experimental setup: data, tasks, and evaluated approaches. Our code is available online. Our main experiments use 10 sequential tasks created using the MNIST, Fashion MNIST and CIFAR10 datasets. For each dataset, we define 10 sequential tasks, where each task corresponds to learning to generate a new class and all the previous ones (see Fig. 1 for an example on MNIST). Both evaluations, FID and Fitting Capacity of generative models, are computed at the end of each task. We use 6 different generative models. We experiment with the original and conditional versions of GANs BID7 and VAEs BID13. We also added WGAN and a variant of it, WGAN-GP BID8, as they are commonly used baselines that supposedly improve upon the original GAN. We focus on strategies that are usable in both the variational and adversarial frameworks. We use 3 different strategies for Continual Learning of generative models, which we compare to 3 baselines. Our experiments are done on 8 seeds with 50 epochs per task for MNIST and Fashion MNIST using Adam BID12 for optimization (for hyper-parameter settings, see Appendix F). For CIFAR10, we experimented with the best-performing CL strategy.
The first baseline is Fine-tuning, which consists in ignoring catastrophic forgetting and is essentially a lower bound of the performance. Our other baselines are two upper bounds: Upperbound Data, for which one generative model is trained on joint data from all past tasks, and Upperbound Model, for which one separate generator is trained for each task. For Continual Learning strategies, we first use a vanilla Rehearsal method, where we keep a fixed number of samples of each observed task, and add those samples to the training set of the current generative model. We balance the resulting dataset by copying the saved samples so that each class has the same number of samples. The number of samples selected, here 10, is motivated by the results in Fig. 7a and 7b, where we show that 10 samples per class is enough to get a satisfactory but not maximal validation accuracy for a classification task on MNIST and Fashion MNIST. As the Fitting Capacity shares the same test set, we can compare the original accuracy with 10 samples per task to the final Fitting Capacity. A higher Fitting Capacity shows that the memory prevents catastrophic forgetting. An equal Fitting Capacity means overfitting of the saved samples, and a lower Fitting Capacity means that the generator failed to even memorize these samples. We also experiment with EWC. We followed the method described by BID27 for GANs, i.e. the penalty is applied only on the generator's weights, and for VAEs we apply the penalty on both the encoder and decoder. As tasks are sequentially presented, we choose to update the diagonal of the Fisher information matrix by cumulatively adding the new one to the previous one. The last method is Generative Replay, described in Section 2.2. Generative replay is a dual-model approach where a "frozen" generative model G_{t−1} is used to sample from previously learned distributions and a "current" generative model G_t is used to learn the current distribution as well as G_{t−1}'s distribution. When a task is over, G_{t−1} is replaced by a copy of G_t, and learning can continue. The figures we report show the evolution of the metrics through tasks. Both FID and Fitting Capacity are computed on the test set. A well-performing model should increase its Fitting Capacity and decrease its FID. We observe a strong correlation between the Fitting Capacity and FID (see FIG0 for an example on GAN for MNIST and Appendix C for full results). Nevertheless, the Fitting Capacity results are more stable: over the 8 random seeds we used, the standard deviations are smaller than in the FID results. For that reason, we focus our interpretation on the Fitting Capacity results. Our main results with Fitting Capacity are displayed in FIG1 and by class in Fig. 4. We observe that, for the adversarial framework, Generative Replay outperforms other approaches by a significant margin. However, for the variational framework, the Rehearsal approach was the best performing. The Rehearsal approach worked quite well but is unsatisfactory for CGAN and WGAN-GP. Indeed, the Fitting Capacity is lower than the accuracy of a classifier trained on 10 samples per class (see Fig. 7a and 7b in the Appendix). In our setting, EWC is not able to overcome catastrophic forgetting and performs as well as the naive Fine-tuning baseline, which is contradictory with the results of BID27, who found EWC successful in a slightly different setting. We replicated their results in a setting where there are two classes per task (see Appendix E for details), showing the strong effect of task definition.
In BID27, the authors already found that EWC did not work with non-conditional models but showed successful results with conditional models (i.e. CGANs). The difference comes from the experimental setting. In BID27, the training sequence starts with a task with two classes. Hence, when the CGAN is trained, it is possible for the Fisher Matrix to understand the influence of the class-index input vector c. In our setting, since there is only one class in the first task, the Fisher matrix cannot capture the importance of the class-index input vector c. Hence, as for non-conditional models, the Fisher Matrix is not able to protect weights appropriately and at the end of the second task the model has forgotten the first task. Moreover, since the generator forgot what it learned in the first task, it is only capable of generating samples from one class. Then, the Fisher Matrix will still not capture the influence of c until the end of the sequence. Moreover, we show that even by starting with 2 classes, when there is only one class for the second task, the Fisher matrix is not able to protect the class from the second task in the third task (see Figure 11). Our results do not give a clear distinction between conditional and unconditional models. However, adversarial methods perform significantly better than variational methods. GAN variants are able to produce better, sharper quality and variety of samples, as observed in FIG7 in Appendix G. Hence, adversarial methods seem more viable for CL. We can link the accuracy from 7a and 7b to the Fitting Capacity results. As an example, we can estimate that GAN with Generative Replay is equivalent for both datasets to a memory of approximately 100 samples per class. Catastrophic forgetting can be visualized in Fig. 4. Each column represents the task index and each row the class; the color indicates the Fitting Capacity (FC). Yellow squares show a high FC, blue ones a low FC. We can visualize both the performance of VAE and GAN but also the performance evolution for each class. For Generative Replay, at the end of the task sequence, VAE decreases its performance in several classes whereas GAN does not. For Rehearsal it is the opposite. Concerning the high performance of the original GAN and WGAN with Generative Replay, they benefit from their sample quality and their stability. In comparison, samples from CGAN and WGAN-GP are noisier and samples from VAE and CVAE more blurry (see Fig. 14 in the appendix). However, in the Rehearsal approach, GAN-based models seem much less stable (see TAB0 and FIG1). In this setting the discriminative task is almost trivial for the discriminator, which makes training harder for the generator. In contrast, VAE-based models are particularly effective and stable in the Rehearsal setting (see Fig. 4b). Indeed, their learning objective (pixel-wise error) is not disturbed by low sample variability and their probabilistic hidden variables make them less prone to overfit. However, the Fitting Capacity of Fine-tuning and EWC in TAB0 is higher than expected for unconditional models. As the generator is only able to produce samples from the last task, the Fitting Capacity should be near 10%. This is a downside of using an expert for annotation before computing the Fitting Capacity. Fuzzy samples can be wrongly annotated, which can artificially increase the label variability and thus the Fitting Capacity of low-performing models, e.g., VAE with Fine-tuning. However, this stays lower than the Fitting Capacity of well-performing models.
Incidentally, an important side result is that the Fitting Capacity of conditional generative models is comparable to results in Continual Learning classification. Our best performance in this setting is with CGAN: 94.7% on MNIST and 75.44% on Fashion MNIST. In a similar setting with 2 sequential tasks, which is arguably easier than our setting (one with digits from 0,1,2,3,4 and another with 5,6,7,8,9), a previous approach achieves a performance of 94.91%. This shows that using generative models for CL could be a competitive tool in a classification scenario. It is worth noting that we did not compare our unconditional models' Fitting Capacity results with the classification state of the art. Indeed, in this case, the Fitting Capacity is based on an annotation from an expert not trained in a continual setting. The comparison would then not be fair. In this experiment, we selected the best-performing CL methods on MNIST and Fashion MNIST, Generative Replay and Rehearsal, and tested them on the more challenging CIFAR10 dataset. We compared the two methods to naive Fine-tuning, and to Upperbound Model (one generator for each class). The setting remains the same, one task for each category, for which the aim is to avoid forgetting of previously seen categories. We selected WGAN-GP because it produced the most satisfying samples on CIFAR10 (see Fig. 16 in Appendix G). Results are provided in Fig. 5, where we display images sampled after the 10 sequential tasks, and FID + Fitting Capacity curves throughout training. The Fitting Capacity results show that all four methods fail to generate images that allow learning a classifier that performs well on real CIFAR10 test data. As stated for MNIST and Fashion MNIST, with non-conditional models, when the Fitting Capacity is low, it can be artificially increased by automatic annotation, which makes the difference between curves insignificant in this case. Naive Fine-tuning catastrophically forgets previous tasks, as expected. Rehearsal does not yield satisfactory results. While the FID score shows improvement at each new task, visualization clearly shows that the generator copies samples in memory, and suffers from mode collapse. This confirms our intuition that Rehearsal overfits to the few samples kept in memory. Generative Replay fails; since the dataset is composed of real-life images, the generation task is much harder to complete. We illustrate its failure mode in Figure 17 in Appendix G. As seen in Task 0, the generator is able to produce images that roughly resemble samples of the category, here planes. As tasks are presented, minor generation errors accumulated and snowballed into the result in task 9: samples are blurry and categories are indistinguishable. As a consequence, the FID improves at the beginning of the training sequence, and then deteriorates at each new task. We also trained the same model separately on each task, and while the result is visually satisfactory, the quantitative metrics show that generation quality is not excellent. These negative results show that training a generative model on a sequential task scenario does not reduce to successfully training a generative model on all data or each category, and that state-of-the-art generative models struggle on real-life image datasets like CIFAR10. Designing a CL strategy for these types of datasets remains a challenge.
Rehearsal violates the data availability assumption, often required in CL scenarios, by recording part of the samples. Furthermore, the risk of overfitting is high when only a few samples represent a task, as shown in the CIFAR10 results. EWC and Generative Replay respect this assumption. EWC has the advantage of not requiring any computational overhead during training, but this comes at the cost of computing the Fisher information matrix, and storing its values as well as a copy of previous parameters. The memory needed for EWC to save information from the past is twice the size of the model, which may be expensive in comparison to rehearsal methods. Nevertheless, with Rehearsal and Generative Replay, the model has more and more samples to learn from at each new task, which makes training more costly. Another point we discuss is a recently proposed metric BID32 to evaluate CL for generative models. Their evaluation is defined for conditional generative models. For a given label l, they sample images from the generator conditioned on l and feed them to a pre-trained classifier. If the predicted label of the classifier matches l, then it is considered correct. In our experiments we find that it gives a clear advantage to rehearsal methods. As the generator may overfit the few samples kept in memory, it can maximize the evaluation proposed by BID33, while not producing diverse samples. We present this phenomenon with our experiments in Appendix D. Nevertheless, even if their metric is unable to detect mode collapse or overfitting, it can efficiently expose catastrophic forgetting in conditional models. In this paper, we experimented with the viability and effectiveness of generative models in Continual Learning (CL) settings. We evaluated the considered approaches on commonly used datasets for CL, with two quantitative metrics. Our experiments indicate that on MNIST and Fashion MNIST, the original GAN combined with the Generative Replay method is particularly effective. This method avoids catastrophic forgetting by using the generator as a memory to sample from the previous tasks and hence maintain past knowledge. Furthermore, we shed light on how generative models can learn continually with various methods and present successful combinations. We also reveal that generative models do not perform well enough on CIFAR10 to learn continually. Since generation errors accumulate, they are not usable in a continual setting. The considered approaches have limitations: we rely on a setting where task boundaries are discrete and given by the user. In future work, we plan to investigate automatic detection of task boundaries. Another improvement would be to experiment with smoother transitions between tasks, rather than the disjoint tasks setting. A SAMPLES AT EACH STEP. Figure 11: Reproduction of the EWC experiment BID27 with four tasks. The first task contains the 0 and 1 digits, then digits of 2 for task 2, digits of 3 for task 3, etc. When a task contains only one class, the Fisher information matrix cannot capture the importance of the class-index input vector because it is always fixed to one class. This problem makes the learning setting similar to the non-conditional one, which is known not to work BID27. As a consequence, 0 and 1 are well protected while the following classes are not. Figure 16: WGAN-GP samples on CIFAR10, with results of training on each separate category. The implementation we used is available here: https://github.com/caogang/wgan-gp.
Classes, from 0 to 9, are planes, cars, birds, cats, deer, dogs, frogs, horses, ships and trucks. Figure 17: WGAN-GP samples on 10 sequential tasks on CIFAR10, with Generative Replay. Classes, from 0 to 9, are planes, cars, birds, cats, deer, dogs, frogs, horses, ships and trucks. We observe that generation errors snowball as tasks are encountered, so that the images sampled after the last task are completely blurry. | [
12 × 0, 1, 214 × 0   (227 source_labels values in total; the single 1 is at index 12)
] | S1eFtj0cKQ | A comparative study of generative models on Continual Learning scenarios. |
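The FID computation described in the record above reduces to comparing Gaussian statistics of feature activations. Below is a minimal sketch; the choice of feature extractor is left abstract (the text notes that a classifier other than Inception-v3 can be substituted), and the activations used here are random placeholders.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """FID-style score from two sets of feature activations
    (rows = samples, columns = activation dimensions); lower is better."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):          # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# toy usage with random "activations"
rng = np.random.default_rng(0)
real = rng.standard_normal((500, 64))
fake = rng.standard_normal((500, 64)) + 0.5
print(frechet_distance(real, fake))
```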
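The dual-model Generative Replay schedule described in the same record (a frozen copy of the previous generator supplies samples of old tasks while the current generator trains on replayed plus new data, then itself becomes the frozen copy) can be sketched as a short loop. The ToyGenerator, the replay size, and the per-task class structure are placeholders for illustration, not the authors' implementation.

```python
import copy
import random

class ToyGenerator:
    """Stand-in generator: memorizes its training data and resamples it."""
    def __init__(self):
        self.data = []
    def fit(self, dataset):
        self.data = list(dataset)
    def sample(self, n):
        return [random.choice(self.data) for _ in range(n)]

def generative_replay(task_datasets, replay_per_task=100):
    current, frozen = ToyGenerator(), None
    for t, new_data in enumerate(task_datasets):
        replay = frozen.sample(replay_per_task * t) if frozen else []
        current.fit(replay + list(new_data))     # train on replayed old tasks + the new class
        frozen = copy.deepcopy(current)          # frozen copy becomes the memory for the next task
    return current

# toy usage: 10 "classes", one per task, as in the disjoint setting above
tasks = [[(c, i) for i in range(50)] for c in range(10)]
final = generative_replay(tasks)
print(len({x[0] for x in final.sample(1000)}))   # ideally close to 10 distinct classes
```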
We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning. Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space. Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples. The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem. We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology. The SPU implementation is much simpler than TRPO. In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks. The policy gradient problem in deep reinforcement learning (DRL) can be defined as seeking a parameterized policy with high expected reward. An issue with policy gradient methods is poor sample efficiency BID10 BID21 BID27 BID29. In algorithms such as REINFORCE BID28, new samples are needed for every gradient step. When generating samples is expensive (such as robotic environments), sample efficiency is of central concern. The sample efficiency of an algorithm is defined to be the number of calls to the environment required to attain a specified performance level BID10. Thus, given the current policy and a fixed number of trajectories (samples) generated, the goal of the sample efficiency problem is to construct a new policy with the highest performance improvement possible. To do so, it is desirable to limit the search to policies that are close to the original policy π_{θ_k} BID21 BID29 BID24. Intuitively, if the candidate new policy π_θ is far from the original policy π_{θ_k}, it may not perform better than the original policy because too much emphasis is being placed on the relatively small batch of new data generated by π_{θ_k}, and not enough emphasis is being placed on the relatively large amount of data and effort previously used to construct π_{θ_k}. This guideline of limiting the search to nearby policies seems reasonable in principle, but requires defining a distance η(π_θ, π_{θ_k}) between the current policy π_{θ_k} and the candidate new policy π_θ, and then attempting to solve the constrained optimization problem: maximize over θ the objective Ĵ(π_θ | π_{θ_k}, new data) (1) subject to η(π_θ, π_{θ_k}) ≤ δ (2), where Ĵ(π_θ | π_{θ_k}, new data) is an estimate of J(π_θ), the performance of policy π_θ, based on the previous policy π_{θ_k} and the batch of fresh data generated by π_{θ_k}. The objective attempts to maximize the performance of the updated policy, and the constraint ensures that the updated policy is not too far from the policy π_{θ_k} that was used to generate the data. Several recent papers BID21 BID24 belong to the framework (1)-(2). Our work also strikes the right balance between performance and simplicity. The implementation is only slightly more involved than PPO. Simplicity in RL algorithms has its own merits. This is especially useful when RL algorithms are used to solve problems outside of traditional RL testbeds, which is becoming a trend BID30 BID16.
The methodology is general in that it applies to both discrete and continuous action spaces, and can address a wide variety of constraint types for. Starting with data generated by the current policy, SPU optimizes over a proximal policy space to find an optimal non-parameterized policy. It then solves a supervised regression problem to convert the non-parameterized policy to a parameterized policy, from which it draws new samples. We develop a general methodology for finding an optimal policy in the non-parameterized policy space, and then illustrate the methodology for three different definitions of proximity. We also show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology. While SPU is substantially simpler than NPG/TRPO in terms of mathematics and implementation, our extensive experiments show that SPU is more sample efficient than TRPO in Mujoco simulated robotic tasks and PPO in Atari video game tasks. Off-policy RL algorithms generally achieve better sample efficiency than on-policy algorithms BID8. However, the performance of an on-policy algorithm can usually be substantially improved by incorporating off-policy training BID17, BID26 ). Our paper focuses on igniting interests in separating finding the optimal policy into a two-step process: finding the optimal non-parameterized policy, and then parameterizing this optimal policy. We also wanted to deeply understand the on-policy case before adding off-policy training. We thus compare with algorithms operating under the same algorithmic constraints, one of which is being on-policy. We leave the extension to off-policy to future work. We do not claim state-of-the-art . We consider a Markov Decision Process (MDP) with state space S, action space A, and reward function r(s, a), s ∈ S, a ∈ A. Let π = {π(a|s): s ∈ S, a ∈ A} denote a policy, let Π be the set of all policies, and let the expected discounted reward be: DISPLAYFORM0 where γ ∈ is a discount factor and τ = (s 0, a 0, s 1, . . .) is a sample trajectory. Let A π (s, a) be the advantage function for policy π BID14. Deep reinforcement learning considers a set of parameterized policies Π DL = {π θ |θ ∈ Θ} ⊂ Π, where each policy is parameterized by a neural network called the policy network. In this paper, we will consider optimizing over the parameterized policies in Π DL as well as over the non-parameterized policies in Π. For concreteness, we assume that the state and action spaces are finite. However, our methodology also applies to continuous state and action spaces, as shown in the Appendix. One popular approach to maximizing J(π θ) over Π DL is to apply stochastic gradient ascent. The gradient of J(π θ) evaluated at a specific θ = θ k can be shown to be BID28: DISPLAYFORM1 We can approximate by sampling N trajectories of length T from π θ k: DISPLAYFORM2 for the the future state probability distribution for policy π, and denote π(·|s) for the probability distribution over the action space A when in state s and using policy π. Further denote D KL (π π θ k)[s] for the KL divergence from π(·|s) to π θ k (·|s), and denote the following as the "aggregated KL divergence". DISPLAYFORM3 For the sample efficiency problem, the objective J(π θ) is typically approximated using samples generated from π θ k BID21. Two different approaches are typically used to approximate J(π θ) − J(π θ k). 
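Before turning to those two approximation approaches, it may help to see the basic sampled policy-gradient estimate above in code. The sketch below is a minimal, self-contained illustration for a discrete-action policy network: it forms the Monte Carlo estimate of the gradient from N trajectories of length T by differentiating a surrogate of the form (1/N) Σ log π_θ(a_t|s_t) A_t. The network size, the use of PyTorch, and the synthetic data standing in for real trajectories are our own assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
obs_dim, n_actions, N, T = 8, 4, 5, 20     # sizes are illustrative only

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))

# Synthetic stand-ins for data collected under pi_theta_k:
states = torch.randn(N * T, obs_dim)                 # s_t
actions = torch.randint(0, n_actions, (N * T,))      # a_t ~ pi_theta_k(.|s_t)
advantages = torch.randn(N * T)                      # estimates of A^{pi_theta_k}(s_t, a_t)

# Differentiating this surrogate gives the Monte Carlo policy-gradient estimate:
# grad J ≈ (1/N) * sum_{i,t} grad log pi_theta(a_it | s_it) * A_it
logp = torch.log_softmax(policy(states), dim=-1)
logp_taken = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
surrogate = (logp_taken * advantages).sum() / N
surrogate.backward()                                  # gradients now sit in policy.parameters()
```

In practice the advantages A_it would come from a learned baseline (as in the experiments described later) rather than random placeholders.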
We can make a first order approximation of J(π θ) around θ k BID18 BID29 BID21: DISPLAYFORM0 where g k is the sample estimate. The second approach is to approximate the state distribution BID1: DISPLAYFORM1 DISPLAYFORM2 There is a well-known bound for the approximation BID11. Furthermore, the approximation DISPLAYFORM3 to the first order with respect to the parameter θ. Natural gradient was first introduced to policy gradient by and then in BID18 BID29 BID1 BID21. referred to collectively here as NPG/TRPO. Algorithmically, NPG/TRPO finds the gradient update by solving the sample efficiency problem- with η(π θ, π θ k) =D KL (π θ π θ k), i.e., use the aggregate KL-divergence for the policy proximity constraint. NPG/TRPO addresses this problem in the parameter space θ ∈ Θ. First, it approximates J(π θ) with the first-order approximation andD KL (π θ π θ k) using a similar second-order method. Second, it uses samples from π θ k to form estimates of these two approximations. Third, using these estimates (which are functions of θ), it solves for the optimal θ *. The optimal θ * is a function of g k and of h k, the sample average of the Hessian evaluated at θ k. TRPO also limits the magnitude of the update to ensureD KL (π θ π θ k) ≤ δ (i.e., ensuring the sampled estimate of the aggregated KL constraint is met without the second-order approximation).SPU takes a very different approach by first (i) posing and solving the optimization problem in the non-parameterized policy space, and then (ii) solving a supervised regression problem to find a parameterized policy that is near the optimal non-parameterized policy. A recent paper, Guided Actor Critic (GAC), independently proposed a similar decomposition BID24. However, GAC is much more restricted in that it considers only one specific constraint criterion (aggregated reverse-KL divergence) and applies only to continuous action spaces. Furthermore, GAC incurs significantly higher computational complexity, e.g. at every update, it minimizes the dual function to obtain the dual variables using SLSQP. MPO also independently propose a similar decomposition BID24. MPO uses much more complex machinery, namely, Expectation Maximization to address the DRL problem. However, MPO has only demonstrates preliminary on problems with discrete actions whereas our approach naturally applies to problems with either discrete or continuous actions. In both GAC and MPO, working in the non-parameterized space is a by-product of applying the main ideas in those papers to DRL. Our paper demonstrates that the decomposition alone is a general and useful technique for solving constrained policy optimization. Clipped-PPO ) takes a very different approach to TRPO. At each iteration, PPO makes many gradient steps while only using the data from π θ k. Without the clipping, PPO is the approximation. The clipping is analogous to the constraint in that it has the goal of keeping π θ close to π θ k. Indeed, the clipping keeps π θ (a t |s t) from becoming neither much larger than (1 +)π θ k (a t |s t) nor much smaller than (1 −)π θ k (a t |s t). Thus, although the clipped PPO objective does not squarely fit into the optimization framework FORMULA0 - FORMULA1, it is quite similar in spirit. We note that the PPO paper considers adding the KL penalty to the objective function, whose gradient is similar to ours. However, this form of gradient was demonstrated to be inferior to Clipped-PPO. 
To the best of our knowledge, it is only until our work that such form of gradient is demonstrated to outperform Clipped-PPO.Actor-Critic using Kronecker-Factored Trust Region (ACKTR) BID29 proposed using Kronecker-factored approximation curvature (K-FAC) to update both the policy gradient and critic terms, giving a more computationally efficient method of calculating the natural gradients. ACER BID26 ) exploits past episodes, linearizes the KL divergence constraint, and maintains an average policy network to enforce the KL divergence constraint. In future work, it would of interest to extend the SPU methodology to handle past episodes. In contrast to bounding the KL divergence on the action distribution as we have done in this work, Relative Entropy Policy Search considers bounding the joint distribution of state and action and was only demonstrated to work for small problems . The SPU methodology has two steps. In the first step, for a given constraint criterion η(π, π θ k) ≤ δ, we find the optimal solution to the non-parameterized problem: DISPLAYFORM0 Note that π is not restricted to the set of parameterized policies Π DL. As commonly done, we approximate the objective function. However, unlike PPO/TRPO, we are not approximating the constraint. We will show below the optimal solution π * for the non-parameterized problem- can be determined nearly in closed form for many natural constraint criteria η(π, π θ k) ≤ δ. In the second step, we attempt to find a policy π θ in the parameterized space Π DL that is close to the target policy π *. Concretely, to advance from θ k to θ k+1, we perform the following steps:(i) We first sample N trajectories using policy π θ k, giving sample data DISPLAYFORM1 Here A i is an estimate of the advantage value A π θ k (s i, a i). (For simplicity, we index the samples with i rather than with (i, t) corresponding to the tth sample in the ith trajectory.)(ii) For each s i, we define the target distribution π * to be the optimal solution to the constrained optimization problem- for a specific constraint η.(iii) We then fit the policy network π θ to the target distributions π * (·|s i), i = 1,.., m. Specifically, to find θ k+1, we minimize the following supervised loss function: DISPLAYFORM2 For this step, we initialize with the weights for π θ k. We minimize the loss function L(θ) with stochastic gradient descent methods. The ing θ becomes our θ k+1. To illustrate the SPU methodology, for three different but natural types of proximity constraints, we solve the corresponding non-parameterized optimization problem and derive the ing gradient for the SPU supervised learning problem. We also demonstrate that different constraints lead to very different but intuitive forms of the gradient update. We first consider constraint criteria of the form: DISPLAYFORM0 subject to DISPLAYFORM1 Note that this problem is equivalent to minimizing L π θ k (π) subject to the constraints FORMULA0 and FORMULA0. We refer to as the "aggregated KL constraint" and to FORMULA0 as the "disaggregated KL constraint". These two constraints taken together restrict π from deviating too much from π θ k. We shall refer to FORMULA0 - FORMULA0 as the forward-KL non-parameterized optimization problem. Note that this problem without the disaggregated constraints is analogous to the TRPO problem. The TRPO paper actually prefers enforcing the disaggregated constraint to enforcing the aggregated constraints. 
However, for mathematical conveniences, they worked with the aggregated constraints: "While it is motivated by the theory, this problem is impractical to solve due to the large number of constraints. Instead, we can use a heuristic approximation which considers the average KL divergence" BID21. The SPU framework allows us to solve the optimization problem with the disaggregated constraints exactly. Experimentally, we compared against TRPO in a controlled experimental setting, e.g. using the same advantage estimation scheme, etc. Since we clearly outperform TRPO, we argue that SPU's two-process procedure has significant potentials. DISPLAYFORM2 Note that π λ (a|s) is a function of λ. Further, for each s, let λ s be such that DISPLAYFORM3 Theorem 1 The optimal solution to the problem- FORMULA0 is given by: DISPLAYFORM4 where λ is chosen so that DISPLAYFORM5 Equation FORMULA0 provides the structure of the optimal non-parameterized policy. As part of the SPU framework, we then seek a parameterized policy π θ that is close toπ λ (a|s), that is, minimizes the loss function. For each sampled state s i, a straightforward calculation shows (Appendix B): DISPLAYFORM6 where DISPLAYFORM7 We estimate the expectation in with the sampled action a i and approximate A π θ k (s i, a i) as A i (obtained from the critic network), giving: DISPLAYFORM8 To simplify the algorithm, we slightly modify. We replace the hyper-parameter δ with the hyper-parameter λ and tune λ rather than δ. Further, we set ∼ λ si = λ for all s i in and introduce per-state acceptance to enforce the disaggregated constraints, giving the approximate gradient: DISPLAYFORM9 We make the approximation that the disaggregated constraints are only enforced on the states in the sampled trajectories. We use as our gradient for supervised training of the policy network. The equation FORMULA0 has an intuitive interpretation: the gradient represents a trade-off between the approximate performance of π θ (as captured by 1 λ DISPLAYFORM10 For the stopping criterion, we train until DISPLAYFORM11 In a similar manner, we can derive the structure of the optimal policy when using the reverse KL-divergence as the constraint. For simplicity, we provide the for when there are only disaggregated constraints. We seek to find the non-parameterized optimal policy by solving: DISPLAYFORM0 DISPLAYFORM1 Theorem 2 The optimal solution to the problem- FORMULA1 is given by: DISPLAYFORM2 where λ(s) > 0 and DISPLAYFORM3 Note that the structure of the optimal policy with the backward KL constraint is quite different from that with the forward KL constraint. A straight forward calculation shows (Appendix B): DISPLAYFORM4 The equation FORMULA1 has an intuitive interpretation. It increases the probability of action a if A π θ k (s, a) > λ (s) − 1 and decreases the probability of action a if A π θ k (s, a) < λ (s) − 1. also tries to keep π θ close to π θ k by minimizing their KL divergence. In this section we show how a PPO-like objective can be formulated in the context of SPU. Recall from Section 3 that the the clipping in PPO can be seen as an attempt at keeping π θ (a i |s i) from becoming neither much larger than (1 +)π θ k (a i |s i) nor much smaller than (1 −)π θ k (a i |s i) for i = 1,..., m. 
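Before turning to the L∞ variant below, the forward-KL update just derived can be made concrete. Since the display equations were lost in extraction, we first restate our reconstruction of the optimal non-parameterized policy of Theorem 1; this is the standard exponentiated-advantage form implied by the Lagrangian argument in the appendix, not a verbatim copy of the paper's equations.

```latex
% Reconstruction of the Theorem 1 solution (notation as in the text).
\pi_{\lambda}(a \mid s) \;=\; \frac{\pi_{\theta_k}(a \mid s)\,
    \exp\!\big(A^{\pi_{\theta_k}}(s,a)/\lambda\big)}{Z_{\lambda}(s)},
\qquad
Z_{\lambda}(s) \;=\; \textstyle\sum_{a'} \pi_{\theta_k}(a' \mid s)\,
    \exp\!\big(A^{\pi_{\theta_k}}(s,a')/\lambda\big),
\\[6pt]
\pi^{*}(a \mid s) \;=\;
\begin{cases}
\pi_{\lambda}(a \mid s), & \text{if } D_{KL}\big(\pi_{\lambda}\,\|\,\pi_{\theta_k}\big)[s] \le \epsilon,\\[2pt]
\pi_{\lambda_s}(a \mid s), & \text{otherwise, with } \lambda_s \text{ such that }
    D_{KL}\big(\pi_{\lambda_s}\,\|\,\pi_{\theta_k}\big)[s] = \epsilon,
\end{cases}
```

with λ chosen (e.g., by line search) so that the aggregated constraint is tight. For discrete actions, this target, the per-state acceptance of the practical gradient, and the supervised fit of Section 4 reduce to a few lines. In the sketch below the direction of the fitting KL, the exact acceptance and stopping rules, and all tensor shapes are our assumptions; the default hyper-parameter values loosely follow the Atari settings reported in the appendix.

```python
import torch

def forward_kl_target(probs_old, advantages, lam):
    """pi*(.|s) ∝ pi_theta_k(.|s) * exp(A(s, .) / lam)  (Theorem 1, sketch).

    probs_old:  (m, n_actions) action probabilities under pi_theta_k
    advantages: (m, n_actions) advantage estimates A^{pi_theta_k}(s_i, a)
    """
    logits = probs_old.clamp_min(1e-8).log() + advantages / lam
    return torch.softmax(logits, dim=-1)

def spu_supervised_fit(policy, states, probs_old, advantages,
                       lam=1.1, eps=0.02 / 1.3, delta=0.02, max_epochs=9, lr=1e-4):
    """Supervised step of forward-KL SPU (sketch).

    Per-state acceptance masks out states whose KL(pi_theta || pi_theta_k)[s_i] already
    exceeds eps; dynamic stopping ends training once the average KL exceeds delta.
    Fitting by KL(pi* || pi_theta) is an assumption about the loss direction."""
    target = forward_kl_target(probs_old, advantages, lam).detach()
    logp_old = probs_old.clamp_min(1e-8).log()
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(max_epochs):
        logp = torch.log_softmax(policy(states), dim=-1)
        probs = logp.exp()
        kl_per_state = (probs * (logp - logp_old)).sum(-1)       # disaggregated KL
        if kl_per_state.mean().item() >= delta:                   # dynamic stopping
            break
        accept = (kl_per_state <= eps).float().detach()           # per-state acceptance
        kl_to_target = (target * (target.clamp_min(1e-8).log() - logp)).sum(-1)
        loss = (accept * kl_to_target).sum() / accept.sum().clamp_min(1.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```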
In this subsection, we consider the constraint function DISPLAYFORM0 which leads us to the following optimization problem: DISPLAYFORM1 DISPLAYFORM2 Note that here we are using a variation of the SPU methodology described in Section 4 since here we first create estimates of the expectations in the objective and constraints and then solve the optimization problem (rather than first solve the optimization problem and then take samples as done for Theorems 1 and 2). Note that we have also included an aggregated constraint in addition to the PPO-like constraint, which further ensures that the updated policy is close to π θ k. The optimal solution to the optimization problem is given by: DISPLAYFORM0 for some λ > 0 where DISPLAYFORM1 To simplify the algorithm, we treat λ as a hyper-parameter rather than δ. After solving for π *, we seek a parameterized policy π θ that is close to π * by minimizing their mean square error over sampled states and actions, i.e. by updating θ in the negative direction of ∇ θ i (π θ (a i |s i) − π * (a i |s i)) 2. This loss is used for supervised training instead of the KL because we take estimates before forming the optimization problem. Thus, the optimal values for the decision variables do not completely characterize a distribution. We refer to this approach as SPU with the L ∞ constraint. Although we consider three classes of proximity constraint, there may be yet another class that leads to even better performance. The methodology allows researchers to explore other proximity constraints in the future. Extensive experimental demonstrate SPU outperforms recent state-of-the-art methods for environments with continuous or discrete action spaces. We provide ablation studies to show the importance of the different algorithmic components, and a sensitivity analysis to show that SPU's performance is relatively insensitive to hyper-parameter choices. There are two definitions we use to conclude A is more sample efficient than B: (i) A takes fewer environment interactions to achieve a pre-defined performance threshold BID10; (ii) the averaged final performance of A is higher than that of B given the same number environment interactions. Implementation details are provided in Appendix D. The Mujoco BID25 ) simulated robotics environments provided by OpenAI gym BID5 have become a popular benchmark for control problems with continuous action spaces. In terms of final performance averaged over all available ten Mujoco environments and ten different seeds in each, SPU with L ∞ constraint (Section 5.3) and SPU with forward KL constraints (Section 5.1) outperform TRPO by 6% and 27% respectively. Since the forward-KL approach is our best performing approach, we focus subsequent analysis on it and hereafter refer to it as SPU. SPU also outperforms PPO by 17%. FIG0 illustrates the performance of SPU versus TRPO, PPO.To ensure that SPU is not only better than TRPO in terms of performance gain early during training, we further retrain both policies for 3 million timesteps. Again here, SPU outperforms TRPO by 28%. FIG3 in the Appendix illustrates the performance for each environment. Code for the Mujoco experiments is at https://github.com/quanvuong/Supervised_Policy_Update. The indicator variable in enforces the disaggregated constraint. We refer to it as per-state acceptance. Removing this component is equivalent to removing the indicator variable. We refer to using i D KL (π θ π θ k)[s i] to determine the number of training epochs as dynamic stopping. 
Without this component, the number of training epochs is a hyper-parameter. We also tried removing DISPLAYFORM0 ] from the gradient update step in. TAB0 illustrates the contribution of the different components of SPU to the overall performance. The third row shows that the term DISPLAYFORM1 ] makes a crucially important contribution to SPU. Furthermore, per-state acceptance and dynamic stopping are both also important for obtaining high performance, with the former playing a more central role. When a component is removed, the hyper-parameters are retuned to ensure that the best possible performance is obtained with the alternative (simpler) algorithm. To demonstrate the practicality of SPU, we show that its high performance is insensitive to hyperparameter choice. One way to show this is as follows: for each SPU hyper-parameter, select a reasonably large interval, randomly sample the value of the hyper parameter from this interval, and then compare SPU (using the randomly chosen hyper-parameter values) with TRPO. We sampled 100 SPU hyper-parameter vectors (each vector including δ,, λ), and for each one determined the relative performance with respect to TRPO. First, we found that for all 100 random hyper-parameter value samples, SPU performed better than TRPO. 75% and 50% of the samples outperformed TRPO by at least 18% and 21% respectively. The full CDF is given in Figure 4 in the Appendix. We can conclude that SPU's superior performance is largely insensitive to hyper-parameter values.6.4 ON ATARI BID20 BID15 demonstrates that neural networks are not needed to obtain high performance in many Mujoco environments. To conclusively evaluate SPU, we compare it against PPO on the Arcade Learning Environments exposed through OpenAI gym BID5. Using the same network architecture and hyper-parameters, we learn to play 60 Atari games from raw pixels and rewards. This is highly challenging because of the diversity in the games and the high dimensionality of the observations. Here, we compare SPU against PPO because PPO outperforms TRPO by 9% in Mujoco. Averaged over 60 Atari environments and 20 seeds, SPU is 55% better than PPO in terms of averaged final performance. FIG1 provides a high-level overview of the . The dots in the shaded area represent environments where their performances are roughly similar. The dots to the right of the shaded area represent environment where SPU is more sample efficient than PPO. We can draw two : (i) In 36 environments, SPU and PPO perform roughly the same; SPU clearly outperforms PPO in 15 environments while PPO clearly outperforms SPU in 9; (ii) In those 15+9 environments, the extent to which SPU outperforms PPO is much larger than the extent to which PPO outperforms SPU. FIG5, Figure 6 and FIG6 in the Appendix illustrate the performance of SPU vs PPO throughout training. SPU's high performance in both the Mujoco and Atari domains demonstrates its high performance and generality. We first show that FORMULA0 - FORMULA0 is a convex optimization. To this end, first note that the objective FORMULA0 is a linear function of the decision variables π = {π(a|s): s ∈ S, a ∈ A}. The LHS of FORMULA0 can be rewritten as: a∈A π(a|s) log π(a|s) − a∈A π(a|s) log π θ k (a|s). The second term is a linear function of π. The first term is a convex function since the second derivative of each summand is always positive. The LHS of FORMULA0 is thus a convex function. By extension, the LHS of FORMULA0 is also a convex function since it is a nonnegative weighted sum of convex functions. 
The problem FORMULA0 - FORMULA0 is thus a convex optimization problem. According to Slater's constraint qualification, strong duality holds since π θ k is a feasible solution to FORMULA0 - FORMULA0 where the inequality holds strictly. We can therefore solve FORMULA0 - FORMULA0 by solving the related Lagrangian problem. For a fixed λ consider: DISPLAYFORM0 The above problem decomposes into separate problems, one for each state s: DISPLAYFORM1 DISPLAYFORM2 Further consider the unconstrained problem without the constraint: DISPLAYFORM3 subject to DISPLAYFORM4 A simple Lagrange-multiplier argument shows that the opimal solution to FORMULA1 - FORMULA2 is given by: DISPLAYFORM5 where Z λ (s) is defined so that π λ (·|s) is a valid distribution. Now returning to the decomposed constrained problem-, there are two cases to consider. The first case is when DISPLAYFORM6 In this case, the optimal solution to- FORMULA0 is π λ (a|s). The second case is when DISPLAYFORM7 In this case the optimal is π λ (a|s) with λ replaced with λ s, where λ s is the solution to DISPLAYFORM8 Thus, an optimal solution to- is given by: DISPLAYFORM9 where DISPLAYFORM10 To find the Lagrange multiplier λ, we can then do a line search to find the λ that satisfies: DISPLAYFORM11 A.2 BACKWARD KL CONSTRAINTThe problem FORMULA0 - FORMULA1 decomposes into separate problems, one for each state s ∈ S: DISPLAYFORM12 subject to E DISPLAYFORM13 After some algebra, we see that above optimization problem is equivalent to: DISPLAYFORM14 DISPLAYFORM15 where = + entropy(π θ k). FORMULA2 - FORMULA1 is a convex optimization problem with Slater's condition holding. Strong duality thus holds for the problem FORMULA2 - FORMULA1. Applying standard Lagrange multiplier arguments, it is easily seen that the solution to FORMULA2 - FORMULA1 is DISPLAYFORM16 where λ(s) and λ (s) are constants chosen such that the disaggregegated KL constraint is binding and the sum of the probabilities equals 1. It is easily seen λ(s) > 0 and DISPLAYFORM17 The problem is equivalent to: DISPLAYFORM18 DISPLAYFORM19 This problem is clearly convex. π θ k (a i |s i), i = 1,..., m is a feasible solution where the inequality constraint holds strictly. Strong duality thus holds according to Slater's constraint qualification. To solve FORMULA2 - FORMULA3, we can therefore solve the related Lagrangian problem for fixed λ: DISPLAYFORM20 DISPLAYFORM21 which is separable and decomposes into m separate problems, one for each s i: DISPLAYFORM22 DISPLAYFORM23 DISPLAYFORM24 λ s (Log of product is sum of log) DISPLAYFORM25 (Adding the gradient of the entropy on both sides and collapse the sum of gradients of cross entropy and entropy into the gradient of the KL) DISPLAYFORM26 (Taking gradient on both sides) DISPLAYFORM27 (Adding the gradient of the entropy on both sides and collapse the sum of gradients of cross entropy and entropy into the gradient of the KL) The methodology developed in the body of this paper also applies to continuous state and action spaces. In this section, we outline the modifications that are necessary for the continuous case. We first modify the definition of d π (s) by replacing P π (s t = s) with d ds P π (s t ≤ s) so that d π (s) becomes a density function over the state space. With this modification, the definition ofD KL (π π k) and the approximation are unchanged. The SPU framework described in Section 4 is also unchanged. 
Consider now the non-parameterized optimization problem with aggregate and disaggregate constraints, but with continuous state and action space: DISPLAYFORM0 Theorem 1 holds although its proof needs to be slightly modified as follows. It is straightforward to show that remains a convex optimization problem. We can therefore solve by solving the Lagrangian with the sum replaced with an integral. This problem again decomposes with separate problems for each s ∈ S giving exactly the same equations. The proof then proceeds as in the remainder of the proof of Theorem 1.Theorem 2 and 3 are also unchanged for continuous action spaces. Their proofs require slight modifications, as in the proof of Theorem 1. As in, for Mujoco environments, the policy is parameterized by a fullyconnected feed-forward neural network with two hidden layers, each with 64 units and tanh nonlinearities. The policy outputs the mean of a Gaussian distribution with state-independent variable standard deviations, following BID21 BID7. The action dimensions are assumed to be independent. The probability of an action is given by the multivariate Gaussian probability distribution function. The baseline used in the advantage value calculation is parameterized by a similarly sized neural network, trained to minimize the MSE between the sampled states TD−λ returns and the their predicted values. For both the policy and baseline network, SPU and TRPO use the same architecture. To calculate the advantage values, we use Generalized Advantage Estimation BID22. States are normalized by dividing the running mean and dividing by the running standard deviation before being fed to any neural networks. The advantage values are normalized by dividing the batch mean and dividing by the batch standard deviation before being used for policy update. The TRPO is obtained by running the TRPO implementation provided by OpenAI, commit 3cc7df060800a45890908045b79821a13c4babdb. At every iteration, SPU collects 2048 samples before updating the policy and the baseline network. For both networks, gradient descent is performed using Adam with step size 0.0003, minibatch size of 64. The step size is linearly annealed to 0 over the course of training. γ and λ for GAE BID22 BID17. The output of the network is passed through a relu, linear and softmax layer in that order to give the action distribution. The output of the network is also passed through a different linear layer to give the baseline value. States are normalized by dividing by 255 before being fed into any network. The TRPO is obtained by running the PPO implementation provided by OpenAI, commit 3cc7df060800a45890908045b79821a13c4babdb. 8 different processes run in parallel to collect timesteps. At every iteration, each process collects 256 samples before updating the policy and the baseline network. Each process calculates its own update to the network's parameters and the updates are averaged over all processes before being used to update the network's parameters. Gradient descent is performed using Adam with step size 0.0001. In each process, random number generators are initialized with a different seed according to the formula process_seed = experiment_seed + 10000 * process_rank. Training is performed for 10 million timesteps for both SPU and PPO. For SPU, δ,, λ and the maximum number of epochs per iteration are set to 0.02, δ/1.3, 1.1 and 9 respectively. Algorithm 1 Algorithmic description of forward-KL non-parameterized SPU Require: A neural net π θ that parameterizes the policy. 
Require: A neural net V φ that approximates V π θ. Require: General hyperparameters: γ, β (advantage estimation using GAE), α (learning rate), N (number of trajectory per iteration), T (size of each trajectory), M (size of training minibatch). Require: Algorithm-specific hyperparameters: δ (aggregated KL constraint), (disaggregated constraint), λ, ζ (max number of epoch). 1: for k = 1, 2,... do under policy π θ k, sample N trajectories, each of size T (s it, a it, r it, s i(t+1) ), i = 1,..., N, t = 1,..., T Using any advantage value estimation scheme, estimate A it, i = 1,..., N, t = 1,..., T DISPLAYFORM0 10: DISPLAYFORM1 θ ← θ − αL(θ) TRPO and SPU were trained for 1 million timesteps to obtain the in section 6. To ensure that SPU is not only better than TRPO in terms of performance gain early during training, we further retrain both policies for 3 million timesteps. Again here, SPU outperforms TRPO by 28%. FIG3 illustrates the performance on each environment. When values for SPU hyper-parameter are randomly sampled as is explained in subsection 6.3, the percentage improvement of SPU over TRPO becomes a random variable. Figure 4 illustrates the CDF of this random variable. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJxTroR9F7 | first posing and solving the sample efficiency optimization problem in the non-parameterized policy space, and then solving a supervised regression problem to find a parameterized policy that is near the optimal non-parameterized policy. |
Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream. However, we neither know whether such task-optimized networks enable equally good models of the rodent visual system, nor if a similar hierarchical correspondence exists. Here, we address these questions in the mouse visual system by extracting features at several layers of a convolutional neural network (CNN) trained on ImageNet to predict the responses of thousands of neurons in four visual areas (V1, LM, AL, RL) to natural images. We found that the CNN features outperform classical subunit energy models, but found no evidence for an order of the areas we recorded via a correspondence to the hierarchy of CNN layers. Moreover, the same CNN but with random weights provided an equivalently useful feature space for predicting neural responses. Our suggest that object recognition as a high-level task does not provide more discriminative features to characterize the mouse visual system than a random network. Unlike in the primate, training on ethologically relevant visually guided behaviors -- beyond static object recognition -- may be needed to unveil the functional organization of the mouse visual cortex. Visual object recognition is a fundamental and difficult task performed by the primate brain via a hierarchy of visual areas (the ventral stream) that progressively untangles object identity information, gaining invariance to a wide range of object-preserving visual transformations. Fueled by the advances of deep learning, recent work on modeling neural responses in sensory brain areas builds upon hierarchical convolutional neural networks (CNNs) trained to solve complex tasks like object recognition. Interestingly, these models have not only achieved unprecedented performance in predicting neural responses in several brain areas of macaques and humans, but they also revealed a hierarchical correspondence between the layers of the CNNs and areas of the ventral stream: the higher the area in the ventral stream, the higher the CNN layer that explained it best. The same approach also provided a quantitative signature of a previously unclear hierarchical organization of A1 and A2 in the human auditory cortex. These discoveries about the primate have sparked a still unresolved question: to what extent is visual object processing also hierarchically organized in the mouse visual cortex and how well can the mouse visual system be modeled using goal-driven deep neural networks trained on static object classification? This question is important since mice are increasingly used to study vision due to the plethora of available experimental techniques such as the ability to genetically identify and manipulate neural circuits that are not easily available in primates. Recent work suggests that rats are capable of complex visual discrimination tasks and recordings from extrastriate areas show a gradual increase in the ability of neurons in higher visual areas to support discrimination of visual objects. Here, we set out to study how well the mouse visual system can be characterized by goal-driven deep neural networks. 
We extracted features from the hidden layers of a standard CNN (VGG16, ) trained on object categorization, to predict responses of thousands of neurons in four mouse visual areas (V1, LM, AL, RL) to static natural images. We found that VGG16 yields powerful features for predicting neural activity, outperforming a Gabor filter bank energy model in these four visual areas. However, VGG16 does not significantly outperform a feature space produced by a network with an identical architecture but random weights. In contrast to previous work in primates, our data provide no evidence so far for a hierarchical correspondence between the deep network layers and the visual areas we recorded. trough the core (A) network (first n layers of VGG16) to produce a feature space shared by all neurons. Then, the spatial transformer readout (B) finds a mapping between these features and the neural responses for each neuron separately. The shifter network (an MLP with one hidden layer) corrects for eye movements. The output of the readout is multiplied by a gain predicted by the modulator network (an MLP with one hidden layer) that uses running speed and pupil dilation. A static nonlinearity converts the into the predicted spike rate. All components of the model are trained jointly end-to-end to minimize the difference between predicted and observed neural responses. Our network (Fig.1) builds upon earlier work. It consist of four main network components: a core that provides nonlinear features of input images, a readout that maps those features to each neuron's responses, a shifter that predicts receptive field shifts from pupil position, and a modulator that provides a gain factor for each neuron based on running speed and pupil dilation of the mouse. For the core we use VGG16 up to one of the first eight convolutional layers. We chose VGG16 due to its simple feed-forward architecture, competitive object classification performance, and increasing popularity to characterize rodent visual areas. The collection of output feature maps of a VGG16 layer -the shared feature space -was then fed into a spatial transformer readout for each neuron (Fig.1B, see for details). This readout learns one (x, y) location for each neuron (its receptive field location, RF) and extracts a feature vector at this location from multiple downsampled versions (scales) of the feature maps. The output of the readout is a linear combination of the concatenated feature vectors. We regularized the feature weights with an L 1 penalty to encourage sparsity. Shifter and modulator are multi-layer perceptrons (MLP) with one hidden layer. The shifter takes the tracked pupil position in camera coordinates and predicts a global receptive field shift (∆x, ∆y) in monitor coordinates. The modulator uses the mouse's running speed, its pupil diameter, and the derivative to predict a gain for each neuron by which the neuron's predicted response is multiplied. A soft-thresholding nonlinearity turns the into a non-negative spike rate prediction (Fig.1). All components of the model (excluding the core, which is pre-trained on ImageNet) are trained jointly end-to-end to minimize the difference between predicted and observed neural responses using Adam with a learning rate of 10 −4, a batch size of 125 and early stopping. Neural data. 
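To make the architecture above concrete before moving on, here is a heavily simplified sketch of it. It keeps only a truncated VGG16 core and a single-scale, single-location per-neuron readout with a softplus output; the shifter, modulator, multi-scale readout and L1 penalty are omitted, and the torchvision API, channel handling and initialization are our own assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class VGGCoreReadout(nn.Module):
    """Simplified sketch of the Fig. 1 model: truncated VGG16 core + per-neuron readout
    that samples features at one learned location and combines them linearly."""

    def __init__(self, n_neurons, n_conv_layers=8, pretrained=True):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1" if pretrained else None).features
        conv_positions = [i for i, m in enumerate(feats) if isinstance(m, nn.Conv2d)]
        self.core = feats[: conv_positions[n_conv_layers - 1] + 2]   # up to n-th conv + ReLU
        for p in self.core.parameters():
            p.requires_grad_(False)                                  # core stays fixed
        c = [m for m in self.core if isinstance(m, nn.Conv2d)][-1].out_channels
        self.rf_loc = nn.Parameter(torch.zeros(n_neurons, 2))        # learned (x, y) per neuron
        self.readout_w = nn.Parameter(0.01 * torch.randn(n_neurons, c))  # L1-penalized in the paper
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, images):               # images: (B, 3, H, W); grayscale would be replicated
        feat = self.core(images)                                      # (B, C, h, w)
        grid = torch.tanh(self.rf_loc)[None, :, None, :].expand(images.size(0), -1, -1, -1)
        sampled = F.grid_sample(feat, grid, align_corners=False)      # (B, C, n_neurons, 1)
        drive = (sampled[..., 0].permute(0, 2, 1) * self.readout_w).sum(-1) + self.bias
        return F.softplus(drive)                                      # non-negative predicted rate

model = VGGCoreReadout(n_neurons=100, n_conv_layers=5, pretrained=False)
rates = model(torch.randn(2, 3, 64, 64))                              # (2, 100)
print(rates.shape)
```

Adding the multi-scale readout, the sparsity penalty, and the shifter and modulator networks would bring this sketch closer to the full model in Fig. 1.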
We recorded responses of excitatory neurons in areas V1, LM, AL, and RL (layer 2/3), from two scans in one mouse and a third scan in a second mouse, with a large-field-of-view two-photon mesoscope at a frame rate of 6.7 Hz. We selected cells based on a classifier for somata applied to the segmented cell masks and deconvolved their fluorescence traces, yielding 7393, 4674, 4680, and 5797 neurons from areas V1, LM, AL, and RL, respectively. We further monitored pupil position, pupil dilation, and absolute running speed of the animal. Visual stimuli. Stimuli consisted of 5100 images taken from ImageNet, cropped to 16:9 and converted to grayscale. The screen was 55 × 31 cm at a distance of 15 cm, covering roughly 120° × 90°. In each scan, we showed 5000 of these images once (training and validation set) and the remaining 100 images 10 times each (test set). Each image was presented for 500 ms, followed by a blank screen lasting between 300 ms and 500 ms. For each neuron, we extracted the accumulated activity between 50 ms and 550 ms after stimulus onset using a Hamming window. We fitted one model (that of Fig. 1) for each combination of scan, brain area, VGG16 layer (out of the first eight), random initialization (out of three seeds), and input resolution. We considered several resolutions of the input images because the right scale at which VGG16 layers extract features that best match the representation in the brain is unknown. Optimizing the scale for each layer was critical, since the correspondence between a single layer and a brain area (in terms of best correlation performance) strongly depends on the input resolution (e.g., see Fig. 3A for V1 data). For further analyses (Fig. 3B & 4), we picked for each case the best-performing input scale on the validation set. No hierarchical correspondence. Previous results in primates show that a brain area higher in the hierarchy is better matched (i.e., has a peak in prediction performance) by a higher network layer. In contrast, when comparing the average performance across cells and scans for each convolutional layer and brain area, we find no clear evidence for a hierarchy (Fig. 3B), since there is no clear ordering of the brain areas. VGG16 outperforms classical models. We then investigated whether the lack of an evident hierarchy was due to an overall poor performance of our model. To this end, we first quantified how much of the explainable stimulus-driven variability the VGG16-based model captures. We calculated the oracle correlation (the correlation with the conditional mean of the other n − 1 responses, i.e., without any model), which provides an upper bound on the achievable performance. We then evaluated the test correlation of our model restricted to visual input information (no shifter and no modulator) against this oracle (Fig. 4B) and found that VGG16 features explain a substantial fraction of the oracle for the four areas (70–78%). Second, we considered a subunit energy model with Gabor quadrature pairs as a baseline, due to its competitive performance in predicting macaque V1 responses. We replaced the core from Fig. 1 with a Gabor filter bank (GFB) consisting of a large number of Gabor filters with different orientations, sizes, and spatial frequencies, arranged in quadrature pairs and followed by a squaring nonlinearity. We find that for all areas and scans, the VGG16 core outperformed the GFB (Fig. 4C). Core with random weights performs similarly.
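As an aside on the evaluation just described, the oracle correlation can be computed directly from the repeated test-set responses: each single trial is "predicted" by the mean of the remaining n − 1 repeats of the same image. The snippet below assumes the responses are stored as an (images × repeats × neurons) array, which is our storage convention, not necessarily the authors'.

```python
import numpy as np

def oracle_correlation(responses):
    """responses: array of shape (n_images, n_repeats, n_neurons) for the repeated test set.

    Returns one oracle correlation per neuron: the correlation between each single-trial
    response and the leave-one-out mean over the other n-1 repeats of the same image."""
    n_img, n_rep, n_neu = responses.shape
    loo_mean = (responses.sum(axis=1, keepdims=True) - responses) / (n_rep - 1)
    x = responses.reshape(-1, n_neu)          # single trials
    y = loo_mean.reshape(-1, n_neu)           # leave-one-out "oracle" predictions
    x = x - x.mean(0)
    y = y - y.mean(0)
    return (x * y).sum(0) / np.sqrt((x ** 2).sum(0) * (y ** 2).sum(0) + 1e-12)

# Example with synthetic data: 100 test images, 10 repeats, 50 neurons.
rng = np.random.default_rng(0)
print(oracle_correlation(rng.normal(size=(100, 10, 50))).shape)   # (50,)
```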
The so far show that VGG16 provides a powerful feature space to predict responses, which may suggest that static object recognition could be a useful high-level goal to describe the function of the mouse visual system. However, we were surprised that most VGG layers led to similar performance. To understand this better, we also evaluated a core with identical architecture but random weights. This random core performed similarly well as its pre-trained counterpart (Fig.4D), suggesting that training on static object recognition as a high-level goal is not necessary to achieve state-of-the-art performance in predicting neural responses in those four visual areas. Instead, a sufficiently large collection of random features followed by rectification provides a similarly powerful feature space. The number of LN layers is critical to best match neural activity. Since random features produced by a linearnonlinear (LN) hierarchy closely match the performance of the pretrained VGG16, we then asked if the number of LN steps -when accounting for multiple input resolutions -was the key common aspect of these networks that yielded the best predictions. Effectively, similar to the case of the pretrained VGG16 core, we found that the fourth and fifth rectified convolutional layers of the random core are the best predictive layers for the four areas we studied (Fig. 5). However, it is important to note that in both cases the increase in performance after the second convolutional layer is only marginal. Overall, we conclude that the nonlinear degree -number of LN stages -rather than the static object recognition training goal dictates how close the representations are to the neural activity. In contrast to similar work in the primate, we find no match between the hierarchy of mouse visual cortical areas and the layers of CNNs trained on object categorization. Although VGG16 achieves state-of-the-art performance, it is matched by random weights. There are three implications of our : First, our work is in line with previous work in machine learning that shows the power of random features. Therefore, we argue that models based on random features should always be reported as baselines in studies on neural system identification. Second, which VGG layer best predicted any given brain area depended strongly on the image resolution we used to feed into VGG16. We observed a similar effect in our earlier work on primate V1. Thus, the studies reporting a hierarchical correspondence between goal-driven deep neural networks and the primate ventral stream should be taken with a grain of salt, as they -to the best of our knowledge -do not include this control. Third, optimizing the network for static object recognition alone as a high-level goal does not appear to be the right approximation to describe representations and the visual hierarchy in the mouse cortex. Although our do not exclude a potential object processing hierarchy in the mouse visual system, they suggest that training with more ethologically relevant visually guided tasks for the mouse could be a more fruitful goal-driven approach to characterize the mouse visual system. For instance, an approach with dynamic stimuli such as those found during prey capture tasks could yield more meaningful features to unveil the functional organization of the mouse visual system. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rkxcXmtUUS | A goal-driven approach to model four mouse visual areas (V1, LM, AL, RL) based on deep neural networks trained on static object recognition does not unveil a functional organization of visual cortex unlike in primates |
The reparameterization trick has become one of the most useful tools in the field of variational inference. However, the reparameterization trick is based on the standardization transformation which restricts the scope of application of this method to distributions that have tractable inverse cumulative distribution functions or are expressible as deterministic transformations of such distributions. In this paper, we generalized the reparameterization trick by allowing a general transformation. We discover that the proposed model is a special case of control variate indicating that the proposed model can combine the advantages of CV and generalized reparameterization. Based on the proposed gradient model, we propose a new polynomial-based gradient estimator which has better theoretical performance than the reparameterization trick under certain condition and can be applied to a larger class of variational distributions. In studies of synthetic and real data, we show that our proposed gradient estimator has a significantly lower gradient variance than other state-of-the-art methods thus enabling a faster inference procedure. Most machine learning objective function can be rewritten in the form of an expectation: where θ is a parameter vector. However, due to the intractability of the expectation, it's often impossible or too expensive to calculate the exact gradient w.r.t θ, therefore it's inevitable to estimate the gradient ∇ θ L in practical applications. Stochastic optmization methods such as reparameterization trick and score function methods have been widely applied to address the stochastic gradient estimation problem. Many recent advances in large-scale machine learning tasks have been brought by these stochastic optimization tricks. Like in other stochastic optimzation related works, our paper mainly focus on variational inference tasks. The primary goal of variational inference (VI) task is to approximate the posterior distribution in probabilistic models . To approximate the intractable posterior p(z|x) with the joint probability distribution p(x, z) over observed data x and latent random variables z given, VI introduces a parameteric family of distribution q θ (z) and find the best parameter θ by optimizing the Kullback-Leibler (KL) divergence D KL (q(z; θ) p(z|x)). The performance of VI methods depends on the capacity of the parameteric family of distributions (often measured by Rademacher complexity) and the ability of the optimizer. In this paper, our method tries to introduce a better optimizer for a larger class of parameteric family of distributions. The main idea of our work is to replace the parameter-independent transformation in reparameterization trick with generalized transformation and construct the generalized transformation-based (G-TRANS) gradient with the velocity field which is related to the characteristic curve of the sublinear partial differential equation associated with the generalized transformation. Our gradient model further generalizes the G-REP and provides a more elegant and flexible way to construct gradient estimators. We mainly make the following contributions: 1. We develop a generalized transformation-based gradient model based on the velocity field related to the generalized transformation and explicitly propose the unbiasedness constraint on the G-TRANS gradient. The proposed gradient model provides a more poweful and flexible way to construct gradient estimators. 2. 
We show that our model is a generalization of the score function method and the reparameterization trick. Our gradient model can reduce to the reparameterization trick by enforcing a transport equation constraint on the velocity field. We also show our model's connection to the control variate method. 3. We propose a polynomial-based gradient estimator that cannot be induced by any other existing generalized reparameterization gradient framework, and demonstrate its superiority over similar methods in several experiments. The rest of this paper is organized as follows. In Sec. 2 we review stochastic gradient variational inference (SGVI) and stochastic gradient estimators. In Sec. 3 we propose the generalized transformation-based gradient. In Sec. 4 we propose the polynomial-based G-TRANS gradient estimator. In Sec. 5 we study the performance of our gradient estimator on synthetic and real data. In Sec. 6 we review related work. In Sec. 7 we conclude this paper and discuss future work. To obtain the best variational parameter θ, rather than minimize the KL divergence D_KL(q(z; θ) || p(z|x)), we usually choose to maximize the evidence lower bound (ELBO), L(θ) = E_{q(z;θ)}[log p(x, z) − log q(z; θ)] = E_{q(z;θ)}[log p(x, z)] + H[q(z; θ)]. The entropy term H[q(z; θ)] is often assumed to be available analytically and is usually omitted from the stochastic optimization procedure. This stochastic optimization problem is the basic setting for our method and experiments. Unless stated otherwise, we only consider the simplified version of the ELBO, L(θ) = E_{q(z;θ)}[f(z)] with f(z) = log p(x, z). Generally, this expectation is intractable to compute, let alone its gradient. Therefore, a common stochastic optimization method for the VI task is to construct a Monte Carlo estimator of the exact gradient of the ELBO w.r.t. θ. Among such gradient estimators, the score function method and the reparameterization trick are the most popular and widely applied. Score function method. The score function estimator, also called the log-derivative trick or REINFORCE, is a general way to obtain unbiased stochastic gradients of the ELBO. Its simplest variant is ∇_θ L(θ) = E_{q(z;θ)}[f(z) ∇_θ log q(z; θ)], and we can build the Monte Carlo estimator by drawing samples from the variational distribution q_θ(z) independently. Although the score function method is very general, the resulting gradient estimator suffers from high variance. Therefore, it is necessary to apply variance reduction (VR) methods such as Rao-Blackwellization and control variates in practice. Reparameterization trick. In the reparameterization trick, we assume there is an invertible and continuously differentiable standardization function φ(z, θ) that transforms the variational distribution q(z; θ) into a distribution s(ρ) that does not depend on the variational parameter θ, i.e., ρ = φ(z, θ) and z = φ_θ^{-1}(ρ). The reparameterization trick then turns the gradient of an expectation into the expectation of a gradient: ∇_θ L(θ) = E_{s(ρ)}[∇_z f(z)|_{z=φ_θ^{-1}(ρ)} ∇_θ φ_θ^{-1}(ρ)]. Although this reparameterization can be done for many commonly used distributions, such as the Gaussian distribution, it is hard to find appropriate standardization functions for a number of standard distributions, such as the Gamma, Beta or Dirichlet, because the standardization functions inevitably involve special functions. On the other hand, though the reparameterization trick is not as generally applicable as the score function method, it does result in a gradient estimator with lower variance.
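To make the two estimators above concrete, the toy example below estimates ∂/∂μ E_{N(μ,σ²)}[f(z)] with both of them. The integrand f, the sample size and the use of NumPy are arbitrary choices of ours; the point is only that both estimators are unbiased while the reparameterized one typically has far lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5                      # variational parameters of q(z) = N(mu, sigma^2)
f = lambda z: np.sin(z) + z ** 2          # an arbitrary differentiable integrand
df = lambda z: np.cos(z) + 2 * z
S = 100_000

# Score-function estimator of d/dmu E_q[f(z)]:  E_q[ f(z) * d log q(z) / d mu ]
z = rng.normal(mu, sigma, size=S)
grad_score = np.mean(f(z) * (z - mu) / sigma ** 2)

# Reparameterization estimator:  z = mu + sigma * rho with rho ~ N(0, 1), so dz/dmu = 1
rho = rng.standard_normal(S)
grad_rep = np.mean(df(mu + sigma * rho))

print(grad_score, grad_rep)               # both unbiased; the second has much lower variance
```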
Define a random variable ρ by an invertible differentiable transformation ρ = φ(z, θ), where φ is commonly called generalized standardization transformation since it's dependent on the variational parameter θ. Theorem 3.1. Let θ be any component of the variational parameter θ, the probability density function of ρ be w(ρ, θ) and where The proof details of the Theorem.3.1 are included in the Appendix. A.1. We refer to the gradient ∂L ∂θ with v θ satisfying the unbiasedness constraint as generalized transformation-based (G-TRANS) gradient. We can construct the G-TRANS gradient estimator by choosing v θ of specific form. In the following, we demonstrate that the score function gradient and reparameterization gradient are special cases of our G-TRANS gradient model associating with special velocity fields. Remark. The score function method is a special case of the G-TRANS model when The standardization function φ doesn't depend on the parameter θ when v θ = 0 according to the velocity field equation (Equ.5). Conversely, for any φ that doesn't depend on θ, we have = 0, thus the ing gradient estimator has the same variance as the score function estimator. Remark. The reparameterization trick is a special case when The detailed computation to obtain the transport equation (Equ.7) is included in the Appendix. A.1. The transport equation is firstly introduced by , however, their work derive this equation by an analog to the optimal transport theory. In 1-dimensional case, for any standardization distributions w(ρ) that doesn't depend on the parameter θ, the variance of the ing gradient estimator is some constant (for fixed θ) determined by the unique 1-dimensional solution of the transport equation. For the existence of the velocity field v θ and the generalized standardization transformation φ(z, θ), g(z, θ) must satisfy some strong differential constraints . We can see that the G-TRANS model is a special case of the control variate method with a complex differential structure. This connection to CV means our gradient model can combine the advantages of CV and generalized reparameterization. Theorem.3.1 transforms the generalized unbiased reparameterization procedure into finding the appropriate velocity field that satisfy the unbiasedness constraint. It's possible to apply variational optimization theory to find the velocity field with the least estimate variance, however, the solution to the Euler-Lagrange equation contains f (z) in the integrand which makes it impractical to use in real-world model (See Appendix. A.2 for details). By introducing the notion of velocity field, we provide a more elegant and flexible way to construct gradient estimator without the need to compute the Jacobian matrix for a specific transformation. In the next section, we introduce a polynomial-based G-TRANS gradient estimator that cannot be incorporated into any other existing generalized reparameterized gradient framework and is better than the reparameterization gradient estimator theoretically. In this section, we always assume that the base distribution q(z, θ) can be factorized as where N is the dimension of the random variable z, θ i is a slice of θ and θ i share no component with θ j if i = j. We consider an ad-hoc velocity field family: We always assume v θ ah to be continuous which guarantees the existence of the solution to the velocity field equation. We verify in the Appendix. A.3 that v θ ah (z, θ) satisfy the unbiasedness constraint if h(z, θ) is bounded. 
It's easy to see that the gradient estimator that from v θ ah is more general than the score function method or reparameterization trick since they are two special cases when h(z, θ) = 0 or h(z, θ) = f (z) respectively. In this paper, we mainly consider a more special family of the v θ ah (z, θ): where, but their properties are similar (we present some theoretical of v θ dp in the Appendix. A.4). Therefore we only consider v θ poly (z, θ) here. We refer to v θ poly as polynomial velocity field. Proposition 4.1. For distributions with analytical high order moments such as Gamma, Beta or Dirichlet distribution, the expectation are polynomials of random variable z. Therefore, for distribution with analytical high order moments, With Proposition.4.1, we can write the G-TRANS gradient for the polynomial velocity field as: Thus we can construct a G-TRANS gradient estimator based upon the polynomial velocity field with a samplez drawn from q(z, θ): The polynomial-based G-TRANS gradient estimator has a form close to control variate, thus cannot be induced by any other existing generalized reparameterized gradient framework. In the following, we show that the polynomial-based G-TRANS gradient estimator performs better than the reparameterization gradient estimator under some condition. dz N ), then the gradient estimator ed from polynomial velocity field has a smaller variance than the reparameterization gradient estimator. Proof. Since E q [P k (z, θ) ∂ ∂θ log q] can be resolved analytically, we have then by reorganizing the expression Var(− ∂f ∂zi), we can prove this proposition. As an example about how to choose a good polynomial, for ), we can obtain a polynomial-based G-TRANS gradient estimator that is better than the reparameterization gradient estimator according to the Proposition.4.2. And we can adjust the value of C i (θ) to obtain better performance. According to the approximation theory, we can always find a polynomial P k (z, θ) that is close enough to f (z), and in this case, we can dramatically reduce the variance of the ing gradient estimator. For example, within the convergence radius, we can choose P k (z, θ) to be the k-th degree Taylor polynomial of f (z) with the remainder |f (z) − P k (z, θ)| being small. In the practical situation, however, it's often difficult to estimate the coefficients of the polynomial P k (z, θ). And when k is large, we need to estimate O(N k) coefficients which is almost impossible in real-world applications. Therefore in the following experiments, we only consider k < 2. In this section, we use a Dirichlet distribution to approximate the posterior distribution for a probilistic model which has a multinomial likelihood with a Dirichlet prior. We use Gamma distributions to simulate Dirichlet distributions. If Then the problem we study here can be written as: with f (z) being the multinomial log-likelihood. We use shape parameter α = (α 1, . . ., α K) to parameterize the variational Dirichlet distribution. To construct polynomial-based G-TRANS gradient estimator for the factorized distribution K k=1 Gamma(z k ; α k, 1), we need an accurate and fast way to approximate the derivative of the lower incomplete gamma function (part of the gamma CDF) w.r.t the shape parameter. The lower incomplete gamma function γ(α, z) is a special function and does not admit analytical expression for derivative w.r.t. the shape parameter. However, for small α and z, we have In practice, we take the first 200 terms from this power series. 
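The display equation for the estimator above was lost in extraction, but its control-variate character can still be illustrated in one dimension. The sketch below is our own control-variate rendering in the spirit of the polynomial estimator, not the exact G-TRANS formula: for q(z; α) = Gamma(α, 1) the first moment E_q[z] = α is analytic, so with P_1(z) = c·z the score-function term only needs to act on the residual f(z) − P_1(z), while ∂/∂α E_q[P_1(z)] = c is added back exactly.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha, c, S = 2.0, 0.5, 200_000
f = lambda z: np.log1p(z)                      # toy integrand f(z)

z = rng.gamma(alpha, 1.0, size=S)
score = np.log(z) - digamma(alpha)             # d/dalpha log Gamma(z; alpha, 1)

g_plain = np.mean(f(z) * score)                # plain score-function estimator
g_poly = np.mean((f(z) - c * z) * score) + c   # residual * score + d/dalpha E_q[c * z]

print(g_plain, g_poly)                         # equal in expectation; g_poly has lower variance
```

Whether this coincides with the exact polynomial-based G-TRANS estimator cannot be verified from the text alone, but it shows why analytic moments (Proposition 4.1) are the key ingredient: they let part of the gradient be computed exactly rather than estimated.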
And the approximation error is smaller than 10 −9 when α < 5 and z < 20 with double-precision floating point numbers. For large α, we use a central finite difference to approximate the derivative. This approximation scheme for the lower incomplete gamma function can also be used to construct polynomial-based G-TRANS gradient estimators for distributions that can be simulated by the Gamma distribution, such as the Beta and Dirichlet distributions. We follow the experiment setting in . Fig. 1 shows the resulting variance of the first component of the gradient based on samples simulated from a Dirichlet distribution with K = 100 components, and gradients are computed with N = 100 trials. We use P 1 (z) = c · z to construct the G-TRANS gradient estimator, and we assign 0.2, 0, and −0.1 to c successively as α 1 increases. Results. From Fig. 1, we can see that the IRG method and our G-TRANS gradient estimator have obviously lower gradient variance than the RSVI (even with the shape augmentation trick) or G-REP method. Further, our G-TRANS gradient estimator outperforms the IRG method when α 1 is large, though there is no obvious difference between these two methods when α 1 is small. In this section, we study the performance of our G-TRANS gradient estimator on the Sparse Gamma deep exponential family (DEF) model with the Olivetti faces dataset, which consists of 64 × 64 gray-scale images of human faces in 8 bits. We follow the Sparse Gamma DEF setting in , where the DEF model is specified by: . C is the polynomial coefficient, B denotes shape augmentation, and the optimal concentration is α = 2. Here n is the number of observations, is the layer number, k denotes the k-th component in a specific layer, and d is the dimension of the output layer (layer 0). z n,k is a local random variable, w k,k is a global weight that connects different layers like deep neural networks, and x n,d denotes the set of observations. We use the experiment setting in . α z is set to 0.1, all priors on the weights are set to Gamma(0.1, 0.3), and the top-layer local variable priors are set to Gamma(0.1, 0.1). The model consists of 3 layers, with 100, 40, and 15 components in each. All variational Gamma distributions are parameterized by the shape and mean. For non-negative variational parameters θ, the transformation θ = log(1 + exp(ϑ)) is applied to avoid constrained optimization. In this experiment, we use the step-size sequence ρ n proposed by , with δ = 10 −16, t = 0.1, and η = 0.75. The best result of RSVI is reproduced with B = 4. We still use P 1 (z) = c · z to construct the G-TRANS gradient estimator, and we use c = −10.0 throughout. Results. From Fig. 2, we can see that G-TRANS achieves significant improvements in the first 1000 runs and exceeds RSVI, though with a slower initial improvement. G-TRANS achieves obviously better accuracy than ADVI, BBVI, G-REP and RSVI, and keeps improving the ELBO even after 75000 runs. G-TRANS is faster than the IRG in the early training stage, which means G-TRANS has a lower gradient variance. However, this speed advantage of G-TRANS gradually decreases as the step size goes down in the later training stage. There are already some lines of research focusing on extending the reparameterization trick to a larger class of distributions. G-REP generalizes the reparameterization gradient by using a standardization transformation that allows the standardization distribution to depend weakly on variational parameters. 
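As a rough illustration of how a first-degree polynomial P 1 (z) = c · z enters the estimator, the sketch below uses the control-variate form that the text says the polynomial-based estimator is close to, for a single Gamma(α, 1) factor; the full G-TRANS estimator additionally carries the velocity-field (transformation) term, which is omitted here, and f, α, c, and the sample size are illustrative choices.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
alpha = 2.0                                # q(z) = Gamma(alpha, 1)
f = lambda z: np.log1p(z) ** 2             # illustrative integrand
c = 0.2                                    # coefficient of P_1(z) = c * z

n = 50_000
z = rng.gamma(alpha, 1.0, size=n)
score = np.log(z) - digamma(alpha)         # d/d(alpha) log q(z)

# plain score-function estimator of d/d(alpha) E_q[f(z)]
g_plain = f(z) * score

# P_1(z) = c*z handled analytically: d/d(alpha) E_q[c*z] = c for Gamma(alpha, 1)
g_poly = (f(z) - c * z) * score + c

print("plain score function : mean %.4f  var %.4f" % (g_plain.mean(), g_plain.var()))
print("with P_1(z) = c*z    : mean %.4f  var %.4f" % (g_poly.mean(), g_poly.var()))
```

Both estimators are unbiased for the same gradient; tuning c (as done with the values 0.2, 0, and −0.1 above) controls how much variance the polynomial term removes.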
Our gradient model gives a more elegant expression of the generalized reparameterized gradient than that of G-REP which decomposes the gradient as g rep + g cor. Different from G-REP, our model hides the transformation behind the velocity field thus the expensive computation of the Jacobian matrix of the transformation is evaded. And it's more flexible to construct gradient estimator with the velocity field than the very detailed transformation. The RSVI develops a similar generalized reparameterized gradient model with the tools from rejection sampling literatures. RSVI introduces a score function gradient term to compensate the gap that is caused by employing the proposal distribution of a rejection sampler as a surrogate distribution for reparameterization gradient, although the score function gradient term can often be ignored in practice to reduce the gradient variance at the cost of small bias. Unlike RSVI, our gradient estimator can be constructed with deterministic procedure which avoids the additional stochasticity introduced by the accept-reject steps thus lower gradient variance. The path-wise derivative is closely related to our model. They obtain the transport equation by an analog to the displacement of particles, while we derive the transport equation for reparameterization gradient by rigorous mathematical deduction. The path-wise gradient model can be seen as a special case of our G-TRANS gradient model. Their work only focus on standard reparameterization gradient while our model can admit generalized transformation-based gradient. The velocity field used in their work must conform to the transport equation while we only require the velocity field to satisfy the unbiasedness constraint. The implicit reparameterization gradient (IRG) differentiates from the path-wise derivative only by adopting a different method for multivariate distributions. There are also some other works trying to address the limitations of standard reparameterization. applies implicit reparameterization for mixture distributions and uses approximations to the inverse CDF to derive gradient estimators. Both work involve expensive computation that cannot be extended to large-scale variational inference. expressed the gradient in a similar way to G-REP and automatically estimate the gradient in the context of stochastic computation graphs, but their work is short of necessary details therefore cannot be applied to general variational inference task directly. ADVI transforms the random variables such that their support are on the reals and then approximates transformed random variables with Gaussian variational posteriors. However, ADVI struggles to approximate probability densities with singularities as noted by. We proposed a generalized transformation-based (G-TRANS) gradient model which extends the reparameterization trick to a larger class of variational distributions. Our gradient model hides the details of transformation by introducing the velocity field and provides a flexible way to construct gradient estimators. Based on the proposed gradient model, we introduced a polynomial-based G-TRANS gradient estimator that cannot be induced by any other existing generalized reparameterization gradient framework. In practice, our gradient estimator provides a lower gradient variance than other state-of-the-art methods, leading to a fast converging process. For future work, We can consider how to construct G-TRANS gradient estimators for distributions that don't have analytical high-order moments. 
We can also utilize the from the approximation theory to find certain kinds of high-order polynomial functions that can approximate the test function effectively with cheap computations for the coefficients. Constructing velocity fields with the optimal transport theory is also a promising direction. A.1 PROOF OF THEOREM.3.1 We assume that transformed random variable ρ = φ(z, θ) is of the same dimension as z. And we assume that there exists ψ(ρ, θ) that satisfy the constraint z = ψ(φ(z, θ), θ). Firstly, by the change-of-variable technique, we have Take derivative w.r.t θ (any component of θ) at both sizes, we have With the rule of determinant derivation, we have Substitute the Equ.19 into Equ.18, we have, we obtain the first of the Theorem.3.1. As for the second part, we have Thus we obtain the second part of the Theorem.3.1. Proof ends. As a by-product, if we make ∂ ∂θ w(φ(z, θ), θ) = 0, we can obtain the transport equation for the reparameterization trick: And ∂ ∂θ w(φ(z, θ), θ) = 0 also means that the standardization distribution is independent with θ which is the core of the reparameterization trick. For the simplicity of the proof, we only consider the 1-dimensional here. And denote where with the unbiased constraint, we have E q(z,θ) [r(z, θ)] = E q(z,θ) [f (z) ∂q(z,θ) ∂θ q(z,θ) ] = const, so we need to consider the term E q(z,θ) [(r θ (z, θ)) 2 ] only. According to the Euler-Lagrange equation, we have Simplify it, we have (f ∂q ∂θ Then we have Thus we have which is usually intractable in real world practice. Here we verify that v θ ah (z, θ) satisfy the unbiasedness constraint if h(z, θ) is bounded. Let θ i be any component of θ i, then If h(z, θ) is bounded, we have Therefore, E q θ [If we take the dual polynomial velocity field v θ dp in the G-TRANS framework, we can reach a dual to the Proposition.4.2: Proposition A.1. If Cov(P k ∂ log q(z,θ) ∂θ, (2f − P k) ∂ log q(z,θ) ∂θ ) > 0, then the gradient estimator ed from dual polynomial velocity field has a smaller gradient variance than the score function gradient estimator. The proof is similar to that of Proposition.4.2. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJlrlm8NsH | a generalized transformation-based gradient model for variational inference |
To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependences. For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes. Experimental on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent. Both topic and language models are widely used for text analysis. Topic models, such as latent Dirichlet allocation (LDA) (; ;) and its nonparametric Bayesian generalizations , are well suited to extract document-level word concurrence patterns into latent topics from a text corpus. Their modeling power has been further enhanced by introducing multilayer deep representation (; ; ; ; ; . While having semantically meaningful latent representation, they typically treat each document as a bag of words (BoW), ignoring word order . Language models have become key components of various natural language processing (NLP) tasks, such as text summarization , speech recognition , machine translation , and image captioning (; ; ; ;). The primary purpose of a language model is to capture the distribution of a word sequence, commonly with a recurrent neural network (RNN) or a Transformer based neural network (; ; ; ; 2019). In this paper, we focus on improving RNN-based language models that often have much fewer parameters and are easier to perform end-to-end training. While RNN-based language models do not ignore word order, they often assume that the sentences of a document are independent to each other. This simplifies the modeling task to independently assigning probabilities to individual sentences, ignoring their orders and document context . Such language models may consequently fail to capture the long-range dependencies and global semantic meaning of a document (; . To relax the sentence independence assumption in language modeling, propose larger-context language models that model the context of a sentence by representing its preceding sentences as either a single or a sequence of BoW vectors, which are then fed directly into the sentence modeling RNN. An alternative approach attracting significant recent interest is leveraging topic models to improve RNN-based language models. use pre-trained topic model features as an additional input to the RNN hidden states and/or output. ; combine the predicted word distributions, given by both a topic model and a language model, under variational autoencoder . introduce an attention based convolutional neural network to extract semantic topics, which are used to extend the RNN cell. learn the global semantic coherence of a document via a neural topic model and use the learned latent topics to build a mixture-of-experts language model. further specify a Gaussian mixture model as the prior of the latent code in variational autoencoder, where each mixture component corresponds to a topic. 
While clearly improving the performance of the end task, these existing topic-guided methods still have clear limitations. For example, they only utilize shallow topic models with only a single stochastic hidden layer in their data generation process. Note several neural topic models use deep neural networks to construct their variational encoders, but still use shallow generative models (decoders) . Another key limitation lies in ignoring the sentence order, as they treat each document as a bag of sentences. Thus once the topic weight vector learned from the document context is given, the task is often reduced to independently assigning probabilities to individual sentences (; 2019). In this paper, as depicted in Fig. 1, we propose to use recurrent gamma belief network (rGBN) to guide a stacked RNN for language modeling. We refer to the model as rGBN-RNN, which integrates rGBN, a deep recurrent topic model, and stacked RNN , a neural language model, into a novel larger-context RNN-based language model. It simultaneously learns a deep recurrent topic model, extracting document-level multi-layer word concurrence patterns and sequential topic weight vectors for sentences, and an expressive language model, capturing both short-and long-range word sequential dependencies. For inference, we equip rGBN-RNN (decoder) with a novel variational recurrent inference network (encoder), and train it end-to-end by maximizing the evidence lower bound (ELBO). Different from the stacked RNN based language model in , which relies on three types of customized training operations (UPDATE, COPY, FLUSH) to extract multi-scale structures, the language model in rGBN-RNN learns such structures purely under the guidance of the temporally and hierarchically connected stochastic layers of rGBN. The effectiveness of rGBN-RNN as a new larger-context language model is demonstrated both quantitatively, with perplexity and BLEU scores, and qualitatively, with interpretable latent structures and randomly generated sentences and paragraphs. Notably, rGBN-RNN can generate a paragraph consisting of a sequence of semantically coherent sentences. Denote a document of J sentences as D = (S 1, S 2, . . ., S J), where S j = (y j,1, . . ., y j,Tj) consists of T j words from a vocabulary of size V. Conventional statistical language models often only focus on the word sequence within a sentence. Assuming that the sentences of a document are independent to each other, they often define Tj t=2 p (y j,t | y j,<t) p (y j,1). RNN based neural language models define the conditional probability of each word y j,t given all the previous words y j,<t within the sentence S j, through the softmax function of a hidden state h j,t, as where f (·) is a non-linear function typically defined as an RNN cell, such as long short-term memory (LSTM) and gated recurrent unit (GRU) . These RNN-based statistical language models are typically applied only at the word level, without exploiting the document context, and hence often fail to capture long-range dependencies.;; 2019) remedy the issue by guiding the language model with a topic model, they still treat a document as a bag of sentences, ignoring the order of sentences, and lack the ability to extract hierarchical and recurrent topic structures. We introduce rGBN-RNN, as depicted in Fig. 1(a), as a new larger-context language model. It consists of two key components: (i) a hierarchical recurrent topic model (rGBN), and (ii) a stacked RNN based language model. 
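Written out explicitly, the sentence-independent factorization and the RNN-based conditional referenced earlier in this section are as follows; this is a reconstruction from the surrounding definitions rather than a verbatim copy of the paper's numbered equations, and W_o is an assumed name for the output projection matrix.

```latex
% Sentence-independence assumption of a conventional language model
p(D) \;=\; \prod_{j=1}^{J} p(S_j), \qquad
p(S_j) \;=\; p(y_{j,1}) \prod_{t=2}^{T_j} p\!\left(y_{j,t} \mid y_{j,<t}\right)

% RNN-based conditional through the softmax of a hidden state
p\!\left(y_{j,t} \mid y_{j,<t}\right) \;=\; \mathrm{softmax}\!\left(W_o\, h_{j,t}\right), \qquad
h_{j,t} \;=\; f\!\left(h_{j,t-1},\, y_{j,t-1}\right)
```

Here f(·) is an RNN cell such as an LSTM or GRU, as stated in the text.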
We use rGBN to capture both global semantics across documents and long-range inter-sentence dependencies within a document, and use the language model to learn the local syntactic relationships between the words within a sentence. Similar to;, we represent a document as a sequence of sentence-context pairs as + summarizes the document excluding S j, specifically (S 1, ..., S j−1, S j+1, ..., S J), into a BoW count vector, with V c as the size of the vocabulary excluding stop words. Note a naive way is to treat each sentence as a document, use a dynamic topic model to capture the temporal dependencies of the latent topic-weight vectors, which is fed to the RNN to model the word sequence of the corresponding sentence. However, the sentences are often too short to be well modeled by a topic model. In our setting, as d j summarizes the document-level context of S j, it is in general sufficiently long for topic modeling. Note during testing, we redefine d j as the BoW vector summarizing only the preceding sentences, i.e., S 1:j−1, which will be further clarified when presenting experimental . Fig. 1 (a), to model the time-varying sentence-context count vectors d j in document D, the generative process of the rGBN component, from the top to bottom hidden layers, is expressed as where θ l j ∈ R K l + denotes the gamma distributed topic weight vectors of sentence j at layer l, the transition matrix of layer l that captures cross-topic temporal dependencies, the loading matrix at layer l, K l the number of topics of layer l, and τ 0 ∈ R + a scaling hyperparameter. At j = 1, θ + can be factorized into the sum of Φ l+1 θ l+1 j, capturing inter-layer hierarchical dependence, and Π l θ l j−1, capturing intra-layer temporal dependence. rGBN not only captures the document-level word occurrence patterns inside the training text corpus, but also the sequential dependencies of the sentences inside a document. Note ignoring the recurrent structure, rGBN will reduce to the gamma belief network (GBN) of , which can be considered as a multi-stochastic-layer deep generalization of LDA (a). If ignoring its hierarchical structure (i.e., L = 1), rGBN reduces to Poisson-gamma dynamical systems . We refer to the rGBN-RNN without its recurrent structure as GBN-RNN, which no longer models sequential sentence dependencies; see Appendix A for more details. Different from a conventional RNN-based language model, which predicts the next word only using the preceding words within the sentence, we integrate the hierarchical recurrent topic weight vectors θ l j into the language model to predict the word sequence in the jth sentence. Our proposed language model is built upon the stacked RNN proposed in; , but with the help of rGBN, it no longer requires specialized training heuristics to extract multi-scale structures. As shown in Fig. 1 (b), to generate y j,t, the t th token of sentence j in a document, we construct the hidden states h l j,t of the language model, from the bottom to top layers, as where LSTM l word denotes the word-level LSTM at layer l, W e ∈ R V are word embeddings to be learned, and x j,t = y j,t−1. Note a, the conditional probability of y j,t becomes There are two main reasons for combining all the latent representations a 1:L j,t for language modeling. First, the latent representations exhibit different statistical properties at different stochastic layers of rGBN-RNN, and hence are combined together to enhance their representation power. 
Second, having "skip connections" from all hidden layers to the output one makes it easier to train the proposed network, reducing the number of processing steps between the bottom of the network and the top and hence mitigating the "vanishing gradient" problem . To sum up, as depicted in Fig. 1 (a), the topic weight vector θ l j of sentence j quantifies the topic usage of its document context d j at layer l. It is further used as an additional feature of the language model to guide the word generation inside sentence j, as shown in Fig. 1 (b). It is clear that rGBN-RNN has two temporal structures: a deep recurrent topic model to extract the temporal topic weight vectors from the sequential document contexts, and a language model to estimate the probability of each sentence given its corresponding hierarchical topic weight vector. Characterizing the word-sentencedocument hierarchy to incorporate both intra-and inter-sentence information, rGBN-RNN learns more coherent and interpretable topics and increases the generative power of the language model. Distinct from existing topic-guided language models, the temporally related hierarchical topics of rGBN exhibit different statistical properties across layers, which better guides language model to improve its language generation ability. For rGBN-RNN, given, the marginal likelihood of the sequence of sentence-context where e The inference task is to learn the parameters of both the topic model and language model components. One naive solution is to alternate the training between these two components in each iteration: First, the topic model is trained using a sampling based iterative algorithm provided in; Second, the language model is trained with maximum likelihood estimation under a standard cross-entropy loss. While this naive solution can utilize readily available inference algorithms for both rGBN and the language model, it may suffer from stability and convergence issues. Moreover, the need to perform a sampling based iterative algorithm for rGBN inside each iteration limits the scalability of the model for both training and testing. Algorithm 1 Hybrid SG-MCMC and recurrent autoencoding variational inference for rGBN-RNN. Set mini-batch size m and the number of layer L Initialize encoder and neural language model parameter parameter Ω, and topic model parameter Randomly select a mini-batch of m documents consisting of J sentences to form a subset X = {di,1:,j according to, and update Ω; Sample θ l i,j from and via and {Φ l} L l=1, will be described in Appendix C; end for To this end, we introduce a variational recurrent inference network (encoder) to learn the latent temporal topic weight vectors θ, the ELBO of the log marginal likelihood shown in can be constructed as which unites both the terms that are primarily responsible for training the recurrent hierarchical topic model component, and terms for training the neural language model component. Similar to, we define q(θ, a random sample from which can be obtained by transforming standard uniform variables To capture the temporal dependencies between the topic weight vectors, both k l j and λ l j, from the bottom to top layers, can be expressed as where h Rather than finding a point estimate of the global parameters of the rGBN, we adopt a hybrid inference algorithm by combining TLASGR-MCMC described in Cong et al. (2017a); and our proposed recurrent variational inference network. 
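The Weibull variational posterior used above admits a simple reparameterized sampler through its inverse CDF, which is one way to realize "transforming standard uniform variables"; a minimal sketch, assuming the parameterization Weibull(k, λ) with CDF 1 − exp(−(x/λ)^k):

```python
import torch

def sample_weibull(k, lam, eps=1e-6):
    """Reparameterized draw theta ~ Weibull(k, lam):
    if u ~ Uniform(0, 1), then lam * (-log(1 - u))**(1/k) has the right law,
    and gradients flow back to the variational parameters k and lam."""
    u = torch.rand_like(lam).clamp(eps, 1.0 - eps)
    return lam * (-torch.log(1.0 - u)) ** (1.0 / k)

k = torch.tensor([2.0], requires_grad=True)
lam = torch.tensor([1.5], requires_grad=True)
theta = sample_weibull(k, lam)
theta.sum().backward()                  # gradients w.r.t. the variational parameters
print(theta.item(), k.grad.item(), lam.grad.item())
```

In the model, k and λ at each layer are produced by the recurrent inference network, so this transformation is what lets the ELBO gradient reach the encoder parameters.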
In other words, the global parameters can be sampled with TLASGR-MCMC, while the parameters of the language model and variational recurrent inference network, denoted by Ω, can be updated via stochastic gradient descent (SGD) by maximizing the ELBO in. We describe a hybrid variational/sampling inference for rGBN-RNN in Algorithm 1 and provide more details about sampling with TLASGR-MCMC in Appendix C. We defer the details on model complexity to Appendix E. To sum up, as shown in Fig. 1(c), the proposed rGBN-RNN works with a recurrent variational autoencoder inference framework, which takes the document context of the jth sentence within a document as input and learns hierarchical topic weight vectors θ 1:L j that evolve sequentially with j. The learned topic vectors in different layer are then used to reconstruct the document context input and as an additional feature for the language model to generate the jth sentence. We consider three publicly available corpora, including APNEWS, IMDB, and BNC. The links, preprocessing steps, and summary statistics for them are deferred to Appendix D. We consider a recurrent variational inference network for rGBN-RNN to infer θ l j, as shown in Fig. 1(c), whose number of hidden units in are set the same as the number of topics in the corresponding layer., which extracts the global semantic coherence of a document via a neural topic model, with the probability of each learned latent topic further adopted to build a mixture-of-experts language model; (vii) TGVAE , combining a variational auto-encoder based neural sequence model with a neural topic model; (viii) GBN-RNN, a simplified rGBN-RNN that removes the recurrent structure of its rGBN component. For rGBN-RNN, to ensure the information about the words in the jth sentence to be predicted is not leaking through the sequential document context vectors at the testing stage, the input d j in only summarizes the preceding sentences S <j. For GBN-RNN, following TDLM and TCNLM, all the sentences in a document, excluding the one being predicted, are used to obtain the BoW document context. As shown in Table 1, rGBN-RNN outperforms all baselines, and the trend of improvement continues as its number of layers increases, indicating the effectiveness of assimilating recurrent hierarchical topic information. rGBN-RNN consistently outperforms GBN-RNN, suggesting the benefits of exploiting the sequential dependencies of the sentence-contexts for language modeling. Moreover, comparing Table 1 and Table 4 of Appendix E suggests rGBN-RNN, with its hierarchical and temporal topical guidance, achieves better performance with fewer parameters than comparable RNN-based baselines. Note that for language modeling, there has been significant recent interest in replacing RNNs with Transformer , which consists of stacked multi-head self-attention modules, and its variants (; ; ; 2019). While Transformer based language models have been shown to be powerful in various natural language processing tasks, they often have significantly more parameters, require much more training data, and take much longer to train than RNN-based language models. For example, Transformer-XL with 12L and that with 24L , which improve Transformer to capture longer-range dependencies, have 41M and 277M parameters, respectively, while the proposed rGBN-RNN with three stochastic hidden layers has as few as 7.3M parameters, as shown in Table 4, when used for language modeling. 
From a structural point-of-view, we consider the proposed rGBN-RNN as complementary to rather than competing with Transformer based language models, and consider replacing RNN with Transformer to construct rGBN guided Transformer as a promising future extension., we use test-BLEU to evaluate the quality of generated sentences with a set of real test sentences as the reference, and self-BLEU to evaluate the diversity of the generated sentences . Given the global parameters of the deep recurrent topic model (rGBN) and language model, we can generate the sentences by following the data generation process of rGBN-RNN: we first generate topic weight vectors θ L j randomly and then downward propagate it through the rGBN as in to generate θ <L j. By assimilating the random draw topic weight vectors with the hidden states of the language model in each layer depicted in, we generate a corresponding sentence, where we start from a zero hidden state at each layer in the language model, and sample words sequentially until the end-of-the-sentence symbol is generated. Comparisons of the BLEU scores between different methods are shown in Fig. 2, using the benchmark tool in Texygen ; We show below BLEU-3 and BLEU-4 for BNC and defer the analogous plots for IMDB and APNEWS to Appendix G and H. Note we set the validation dataset as the ground-truth. For all datasets, it is clear that rGBN-RNN yields both higher test-BLEU and lower self-BLEU scores than related methods do, indicating the stacked-RNN based language model in rGBN-RNN generalizes well and does not suffer from mode collapse (i.e., low diversity). Hierarchical structure of language model: In Fig. 3, we visualize the hierarchical multi-scale structures learned with the language model of rGBN-RNN and that of GBN-RNN, by visualizing the L 2 -norm of the hidden states in each layer, while reading a sentence from the APNEWS validation set as "the service employee international union asked why cme group needs tax relief when it is making huge amounts of money?" As shown in Fig. 3(a), in the bottom hidden layer (h1), the L 2 norm sequence varies quickly from word to word, except within short phrases such "service employee", "international union," and "tax relief," suggesting layer h1 is in charge of capturing short-term local dependencies. By contrast, in the top hidden layer (h3), the L 2 norm sequence varies slowly and exhibits semantic/syntactic meaningful long segments, such as "service employee international union," "asked why cme group needs tax relief," "when it is," and "making huge amounts of," suggesting that layer h3 is in charge of capturing long-range dependencies. Therefore, the language model in 48 budget lawmakers gov. revenue vote proposal community legislation 57 lawmakers pay proposal legislation credit session meeting gambling 60 budget gov. revenue vote costs mayor california conservative 57 generated sentence: the last of the four companies and the mississippi inter national speedway was voted to accept the proposal. 60 generated sentence: adrian on thursday issued an officer a news release saying the two groups will take more than $ 40,000 for contacts with the private nonprofit. 48 generated sentence: looming monday, the assembly added a proposal to balance the budget medicaid plan for raising the rate to $ 142 million, whereas years later, to $ 200 million. lawmaker proposal legislation approval raising audit senate 75 generated sentence: the state senate would give lawmakers time to accept the retirement payment. 
48-57-75 generated sentence: the proposal would give them a pathway to citizenship for the year before, but they don't have a chance to participate in the elections. inc gambling credit assets medicaid investment 62 generated sentence: the gambling and voting department says it was a chance of the game. 48-57-62 generated sentence: the a r k a n s a s s e n a t e h a s purchased a $ 500 million state bond for a proposed medicaid expansion for a new york city. 11 budget revenue loan gains treasury incentives profits 11 generated sentence: the office of north dakota has been offering a $ 22 million bond to a $ 68 m i l l i o n b u d g e t w i t h t h e proceeds from a escrow account. 48-60-11 generated sentence: a new report shows the state has been hit by a number of shortcomings in jindal's budget proposal for the past decade. 84 gov. vote months conservation ballot reform fundraising 84 generated sentence: the u.s. sen. joe mccoy in the democratic party says russ of the district, must take the rest of the vote on the issues in the first half of the year. it was partially the other in the republican caucus that raised significant amounts of spending last year and ended with cuts from a previous government shutdown. Figure 4: Topics and their temporal trajectories inferred by a three-hidden-layer rGBN-RNN from the APNEWS dataset, and the generated sentences under topic guidance (best viewed in color). Top words of each topic at layer 3, 2, 1 are shown in orange, yellow and blue boxes respectively, and each sentence is shown in a dotted line box labeled with the corresponding topic index. Sentences generated with a combination of topics in different layers are at the bottom of the figure. rGBN-RNN can allow more specific information to transmit through lower layers, while allowing more general higher level information to transmit through higher layers. Our proposed model have the ability to learn hierarchical structure of the sequence, despite without designing the multiscale RNNs on purpose like. We also visualize the language model of GBN-RNN in Fig. 3(b); with much less smoothly time-evolved deeper layers, GBN-RNN fails to utilize its stacked RNN structure as effectively as rGBN-RNN does. This suggests that the language model is much better trained in rGBN-RNN than in GBN-RNN for capturing long-range temporal dependencies, which helps explain why rGBN-RNN exhibits clearly boosted BLEU scores in comparison to GBN-RNN. We present an example topic hierarchy inferred by a three-layer rGBN-RNN from APNEWS. In Fig. 4, we select a large-weighted topic at the top hidden layer and move down the network to include any lower-layer topics connected to their ancestors with sufficiently large weights. Horizontal arrows link temporally related topics at the same layer, while top-down arrows link hierarchically related topics across layers. For example, topic 48 of layer 3 on "budget, lawmakers, gov., revenue," is related not only in hierarchy to topic 57 on "lawmakers, pay, proposal, legislation" and topic 60 of the lower layer on "budget, gov., revenue, vote, costs, mayor," but also in time to topic 35 of the same layer on "democratic, taxes, proposed, future, state." Highly interpretable hierarchical relationships between the topics at different layers, and temporal relationships between the topics at the same layer are captured by rGBN-RNN, and the topics are often quite specific semantically at the bottom layer while becoming increasingly more general when moving upwards. 
Sentence generation under topic guidance: Given the learned rGBN-RNN, we can sample the sentences both conditioning on a single topic of a certain layer and on a combination of the topics from different layers. Shown in the dotted-line boxes in Fig. 4, most of the generated sentences conditioned on a single topic or a combination of topics are highly related to the given topics in terms of their semantical meanings but not necessarily in key words, indicating the language model is successfully guided by the recurrent hierarchical topics. These observations suggest that rGBN-RNN has successfully captured syntax and global semantics simultaneously for natural language generation. Sentence/paragraph generation conditioning on a paragraph: Given the GBN-RNN and rGBN-RNN learned on APNEWS, we further present the generated sentences conditioning on a paragraph, as shown in Fig. 5. To randomly generate sentences, we encode the paragraph into a hierarchical latent representation and then feed it into the stacked-RNN. Besides, we can generate a paragraph with rGBN-RNN, using its recurrent inference network to encode the paragraph into a dynamic hierarchical latent representation, which is fed into the language model to predict the word sequence Document Generated Sentences with GBN-RNN Generated Sentences with rGBN-RNN the proposal would also give lawmakers with more money to protect public safety, he said. the proposal, which was introduced in the house on a vote on wednesday, has already passed the senate floor to the house. Generated temporal Sentences with rGBN-RNN (Paragraph) the senate sponsor (…), a house committee last week removed photo ids issued by public colleges and universities from the measure sponsored by republican rep. susan lynn, who said she agreed with the change. the house approved the bill on a 65-30 vote on monday evening. but republican sen. bill ketron in a statement noted that the upper chamber overwhelmingly rejected efforts to take student ids out of the bill when it passed 21-8 earlier this month. ketron said he would take the bill to conference committee if needed. if the house and senate agree, it will be the first time they'll have to seek their first meeting. the city commission voted last week to approve the law, which would have allowed the council to approve the new bill. senate president pro tem joe scarnati said the governor's office has never resolved the deadline for a vote in the house. the proposal is a new measure version of the bill to enact a senate committee to approve the emergency manager's emergency license. the house gave the bill to six weeks of testimony, but the vote now goes to the full house for consideration. jackson signed his paperwork wednesday with the legislature.the proposal would also give lawmakers with more money to protect public safety, he said. "a spokesman for the federal department of public safety says it has been selected for a special meeting for the state senate to investigate his proposed law. a new state house committee has voted to approve a measure to let idaho join a national plan to ban private school systems at public schools. the campaign also launched a website at the university of california, irvine, which are studying the current proposal. in each sentence of the input paragraph. It is clear that both the proposed GBN-RNN and rGBN-RNN can successfully capture the key textual information of the input paragraph, and generate diverse realistic sentences. 
Interestingly, the rGBN-RNN can generate semantically coherent paragraphs, incorporating contextual information both within and beyond the sentences. Note that with the topics that extract the document-level word cooccurrence patterns, our proposed models can generate semantically-meaningful words, which may not exist in the original document. We propose a recurrent gamma belief network (rGBN) guided neural language modeling framework, a novel method to learn a language model and a deep recurrent topic model simultaneously. For scalable inference, we develop hybrid SG-MCMC and recurrent autoencoding variational inference, allowing efficient end-to-end training. Experiments conducted on real world corpora demonstrate that the proposed models outperform a variety of shallow-topic-model-guided neural language models, and effectively generate the sentences from the designated multi-level topics or noise, while inferring interpretable hierarchical latent topic structure of document and hierarchical multiscale structures of sequences. For future work, we plan to extend the proposed models to specific natural language processing tasks, such as machine translation, image paragraph captioning, and text summarization. Another promising extension is to replace the stacked-RNN in rGBN-RNN with Transformer, i.e., constructing an rGBN guided Transformer as a new larger-context neural language model. GBN-RNN: {y 1:T, d} denotes a sentence-context pair, where d ∈ Z Vc + represents the document-level context as a word frequency count vector, the vth element of which counts the number of times the vth word in the vocabulary appears in the document excluding sentence y 1:T. The hierarchical model of a L-hidden-layer GBN, from top to bottom, is expressed as The stacked-RNN based language model described in is also used in GBN-RNN. Statistical inference: To infer GBN-RNN, we consider a hybrid of stochastic gradient MCMC (; ; ; ; a), used for the GBN topics φ l k, and auto-encoding variational inference , used for the parameters of both the inference network (encoder) and RNN. More specifically, GBN-RNN generalizes Weibull hybrid auto-encoding inference (WHAI) of: it uses a deterministic-downward-stochastic-upward inference network to encode the bag-of-words representation of d into the latent topic-weight variables θ l across all hidden layers, which are fed into not only GBN to reconstruct d, but also a stacked RNN in language model, as shown in, to predict the word sequence in y 1:T. The topics φ l k can be sampled with topic-layeradaptive stochastic gradient Riemannian (TLASGR) MCMC, whose details can be found in Cong et al. (2017a);, omitted here for brevity. Given the sampled topics φ l k, the joint marginal likelihood of {y 1:T, d} is defined as ) is used to provide a ELBO of the log joint marginal likelihood as and the training is performed by maximizing E pdata(, where both k l and λ l are deterministically transformed from d using neural networks. Distinct from a usual variational auto-encoder whose inference network has a pure bottom-up structure, the inference network here has a determisticupward-stoachstic-downward ladder structure . is the written portion of the British National Corpus . Following the preprocessing steps in , we tokenize words and sentences using Stanford CoreNLP , lowercase all word tokens, and filter out word tokens that occur less than 10 times. For the topic model, we additionally exclude stopwords 2 and the top 0.1% most frequent words. 
All these corpora are partitioned into training, validation, and testing sets, whose summary statistics are provided in Table 2 of the Appendix. in, and the parameters of the variational recurrent inference network (encoder), consisting of RNN The language model component is parameterized by LSTM l word in and the coupling vectors g l described in Appendix B. We summarize in Table 3 the complexity of rGBN-RNN (ignoring all bias terms), where V denotes the vocabulary size of the language model, E the dimension of word embedding vectors, V c the size of the vocabulary of the topic model that excludes stop words, H w l the number of hidden units of the word-level LSTM at layer l (stacked-RNN language model), H s l the number of hidden units of the sentence-level RNN at layer l (variational recurrent inference network), and K l the number of topics at layer l. Table 4 further compares the number of parameters between various RNN-based language models, where we follow the convention to ignore the word embedding layers. Some models in Table 1 are not included here, because we could not find sufficient information from their corresponding papers or code to accurately calculate the number of model parameters. Note when used for language generation at the testing stage, rGBN-RNN no longer needs its topics {Φ l}, whose parameters are hence not counted. Note the number of parameters of the topic model component is often dominated by that of the language model component. Left panel is BLEU-3 and right is BLEU-4, and a better BLEU score would fall within the lower right corner, where black point represents mean value and circles with different colors denote the elliptical surface of probability of BLEU in a two-dimensional space. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Byl1W1rtvH | We introduce a novel larger-context language model to simultaneously capture syntax and semantics, making it capable of generating highly interpretable sentences and paragraphs
We present a novel approach to train a natural media painting agent using reinforcement learning. Given a reference image, our formulation is based on stroke-based rendering that imitates human drawing and can be learned from scratch without supervision. Our painting agent computes a sequence of actions that represent the primitive painting strokes. In order to ensure that the generated policy is predictable and controllable, we use a constrained learning method and train the painting agent using the environment model so that it follows the commands encoded in an observation. We have applied our approach on many benchmarks, and our results demonstrate that our constrained agent can handle different painting media and different constraints in the action space to collaborate with humans or other agents. Throughout human history, painting has been an essential element of artistic creation. There are many diverse and complex artistic domains with various styles such as watercolor, oil painting, sketching, and so on. As image processing and computer graphics have advanced, there has been a considerable effort to simulate these styles using non-photorealistic rendering (NPR) techniques .; generate compelling results using stroke-based rendering. However, most prior methods in NPR are engineered for a specific application or task, and cannot easily adapt to new styles or media. Recent developments in machine learning have resulted in significant advancements in computer vision and computer graphics, including computer-based painting systems. Many visual generative methods based on generative adversarial networks as;;;; have demonstrated promising results. Many of these machine learning methods have also been applied to stroke-based rendering tasks, including modeling the brush , generating brushstroke paintings in an artist's style , reconstructing drawings for specific painting styles , and constructing stroke-based drawings (Ha & Eck (2017a);;; Jia et al. (2019a) ). In this paper, we focus on a more general and challenging problem of training a natural media painting agent for interactive applications. Given a reference image, our goal is to develop a stroke-based rendering approach that can imitate the human drawing or strokes used in generating the image. A key challenge is to develop a method that can learn from scratch without any supervision. In this regard, we present a technique that can handle all inputs and train an agent to manipulate natural painting media such as charcoal, pencil, watercolor, and so on. We build a model-based natural media environment using deep CNN and train a natural media painting agent using model-based reinforcement learning. In order to introduce controls to the agents for interactive applications, we use a constraint representation along with a different framework for training and use the constrained painting agent. These constraints enable the agent to interact with a human or other agents and generate various styles without retraining the model. The novel contributions of our work include: • A method to train an agent that produces a stream of actions subject to constraints for each action. These constraints can include restricting the start location, stroke width, color, and other stroke parameters. • A method to roll out constrained agents so the user can produce new stylistic effects interactively or automatically, as the agent is painting, by modulating the action stream. 
• By incorporate coarse-to-fine strategy, our painting agents can generate high-resolution stylized images using various constraints and paintbrush configurations. We evaluate our algorithm on different paintbrush configurations and datasets to highlights its benefits over prior reinforcement learning based methods. We also employ differing constraint settings to validate our constrained agents and produce new stylistic effects with a single trained model. In this paper, we focus on the stroke-based rendering problem, which renders the reference image with brush strokes. In contrast to image analogy approaches (; ; ; ; ; ; Li et al. (2017;), stroke-based approaches can generate intermediate painting states for interactive purposes. They can also be deployed in various painting environment to generate different artistic effects. One category of approaches for stroke-based rendering uses a heuristics-based method to guide the agent while painting. Notable examples include the method in , which can produce stylized illustrations, and the method in , which can reproduce a colorful painting in different styles. However, it is difficult to extend these methods to different styles of painting because of the hand-engineered features. Another category of approaches uses machine learning techniques to learn the policy. Compared with predefined policies, these methods use machine learning techniques to learn the painting policy, which enables the agent to generate a more natural . Ha & Eck (2017b) train an RNN to learn the latent space of the sketch data and generate the paintings in this latent space, which requires the paired dataset for training. Other approaches use deep reinforcement learning to learn the policy without supervised data. There have been a few attempts to tackle related problems in this domain. Xie et al. (2012; 2013) propose a series of works to simulate strokes using reinforcement learning and inverse reinforcement learning. These prior approaches learn a policy either from reward functions or expert demonstrations. Comparing with these approaches, we use a more general setup which does not rely on rewards engineering. is based on the Deep Q network. trains the discriminator and the reinforcement learning framework at the same time. However, both methods can work on either a small action space or a small observation space. Jia et al. (2019b; a) use proximal policy optimization with curriculum learning and self-supervision to gradually increase the sampled action space and the frequency of positive rewards. train a differentiable painting environment model, which helps learn the painting policy with gradient-based methods. train a model-based RL using differentiable environment and DDPG . This approach is limited by the OpenCV based renderer and the uncontrollable agent. An overview of our approach is given in Fig. 1 This includes the training phase and extracting the underlying neural network as the painting agent is used for the roll-out process. We highlight all the symbols used in the paper in Table 1. In section 4, we present the details of natural media painting agent with a environment model for the agent to interact. Our approach is based on the training scheme described in and uses Deep Deterministic Policy Gradient (DDPG) to train the modelbased painting agent. For the roll-out algorithm, we apply the painting agent directly to the real environment R r as shown in Figure 1. 
In section 5, we present our representation of the underlying constraints and the techniques to encode and decode these constraints using reinforcement learning. We use the unconstrained agent to identify the constraints, and encode them as observations to train the constrained painting agent, as shown in Figure 3. Figure 1: We use an actor-critic-based reinforcement learning framework to train the painting agent. For each step, the current states of the canvas and the reference image form the observation for the policy network. Based on the observation, the painting agent predicts an action for the neural renderer R n to execute and updates the canvas accordingly. Roll-out of the Natural Media Painting Agent (right): We extract the actor neural network as the painting agent for the roll-out process. We replace the neural renderer R n with the real renderer R r to get the precise synthetic results. In this section, we present our algorithm for training a natural media environment model and training a painting agent to interact with this environment model. The renderer of our painting agent generates the corresponding canvas from the given action, and implements the blending functions and other synthetic programs. Unlike the previous learning-based painting approaches Jia et al. (2019a);; , we use a natural media painting renderer, MyPaint (libmypaint contributors), for our experiment. Compared with self-defined painting simulators, it provides rich visual effects and sophisticated blending functions. use the same environment but have not explored the various configurations of the paintbrush. Using a pre-existing environment setup helps our approach concentrate on the learning algorithm and makes it easy to generalize. The action of our painting agent consists of the configurations it uses to interact with the environment. For the action space, we follow the stroke definition of using a quadratic Bezier curve (QBC) of three points ((x 0, y 0), (x 1, y 1), (x 2, y 2)). Each stroke is a 3-point curve. The pressure, which affects the blending function, is linearly interpolated between the values at the start position (x 0, y 0) and the end position (x 2, y 2). We use one color (R, G, B) for each stroke, and the transparency of the pixels within the stroke is affected by the pressure (p 0, p 1). We use one brush size r for each stroke, and the actual stroke width within the stroke can also be affected by the pressure (p 0, p 1). Formally, we represent the action a t as a t = (x 0, y 0, x 1, y 1, x 2, y 2, p 0, p 1, R, G, B, r). In practice, a t ∈ R 12. Each value is normalized to [0, 1]. The action is in a continuous space, which makes it possible to control the agent precisely. The observation of our painting agent consists of the information it needs to make decisions. In our setup, the observation is represented by the reference image s *, the current canvas s t at step t, and, for the constrained agent, the constraint c t. Formally, o t = (s *, s t); for the constrained agent described in section 5, o t = (s *, s t, c t). c t is a vector drawn from a sub-space of the action space. To unify the observation representation for implementation, we upsample the constraint vector as a bitmap with the size of s t to feed into the CNN of the policy network and the value network. Reward is a metric that enables our painting agent to measure the effectiveness of an action. Aside from the reinforcement learning algorithm, the definition of the reward can be seen as guidance for the policy π. For our problem, the goal of the painting agent is to reproduce the reference image s *. 
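A small sketch of the 12-dimensional stroke action described above and of one reward normalization consistent with the stated range r_t ∈ (−∞, 1]; the ordering of the action components and the specific form (L_t − L_{t+1}) / L_t are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# One stroke action: Bezier control points, pressures, color, and brush size.
# All entries are assumed to be normalized to [0, 1]; the ordering is illustrative.
ACTION_FIELDS = ["x0", "y0", "x1", "y1", "x2", "y2", "p0", "p1", "R", "G", "B", "r"]

def random_action(rng):
    return rng.uniform(0.0, 1.0, size=len(ACTION_FIELDS))   # a_t in R^12

def l2_loss(canvas, reference):
    return float(np.mean((canvas - reference) ** 2))

def reward(canvas_t, canvas_t1, reference, eps=1e-8):
    """r_t = (L(s_t) - L(s_{t+1})) / L(s_t): positive when the stroke reduces the
    loss, upper-bounded by 1 and unbounded below, matching r_t in (-inf, 1]."""
    l_t = l2_loss(canvas_t, reference)
    l_t1 = l2_loss(canvas_t1, reference)
    return (l_t - l_t1) / (l_t + eps)

rng = np.random.default_rng(0)
a = random_action(rng)
print(dict(zip(ACTION_FIELDS, np.round(a, 2))))
```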
First, we define the loss function L between the current canvas s t at step t and the reference image s *. In practice, we use l 2 loss (Jia et al. (2019a) ) and WGAN loss ) for implementation. Then, we use the difference of the losses of two continuous steps as the reward r t. We normalize r t using Eq.2, such that r t ∈ (−∞, 1]. where L is a loss function defined as l 2 or WGAN. For our reinforcement learning setting, the objective is to maximize the discounted accumulative rewards q t = tmax t=1 r t γ t, where the discount factor γ ∈ For this painting environment, we use the rendering function R to represent the transition function. The action only modifies the current painting state s t with the current action a t as s t+1 = R(a t, s t). Inspired by and , we model the behaviors of the real environment renderer R r using a neural network as R n. We use a 4-layer fully connected neural network followed by 3 convolutional neural networks.) with different paintbrush configurations. The top row is rendered by the real renderer R r and the bottom row is rendered by the neural renderer R n. There are two main benefits of modeling the environment. First, the reinforcement learning algorithm may have millions of steps, and it can be computationally expensive to use the real environment because of sophisticated synthetic algorithms. If we simplify the environment model, it can save training time. Second, modeling the transition function of states using neural network makes the entire framework differentiable, and the loss can be backpropagated through the renderer network R n. In practice, we model R r by building a dataset consisting of paired data (a t, s t, s t+1). To simplify the problem, we always start from a blank canvas s 0 so that the R n only need to predict the stroke image from a single action. At run-time, we synthesize the blending function by making the stroke image transparent and adding it to the current canvas s t. The paired dataset {a (i), s (i) } can be collected by randomly sampling the a i and synthesising the corresponding s i. We treat different paintbrushes of MyPaint (libmypaint contributors) as separate models. It is worth noting that we use the environment model R n for the training process, but use the real environment R r for the roll-out process. After we train the natural media painting agent, we make the agent controllable to handle interactive scenarios and be used for generating stylized images. To achieve this goal, one straightforward way is to constrain the agent with a modified action space. However, this method is difficult to generalize to different constraints. For each different constraint c in the same subspace C in action space A, the policy needs to be retrained. For example, if we train a painting agent and constrain its stroke to width 0.8, we still need to train it again when we need an agent to output a stroke with width 0.9. In the rest of this section, we propose a constrained painting agent that, can follow any constraint c drawn from the subspace of action space A. First, we define the constraint representation, including the definition of the constraint, the encoding function, and the decoding function. Then we introduce the training and roll-out schemes of the constrained agent. The constraint is the vector c in the constraint space C, which is the sub-space of the action space A. 
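Returning to the environment model R n described earlier in this section, the sketch below is a minimal PyTorch rendition of a differentiable stroke renderer in the spirit of the stated architecture (fully connected layers followed by convolutional stages mapping a 12-dimensional action to a single-channel stroke image); the layer widths, output resolution, and activations are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class NeuralStrokeRenderer(nn.Module):
    """Maps a 12-D stroke action to a (1, 64, 64) stroke alpha map, i.e. R_n(a)."""
    def __init__(self, action_dim=12, base=16):
        super().__init__()
        self.fc = nn.Sequential(                      # four fully connected layers
            nn.Linear(action_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, base * 16 * 16), nn.ReLU(),
        )
        self.conv = nn.Sequential(                    # three convolutional stages
            nn.Upsample(scale_factor=2), nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.base = base

    def forward(self, action):
        x = self.fc(action).view(-1, self.base, 16, 16)
        return self.conv(x)                           # (batch, 1, 64, 64) stroke image

renderer = NeuralStrokeRenderer()
stroke = renderer(torch.rand(8, 12))                  # train with a pixel loss against R_r(a)
print(stroke.shape)
```

At run time, the predicted stroke image is treated as a transparency mask and composited onto the current canvas, matching the blending scheme the text describes.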
While fixing the vector c, the agent has to explore the subtraction of A and C as A by sampling a from A and concatenating with c: We can define C by selecting and combining dimensions in A. For example, we can constrain the color of the stroke by defining C as the color space (R, G, B). We can constrain the start position of the stroke by defining C as (x 0, x 1). We can constrain the stroke width of the stroke by defining C as (r). Moreover, we can use the combination of the constraints to achieve complex effects. As shown in Figure 3, the unconstrained agent takes the reference image and the current canvas (s *, s t) as observation while the constrained agent takes the reference (s *, s t, c t) as an observation, which has an additional constraint c t at step t. c t is a vector drawn from the sub-space of the action space. To unify the observation representation for implementation, we encode the constraint vector c t as a bitmap to feed into the CNN of the policy network and the value network. To encode the constraint c t into observation, we upsample the constraint c t as a matrix c t which has the same size of s * and s t, to stack them and feed into our policy network and our value network. To decode the constraint c t into c t, we implement a downsample module within the policy network. The policy network pi can be seen as separate policy: π c and π a; π c outputs the downsampled constraint c and π a outputs the constrained action a. Then we concatenate a and c to form action a. Formally, we have c t = π c (c t), a t = π a (s *, s t, c t), and a t = a t ⊕ c t. After we introduce the constraint c t as part of the observation of the painting agent, we propose the corresponding training scheme. Because the constraint representation is designed for interactive purposes, we need to use either human experts or another agent to train the constrained agent. As shown in Figure 3, we use an unconstrained agent to generate constraints by cascading them together. For each step t, the unconstrained agent takes the reference image and the current canvas (s *, s t) as observations, and outputs the action a in action space A. Then we identify a subspace in A as constraint space C and transfer a t to c t, followed by upsampling the c t as c t, as defined in section 4. After that, the constrained agent takes the additional constraint c t as an observation (s *, s t, c t) and outputs action a t concatenated by c t and a. Figure 3: Framework for Training (Roll-out) the Constrained Agent We train the constrained agent by cascading an unconstrained agent. First, we identify a subspace of the action space and extract the constraint c from the action a. Then we upsample the constraint c and pass it on as an observation of the constrained agent. We use R n as renderer. For roll-out process (dashed line), we extract the constraint from either the user defined constraint or the action of the previous step. We use real renderer R r to get the best visual quality. For the roll-out of the constrained painting agent, we replace the unconstrained agent from the training scheme in Figure 3 with the constraint extracted from either human command or a painting agent's output for each step t. When the constraint is extracted from a human command, the agent can be ordered to place a paint stroke with a given position (x 0, y 0), color (R, G, B), and stroke width r. 
When the constraint is extracted from painting agent's output, it can be seen that we split the one agent that explores in entire action space A into two agents that explore separate action spaces A 0 and A 1, where A = A 0 + A 1. For training the natural media painting environment, we treat different paintbrushes as different models. We run 50,000 iterations for each model and record the l 2 loss on the validation dataset as shown in Table 2. The learning curves of the training processes are shown in Figure 4 (left). We train the unconstrained painting agent using the fixed envionment model with various dataset. We use hand-written digits images , character images (, face images (, object images as train painting agents. We run 10,000 episodes for each training task. Because of the difference among the dataset, we use 5 strokes to reproduce hand-written digits images, 20 strokes to reproduce character images, 100 strokes to reproduce face and object images. We measure the l 2 loss between the reference images and reproduced images throughout the training process shown as Figure 4 (middle). The reproduced is shown as Figure 5, and the corresponding l 2 loss is shown as Table 3 (left). Figure 5: We trained our natural media painting agents using MNIST, KanjiVG, CelebA, and ImageNet as reference images (left). We generate (right) using R r for the training and validating process. For the roll-out process, we employ a coarse-to-fine strategy to increase the resolution of the shown as Figure 6. First, we roll out the agent with a low-resolution reference image s * and get the reproductionŝ *. Then we divide s * andŝ * into patches and feed the agent as initial observation. The roll-out using various natural media are shown as Figure 7. We train the constrained painting agents using the learning parameters as unconstrained painting agents. We compute the l 2 distance between the reference images and reproduce for the training process with various constraint configurations. To control variates, we use same neural environment R n (charcoal) and dataset (CelebA) for these experiments. We use the color (R, G, B), the stroke width r, the start position (x 0, y 0) and all of them (x 0, y 0, r, R, G, B) as constraints. The learning curves of these constraints are shown as Figure 4 (right) and the l2 loss is shown as Table 3(right). We demonstrate the constrained painting agents as Figure 2, which uses pencil as paintbrush and incorporates coarse-to-fine strategy by dividing the reference images as 4×4 patches. To constrain the color of strokes, we build a color palette by clustering the colors of the reference image. For each action a, we constrain it from a color randomly selected from the color palette. To constrain the stroke width, we use a constant stroke width for the roll-out process. Figure 6: Coarse-to-fine Roll-out We roll out the trained agent using a coarse-to-fine strategy. We first roll out the model treating the reference image and canvas as one patch. Then we divide the reference image and updated canvas into patches and feed to the agent. In this figure, we roll out 100 strokes in each patch. In this paper, we train natural media painting agents that can generate artistic paintings using various natural media, and collaborate with humans and other agents to get different visual effects. We build a model of natural media environment using deep CNN and train a natural media painting agent using model-based reinforcement learning. 
To introduce controls to the agents for interactive purposes, we propose constraint representation, a framework for training a constrained painting agent, and various roll-out schemes to apply the agent. We demonstrate our algorithm by applying the trained model using various paintbrushes from MyPaint and constraints set up. The experimental show that our algorithm can reproduce reference images in multiple artistic styles. For future work, we aim to extend the proposed algorithm by building a unified model for differing paintbrush configuration. In addition, we will train a hierarchical agent that uses a constrained agent as the low-level policy. We would like to apply our approach on other reference images and use for interactive painting systems. A APPENDIX Figure 9: Roll-out using Various PaintbrushesWe roll out our natural media painting agents trained with various brushes in MyPaint. To increase the resolutions of the generated images, we incorporate the coarse-to-fine strategy. We use 8 × 8 patches for first row and 4 × 4 for second row. Figure 10: Reproduction of Starry Night using Charcoal We roll out our natural media painting agent trained with charcoal brush in MyPaint to reproduce Van Gogh's starry night. We incorporate the coarse-to-fine strategy by dividing the reference image and canvas into 16 × 16 patches. Figure 11: Reproduction of Starry Night using Watercolor We roll out our natural media painting agent trained with watercolor brush in MyPaint to reproduce Van Gogh's starry night. We incorporate the coarse-to-fine strategy by dividing the reference image and canvas into 16 × 16 patches. | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SyeKGgStDB | We train a natural media painting agent using environment model. Based on our painting agent, we present a novel approach to train a constrained painting agent that follows the command encoded in the observation. |
Delusional bias is a fundamental source of error in approximate Q-learning. To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates. In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are "consistent" with the underlying greedy policy class. We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain (jointly) consistent with the expressible policy class. We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature (implicit) policy commitments. Experimental demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically. Q-learning lies at the heart of many of the recent successes of deep reinforcement learning (RL) (;, with recent advancements (e.g., van ; ; ;) helping to make it among the most widely used methods in applied RL. Despite these successes, many properties of Q-learning are poorly understood, and it is challenging to successfully apply deep Q-learning in practice. When combined with function approximation, Q-learning can become unstable (; ; ;). Various modifications have been proposed to improve convergence or approximation error (; 1999; Szepesvári & ; ; ;); but it remains difficult to reliably attain both robustness and scalability. identified a source of error in Q-learning with function approximation known as delusional bias. It arises because Q-learning updates the value of state-action pairs using estimates of (sampled) successor-state values that can be mutually inconsistent given the policy class induced by the approximator. This can in unbounded approximation error, divergence, policy cycling, and other undesirable behavior. To handle delusion, the authors propose a policy-consistent backup operator that maintains multiple Q-value estimates organized into information sets. Each information set has its own backed-up Q-values and corresponding "policy commitments" responsible for inducing these values. Systematic management of these sets ensures that only consistent choices of maximizing actions are used to update Q-values. All potential solutions are tracked to prevent premature convergence on any specific policy commitments. Unfortunately, the proposed algorithms use tabular representations of Q-functions, so while this establishes foundations for delusional bias, the function approximator is used neither for generalization nor to manage the size of the state/action space. Consequently, this approach is not scalable to RL problems of practical size. In this work, we propose CONQUR (CONsistent Q-Update Regression), a general framework for integrating policy-consistent backups with regression-based function approximation for Q-learning and for managing the search through the space of possible regressors (i.e., information sets). With suitable search heuristics, our framework provides a computationally effective means for minimizing the effects of delusional bias in Q-learning, while admitting scaling to practical problems. Our main contributions are as follows. First we define novel augmentations of standard Q-regression to increase the degree of policy consistency across training batches. 
While testing exact consistency is expensive, we introduce an efficient soft-consistency penalty that promotes consistency of new labels with earlier policy commitments. Second, drawing on the information-set structure of , we define a search space over Q-regressors to allow consideration of multiple sets of policy commitments. Third, we introduce heuristics for guiding the search over regressors, which is critical given the combinatorial nature of information sets. Finally, we provide experimental on the Atari suite demonstrating that CONQUR can offer (sometimes dramatic) improvements over Q-learning. We also show that (easy-to-implement) consistency penalization on its own (i.e., without search) can improve over both standard and double Q-learning. We assume a discounted, infinite horizon Markov decision process (MDP), M = (S, A, P, p 0, R, γ). The state space S can reflect both discrete and continuous features, but we take the action space A to be finite (and practically enumerable). We consider Q-learning with a function approximator Q θ to learn an (approximately) optimal Q-function , drawn from some approximation class parameterized by Θ (e.g., the weights of a neural network). When the approximator is a deep network, we generically refer to the algorithm as DQN, the method at the heart of many recent RL successes (; . For online Q-learning, at a transition s, a, r, s, the Q-update is given by: Batch versions of Q-learning, including DQN, are similar, but fit a regressor repeatedly to batches of training examples . Batch methods are usually more data efficient and stable than online Q-learning. Abstractly, batch Q-learning works through a sequence of (possibly randomized) data batches D 1, · · · D T to produce a sequence of regressors Q θ1,..., Q θ T = Q θ, estimating the Q-function. 1 For each (s, a, r, s) ∈ D k, we use a prior estimator Q θ k−1 to bootstrap the Q-label q = r + γ max a Q θ k−1 (s, a). We then fit Q θ k to this training data using a suitable regression procedure with an appropriate loss function. Once trained, the (implicit) induced policy π θ is the greedy policy w.r.t. Q θ, i.e., π θ (s) = arg max a∈A Q θ (s, a). Let F(Θ), resp. G(Θ), be the corresponding class of expressible Q-functions, resp. greedy policies. Intuitively, delusional bias occurs whenever a backed-up value estimate is derived from action choices that are not (jointly) realizable in G(Θ) . Standard Q-updates back up values for each (s, a) pair by independently choosing maximizing actions at the corresponding next states s. However, such updates may be "inconsistent" under approximation: if no policy in G(Θ) can jointly express all past action choices, backed up values may not be realizable by any expressible policy. show that delusion can manifest itself with several undesirable consequences. Most critically, it can prevent Q-learning from learning the optimal representable policy in G(Θ); it can also cause divergence. To address this, they propose a non-delusional policy consistent Q-learning (PCQL) algorithm that provably eliminates delusion. We refer to the original paper for details, but review the main concepts we need to consider below. The first key concept is that of policy consistency. For any S ⊆ S, an action assignment σ S: S → A associates an action σ(s) with each s ∈ S. We say σ is policy consistent if there is a greedy policy π ∈ G(Θ) s.t. π(s) = σ(s) for all s ∈ S. We sometimes equate a set SA of state-action pairs with an implied assignment π(s) = a for all (s, a) ∈ SA. 
If SA contains multiple pairs with the same state s, but different actions a, it is a multi-assignment (though we loosely use the term "assignment" in both cases when there is no risk of confusion). In (batch) Q-learning, each successive regressor uses training labels generated by assuming maximizing actions (under the prior regressor) are taken at its successor states. Let σ k reflect the collection of states and corresponding maximizing actions taken to generate labels for regressor Q θ k (assume it is policy consistent). Suppose we train Q θ k by bootstrapping on Q θ k−1 and consider a training sample (s, a, r, s). Q-learning generates label r + γ max a Q θ k−1 (s, a) for input (s, a). Notice, however, that taking action a * = argmax a Q θ k (s, a) at s may not be policy consistent with σ k. Thus Q-learning will estimate a value for (s, a) assuming the execution of a policy that cannot be realized given the limitations of the approximator. The PCQL algorithm prevents this by insisting that any action assignment σ used to generate bootstrapped labels is consistent with earlier assignments. Notice that this means Q-labels will often not be generated using maximizing actions relative to the prior regressor. The second key concept is that of information sets. One will generally not be able to use maximizing actions to generate labels, so tradeoffs can be made when deciding which actions to assign to different states. Indeed, even if it is feasible to assign a maximizing action a to state s early in training, say at batch k, since it may prevent assigning a maximizing a to s later, say batch k +, we may want to consider a different assignment to s to give more flexibility to maximize at other states later. PCQL doesn't try to anticipate the tradeoffs-rather it maintains multiple information sets, each corresponding to a different assignment to the states seen in the training data so far. Each gives rise to a different Q-function estimate, ing in multiple hypotheses. At the end of training, the best hypothesis is the one maximizing expected value w.r.t. an initial state distribution. PCQL provides strong convergence guarantees, but it is a tabular algorithm: the function approximator retricts the policy class, but is not used to generalize Q-values. Furthermore, its theoretical guarantees come at a cost: it uses exact policy consistency tests-tractable for linear approximators, but not practical for large problems; and it maintains all consistent assignments. As a , PCQL cannot be used for large RL problems of the type tackled by DQN. We develop the CONQUR framework to provide a practical approach to reducing delusion in Qlearning, specifically addressing the limitations of PCQL identified above. CONQUR consists of three main components: a practical soft-constraint penalty that promotes policy consistency; a search space to structure the search over multiple regressors (information sets, action assignments); and heuristic search schemes (expansion, scoring) to find good Q-regressors. We assume a set of training data consisting of quadruples (s, a, r, s), divided into (possibly nondisjoint) batches D 1,... D T for training. This perspective is quite general: online RL corresponds to |D i | = 1; off-line batch training (with sufficiently exploratory data) corresponds to a single batch (i.e., T = 1); and online or batch methods with replay are realized when the D i are generated by sampling some data source with replacement. 
For any data batch D, let χ(D) = {s : (s, a, r, s) ∈ D} denote the collection of successor states of D. An action assignment σ D for D is an assignment (or multi-assignment) from χ(D) to A: this dictates which action σ D (s) is considered "maximum" for the purpose of generating a Q-label for pair (s, a); i.e., (s, a) will be assigned training label r + γQ(s, σ(s)) rather than r + γ max a ∈A Q(s, a). The set of all such assignments is Σ(D) = A χ(D); note that it grows exponentially with |D|. 2 This is simple policy consistency, but with notation that emphasizes the policy class. Let Σ Θ (D) denote the set of all Θ-consistent assignments over D. The union σ 1 ∪ σ 2 of two assignments (over D 1, D 2, resp.) is defined in the usual way. Enforcing strict Θ-consistency as regressors θ 1, θ 2,..., θ T are generated is computationally challenging. Suppose assignments σ 1,..., σ k−1, used to generate labels for D 1,... D k−1, are jointly Θ-consistent (let σ ≤k−1 denote their multi-set union). Maintaining Θ-consistency when generating θ k imposes two requirements. First, one must generate an assignment σ k over D k s.t. σ ≤k−1 ∪ σ k is consistent. Even testing assignment consistency can be problematic: for linear approximators this is a linear feasibility program whose constraint set grows linearly with |D 1 ∪... ∪ D k |. For DNNs, this is a complex, and much more expensive, polynomial program. Second, the regressor θ k should itself be consistent with σ ≤k−1 ∪ σ k. Again, this imposes a significant constraint on the regression optimization: in the linear case, this becomes a constrained least-squares problem (solvable, e.g., as a quadratic program); while with DNNs, it could be solved, say, using a much more complicated projected SGD. However, the sheer number of constraints makes this impractical. Rather than enforcing consistency, we propose a simple, computationally tractable scheme that "encourages" it: a penalty term that can be incorporated into the regression itself. Specifically, we add a penalty function to the usual squared loss to encourage updates of the Q-regressors to be consistent with the underlying information set, i.e., the prior action assignments used to generate its labels. When constructing θ k, let D ≤k = ∪{D j : j ≤ k}, and σ ∈ Σ Θ (D ≤k) be the collective (possibly multi-) assignment used to generate labels for all prior regressors (including θ k itself). The multiset of pairs B = {(s, σ(s))|s ∈ χ(D ≤k)}, is called a consistency buffer. The collective assignment need not be consistent (as we elaborate below), nor does the regressor θ k need to be consistent with σ. Instead, we incorporate the following soft consistency penalty when constructing θ k: where. This penalizes Q-values of actions at state s that are larger than that of action σ(s). We note that σ is Θ-consistent if and only if min θ∈Θ C θ (B) = 0. We incorporate this penalty into our regression loss for batch D k: Here Q θ k is prior estimator on which labels are bootstrapped (other prior regressors may be used). The penalty effectively acts as a "regularizer" on the squared Bellman error, where λ controls the degree of penalization, allowing a tradeoff between Bellman error and consistency with the action assignment used to generate labels. It thus promotes consistency without incurring the expense of testing strict consistency. It is a simple matter to replace the classical Q-learning update with one using a consistency penalty: This scheme is quite general. 
First, it is agnostic as to how the prior action assignments are made, which can be the standard maximizing action at each stage w.r.t. the prior regressor like in DQN, Double DQN (DDQN), or other variants. It can also be used in conjunction with a search through alternate assignments (see below). Second, the consistency buffer B may be populated in a variety of ways. Including all max-action choices from all past training batches promotes full consistency in an attempt to minimize delusion. However, this may be too constraining since action choices early in training are generally informed by very inaccurate value estimates. Hence, B may be implemented in other ways to focus only on more recent data (e.g., with a sliding recency window, weight decay, or subsampling); and the degree of recency bias may adapt during training (e.g., becoming more inclusive as training proceeds and the Q-function approaches convergence). Reducing the size of B also has various computational benefits. We discuss other practical means of promoting consistency in Sec. 5. The proposed consistency penalty resembles the temporal-consistency loss of , but our aims are very different. Their temporal consistency notion penalizes changes in a next state's Q-estimate over all actions, whereas we discourage inconsistencies in the greedy policy induced by the Q-estimator, regardless of the actual estimated values. Ensuring optimality requires that PCQL track all Θ-consistent assignments. While the set of such assignments is shown to be of polynomial size , it is still impractical to track this set in realistic problems. As such, in CONQUR we recast information set tracking as a search problem and propose several strategies for managing the search process. We begin by defining the search space and discussing its properties. We discuss search procedures in Sec. 3.4. As above, assume training data is divided into batches D 1,... D T and we have some initial Qfunction estimate θ 0 (for bootstrapping D 1 's labels). The regressor θ k for D k can, in principle, be trained with labels generated by any assignment σ ∈ Σ Θ (D k) of actions to its successor states χ(D k), not necessarily maximizing actions w.r.t. θ k−1. Each σ gives rise to a different updated Q-estimator θ k. There are several restrictions we could place on "reasonable" σ-candidates: (ii) σ is jointly Θ-consistent with all σ j, for j < k, used to construct the prior regressors on which, and this inequality is strict for at least one s. Conditions (i) and (ii) are the strict consistency requirements of PCQL. We will, however, relax these below for reasons discussed in Sec. 3.2. Condition (iii) is inappropriate in general, since we may add additional assignments (e.g., to new data) that render all non-dominated assignments inconsistent, requiring that we revert to some dominated assignment. This gives us a generic search space for finding policy-consistent, delusion-free Q-function, as illustrated in Fig k can also be viewed as an information set). We assume the root n 0 is based on an initial regression θ 0, and has an empty action assignment σ 0. Nodes at level k of the tree are defined as follows. For each node n, and its regressor θ i k is trained using the following data set: The entire search space constructed in this fashion to a maximum depth of T. See Appendix B, Algorithm 1 for pseudocode of a simple depth-first recursive specification. 
The exponential branching factor in this search tree would appear to make complete search intractable; however, since we only allow Θ-consistent "collective" assignments we can bound the size of the tree-it is polynomial in the VC-dimension of the approximator. Theorem 1. The number of nodes in the search tree is no more than VCDim(G) ) where VCDim(·) is the VC-dimension of a set of boolean-valued functions, and G is the set of boolean functions defining all feasible greedy policies under Θ: A linear approximator with a fixed set of d features induces a policy-indicator function class G with VC-dimension d, making the search tree polynomial in the size of the MDP. Similarly, a fixed ReLU DNN architecture with W weights and L layers has VC-dimension of size O(W L log W) again rendering the tree polynomially sized. Even with this bound, navigating the search space exhaustively is generally impractical. Instead, various search methods can be used to explore the space, with the aim of reaching a "high quality" regressor at some leaf of the tree (i.e., trained using all T data sets/batches). We discuss several key considerations in the next subsection. Even with the bound in Theorem 1, traversing the search space exhaustively is generally impractical. Moreover, as discussed above, enforcing consistency when generating the children of a node, and their regressors, may be intractable. Instead, various search methods can be used to explore the space, with the aim of reaching a "high quality" regressor at some (depth T) leaf of the tree. We outline three primary considerations in the search process: child generation, node evaluation or scoring, and the search procedure. Generating children. Given node n i k−1, there are, in principle, exponentially many action assignments, or children, Σ Θ (D k) (though Theorem 1 significantly limits the number of children if we enforce consistency). For this reason, we consider heuristics for generating a small set of children. Three primary factors drive these heuristics. The first factor is a preference for generating high-value assignments. To accurately reflect the intent of (sampled) Bellman backups, we prefer to assign actions to state s ∈ χ(D k) with larger predicted Q-values over actions with lower values, i.e., a preference for a over a if Q θ (s, a). However, since the maximizing assignment may be Θ-inconsistent (in isolation, or jointly with the parent's information set, or with future assignments), candidate children should merely have higher probability of a high-value assignment. The second factor is the need to ensure diversity in the assignments among the set of children. Policy commitments at stage k constrain the possible assignments at subsequent stages. In many search procedures (e.g., beam search), we avoid backtracking, so we want the policy commitments we make at stage k to offer as much flexibility as possible in later stages. The third is the degree to which we enforce consistency. There are several ways to generate such high-value assignments. We focus on just one natural technique: sampling action assignments using a Boltzmann distribution. Specifically, let σ denote the assignment (information set) of some node (parent) at level k − 1 in the tree. We can generate an assignment σ k for D k as follows. Assume some permutation s 1,..., s |D k | of χ(D k). For each s i in turn, we sample a i with probability proportional to e τ Q θ k−1 (s i,ai). 
This can be done without regard to consistency, in which case we would generally use the consistency penalty when constructing the regressor θ k for this child to "encourage" consistency rather than enforce it. If we want strict consistency, we can use rejection sampling without replacement to ensure a i is consistent with σ j k−1 ∪ σ ≤i−1 (we can also use a subset of σ j k−1 as a less restrictive consistency buffer). 3 The temperature parameter τ controls the degree to which we focus on purely maximizing assignments versus more diverse, random assignments. While stochastic sampling ensures some diversity, this procedure will bias selection of high-value actions to states s ∈ χ(D k) that occur early in the permutation. To ensure sufficient diversity, we use a new random permutation for each child. Scoring children. Once the children of some expanded node are generated (and, optionally, their regressors constructed), we need some way of evaluating the quality of each child as a means of deciding which new nodes are most promising for expansion. Several techniques can be used. We could use the average Q-label (overall, or weighted using some initial state distribution), Bellman error, or loss incurred by the regressor (including the consistency penalty or other regularizer). However, care must be taken when comparing nodes at different depths of the search tree, since deeper nodes have a greater chance to accrue rewards or costs-simple calibration methods can be used. Alternatively, when a simulator is available, rollouts of the induced greedy policy can be used evaluate the quality of a node/regressor. Notice that using rollouts in this fashion incurs considerable computational expense during training relative to more direct scoring based on properties on the node, regressor, or information set. Search Procedure. Given any particular way of generating/sampling and scoring children, a variety of different search procedures can be applied: best-first search, beam search, local search, etc. all fit very naturally within the CONQUR framework. Moreover, hybrid strategies are possible-one we develop below is a variant of beam search in which we generate multiple children only at certain levels of the tree, then do "deep dives" using consistency-penalized Q-regression at the intervening levels. This reduces the size of the search tree considerably and, when managed properly, adds only a constant-factor (proportional to beam size) slowdown to standard Q-learning methods like DQN. We now outline a specific instantiation of the CONQUR framework that can effectively navigate the large search space that arises in practical RL settings. We describe a heuristic, modified beamsearch strategy with backtracking and priority scoring. Pseudocode is provided in Algorithm 2 (see Appendix B); here we simply outline some of the key refinements. Our search process grows the tree in a breadth-first manner, and alternates between two phases. In an expansion phase, parent nodes are expanded, generating one or more child nodes with action assignments sampled from the Boltzmann distribution. For each child, we create target Q-labels, then optimize the child's regressor using consistency-penalized Bellman error (Eq. 2) as our loss. We thus forego strict policy consistency, and instead "encourage" consistency in regression. In a dive phase, each parent generates one child, whose action assignment is given by the usual max-actions selected by the parent node's regressor as in standard Q-learning. 
No additional diversity is considered in the dive phase, but consistency is promoted using consistency-penalized regression. From the root, the search begins with an expansion phase to create c children-c is the splitting factor. Each child inherits its parent's consistency buffer from which we add the new action assignments that were used to generate that child's Q-labels. To limit the size of the tree, we only track a subset of the children, the frontier nodes, selected using one of several possible scoring functions. We select the top -nodes for expansion, proceed to a dive phase and iterate. It is possible to move beyond this "beam-like" approach and consider backtracking strategies that will return to unexpanded nodes at shallower depths of the tree. We consider this below as well. Other work has considered multiple hypothesis tracking in RL. One particularly direct approach has been to use ensembling, where multiple Q-approximators are updated in parallel (Faußer & ; ;) then combined straightforwardly to reduce instability and variance. An alternative approach has been to consider population-based methods inspired by evolutionary search. For example, combine a novelty-search and quality diversity technique to improve hypothesis diversity and quality in RL. consider augmenting an off-policy RL method with diversified population information from an evolutionary algorithm. Although these techniques do offer some benefit, they do not systematically target an identified weakness of Q-learning, such as delusion. We experiment using the Atari test suite to assess the performance of CONQUR. We first assess the impact of using the consistency penalty in isolation (without search) as a "regularizer" that promotes consistency with both DQN and DDQN. We then test the modified beam search described in Appendix B to assess the full power of CONQUR. We first study the effects of introducing the soft-policy consistency in isolation, augmenting both DQN and DDQN with the consistency penalty term. We train our models using an open-source implementation of both DQN and DDQN (with the same hyperparameters). We call these modified algorithms DQN(λ) and DDQN(λ), respectively, where λ is the penalty coefficient defined in Eq. 2. Note that λ = 0 recovers the original methods. This is a lightweight modification that can be applied readily to any regression-based Q-learning method, and serves to demonstrate the effectiveness of soft-policy consistency penalty. Since we don't consider search (i.e., don't track multiple hypotheses), we maintain a small consistency buffer using only the current data batch by sampling from the replay buffer-this prevents getting "trapped" by premature policy constraints. As the action assignment is the maximizing action of some network, σ(s) can be computed easily for each batch. This in a simple algorithmic extension that adds only an additional penalty term to the original TD-loss. We train and evaluate DQN(λ) and DDQN(λ) for λ = {0.25, 0.5, 1, 1.5, 2} on 19 Atari games. In training, λ is initialized to 0 and slowly annealed to the desired value to avoid premature commitment to poor action assignments. Without annealing, the model tends fit to poorly informed action assignments during early phases of training, and thus fails to learn a good model. The best λ is generally different across games, depending on the nature of the game and the extent of delusional bias. Though a single λ = 0.5 works well across all games tested, Fig. 
2 illustrates the effect of increasing λ on two games. In Gravitar, increasing λ generally in better performance for both DQN and DDQN, whereas in SpaceInvaders, λ = 0.5 gives improvement over both baselines, but performance starts to degrade for λ = 2. We compare the performance of the algorithms using each λ value separately, as well as using the best λ for each game. Under the best λ, DQN(λ) and DDQN(λ) outperform their "potentially delusional" counterparts on all except 3 and 2 games, respectively. In 9 of these games, each of DQN(λ) and DDQN(λ) beats both baselines. With a constant λ = 0.5, each algorithm still beats their respective baseline in 11 games. These suggest that consistency penalization (independent of the general CONQUR model) can improve the performance of DQN and DDQN by addressing the delusional bias that is critical to learning a good policy. Moreover, we see that consistency penalization seems to have a different effect on learned Q-values than double Q-learning, which addresses maximization bias. Indeed, consistency penalization, when applied to DQN, can achieve gains that are greater than DDQN (in 15 games). Third, in 9 games DDQN(λ) provides additional performance gains over DQN(λ). A detailed description of the experiments and further can be found in Appendix C. Table 1: Results of CONQUR with 8 (split 2) nodes on 9 games using the proposed scoring function compared to evaluation using rollouts. We test the full CONQUR framework using the modified beam search discussed above. Rather than training a full Q-network, for effective testing of its core principles, we leverage pre-trained networks from the Dopamine package. 4. These networks have the same architecture as in and are trained on 200M frames with sticky actions using DQN. We use CONQUR to retrain only the last (fully connected) layer (implicitly freezing the other layers), which can be viewed as a linear Q-approximator over the features learned by the CNN. We run CONQUR using only 4M addtional frames to train our Q-regressors. 5 We consider splitting factors c of 2 and 4; impose a limit on the frontier size of 8 or 16; and an expansion factor of 2 or 4. The dive phase is always of length 9 (i.e., 9 batches of data), giving an expansion phase every 10 iterations. Regressors are trained using the loss in Eq. 2 and the consistency buffer comprises all prior action assignments. (See Appendix D for details, hyperparameter choices and more .) We run CONQUR with λ = {1, 10, 100, 1000} and select the best performing policy. We initially test two scoring approaches, policy evaluation using rollouts and scoring using the loss function (Bellman error with consistency penalty). Results comparing the two on a small selection of games are shown in Table 1. While rollouts, not surprisingly, tend to give rise to better-performing policies, consistent-Bellman scoring is competitive. Since the latter much less computationally intense, and does not require sampling the environment, we use it throughout our remaining experiments. We compare CONQUR with the value of the pre-trained DQN. We also evaluate a "multi-DQN" baseline that applies multiple versions of DQN independently, warm-starting from the same pretrained DQN. It uses the same number of frontier nodes as CONQUR, and is otherwise trained identically as CONQUR but with direct Bellman error (no consistency penalty). This gives DQN the same advantage of multiple-hypothesis tracking as CONQUR but without policy consistency. 
We test on 59 games, comparing CONQUR with frontier size 16 and expansion factor 4 and splitting factor 4 with backtracking (as described in the Appendix D) ed in significant improvements to the pre-trained DQN, with an average score improvement of 125% (excluding games with non-positive pre-trained score). The only games without improvement are Montezuma's Revenge, Tennis, PrivateEye and BankHeist. This demonstrates that, even when simply retraining the last layer of a highly tuned DQN network, removing delusional bias has the potential to offer strong improvements in policy performance. It is able exploit the reduced parameterization to obtain these gains with only 4M frames of training data. Roughly, a half-dozen games have outsized score improvements, including Solaris (11 times greater value), Tutankham (6.5 times) and WizardOfWor (5 times). Compared to the stronger multi-DQN baseline (with 16 nodes), CONQUR wins by at least a 10% margin in 20 games, while 22 games see improvements of 1-10% and 8 games show little effect (plus/minus 1%) and 7 games show a decline of greater than 1% (most are 1-6% with the exception of Centipede at -12% and IceHockey at -86%). Results are similar when comparing CONQUR and multi-DQN each with 8 nodes: 9 games exhibit 10%+ improvement, 21 games show 1-8% improvement, 12 games perform comparably and 7 games do worse under CONQUR. A table of complete appears in Appendix D.3, Table 4, and training curves (all games, all λ) in Fig. 11. Increasing the number of nodes from 8 to 16 generally leads to better performance for CONQUR, with 38 games achieving strictly higher scores with 16 nodes: 16 games with 10%+ improvement, 5 games tied and the remaining 16 games performing worse (only a few with a 5%+ decline). Fig. 3 shows the (smoothed) effect of increasing the number of nodes for a fixed λ = 10. The y-axis represents the rollout value of the best frontier node (i.e., the greedy policy of its Q-regressor) as a function of the training iteration. For both Alien and Solaris, the multi-DQN (baseline) training curve is similar with both 8 and 16 nodes, but CONQUR improves Alien from 3k to 4.3k while Solaris improves from 2.2k to 3.5k. Fig. 4 and Fig. 5 (smoothed, best frontier node) shows node policy values and training curves, respectively, for Solaris. When considering nodes ranked by their policy value, comparing nodes of equal rank generated by CONQUR and by multi-DQN (baseline), we see that CONQUR nodes dominate their multi-DQN counterparts: the three highest-ranked nodes achieve a score improvement of 18%, 13% and 15%, respectively, while the remaining nodes achieve improvements of roughly 11-12%. Fig. 6 (smoothed, best frontier node) shows the effects of varying λ. In Alien, increasing λ from 1 to 10 improves performance, but it starts to decline for higher values of 100 and 1000. This is similar to patterns observed in 4.1 and represents a trade-off between emphasizing consistency and not over-committing to action assignments. In Atlantis, stronger penalization tends to degrade performance. In fact, the stronger the penalization, the worse the performance. We have introduced CONQUR, a framework for mitigating delusional bias in value-based RL that relaxes some of the strict assumptions of exact delusion-free algorithms to ensure scalability. 
Its two main components are (a) a tree-search procedure used to create and maintain diverse, promising Q-regressors (and corresponding information sets); and (b) a consistency penalty that encourages "maximizing" actions to be consistent with the FA class. CONQUR embodies elements of both value-based and policy-based RL: it can be viewed as using partial policy constraints to bias the value estimator or as a means of using candidate value functions to bias the search through policy space. Empirically, we find that CONQUR can improve the quality of existing approximators by removing delusional bias. Moreover, the consistency penalty applied on its own, directly in DQN or DDQN, itself can improve the quality of the induced policies. Given the generality of the CONQUR framework, there remain numerous interesting directions for future research. Other methods for nudging regressors to be policy-consistent include exact consistency (constrained regression), other regularization schemes that bias the regressor to fall within the information set, etc. Given its flexibility, more extensive exploration of search strategies (e.g., best first), child-generation strategies, and node scoring schemes should be examined within CONQUR. Our (full) experiments should also be extended beyond those that warm-start from a DQN model, as should testing CONQUR in other domains. Other connections and generalizations are of interest as well. We believe our methods can be extended to both continuous actions and soft max-action policies. We suspect that there is a connection between maintaining multiple "hypotheses" (i.e., Q-regressors) and notions in distributional RL, which maintains distributions over action values. We describe an example, taken directly from , to show concretely how delusional bias causes problems for Q-learning with function approximation. The MDP in Fig. 7 illustrates the phenomenon: use a linear approximator over a specific set of features in this MDP to show that: (a) No π ∈ G(Θ) can express the optimal (unconstrained) policy (which requires taking a 2 at each state); (b) The optimal feasible policy in G(Θ) takes a 1 at s 1 and a 2 at s 4 (achieving a value of 0.5). (c) Online Q-learning (Eq. 1) with data generated using an ε-greedy behavior policy must converge to a fixed point (under a range of rewards and discounts) corresponding to a "compromise" admissible policy which takes a 1 at both s 1 and s 4 (value of 0.3). Algorithm 1 CONQUR SEARCH (Generic, depth-first) Training set S ← {} 4: end for end if 14: end for Q-learning fails to find a reasonable fixed-point because of delusion. Consider the backups at (s 2, a 2) and (s 3, a 2). Supposeθ assigns a "high" value to (s 3, a 2), so that Qθ(s 3, a 2) > Qθ(s 3, a 1) as required by π θ *. They show that any suchθ also accords a "high" value to (s 2, a 2). But Qθ(s 2, a 2) > Qθ(s 2, a 1) is inconsistent the first requirement. As such, any update that makes the Q-value of (s 2, a 2) higher undercuts the justification for it to be higher (i.e., makes the "max" value of its successor state (s 3, a 2) lower). This occurs not due to approximation error, but the inability of Q-learning to find the value of the optimal representable policy. The pseudocode of (depth-first) version of the CONQUR search framework is listed in Algorithm 1. As discussed in Sec. 3.5, a more specific instantiation of the CONQUR algorithm is listed in Algorithm. 2. 
Both DQN and DDQN uses a delayed version of the Q-network Q θ − (s, a) for label generation, but in a different way. In DQN, Q θ − (s, a) is used for both value estimate and action assignment σ DQN (s) = argmax a Q θ k (s, a), whereas in DDQN, Q θ − (s, a) is used only for value estimate and the action assignment is computed from the current network σ DDQN (s) = argmax a Q θ k (s, a). With respect to delusional bias, action assignment of DQN is consistent for all batches after the latest network weight transfer, as σ DQN (s) is computed from the same Q θ − (s, a) network. DDQN, on the other hand, could have very inconsistent assignments, since the action is computed from the current network that is being updated at every step. Select top scoring nodes n 1,..., n ∈ P 10: for each selected node n i do Generate c children n i,1,..., n i,c using Boltzmann sampling on D k with Q θ i 12: for each child n i,j do 13: Let assignment history σ i,j be σ i ∪ {new assignment} 14: Determine regressor θ i,j by applying update if k is a refinement ("dive") level then for each frontier node n i,j ∈ F do 25: Update regressor θ i,j by applying update to θ . In particular, we modify existing DqnAgent and DdqnAgent by adding a consistency penalty term to the original TD loss. We use TF-Agents implementation of DQN training on Atari with the default hyperparameters, which are mostly the same as that used in the original DQN paper . For conveniece to the reader, some important hyperparameters are listed in Table 2. The reward is clipped between [−1, 1] following the original DQN. We empirically evaluate our modified DQN and DDQN agents trained with consistency penalty on 15 Atari games. Evaluation is run using the training and evaluation framework for Atari provided in TF-Agents without any modifications. C.4 DETAILED Fig. 8 shows the effects of varying λ on both DQN and DDQN. Table 3 summarizes the best penalties for each game and their corresponding scores. Fig. 9 shows the training curves of the best penalization constants. Finally, Fig. 10 shows the training curves for a fixed penalization of λ = 1/2. steps, and within each window, we take the largest policy value (and over ≈2-5 multiple runs). This is done to reduce visual clutter. Our use a frontier queue of size (F) 8 or 16 (these are the top scoring leaf nodes which receive gradient updates and rollout evaluations during training). To generate training batches, we select the best node's regressor according to our scoring function, from which we generate training samples (transitions) using ε-greedy. Results are reported in Table 4 and 5, and related figures where max number of nodes are 8 or 16. We used Bellman error plus consistency penalty as our scoring function. During the training process, we also calibrated the scoring to account for the depth difference between the leaf nodes at the frontier versus the leaf nodes in the candidate pool. We calibrated by taking the mean of the difference between scores of the current nodes in the frontier with their parents. We scaled this difference by multiplying with a constant of 2.5. In our implementation, we initialized our Q-network with a pre-trained DQN. We start with the expansion phase. During this phase, each parent node splits into l children nodes and the Q-labels are generated using action assignments from the Boltzmann sampling procedure, in order to create high quality and diversified children. We start the dive phase until the number of children generated is at least F. 
In particular, with F = 16 configuration, we performed the expansion phase at the zero-th and first iterations, and then at every tenth iteration starting at iteration 10, then at 20, and so on until ending at iteration 90. In the F = 8 configuration, the expansion phase occurred at the zero-th and first iterations, then at every tenth iterations starting at iterations 10 and 11, then at iterations 20 and 21, and so on until ending at iterations 90 and 91. All other iterations execute the "dive" phase. For every fifth iteration, Q-labels are generated from action assignments sampled according to the Boltzmann distribution. For all other iterations, Q-labels are generated in the same fashion as the standard Q-learning (taking the max Q-value). The generated Q-labels along with the consistency penalty are then converted into gradient updates that applies to one or more generated children nodes. Each iteration consists of 10k transitions sampled from the environment. Our entire training process has 100 iterations which consumes 1M transitions or 4M frames. We used RMSProp as the optimizer with a learning rate of 2.5 × 10 −6. One training iteration has 2.5k gradient updates and we used a batch size of 32. We replace the target network with the online network every fifth iteration and reward is clipped between [−1, 1]. We use a discount value of γ = 0.99 and ε-greedy with ε = 0.01 for exploration. Details of hyper-parameter settings can be found in Table 6 (for 16 nodes) and Table 7 (for 8 nodes). We empirically evaluate our algorithms on 59 Atari games , and followed the evaluation procedure as in. We evaluate our agents every 10th iterations (and also the initial and first iteration) by suspending our training process. We evaluate on 500k frames, and we cap the length of the episodes for 108k frames. We used ε-greedy as the evaluation policy with ε = 0.001. Fig. 11 shows training curves of CONQUR with 16 nodes under different penalization strengths λ. Each plotted step of each training curve (including the baseline) shows the best performing node's policy value as evaluated with full rollouts. Table 4 shows the summary of the highest policy values achieved for all 59 games for CONQUR and the baseline under 8 and 16 nodes. Table 5 shows a similar summary, but without no-op starts (i.e. policy actions are applied immediately). Both the baseline and CONQUR improve overall, but CONQUR's advantage over the baseline is amplified. This may suggest that for more deterministic MDP environments, CONQUR may have even better improvements. The on 16 and 8 nodes use a splitting factor of 4 and 2, respectively. Table 4: Summary of scores with ε-greedy (ε = 0.001) evaluation with up to 30 no-op starts. Mini-batch size Size of the mini batch data used to train the Qnetwork. ε train ε-greedy policy for exploration during training. 0.01 ε eval ε-greedy policy for evaluating Q-regressors. 0.001 Training calibration parameter Calibration to adjust the difference between the nodes from the candidate pool m which didn't selected during both the expansion nor the dive phases. The calibration is performed based on the average difference between the frontier nodes and their parents. We denote this difference as. Discount factor γ Discount factor during the training process. 0.99 | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | ryg8WJSKPr | We developed a search framework and consistency penalty to mitigate delusional bias. |
The paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with disguised faces attempting fraudulent ATM transactions. The recent introduction of the Disguised Face Identification (DFI) framework proves the applicability of deep neural networks to this very problem. Nearly all ATMs nowadays incorporate a hidden camera and capture footage of their users. However, it is practically impossible for the police to track down impersonators with disguised faces from the ATM footage alone. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM, which takes appropriate steps and prevents the swindler from completing the transaction. The network is trained using a dataset of images captured in situations similar to those at an ATM. The comparatively low clutter in these images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises.

The widespread acceptance of Automated Teller Machines (ATMs) and their omnipresence in the banking sector have engendered numerous security concerns, one of the most imperative being verifying the authenticity of the user. Evidently, most ATMs across the globe simply rely on a card and a Personal Identification Number (PIN) for authentication. However, in either case, it is plausible that the user is not authorised to transact. For instance, illegal practices like phishing, shoulder surfing, card fraud and stolen cards can cause substantial monetary loss to the owner. To overcome and identify such practices, ATMs have an inbuilt camera which records 24x7. The current state-of-the-art ATM security works in the following way: after a fraudulent transaction, the owner of the corresponding bank account reports the fraud; the police then investigate and go through the footage recorded by the ATM camera to find the face of the imposter; once the face is identified, the police search for the imposter. Clearly, this security measure can be easily gamed by using artifacts or alterations like wigs, caps, eyeglasses and fake beards to cover the face for intentional disguise. As a result, BID6 stated that such face alterations can substantially degrade the performance of the system. Hence, this approach has a very low success rate, which is unacceptable in the banking sector. Additionally, BID7 explained different openings and vulnerabilities that exist at the time of transactions due to fake entries and fake cards. This chaos can be prevented by ensuring that the transaction proceeds only if the face is undisguised and reveals the identity of the user. The proposed system extracts the user's face from the footage and checks whether the face is disguised. The system is trained to identify such faces using an extensive pool of disguised and undisguised faces. If the face is disguised, the system does not allow the transaction to proceed, thereby preventing the imposter from stealing. To achieve this, the proposed system uses Deep Convolutional Neural Networks for image classification with a statistical dimensionality reduction method. Deep networks have proved to be exceptional in computer vision problems BID9 BID4. BID9 stated a three-layer cascading style which superficially captures high-level features and refines them to detect deeper features.
Analogously, the proposed system uses a five-layer architecture: the first 3 layers each comprise a convolutional layer followed by a pooling layer to learn the features of the following types of images: Disguised, Partially disguised and Undisguised. 2 PROPOSED SYSTEM 2.1 EXISTING MECHANISMS Plenty of research work has been published in response to the ATM security problems and a lot of it relates to using machine learning to authenticate users. BID3 proposed a face-based authentication as an identity test for users and the system uses facial recognition with biometric features. T. BID10 stated the applicability of image processing by amalgamating a Face Recognition System (FRS) into the identity verification process engaged in ATMs. BID2 proposed a framework to classify local facial regions of both visible and thermal face images into biometric (regions without disguise) and non-biometric (regions with disguise) classes and used the biometric patches for facial feature extraction and matching. The fact that none of the above mentioned mechanisms are implemented in the current state-of-the-art suggests that they are not at par with the requirements and have to be compromised due to certain trade-offs. The mechanisms use an extensive pool of user information and a glut of computing resources such as cloud storage and network bandwidth, which makes them infeasible and erratic. According to T. Suganya FORMULA0, the transactor's picture should be captured and matched to existing records after his card and PIN are verified. However, such computation cannot be relied upon as it highly depends on the remote server which holds the data and the available bandwidth. Such systems try to authenticate too much and exceed the computational limits of the machine. Also, emergency situations where the card owner is unavailable to authenticate the transaction are where the current systems BID3, BID10, BID5 suffer. Moreover, with reference to BID1, the Eigenface-based method can clearly be spoofed by using face masks or photos of the account holder. The proposed solution takes advantage of the fact that fraudsters hide their identity to complete the transaction and not get caught in a police investigation. To achieve this, fraudsters cover their faces with disguises while performing the transaction. We propose and demonstrate the use of a Convolutional Neural Network to identify face disguises, thereby preventing fraudulent transactions. An ATM equipped with the proposed DCNN model will not allow any user with a disguised face to complete the transaction and will prompt the user to remove the disguise to complete the transaction. The DCNN is trained to identify in real time whether the user interacting with the ATM has a disguised face or not, without consuming any additional data or remote resources. The output is swiftly reported to the ATM to proceed with the transaction based on the computed label. The proposed system makes sure that every user interacting with the ATM shows his/her identity to the ATM and essentially allows the police to identify the user corresponding to a fraudulent transaction. The DCNN model is implemented in Python 2.7 using TensorFlow (r1.3) and is executed on an Ubuntu 16.04 operating system. The Stochastic Gradient Descent method is used to train the model and Kingma and Ba's Adam algorithm to control the learning rate. The dataset used to train and test the model contains 1986 images of faces covered with any combination of 4 disguises, namely scarf, helmet, eyeglasses and fake beard. 
The images are published by BID8 and match the requirement to an acceptable level. The images are manually classified into 3 classes: disguised, undisguised and partially disguised. FIG0 exemplifies the dataset in short. Before being fed to the network, the 1986 images were randomly split into 1500 and 486 images, an approximately 75%/25% split ratio. 1500 images were used for training and the remaining 486 for testing. 30 batches of 50 randomly chosen samples were used for every training cycle. After every training cycle, the training set was shuffled to randomize the data and ensure generalization in the learning process. • Disguised: Faces which are not recognizable by humans are labelled as disguised. These images particularly contain more than one disguise and effectively hide the identity of the person in the image. 1372 images belong to the disguised class. • Partially disguised: We introduced this label to adapt the network to allow users having unintentional disguises such as spectacles and a natural beard. These images are recognizable by humans and the apparent disguises are part of the person's identity. There are 212 samples of partially disguised faces. • Undisguised: Faces showing the clear and effective identity of the person in the image are labelled as undisguised. A total of 402 images belong to the undisguised class. The Rectified Linear Unit (ReLU) activation function is used to model the neurons. The following equation defines the ReLU function: f(x) = max(0, x). The activation is simply thresholded at zero. Due to its linear and non-saturating form, it significantly accelerates the convergence of Stochastic Gradient Descent (SGD) compared to the sigmoid or tanh functions. The network parameters are initialised with a standard deviation of 0.1 to facilitate SGD. Furthermore, the softmax classification function is used to classify each sample and present the predicted output to the cost function to calculate the training error. Cross entropy is the loss measure used to measure the training error of the learner and is defined below. The training steps are executed using the tensorflow.train.AdamOptimizer function in TensorFlow, with 1e-4 as the initial learning rate and cross entropy as the cost function to be optimized. The AdamOptimizer uses Kingma and Ba's Adam algorithm to control the learning rate. Adam offers several advantages; foremost is that it uses moving averages of the parameters (momentum). BID0 discusses the reasons why this is beneficial. Simply put, this enables Adam to use a larger effective step size, and the algorithm will converge to this step size without fine tuning. DISPLAYFORM1 The DCNN model contains 3 convolutional layers, each followed by a pooling layer. After the 3rd pooling layer, two fully connected hidden layers are introduced to classify the features learned by the convolutional layers. The output layer uses the softmax function to calculate the probability of the image belonging to each class. FIG2 shows the model architecture along with specific network parameters and respective input and output image dimensions. The first convolutional layer uses 32 feature maps with kernel dimensions of (7x7x3). Similarly, the 2nd and 3rd convolutional layers use 64 and 128 feature maps and kernels with dimensions (5x5x32) and (5x5x64) respectively. All 3 pooling layers use (2x2) max pooling with a stride length of 2. The 3 convolutional layers are accompanied by bias vectors of sizes FORMULA0, FORMULA0, FORMULA0 respectively. 
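A minimal sketch of the five-layer pipeline described here, together with the fully connected head detailed in the following passage, is given below. It is written with the Keras API for brevity (the paper itself used TensorFlow r1.3's lower-level ops); the input resolution of 96x96x3 and the use of unpadded convolutions are assumptions, since the paper only specifies kernel sizes, feature-map counts, 2x2 max pooling with stride 2, two 512-unit hidden layers, a 3-way softmax output, and Adam with a 1e-4 initial learning rate.

```python
import tensorflow as tf

def build_disguise_classifier(input_shape=(96, 96, 3)):
    """Sketch of the described DCNN; input_shape is an assumption."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (7, 7), activation="relu",
                               input_shape=input_shape),       # conv1: 32 maps, 7x7x3 kernels
        tf.keras.layers.MaxPooling2D((2, 2), strides=2),        # pool1: 2x2 max pooling, stride 2
        tf.keras.layers.Conv2D(64, (5, 5), activation="relu"),  # conv2: 64 maps, 5x5x32 kernels
        tf.keras.layers.MaxPooling2D((2, 2), strides=2),        # pool2
        tf.keras.layers.Conv2D(128, (5, 5), activation="relu"), # conv3: 128 maps, 5x5x64 kernels
        tf.keras.layers.MaxPooling2D((2, 2), strides=2),        # pool3
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),          # fully connected hidden layer 1
        tf.keras.layers.Dense(512, activation="relu"),          # fully connected hidden layer 2
        tf.keras.layers.Dense(3, activation="softmax"),         # disguised / partially disguised / undisguised
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),     # Adam, 1e-4 initial learning rate
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With a different assumed input resolution the flattened feature size, and hence the first fully connected weight matrix, would change accordingly.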
The following 2 fully connected hidden layers have weight matrices of dimensions FORMULA0 and FORMULA0 respectively and bias vectors of size (512x1) each. The output layer contains 3 neurons and a weight matrix of size (3x512) with a (3x1) bias vector. In all, the network incorporates 9,962,819 trainable parameters, which are eventually learned by the network in the training step and used to classify the unknown examples while testing. The performance of the model is evaluated on a test set which is disjoint from the training set, simulating the real-life situation. It reports a final accuracy of 90.535% after around 60 training cycles. Figure 3 shows the evolution of test accuracy over 60 training cycles and Figure 4 demonstrates the accuracy for every training step for the first 5 training cycles. Neural Networks are proven to work efficiently in numerous classification and regression tasks due to their flexibility and high fault tolerance. The system using a Deep Convolutional Neural Network demonstrates high accuracy and works efficiently in real time without needing to access data on the cloud. Speed, being indispensable to banking transactions, is one of the significant advantages of this model. The performance can be significantly improved by training on a large dataset obtained from actual ATMs. | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SyhcXjy0Z | Proposed System can prevent impersonators with facial disguises from completing a fraudulent transaction using a pre-trained DCNN. |
Auto-encoders are commonly used for unsupervised representation learning and for pre-training deeper neural networks. When the activation function is linear and the encoding dimension (width of hidden layer) is smaller than the input dimension, it is well known that the auto-encoder is optimized to learn the principal components of the data distribution (Oja1982). However, when the activation is nonlinear and when the width is larger than the input dimension (overcomplete), the auto-encoder behaves differently from PCA, and in fact is known to perform well empirically for sparse coding problems. We provide a theoretical explanation for this empirically observed phenomenon, when the rectified-linear unit (ReLu) is adopted as the activation function and the hidden-layer width is set to be large. In this case, we show that, with significant probability, initializing the weight matrix of an auto-encoder by sampling from a spherical Gaussian distribution followed by stochastic gradient descent (SGD) training converges towards the ground-truth representation for a class of sparse dictionary learning models. In addition, we can show that, conditioning on convergence, the expected convergence rate is O(1/t), where t is the number of updates. Our analysis quantifies how increasing hidden layer width helps the training performance when random initialization is used, and how the norm of network weights influences the speed of SGD convergence. Consider input data x ∈ R d. An auto-encoder can be decomposed into two parts, encoder and decoder. The encoder can be viewed as a composition function s e • a e: R d → R n; the function a e: R d → R n is defined as a e (x):= W e x + b e with W e ∈ R n×d, b e ∈ R n. W e and b e are the network weights and bias associated with the encoder. s e is a coordinate-wise activation function defined as s e (y) j:= s(y j), where s: R → R is typically a nonlinear function. The decoder takes the output of the encoder and maps it back to R d. Let x e:= s e (a e (x)). The decoding function, which we denote as x̂, is defined as x̂(x e):= s d (W d x e + b d), where (W d, b d) and s d are the network parameters and the activation function associated with the decoder respectively. Suppose the activation functions are fixed before training. One can view x̂ as a reconstruction of the original signal/data using the hidden representation parameterized by (W e, b e) and (W d, b d). The goal of training an auto-encoder is to learn the "right" network parameters, (W e, b e, W d, b d), so that x̂ has low reconstruction error. Weight tying. It is folklore knowledge that training auto-encoders usually works better if one sets W d = W T e. This trick is called "weight tying", which is viewed as a trick of regularization, since it reduces the total number of free parameters. With tied weights, the classical auto-encoder is simplified as x̂(s e (a e (x))) = s d (W T s e (W x + b e) + b d). In the rest of the manuscript, we focus on a weight-tied auto-encoder with the following specific architecture: x̂ W,b (x) = W T s ReLu (a(x)) = W T s ReLu (W x + b), with s ReLu (y) i:= max{0, y i}. Here we abuse notation to use x̂ W,b to denote the encoder-decoder function parametrized by weights W and bias b. In the deep learning community, s ReLu is commonly referred to as the rectified-linear (ReLu) activation. Reconstruction error. A classic measure of reconstruction error used by auto-encoders is the expected squared loss. 
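A minimal NumPy sketch of the weight-tied ReLU auto-encoder forward pass x̂_{W,b}(x) = Wᵀ relu(Wx + b) described above, together with the squared reconstruction error it is evaluated on, may make the shapes concrete; the random initialization scale here is illustrative only.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def reconstruct(W, b, x):
    """Weight-tied ReLU auto-encoder: x_hat = W^T relu(W x + b).

    W has shape (n, d), one row per hidden neuron; b has shape (n,);
    x has shape (d,). As in the weight-tied form in the text, the decoder
    uses no separate weights, bias, or activation.
    """
    a = W @ x + b          # pre-activation a(x)
    h = relu(a)            # hidden code s_ReLu(a(x))
    return W.T @ h         # reconstruction x_hat

# Example: squared reconstruction error for one random input (overcomplete case n > d).
rng = np.random.default_rng(0)
d, n = 8, 32
W = rng.normal(size=(n, d)) / np.sqrt(d)   # illustrative initialization scale
b = np.zeros(n)
x = rng.normal(size=d)
squared_error = np.linalg.norm(reconstruct(W, b, x) - x) ** 2
```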
Assuming that the data fed to the auto-encoder is i.i.d. distributed according to an unknown distribution, i.e., x ∼ p(x), the population expected squared loss is defined as DISPLAYFORM1 Learning a "good representation" thus translates to adjusting the parameters (W, b) to minimize the squared loss function. The implicit hope is that the squared loss will provide information about what is a good representation. In other words, we have a certain level of belief that the squared loss characterizes what kind of network parameters are close to the parameters of the latent distribution p(x). This unwarranted belief leads to two natural questions that motivated our theoretical investigation: • Does the global minimum (or any of the global minima, if more than one) of L(W, b) correspond to the latent model parameters of distribution p(x)? • From an optimization perspective, since L(W, b) is non-convex in W and is shown to have exponentially many local minima, one would expect a local algorithm like stochastic gradient descent, which is the go-to algorithm in practice for optimizing L(W, b), to be stuck in local minima and only find sub-optimal solutions. Then how should we explain the practical observation that auto-encoders trained with SGD often yield good representations? Stochastic-gradient based training. Stochastic gradient descent (SGD) is a scalable variant of gradient descent commonly used in deep learning. At every time step t, the algorithm evaluates a stochastic gradient g(·) of the population loss function with respect to the network parameters using back propagation by sampling one or a mini-batch of data points. The weight and bias updates have the following generic form DISPLAYFORM2 where η t w and η t b are the learning rates for updating W and b respectively, typically set to be a small number or a decaying function of time t. The unbiased gradient estimates g(W t) and g(b t) can be obtained by differentiating the empirical loss function defined on a single data point or a mini-batch of size m. Then the stochastic or mini-batch gradient descent update can be written as DISPLAYFORM3 DISPLAYFORM4 Max-norm regularization. A common trick called "max-norm regularization" or "weight clipping" is used in training deep neural networks. In particular, after each step of stochastic gradient descent, the updated weights are forced to satisfy DISPLAYFORM5 for some constant c. This means the row norm of the weights can never exceed the prefixed constant c. In practice, whenever the l 2 -norm of a row W i,· exceeds c, the max-norm constraint is enforced by projecting the weights back to a ball of radius c. In this section, we start by defining notations. Then we introduce a norm-controlled variant of the SGD algorithm that operates on the auto-encoder architecture formalized above. Finally, we introduce assumptions on the data generating model. We use the same notation for network parameters W, b, and for activation a(·), as in Section 1. We use s(·) as a shorthand for the ReLu activation function s ReLu (·). We use capital letters, such as W or F, either to denote a matrix or an event; we use lower case letters, such as x, for vectors. W T denotes the transpose of W. We use · for the l 2 -norm of vectors and | · | for the absolute value of real numbers. 
Matrix-vector multiplication between W and x (assuming their dimensions match) is denoted by W x. The inner product of vectors x and y is denoted by ⟨x, y⟩. Organization of notations. Throughout the manuscript, we introduce notations that can be divided into "model", "algorithm", and "analysis" categories according to their utility. They are organized in TAB0 to help readers interpret our results. For example, if a reader is interested in knowing how to apply our results to parameter tuning in training auto-encoders, then she might ignore the auxiliary notations and only refer to algorithmic parameters and model parameters in TAB0, and examine how the setting of the former is influenced by the latter in Theorem 1. We assume that the algorithm has access to i.i.d. samples from an unknown distribution p(x). This means the algorithm can access stochastic gradients of the population squared-loss objective via random samples from p(x). The norm-controlled SGD variant we analyze is presented in Algorithm 1 (it can be easily extended to the mini-batch SGD version, where for each update we sample more than one data point). It is almost the same as what is commonly used in practice: it randomly initializes the weight matrix by sampling a unit spherical Gaussian, and at every step the algorithm moves in the direction of the negative stochastic gradient with a linearly decaying learning rate. However, there are two differences between Algorithm 1 and original SGD: first, we impose that the norm of the rows of W t be controlled; this is akin to the practical trick of "max-norm regularization" as explained in Section 1; second, the update of the bias is chosen differently from what is usually done in practice, which deserves additional explanation. The stochastic gradient of the bias b with respect to the squared loss can be evaluated by sampling a single data point and differentiating the empirical loss, and can be derived as DISPLAYFORM0 Since the gradient is noisy, the generic form of SGD suggests modifying b t j using the update DISPLAYFORM1 for a small learning rate η t b to mitigate noise. This amounts to stepping towards the negative gradient direction and moving a little. On the other hand, we can directly find the next update b t+1 j as the point that sets the gradient to zero, that is, we find b * such that DISPLAYFORM2 The closed form solution to this is to choose DISPLAYFORM3 This strategy, which is essentially Newton's algorithm, should perform better than gradient descent if we have an accurate estimate of the true gradient, so it would likely benefit from evaluating the gradient using a mini-batch of data. If, on the other hand, the gradient is very noisy, then this method will likely not work as well as the original SGD update. Analyzing the evolution of both W t and b t, which have dependent stochastic dynamics if we follow the original SGD update, would be a daunting task. Thus, to simplify our analysis, we assume in our analysis that we have access to this exact bias update at every step. We assume that the data x we sample follows the dictionary learning model DISPLAYFORM4 DISPLAYFORM5 Here k is the size of the dictionary, which we assume to be at least two (otherwise, the model becomes degenerate), and the true value of k is unknown to the algorithm. The rows of W * are the dictionary items; W * j satisfies DISPLAYFORM6 Let the incoherence between dictionary items be defined as λ: DISPLAYFORM7. 
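The sketch below puts the pieces of the norm-controlled SGD variant just described together in NumPy: sampling from a 1-sparse dictionary model, taking one stochastic gradient step on the squared reconstruction loss of the weight-tied ReLU auto-encoder, then renormalizing every row of W to the prescribed norm c. The gradient expression is a hand derivation for the loss 0.5·||Wᵀ relu(Wx + b) − x||² and is not copied from the paper; the bias is held fixed for simplicity rather than being reset to its closed-form minimizer, and the Gaussian noise model, noise scale, and learning-rate constants are illustrative assumptions.

```python
import numpy as np

def sample_x(W_star, eps_scale, rng):
    """Draw x = W*_j + noise with j uniform over the k dictionary rows.
    Gaussian noise is an illustrative choice; the analysis only assumes
    the noise norm is bounded."""
    k, d = W_star.shape
    j = rng.integers(k)
    return W_star[j] + eps_scale * rng.normal(size=d)

def sgd_step(W, b, x, eta, c):
    """One norm-controlled SGD step on 0.5*||W^T relu(Wx+b) - x||^2,
    followed by rescaling every row of W to norm c (bias kept fixed here)."""
    a = W @ x + b
    h = np.maximum(a, 0.0)                       # hidden code
    r = W.T @ h - x                              # residual
    act = (a > 0).astype(float)                  # ReLU derivative, coordinate-wise
    grad_W = np.outer(h, r) + np.outer(act * (W @ r), x)
    W = W - eta * grad_W
    W = c * W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return W

# Minimal loop with the decaying learning rate eta_t = c_lr / (t + t0).
rng = np.random.default_rng(0)
d, k, n = 16, 4, 64
W_star = rng.normal(size=(k, d))
W_star /= np.linalg.norm(W_star, axis=1, keepdims=True)
c, c_lr, t0 = 1.0, 0.5, 10.0
W = rng.normal(size=(n, d))
W = c * W / np.linalg.norm(W, axis=1, keepdims=True)
b = np.zeros(n)
for t in range(2000):
    x = sample_x(W_star, eps_scale=0.01, rng=rng)
    W = sgd_step(W, b, x, eta=c_lr / (t + t0), c=c)
```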
In our simplified model, the coefficient vector s ∈ {0, 1} k is assumed to be 1-sparse, with Pr(s j = 1) = 1/k DISPLAYFORM8 Finally, we assume that the noise ϵ has bounded norm: ϵ max ≤ DISPLAYFORM9 Algorithm 1 Norm-controlled SGD training Input: width parameter n; norm parameter c; learning rate parameters c, t o, δ; total number of iterations, t max. DISPLAYFORM10 While auto-encoders are often related to PCA, the latter cannot reveal any information about the true dictionary under this model even in the complete case, where d = k, due to the isotropic property of the underlying distribution. The data generating model can be equivalently viewed as a mixture model: for example, when s j = 1, it means x is of the form W * j + ϵ. When ϵ is Gaussian, the model coincides with a mixture of Gaussians model, with the dictionary items being the latent locations of individual Gaussians. Thus, we adopt the concept from mixture models, and use x ∼ C j to indicate that x is generated from the j-th component of the distribution. To formally study the convergence property of Algorithm 1, we need a measure to gauge the distance between the learned representation at time t, W t, and the ground-truth representation, W *, which may have a different number of rows. There are potentially different ways to go about this. The distance measure we use is DISPLAYFORM0 where the quantity involved is the squared sine of the angle between the two vectors, which decreases monotonically as their angle decreases, and equals zero if and only if the vectors align. Thus, DISPLAYFORM1 can be viewed as the angular distance from the best approximation in the learned hidden representations of the network to the ground-truth dictionary item W * j. And Θ(·, ·) measures this distance averaged over all dictionary items. Our main result provides recovery and speed guarantees for Algorithm 1 under our data model. Theorem 1. Suppose we have access to i.i.d. samples x ∼ p(x), where the distribution p(x) satisfies our model assumption in Section 2.2. Fix any δ ∈ (0, n e). If we train the auto-encoder with norm-controlled SGD as described in Algorithm 1, with the following parameter settings: • The row norm of the weights is set to be DISPLAYFORM2 • The bias update at t is chosen such that DISPLAYFORM3 • The learning rate of SGD is set to be η t:= c/(t + t o), with c > 2kc and DISPLAYFORM4 Then Algorithm 1 has the following guarantees: • When random initialization with i.i.d. samples from N (a spherical Gaussian) is used, the algorithm will be initialized successfully (see definition of successful initialization in Definition 1) with probability at least 1 − k exp{−n( DISPLAYFORM5 
When the neurons are initialized with samples from the unknown distribution, the analysis suggests that the number of neurons required scales as Ω(DISPLAYFORM9), which is polynomial in the dictionary size. Hence, our analysis suggests that, at least under our specific model, initializing with data is perhaps a better option than Gaussian initialization. The second statement suggests that, conditioning on a successful initialization, the algorithm will have expected convergence towards W *, measured by Θ(·, ·), of order O(1/t). If we examine the form of the bound on the convergence rate, we see that the rate will be dominated by the second term, whose constant is heavily influenced by the choice of learning rate parameter c. Explaining distributed sparse representation via gradient-based training. The main advantage of gradient-based training of auto-encoders, as revealed by our analysis, is that it simultaneously updates all its neurons in parallel, in an independent fashion. During training, a subset of neurons will specialize at learning a single dictionary item: some of them will be successful while others may fail to converge to a ground-truth representation. However, since the update of each neuron is independent (in an algorithmic sense), when a larger number of neurons is used (widening the hidden layer), it becomes more likely that each ground-truth dictionary item will be learned by some neuron, even from random initialization. Despite the simplicity of auto-encoders in comparison to other deep architectures, we still have a very limited theoretical understanding of them. For linear auto-encoders whose width n is less than the input dimension d, the seminal work of revealed their connection to online stochastic PCA. For non-linear auto-encoders, recent work Arpit et al. FORMULA1 analyzed sufficient conditions on the activation functions and the regularization term (which is added to the loss function) under which the auto-encoder learns a sparse representation. Another work showed that under a class of sparse dictionary learning models (which is more general than ours) the ground-truth dictionary is a critical point (that is, either a saddle point or a local minimum) of the squared loss function, when ReLu activation is used. We are not aware of previous work providing global convergence guarantees of SGD for non-linear auto-encoders, but our analysis techniques are closely related to recent works at the intersection of stochastic (non-convex) optimization and unsupervised learning. PCA, k-means, and sparse coding. The work of provided the first convergence rate analysis of Oja's and Krasulina's update rule for online learning of the principal component (stochastic 1-PCA) of a data distribution. The neural network corresponding to 1-PCA has a single node in the hidden layer without an activation function. We argue that a ReLu activated width n auto-encoder can be viewed as a generalized, multi-modal version of 1-PCA. This is supported by our analysis: the expected improvement of each neuron, W t s, bears a striking similarity to that obtained in that analysis. The training of auto-encoders also has a similar flavor to the online/stochastic k-means algorithm: we may view each neuron as trying to learn a hidden dictionary item, or cluster center in k-means terminology. However, there is a key difference between k-means and auto-encoders: the performance of k-means is highly sensitive to the number of clusters. 
If we specify the number of clusters, which corresponds to the network width n in our notation, to be larger than the true k, then running n-means will over-partition data from each component, and each learned center will not converge to the true component center (because they converge to the mean of the sub-component). For auto-encoders, however, even when n is much larger than k, the individual neurons can still converge to the true cluster center (dictionary item) thanks to the independent update of neurons. SGD training of auto-encoders is perhaps closest to a family of sparse coding algorithms Schnass FORMULA1;. For the latter, however, a critical hyper-parameter to tune is the threshold at which the algorithm decides to cut off insignificant signals. Existing guarantees for sparse coding algorithms therefore depend on knowing this threshold. For ReLu activated auto-encoders, the threshold is adaptively set for each neuron s at every iteration as −b t s via gradient descent. Thus, they can be viewed as a sparse coding algorithm that self-tunes its threshold parameter. In our analysis, we define an auxiliary variable DISPLAYFORM0 Note that φ(·, ·) is the squared cosine of the angle between W t s and W * j, which increases as their angle decreases. Thus, φ can be thought as as measuring the angular "closeness" between two vectors; it is always bounded between zero and one and equals one if and only if the two vectors align. Our analysis can be divided into three steps. We first define what kind of initialization enables SGD to converge quickly to the correct solution, and show that when the number of nodes in the hidden layer is large, random initialization will satisfy this sufficient condition. Then we derive expected the per-iteration improvement of SGD, conditioning on the algorithm's iterates staying in a local neighborhood (Definition 4). Finally, we use martingale analysis to show that the local neighborhood condition will be satisfied with high probability. Piecing these elements together will lead us to the proof of Theorem 1, which is in the Appendix. Covering guarantee from random initialization Intuitively, for each ground-truth dictionary item, we only require that at least one neuron is initialized to be not too far from it. Definition 1. If the rows of W o have fixed norm c > 0. Then we define the event of successful initialization as DISPLAYFORM0 Lemma 1 (Random initialization with Gaussian variables). DISPLAYFORM1 Lemma 2 (Random initialization with data points). Suppose W o ∈ R n×d is constructed by drawing X 1,..., X n from the data distribution p(x), and setting DISPLAYFORM2 Figure 1: The auto-encoder in this example has 5 neurons in the hidden layer and the dictionary has two items; in this case, g = g = 1, g = 2, and the other two neurons do not learn any ground-truth (neurons mapped to 0 are considered useless). Under unique firing condition, which holds when the dictionary is sufficiently incoherent, the red dashed connection will not take place (each neuron is learning at most one dictionary item). DISPLAYFORM3, according to the following firing map. Note that some rows in W o may not be mapped to any dictionary item, in which case we let g(s) = 0. This means such neurons are not close (in angular distance) to any ground-truth after random initialization. Also note that for some rows W o s, there might exist multiple j ∈ [k] such that g(s) = j according to our criterion in the definition. 
But when λ ≤ 1 2, which is always the case by our model assumption on incoherence, Lemma 3 shows that the assignment must be unique, in which case the mapping is well defined. Lemma 3 (Uniqueness of firing). Suppose during training, the weight matrix has a fixed norm c. At time t, for any row of weight matrix W t s, we denote by τ s,1:= max j W t s c, W * j, and we denote by DISPLAYFORM4 DISPLAYFORM5 Thus, for any s ∈ [n] with g(s) > 0, the uniqueness of firing condition holds and the mapping g is defined unambiguously. So we simplify notations on measure of distance and closeness as ∆ DISPLAYFORM6 This section lower bounds the expected increase of φ t s after each SGD update, conditioning on F t. We first show that conditioning on F t, the firing of a neuron s with g(s) = j, will indicate that the data indeed comes from the j-th component, which is characterized by event E t.Definition 3. At step t, we denote the event of correct firing of W t as DISPLAYFORM0 Definition 4. At step t, we denote the event of satisfying local condition of W t as DISPLAYFORM1 DISPLAYFORM2 for some constant B > 0 where B is a constant depending on the model parameter and the norm of rows of weight matrix. By Theorem 2, the sequence φ is conditional on E t, the correct firing condition. So showing that the correct firing event indeed holds is crucial to our overall convergence analysis. Since by Lemma 4, F t =⇒ E t, it suffices to show that F t holds. To this end, note that F t's form a nested sequence DISPLAYFORM0 We denote the limit of this sequence as DISPLAYFORM1 Theorem 3 shows that P r(F ∞) is in fact arbitrarily close to one, conditioning on FORMULA1, where similar technical difficulty arise: to show local improvement of the algorithm on a non-convex functions, one usually needs to lower bound the probability of the algorithm entering a "bad" region, which can be saddle points Ge et al. FORMULA1; DISPLAYFORM2 Then conditioning on F o, we have P r(F ∞) = 1 − δ There are several interesting questions that are not addressed here. First, as noted in our discussion in Section 2, the update of bias as analyzed in our algorithm is not exactly what is used in original SGD. It would be interesting (and difficult) to explore whether the algorithm has fast convergence when b t is updated by SGD with a decaying learning rate. Second, our model assumption is rather strong, and it would be interesting to see whether similar hold on a relaxed model, for example, where one may relax to 1-sparse constraint to m-sparse, or one may relax the finite bound requirement on the noise structure. Third, our performance guarantee of random initialization depends on a lower bound on the surface area of spherical caps. Improving this bound can improve the tightness of our initialization guarantee. Finally, it would be very interesting to examine whether similar holds for activation functions other than ReLu, such as sigmoid function. Derivation of stochastic gradients Upon receiving a data point x, the stochastic gradient with respect to W is a jacobian matrix whose (j *, i *)-th entry reads DISPLAYFORM0 For i = i *, the derivative of the second term can be written using the chain rule as DISPLAYFORM1 where we let a j *:= w j * l x l + b j *, which is the activation of the j * -th neuron upon receiving x in the hidden layer before going through the ReLu unit. 
For i = i *, the derivative of the second term can be written using product rule and chain rule as DISPLAYFORM2 Let r ∈ R d be the residual vector with r i: DISPLAYFORM3 In vector notation, the stochastic gradient of loss with respect to the j-th row of W can be written as DISPLAYFORM4 Similarly, we can obtain the stochastic gradient with respect to the j-th entry of the bias term as DISPLAYFORM5 Now let us examine the terms ∂s(aj) ∂aj and r, W j. By property of ReLu function, DISPLAYFORM6 Mathematically speaking, the derivative of ReLu at zero does not exist. Here we follow the convention used in practice by setting the derivative of ReLu at zero to be 0. In effect, the event {a j = 0} has zero probability, so what derivative to use at zero does not affect our analysis (as long as the derivative is finite).Proof of main theorem. Consider any time t > 0. By Lemma 1 and 2, the probability of successfully initializing the network can be lower bounded by DISPLAYFORM7 100k 2 }) if initialized with data Conditioning on F o * and applying Theorem 3, we get that for all t ≥ 0, P r(DISPLAYFORM8 Since F t =⇒ E t by Lemma 4, we can apply version 1 of Theorem 2 to get the expected increase in φ t s for any s such that g(s) > 0 as: DISPLAYFORM9 such that g(s) = j. Then, the inequality above translates to DISPLAYFORM10 by our choice of c and by our assumption on the initial value ∆ o s(j). Taking total expectation up to time t, conditioning on F t, and letting β denote a lower bound on β t, we get DISPLAYFORM11 where the last inequality is by the same argument as that in Lemma 8. This has the exact same form as in Lemma D.1 of. Applying it with u t:= E[∆ t s(j) |F t ], a = β, and b = (c) 2 B (note our t + t o matches their notion of t), we get DISPLAYFORM12 By the upper bound on β t, we can choose β as small as 2c kc. So we can get an upper expressed in algorithmic and model parameters as DISPLAYFORM13 The second inequality holds because by DISPLAYFORM14 Finally, DISPLAYFORM15 where the last inequality is by our requirement that c > 2kc. Proof of Lemma 1. Let u = z z, where z ∈ R d with z i ∼ N. We know that u is a random DISPLAYFORM0 where S cap (v, h) is the surface of the spherical cap centered at v with height h. By property of spherical Gaussian, we know that u is uniformly distributed on S d−1. So we can directly calculate the probability above as DISPLAYFORM1 where µ measures the area of surface. The latter ratio can be lower bounded (see Lemma 5 in the Appendix) as a function of d and h: DISPLAYFORM2 By union bound, this implies that DISPLAYFORM3 Now by our choice of the form of lower bound on the inner product, we have DISPLAYFORM4 substituting this into the function f (d, h), we get a nice form DISPLAYFORM5 Substituting this into the previous inequality written in terms of h and letting ρ = DISPLAYFORM6 DISPLAYFORM7 where V ol(·) denotes measure of volume in R d. So this lower bounds the ratio between volumes between spherical cap and the unit ball. We show that we can use this to lower bound the ratio between surface areas between spherical cap and the unit ball. Since by?, we know that the ratio between their area can be expressed exactly as DISPLAYFORM8 and the ratio between their volume DISPLAYFORM9 where I x (a, b) is the regularized incomplete beta function. By property of I x (a, b), DISPLAYFORM10 Proof of Lemma 2. For any X i ∼ C j, for any j ∈ [k]. We first claim that DISPLAYFORM11 Proof of claim. Let us consider the two-dimensional plane H determined by X i and W * j. 
Clearly, i = W * j − X i also lies in H. Let θ:= ∠(X i, W * j) denote the angle between X i and W * j. Note that cos θ = Xi Xi, W * j. Fix the norm of noise i. It is clear from elementary geometric insight that θ is maximized (and hence cos θ is minimized) when the line of X i is tangent to the ball centered at W * j with radius i. We can directly calculate the value of cos θ at this point (see FIG4 as cos θ = 1 − i 2, which finishes the proof of claim. Now, we denote two events A := {min DISPLAYFORM12 The probability of event A can be lower bounded by concentration inequality for multinomial distribution : Let n j := i∈[n] 1 {Xi∈Cj}. We get DISPLAYFORM13 where the second inequality is by Lemma 3 of. DISPLAYFORM14 where 2 is the empirical mean of i 2 for all X i, i ∈ [n] belonging to the same component C j for some j ∈ [k]. Conditioning on A, we know that the average is taken over at least n 2k samples for each component C j. By one-sided Hoeffding's inequality, DISPLAYFORM15 (note, we abuse notation B in the exponent as an upper bound on 2, as in other parts of the analysis). Thus, DISPLAYFORM16 Proof of Lemma 3. To simplify notation, we use τ 1 (τ 2) as a shorthand for τ s,1 (τ s,2). Figure 3 ) that, DISPLAYFORM17 2. It can be verified that this implies DISPLAYFORM18 Observing that τ DISPLAYFORM19 Since F o holds, we know that the firing of neuron s is unique. Let τ s,1 and τ s,2 as defined in Lemma 3.Let x ∼ C j. Consider the case g(s) = j. In this case, Lemma 3 implies that DISPLAYFORM20 We will repeatedly use the following relation, as proven by Lemma 6, DISPLAYFORM21 Now, observe that 1 c 2 − 1 < 0 and DISPLAYFORM22 So we get, DISPLAYFORM23 4k. Furthermore, since by our assumption, ≤ DISPLAYFORM24 Consider the case g(s) = j, j = j. We first upper bound b o in this case. Since 1 c 2 − 1 < 0, we would like to lower bound DISPLAYFORM25 On the other hand, DISPLAYFORM26 where the last inequality is by assumptions c > Case 0 < i ≤ t Suppose E i holds, we show that E i+1 holds for i ≤ t − 1. Let x ∼ C j for any j ∈ [k]. Since E i holds, we know that is set such that DISPLAYFORM27 DISPLAYFORM28 Consider the case x ∼ C j, we have DISPLAYFORM29 where the last inequality is by assumption τ DISPLAYFORM30 holds by our assumptions that c ≤ √ 6k, and that ≤ DISPLAYFORM31 we can apply Lemma 3 to get DISPLAYFORM32 where the last inequality holds similarly as in the base case. Lemma 6. Let τ s,1, τ s,2 be as defined in Lemma 3, and let λ be the incoherence parameter. If τ DISPLAYFORM33 Proof. Using the same argument as the proof of Lemma 3, we get DISPLAYFORM34, and the second inequality is by Lemma 8. For k ≥ 1, we define DISPLAYFORM35 We can similarly get, for k = 0,..., i, DISPLAYFORM36 Since the bound is shrinking as β increases and β ≥ 2, DISPLAYFORM37 Recursively applying the relation until we get to the term DISPLAYFORM38 Combining all these recursive inequalities with the bound on λ (k), we get Finally, we have DISPLAYFORM39 DISPLAYFORM40 Now recall that this holds for each s, that is, ∀s ∈ [n], 1 e, and t ≥ (Lemma 10. Suppose our model assumptions on parameters, c, α hold, and that our assumptions on the algorithmic parameter c in Theorem 1 holds, then DISPLAYFORM41 Proof.(1 − 1 c 2)( where the last term is greater than zero because α < k − 1 4k 2 − 3k + 1 < 2k − 1/2 2k 2 − k So the term DISPLAYFORM42 DISPLAYFORM43 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HyiRazbRb | theoretical analysis of nonlinear wide autoencoder |
Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions. In order to realize this potential of faster learning, hierarchical agents need to be able to learn their multiple levels of policies in parallel so these simpler subproblems can be solved simultaneously. Yet, learning multiple levels of policies in parallel is hard because it is inherently unstable: changes in a policy at one level of the hierarchy may cause changes in the transition and reward functions at higher levels in the hierarchy, making it difficult to jointly learn multiple levels of policies. In this paper, we introduce a new Hierarchical Reinforcement Learning (HRL) framework, Hierarchical Actor-Critic (HAC), that can overcome the instability issues that arise when agents try to jointly learn multiple levels of policies. The main idea behind HAC is to train each level of the hierarchy independently of the lower levels by training each level as if the lower level policies are already optimal. We demonstrate experimentally in both grid world and simulated robotics domains that our approach can significantly accelerate learning relative to other non-hierarchical and hierarchical methods. Indeed, our framework is the first to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces. Hierarchy has the potential to accelerate learning in sequential decision making tasks because hierarchical agents can decompose problems into smaller subproblems. In order to take advantage of these shorter horizon subproblems and realize the potential of HRL, an HRL algorithm must be able to learn the multiple levels within the hierarchy in parallel. That is, at the same time one level in the hierarchy is learning the sequence of subtasks needed to solve a task, the level below should be learning the sequence of shorter time scale actions needed to solve each subtask. Yet the existing HRL algorithms that are capable of automatically learning hierarchies in continuous domains BID11 BID4 BID1 BID15 BID9 do not efficiently learn the multiple levels within the hierarchy in parallel. Instead, these algorithms often resort to learning the hierarchy one level at a time in a bottom-up fashion. Learning multiple levels of policies in parallel is challenging due to non-stationary state transition functions. In nested, multi-level hierarchies, the transition function for any level above the ground level depends on the current policies below that level. For instance, in a 2-level hierarchy, the Figure 1: An ant agent uses a 3-level hierarchy to traverse though rooms to reach its goal, represented by the yellow cube. Π 2 uses as input the current state (joint positions θ and velocitiesθ) and goal state (yellow box) and outputs a subgoal state (green box) for Π 1 to achieve. Π 1 takes in the current state and its goal state (green box) and outputs a subgoal state (purple box) for Π 0 to achieve. Π 0 takes in the current state and goal state (purple box) and outputs a vector of joint torques.high-level policy may output a subgoal state for the low level to achieve, and the state to which this subgoal state leads will depend on the current low-level policy. 
When all policies within the hierarchy are trained simultaneously, the transition function at each level above ground level will continue to change as long as the policies below that level continue to be updated. In this setting of non-stationary transition functions, RL will likely struggle to learn the above ground level policies in the hierarchy because in order for RL methods to effectively value actions, the distribution of states to which those actions lead should be stable. However, learning multiple policies in parallel is still possible because the transition function for each level above ground level will stabilize once all lower level policies have converged to optimal or near optimal policies. Thus, RL can be used to learn all policies in parallel if each level above ground level had a way to simulate a transition function that uses the optimal versions of lower level policies. Our framework is able to simulate a transition function that uses an optimal lower level policy hierarchy and thus can learn multiple levels of policies in parallel. We introduce a new HRL framework, Hierarchical Actor-Critic (HAC), that can significantly accelerate learning by enabling hierarchical agents to jointly learn a hierarchy of policies. Our framework is primarily comprised of two components: (i) a particular hierarchical architecture and (ii) a method for learning the multiple levels of policies in parallel given sparse rewards. The hierarchies produced by HAC have a specific architecture consisting of a set of nested, goalconditioned policies that use the state space as the mechanism for breaking down a task into subtasks. The hierarchy of nested policies works as follows. The highest level policy takes as input the current state and goal state provided by the task and outputs a subgoal state. This state is used as the goal state for the policy at the next level down. The policy at that level takes as input the current state and the goal state provided by the level above and outputs its own subgoal state for the next level below to achieve. This process continues until the lowest level is reached. The lowest level then takes as input the current state and the goal state provided by the level above and outputs a primitive action. Further, each level has a certain number of attempts to achieve its goal state. When the level either runs out of attempts or achieves its goal state, execution at that level ceases and the level above outputs another subgoal. Figure 1 shows how an ant agent trained with HAC uses its 3-level policy hierarchy (π 2, π 1, π 0) to move through rooms to reach its goal. At the beginning of the episode, the ant's highest level policy, π 2, takes as input the current state, which in this case is a vector containing the ant's joint positions and velocities ([θ,θ] ), and its goal state, represented by the yellow box. π 2 then outputs a subgoal state, represented by the green box, for π 1 to achieve. π 1 takes as input the current state and its goal state represented by the green box and outputs the subgoal state represented by the purple box. Finally, π 0 takes as input the current state and the goal state represented by purple box and outputs a primitive action, which in this case is a vector of joint torques. π 0 has a fixed number of attempts to move to the purple box before π 1 outputs another subgoal state. Similarly, π 1 has a fixed number of subgoal states that it can output to try to move the agent to the green box before π 2 outputs another subgoal. 
In addition, HAC enables agents to learn multiple policies in parallel using only sparse reward functions as a result of two types of hindsight transitions. Hindsight action transitions help agents learn multiple levels of policies simultaneously by training each subgoal policy with respect to a transition function that simulates the optimal lower level policy hierarchy. Hindsight action transitions are implemented by using the subgoal state achieved in hindsight instead of the original subgoal state as the action component in the transition. For instance, when a subgoal level proposes subgoal state A, but the next level policy is unsuccessful and the agent ends in state B after a certain number of attempts, the subgoal level receives a transition in which the state B is the action component, not state A. The key outcome is that now the action and next state components in the transition are the same, as if the optimal lower level policy hierarchy had been used to achieve subgoal state B. Training with respect to a transition function that uses the optimal lower level policy hierarchy is critical to learning multiple policies in parallel, because the subgoal policies can be learned independently of the changing lower level policies. With hindsight action transitions, a subgoal level can focus on learning the sequences of subgoal states that can reach a goal state, while the lower level policies focus on learning the sequences of actions to achieve those subgoal states. The second type of hindsight transition, hindsight goal transitions, helps each level learn a goal-conditioned policy in sparse reward tasks by extending the idea of Hindsight Experience Replay (BID0) to the hierarchical setting. In these transitions, one of the states achieved in hindsight is used as the goal state in the transition instead of the original goal state. We evaluated our approach on both grid world tasks and more complex simulated robotics environments. For each task, we evaluated agents with 1, 2, and 3 levels of hierarchy. In all tasks, agents using multiple levels of hierarchy substantially outperformed agents that learned a single policy. Further, in all tasks, agents using 3 levels of hierarchy outperformed agents using 2 levels of hierarchy. Indeed, our framework is the first to show empirically that it can jointly learn 3-level hierarchical policies in tasks with continuous state and action spaces. In addition, our approach outperformed another leading HRL algorithm, HIRO (BID9), on three simulated robotics tasks. Building agents that can learn hierarchical policies is a longstanding problem in Reinforcement Learning (BID13 BID3 BID7 BID5 BID8 BID12 BID2). However, most HRL approaches either only work in discrete domains, require pre-trained low-level controllers, or need a model of the environment. There are several other automated HRL techniques that can work in continuous domains. BID11 proposed an HRL approach that can support multiple levels, as in our method. However, the approach requires that the levels are trained one at a time, beginning with the bottom level, which can slow learning. BID4 proposed Skill Chaining, a 2-level HRL method that incrementally chains options backwards from the end goal state to the start state. Our key advantage relative to Skill Chaining is that our approach can learn the options needed to bring the agent from the start state to the goal state in parallel rather than incrementally. 
BID9 proposed HIRO, a 2-level HRL approach that can learn off-policy like our approach and outperforms two other popular HRL techniques used in continuous domains: Option-Critic (BID1) and FeUdal Networks (FUN) (BID15). HIRO, which was developed simultaneously and independently of our approach, uses the same hierarchical architecture, but does not use either form of hindsight and is therefore not as efficient at learning multiple levels of policies in sparse reward tasks. We are interested in solving a Markov Decision Process (MDP) augmented with a set of goals G (each a state or set of states) that we would like an agent to learn. We define an MDP augmented with a set of goals as a Universal MDP (UMDP). A UMDP is a tuple U = (S, G, A, T, R, γ), in which S is the set of states; G is the set of goals; A is the set of actions; T is the transition probability function in which T (s, a, s′) is the probability of transitioning to state s′ when action a is taken in state s; R is the reward function; γ is the discount rate in [0, 1). At the beginning of each episode in a UMDP, a goal g ∈ G is selected for the entirety of the episode. The solution to a UMDP is a control policy π: S × G → A that maximizes the value function v π (s, g) = E π [ Σ ∞ n=0 γ n R t+n+1 | s t = s, g t = g ] for an initial state s and goal g. In order to implement hierarchical agents in tasks with continuous state and action spaces, we will use two techniques from the RL literature: (i) the Universal Value Function Approximator (UVFA) BID10 and (ii) Hindsight Experience Replay BID0. The UVFA will be used to estimate the action-value function of a goal-conditioned policy π, DISPLAYFORM0 In our experiments, the UVFAs used will be in the form of feedforward neural networks. UVFAs are important for learning goal-conditioned policies because they can potentially generalize Q-values from certain regions of the (state, goal, action) tuple space to other regions of the tuple space, which can accelerate learning. However, UVFAs are less helpful in difficult tasks that use sparse reward functions. In these tasks, when the sparse reward is rarely achieved, the UVFA will not have large regions of the (state, goal, action) tuple space with relatively high Q-values that it can generalize to other regions. For this reason, we also use Hindsight Experience Replay BID0. HER is a data augmentation technique that can accelerate learning in sparse reward tasks. HER first creates copies of the [state, action, reward, next state, goal] transitions that are created in traditional off-policy RL. In the copied transitions, the original goal element is replaced with a state that was actually achieved during the episode, which guarantees that at least one of the HER transitions will contain the sparse reward. These HER transitions in turn help the UVFA learn about regions of the (state, goal, action) tuple space that should have relatively high Q-values, which the UVFA can then potentially extrapolate to the other areas of the tuple space that may be more relevant for achieving the current set of goals. 
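A minimal sketch of the HER copy-and-relabel step just described is given below. The transition representation (a dict per step), the "final" relabeling strategy (using the last state achieved in the episode), and the name `reward_fn` for the task's sparse reward function are assumptions for illustration; HER as introduced by BID0 also supports other relabeling strategies such as sampling future achieved states.

```python
import copy

def her_relabel(episode, reward_fn):
    """Create one relabeled copy of each transition in an episode.

    `episode` is a list of dicts with keys 'state', 'action', 'reward',
    'next_state', 'goal'.  The copies replace the original goal with the
    final achieved state and recompute the sparse reward accordingly.
    """
    achieved_goal = episode[-1]["next_state"]
    relabeled = []
    for transition in episode:
        t = copy.deepcopy(transition)
        t["goal"] = achieved_goal                                # substitute achieved state as goal
        t["reward"] = reward_fn(t["next_state"], achieved_goal)  # recompute sparse reward
        relabeled.append(t)
    return relabeled
```

At least one of these copies (the one whose next state equals the relabeled goal) is then guaranteed to carry the sparse reward.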
The purpose of our framework is to efficiently learn a k-level hierarchy Π k−1 consisting of k individual policies π 0,..., π k−1, in which k is a hyperparameter chosen by the user. In order to learn π 0,..., π k−1 in parallel our framework transforms the original UMDP, DISPLAYFORM0 In the remainder of the section, we will describe these tuples at a high-level. See section 7.3 in the Appendix for the full definition of each UMDP tuple. In our approach, each level of the UMDP hierarchy learns its own deterministic policy: DISPLAYFORM0 The state space for every level i is identical to the state space in the original problem: S i = S. Since each level will learn to solve a shortest path problem with respect to a goal state, we set the goal space at each level i to be identical to the state space: G i = S. Finally, the action space at all levels except the bottom-most level is identical to the goal space of the next level down (i.e. the state space): A i = S, i > 0. These levels output subgoal states for the next lower level to achieve. The action space of the bottom-most level is identical to the set of primitive actions that are available to the agent: A 0 = A. HAC learns hierarchies of nested policies. Nesting is critical to decomposing problems because it enables agents to learn tasks requiring long sequences of primitive actions with policies that only need to learn short sequences of actions. HAC nests policies by embedding the policy at level i − 1, π i−1, into the transition function at level i, T i. The transition function at each subgoal level, T i, i > 0, will work as follows. The subgoal action selected a i by level i is assigned to be the goal of level i − 1: g i−1 = a i. π i−1 then has at most H attempts to achieve g i−1, in which H, or the maximum horizon of a subgoal action, is another parameter provided by the user. When either π i−1 runs out of H attempts or a goal g n, n ≥ i − 1, is achieved, the transition function terminates and the agent's current state is returned. Level i's state transition function T i thus depends on the full policy hierarchy below level i, Π i−1, due to the hierarchy's nested architecture. Each action from π i−1 depends on T i−1, which depends on π i−2 and so on. Consequently, we use the notation T i|Πi−1 for level i's state transition function going forward as it depends on the full lower level policy hierarchy. The full state transition function for level i > 0 is provided in Algorithm 3 in the Appendix. The base transition function T 0 is assumed to be provided by the task: T 0 = T. There are two causes of non-stationary transition functions in our framework that will need to be overcome in order to learn multiple policies in parallel. One cause of non-stationary transition functions is updates to lower level policies. That is, whenever π i changes, the transition function at levels above i, T j|Πj−1, j > i, can change. The second cause is exploring lower level policies. Because all levels have a deterministic policy in our algorithm, all levels will need to explore with some behavior policy π i b that is different than the policy it is learning π i. For instance, in continuous domains, the agent may add Gaussian noise to its greedy policy: DISPLAYFORM0 2. Yet whenever a lower level policy hierarchy uses some behavior policy Π i−1 b to achieve a subgoal, the transition function at level i, T i|Πi−1 b, will also vary over time. 
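The nested transition function T_i described above can be sketched as a recursive rollout: a level proposes an action, and if it is above the bottom level that action is handed to the level below as its goal for at most H attempts. The code below is only an illustration of this control flow; the names `env.step` (assumed to return the next state), `goal_achieved` (a task-specific success test), and the omission of exploration noise, goal testing at higher levels, and transition storage are all simplifying assumptions.

```python
def run_level(env, policies, level, state, goal, H, goal_achieved):
    """Recursive sketch of level i's transition function T_i.

    `policies[level]` maps (state, goal) to a subgoal state (level > 0)
    or a primitive action (level == 0).  Each level has at most H attempts
    at its current goal before returning control to the level above.
    """
    for _ in range(H):
        action = policies[level](state, goal)
        if level > 0:
            # The subgoal action becomes the goal of the next level down.
            state = run_level(env, policies, level - 1, state, action, H,
                              goal_achieved)
        else:
            # Bottom level executes a primitive action in the environment.
            state = env.step(action)
        if goal_achieved(state, goal):
            break
    return state
```

An episode with a k-level agent would then be a single call `run_level(env, policies, k - 1, start_state, task_goal, H, goal_achieved)`.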
RL methods will likely not be effective at learning subgoal policies in parallel if each subgoal policy at level i is trained with respect to a transition function that uses the current lower level policy hierarchy Π i−1 or the behavior lower level policy hierarchy Π i−1 b. RL methods need the distribution of states to which actions lead to be stable in order to effectively value actions and both Π i−1 and Π i−1 b are continually changing. In order to overcome these non-stationary issues that hinder the joint learning of policies, HAC instead trains each subgoal policy assuming a transition function that uses the optimal lower level policy hierarchy, DISPLAYFORM1 is stationary because it is independent of the changing and exploring lower level policies, allowing an agent to learn a policy at level i at the same time the agent learns policies below level i. Hindsight action transitions use a simple technique to simulate the transition function that uses the optimal policy hierarchy below level i, T i|Π * i−1. In order to explain how hindsight actions transitions are implemented, we will use the example in FIG0, in which a k = 2-level robot is looking to move from its start state to the yellow flag. The robot begins in state s 0 when the high level policy π 1 outputs the subgoal state g 0 for the low level to achieve. The low level policy π 0 then executes H = 5 primitive actions using some behavior policy π 0 b but is unable to achieve g 0, instead landing in s 1. After executing H = 5 primitive actions, the first action by π 1 is complete and a hindsight action transition can be created. Hindsight action transitions have two key components. The first is that the subgoal state achieved in hindsight is used as the action component in the transition, not the originally proposed subgoal state. Thus, the hindsight action transition so far will look like: [initial state = s 0, action = s 1, reward = TBD, next state = s 1, goal = yellow flag, discount rate = gamma]. The second key component of the hindsight action transition is the reward function used at all subgoal levels. The first requirement for this reward function is that it should incentivize short paths to the goal because shorter paths can be learned more quickly. The second requirement for the reward function is that it should be independent of the path taken at lower levels. The purpose of hindsight action transitions is to simulate a transition function that uses the optimal lower level policy hierarchy Π * i−1. Yet without a model of the environment, the exact path Π * i−1 would have taken is unknown. Thus, the reward should only be a function of the state reached in hindsight and the goal state. For each subgoal level, we use the reward function in which a reward of -1 is granted if the goal has not been achieved and a reward of 0 otherwise. Thus, in the example above, the high level of the robot would receive the hindsight action transition [initial state = s 0, action = s 1, reward = -1, next state = s 1, goal = yellow flag, discount rate = gamma], which is the same transition that would have been created had the high level originally proposed state s 1 as a subgoal and the transition function used the optimal lower level policy hierarchy to achieve it. 
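The construction just walked through can be written down directly; this sketch (our own field names, not the paper's code) builds the hindsight action transition stored once the level below has finished its attempt.

```python
def hindsight_action_transition(initial_state, achieved_state, goal, goal_test,
                                gamma=0.98):
    """Transition a subgoal level stores after the lower level runs out of attempts
    or reaches a goal. The action component is the state achieved in hindsight, not
    the originally proposed subgoal, and the reward depends only on the achieved
    state and the goal (-1 if missed, 0 if reached)."""
    reached = goal_test(achieved_state, goal)
    return {
        "state": initial_state,
        "action": achieved_state,            # hindsight action = achieved state
        "reward": 0.0 if reached else -1.0,
        "next_state": achieved_state,
        "goal": goal,
        "discount": 0.0 if reached else gamma,
    }
```

With s 0, s 1 and the yellow flag from the example above, this reproduces the transition [initial state = s 0, action = s 1, reward = -1, next state = s 1, goal = yellow flag, discount rate = gamma].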
Using the same process, the hindsight action transition created for the second action by π 1 would be [initial state = s 1, action = s 2, reward = -1, next state = s 2, goal = yellow flag, discount rate = γ].Although none of the hindsight actions produced in the episode contained the sparse reward of 0, they are still helpful for the high level of the agent. Through these transitions, the high level discovers on its own possible subgoals that fit the time scale of H primitive actions per high level action, which is the time scale that it should be learning. More importantly, these transitions are robust to a changing and exploring lower level policy π 0 because they assume a transition function that uses π * 0 and not the current low level policy π 0 or low level behavior policy π 0 b. We supplement all levels of the hierarchy with an additional set of transitions, which we refer to as hindsight goal transitions, that enable each level to learn more effectively in sparse reward tasks by extending the idea of Hindsight Experience Replay BID0 to the hierarchical setting. As the toy robot example illustrates, it can be difficult for any level in our framework to receive the sparse reward. A level needs to randomly reach its goal state in order to obtain the sparse reward. Hindsight goal transitions use another simple use of hindsight to guarantee that after every sequence of actions by each level in the hierarchy, that level receives a transition containing the sparse reward. Hindsight goal transitions would be created for each level in the toy robot example as follows. Beginning with the low level, after each of the at most H = 5 primitive actions executed by the low level policy π 0 per high level action, the low level will create two transitions. The first transition is the typical transition non-hierarchical agents create evaluating the primitive action that was taken given the goal state. For instance, assuming the same shortest path reward function described earlier, after the first primitive action in the episode, the low level will receive the transition [initial state = s 0, action = joint torques, reward = -1, next state = first tick mark, goal = g 0, discount rate = γ]. The second transition is a copy of the first transition, but the goal state and reward components are temporarily erased: [initial state = s 0, action = joint torques, reward = TBD, next state = first tick mark, goal = TBD, discount rate = γ]. After the sequence of at most H = 5 primitive actions, the hindsight goal transitions will be created by filling in the TBD components in the extra transitions that were created. First, one of the "next state" elements in one of the transitions will be selected as the new goal state replacing the TBD component in each transition. Second, the reward will be updated in each transition to reflect the new goal state. For instance, after the first set of H = 5 primitive actions, the state s 1 may be chosen as the hindsight goal. The hindsight goal transition created by the fifth primitive action that achieved the hindsight goal would then be [initial state = 4th tick mark, action = joint torques, reward = 0, next state = s 1, goal = s 1, discount rate = 0]. Moreover, hindsight goal transitions would be created in the same way for the high level of the toy robot, except that the hindsight goal transitions would be made from copies of the hindsight action transitions. 
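Filling in the "TBD" components described above can likewise be sketched as follows; only the recipe (choose one achieved state as the new goal, then update the reward and discount of every copy) is taken from the text, and the names are illustrative.

```python
import random

def hindsight_goal_transitions(pending, goal_test, gamma=0.98):
    """Complete the goal-less transition copies created during one action sequence.

    One achieved `next_state` is chosen as the hindsight goal; every copy then has
    its goal, reward and discount filled in, so at least one copy contains the
    sparse reward of 0."""
    hindsight_goal = random.choice([tr["next_state"] for tr in pending])
    completed = []
    for tr in pending:
        reached = goal_test(tr["next_state"], hindsight_goal)
        completed.append({**tr,
                          "goal": hindsight_goal,
                          "reward": 0.0 if reached else -1.0,
                          "discount": 0.0 if reached else gamma})
    return completed
```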
Assuming the last state reached s 5 is used as the hindsight goal, the first hindsight goal transition for the high level would be [initial state = s 0, action = s 1, reward = -1, next state = s 1, goal = s 5, discount rate = γ]. The last hindsight goal transition for the high level would be [initial state = s 4, action = s 5, reward = 0, next state = s 5, goal = s 5, discount rate = 0].Hindsight goal transitions should significantly help each level learn an effective goal-conditioned policy because it guarantees that after every sequence of actions, at least one transition will be created that contains the sparse reward (in our case a reward and discount rate of 0). These transitions containing the sparse reward will in turn incentivize the UVFA critic function to assign relatively high Q-values to the (state, action, goal) tuples described by these transitions. The UVFA can then potentially generalize these high Q-values to the other actions that could help the level solve its tasks. Hindsight action and hindsight goal transitions give agents the potential to learn multiple policies in parallel with only sparse rewards, but some key issues remain. The most serious flaw is that the strategy only enables a level to learn about a restricted set of subgoal states. A level i will only execute in hindsight subgoal actions that can be achieved with at most H actions from level i − 1. For instance, when the toy robot is in state s 2, it will not be able to achieve a subgoal state on the yellow flag in H = 5 primitive actions. As a , level i in a hierarchical agent will only learn Q-values for subgoal actions that are relatively close to its current state and will ignore the Q-values for all subgoal actions that require more than H actions. This is problematic because the action space for all subgoal levels should be the full state space in order for the framework to be end-toend. If the action space is the full state space and the Q-function is ignoring large regions of the action space, significant problems will occur if the learned Q-function assigns higher Q-values to distant subgoals that the agent is ignoring than to feasible subgoals that can be achieved with at most H actions from the level below. π i may adjust its policy to output these distant subgoals that have relatively high Q-values. Yet the lower level policy hierarchy Π i−1 has not been trained to achieve distant subgoals, which may cause the agent to act erratically. A second, less significant shortcoming is that hindsight action and goal transitions do not incentivize a subgoal level to propose paths to the goal state that the lower levels can actually execute with its current policy hierarchy. Hindsight action and goal transitions purposefully incentivize a subgoal level to ignore the current capabilities of lower level policies and propose the shortest path of subgoals that has been found. But this strategy can be suboptimal because it may cause a subgoal level to prefer a path of subgoals that cannot yet be achieved by the lower level policy hierarchy over subgoal paths that both lead to the goal state and can be achieved by the lower level policy hierarchy. Our framework addresses the above issues by supplying agents with a third set of transitions, which we will refer to as subgoal testing transitions. Subgoal testing transitions essentially play the opposite role of hindsight action transitions. 
While hindsight actions transitions help a subgoal level learn the value of a subgoal state when lower level policies are optimal, subgoal testing transitions enable a level to understand whether a subgoal state can be achieved by the current set of lower level policies. Subgoal testing transitions are implemented as follows. After level i proposes a subgoal a i, a certain fraction of the time λ, the lower level behavior policy hierarchy, Π i−1 b, used to achieve subgoal a i must be the current lower level policy hierarchy Π i−1. That is, instead of a level being able to explore with a noisy policy when trying to achieve its goal, the current lower level policy hierarchy must be followed exactly. Then, if subgoal a i is not achieved in at most H actions by level i − 1, level i will be penalized with a low reward, penalty. In our experiments, we set penalty = −H, or the negative of the maximum horizon of a subgoal. In addition, we use a discount rate of 0 in these transitions to avoid non-stationary transition function issues. Using the robot example in FIG0, after the robot proposes the ambitious subgoal g 2 when in state s 2, the robot may randomly decide to test that subgoal. The low level policy then has at most H = 5 primitive actions to achieve g 2. These primitive actions must follow π 0 exactly. Because the robot misses its subgoal, it would be penalized with following transition [initial state = s 2, action = g 2, reward = -5, next state = s 3, goal = Yellow Flag, discount rate = 0]. Subgoal testing transitions have three different effects on Q-values depending on the (state, goal, subgoal action) tuple that is under consideration. For this analysis, we use the notation |s − a| to refer to the number of actions required by an optimal version of the policy at the level below, π * i−1, to move the agent from state s to subgoal state a.1. |s − a| > H: For those (state, goal, subgoal action) tuples in which the subgoal action could never be completed with H actions by the optimal policy at the level below, the critic function will be incentivized to learn Q-values of −H because the only transitions a subgoal level will receive for these tuples is the penalty transition. Thus, subgoal testing transitions can overcome the major flaw of training only with hindsight action and goal transitions because now the more distant subgoal actions are no longer ignored.2. |s − a| ≤ H and Achievable by Π i−1: For those (state, goal, subgoal action) tuples in which the subgoal action can be achieved by the current lower level policy hierarchy Π i−1, subgoal testing should have little to no effect. Critic functions will be incentivized to learn Q-values close to the Q-value targets prescribed by the hindsight action and hindsight goal transitions.3. |s − a| ≤ H and Not Achievable by Π i−1: The effects of subgoal testing are a bit more subtle for those (state, goal, subgoal action) tuples in which the subgoal action can be achieved with at most H actions by an optimal version of the policy below, π * i−1, but cannot yet be achieved with the current policy π i−1. For these tuples, critic functions are incentivized to assign a Q-value that is a weighted average of the target Q-values prescribed by the hindsight action/goal transitions and the penalty value of −H prescribed by the subgoal testing transitions. However, it is important to note that for any given tuple there are likely significantly fewer subgoal testing transitions than the total number of hindsight action and goal transitions. 
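A minimal sketch of the subgoal testing mechanism follows; `lmbda` plays the role of the testing frequency λ (the default value here is arbitrary), `penalty = -H` as in the experiments, and the field names are again our own rather than the paper's.

```python
import random

def should_test_subgoal(lmbda=0.3):
    """With probability lambda the proposed subgoal is tested: all lower levels
    must follow their current, noise-free policies while trying to reach it."""
    return random.random() < lmbda

def subgoal_testing_transition(state, proposed_subgoal, achieved_state, goal,
                               goal_test, H=5):
    """Penalty transition stored when a tested subgoal is missed; returns None
    when the subgoal is reached (only the usual hindsight transitions apply)."""
    if goal_test(achieved_state, proposed_subgoal):
        return None
    return {
        "state": state,
        "action": proposed_subgoal,   # the originally proposed subgoal is kept as the action
        "reward": -float(H),          # penalty = -H
        "next_state": achieved_state,
        "goal": goal,
        "discount": 0.0,              # discount of 0 avoids non-stationarity issues
    }
```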
Hindsight action transitions are created after every subgoal action, even during subgoal testing, whereas subgoal testing transitions are not created after each subgoal action. Thus, the critic function is likely to assign Q-values closer to the target value prescribed by the hindsight action and hindsight goal transitions than the penalty value of −H prescribed by the subgoal testing transition. To summarize, subgoal testing transitions can overcome the issues caused by only training with hindsight goal and hindsight action transitions while still enabling all policies of the hierarchy to be learned in parallel. With subgoal testing transitions, critic functions no longer ignore the Q-values of infeasible subgoals. In addition, each subgoal level can still learn simultaneously with lower levels because Q-values are predominately decided by hindsight action and goal transitions, but each level will have a preference for paths of subgoals that can be achieved by the current lower level policy hierarchy. Algorithm 1 in the Appendix shows the full procedure for Hierarchical Actor-Critic (HAC). Section 7.6 in the Appendix provides additional HAC implementation details. We also provide the discrete version of our algorithm, Hierarchical Q-Learning (HierQ), in Algorithm 2 in the Appendix. We evaluated our framework in several discrete state and action and continuous state and action tasks. The discrete tasks consisted of grid world environments. The continuous tasks consisted of the following simulated robotics environments developed in MuJoCo BID14: (i) inverted pendulum, (ii) UR5 reacher, (iii) ant reacher, and (iv) ant four rooms. A video showing our experiments is available at https://www.youtube.com/watch?v=DYcVTveeNK0. Figure 3 shows some episode sequences from the grid world and inverted pendulum environments for a 2-level agent. Figure 3: Episode sequences from the four rooms (top) and inverted pendulum tasks (bottom). In the four rooms task, the k=2 level agent is the blue square; the goal is the yellow square; the learned subgoal is the purple square. In the inverted pendulum task, the goal is the yellow sphere and the subgoal is the purple sphere. We compared the performance of agents using policy hierarchies with 1 (i.e., flat), 2, and 3 levels on each task. The flat agents used Q-learning with HER in the discrete tasks and DDPG BID6 with HER in the continuous tasks. Our approach significantly outperformed the flat agent in all tasks. FIG1 shows the average episode success rate for each type of agent in each task. The discrete tasks average data from 50 trials. The continuous tasks average data from at least 7 trials. In addition, our empirical show that our framework can benefit from additional levels of hierarchy likely because our framework can learn multiple levels of policies in parallel. In all tasks, the 3-level agent outperformed the 2-level agent, and the 2-level agent outperformed the flat agent. We also directly compared our approach HAC to another HRL technique, HIRO BID9, which outperforms the other leading HRL techniques that can work in continuous state and action spaces: FeUdal Networks BID15 and Option-Critic BID1. HIRO enables agents to learn a 2-level hierarchical policy that like our approach can be trained off-policy and uses the state space to decompose a task. 
Two of the key differences between the algorithms are that (i) HIRO does not use Hindsight Experience Replay at either of its 2 levels and (ii) HIRO uses a different approach for handling the non-stationary transition functions. Instead of replacing the originally proposed action with the hindsight action as in our approach, HIRO uses a subgoal action, drawn from a set of candidates, that when provided to the current level 0 policy would most likely reproduce the sequence of (state, action) tuples that originally occurred at level 0 when the level 0 policy was trying to achieve its original subgoal. In other words, HIRO values subgoal actions with respect to a transition function that essentially uses the current lower level policy hierarchy, not the optimal lower level policy hierarchy as in our approach. Consequently, HIRO may need to wait until the lower level policy converges before the higher level can learn a meaningful policy. We compared the 2-level version of HAC to HIRO on the inverted pendulum, UR5 reacher, and ant reacher tasks. In all experiments, the 2-level version of HAC significantly outperformed HIRO. The results are shown in FIG2. We also implemented some ablation studies examining our subgoal testing procedure. We compared our method to (i) no subgoal testing and (ii) always penalizing missed subgoals even when the lower levels use noisy policies when attempting to achieve a subgoal. Our implementation significantly outperformed both baselines. The results and analysis of the ablation studies are given in section 6 of the Appendix. Hierarchy has the potential to accelerate learning, but in order to realize this potential, hierarchical agents need to be able to learn their multiple levels of policies in parallel. We present a new HRL framework that can efficiently learn multiple levels of policies simultaneously. HAC can overcome the instability issues that arise when agents try to learn to make decisions at multiple time scales because the framework trains each level of the hierarchy as if the lower levels are already optimal. Our results in several discrete and continuous domains, which include the first 3-level agents in tasks with continuous state and action spaces, confirm that HAC can significantly improve sample efficiency. This work was supported in part by ONR through N000141410047, Amazon through an ARA to Platt, Google through a FRA to Platt, and DARPA. Algorithm 1 Hierarchical Actor-Critic (HAC) Input: • Key agent parameters: number of levels in hierarchy k, maximum subgoal horizon H, and subgoal testing frequency λ. Output: • k trained actor and critic functions π 0,..., DISPLAYFORM0 Sample initial state and task goal DISPLAYFORM1 Begin training Update all actor and critic networks end for function TRAIN-LEVEL(i :: level, s :: state, g :: goal) s i ← s, g i ← g Set current state and goal for level i for H attempts or until g n, i ≤ n < k achieved do DISPLAYFORM2 DISPLAYFORM3 Replace original action with action executed in hindsight end if Evaluate executed action on current goal and hindsight goals DISPLAYFORM4 Replay Buffer i ← Perform HER using HER Storage i transitions return s i Output current state end function HierQ is the version of our algorithm designed for domains with discrete state and action spaces. Note that HierQ does not use subgoal testing. Instead, the algorithm uses pessimistic Q-value initializations to prevent agents from learning to propose subgoal states that are too distant.

Algorithm 2 Hierarchical Q-Learning (HierQ)Input:• Key agent parameters: number of levels in hierarchy k > 1, maximum subgoal horizon H, learning rate α Output:• k trained Q-tables Q 0 (s, g, a),..., Q k−1 (s, g, a)Use pessimistic Q-value initialization: DISPLAYFORM0 Sample initial state and task goal Initialize previous state arrays for levels i, 0 DISPLAYFORM1 Sample action using -greedy policy DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 Add state s 0 to all previous state arrays DISPLAYFORM5 end for end for end for end if s i ← s i end for return s iOutput current state end function We now formally define the UMDPs tuples for all levels. U 0: This is the lowest level of the hierarchy. It has the same state set, action set, and state transition function as U original: S 0 = S, A 0 = A, and T 0 = T. The goal states for which U 0 will be responsible for learning will be dictated by the UMDP one level higher, U 1. However, given that every state is potentially a goal, the goal space is defined to be the state space: G 0 = S. The framework is flexible as to the reward function used at level 0, but we will use the shortest path reward function, in which a reward of 0 is granted if the next state, s, maps to the goal and -1 otherwise: R 0 (s, g) = 0, ∀s: s ∈ g and R 0 (s, g) = −1, ∀s: s / ∈ g. In addition, the discount rate is set to 0 if the goal has been achieved, but remains γ otherwise: γ 0 (s, g) = 0, ∀s: s ∈ g, γ 0 (s, g) = γ, ∀s: s / ∈ g. The objective in U 0 is to find the policy π 0: S 0, G 0 → A 0 that maximizes the value function v π0 (s, g) = E π0 [∞ n=0 γ n 0 R 0t+n+1 |s 0t = s, g 0t = g] for the initial state and goal combinations that the policy π 1 from U 1 requires. DISPLAYFORM0 These UMDPs represent the remainder of the hierarchy. The state space is the same as the state space in U original: S i = S. For all levels except for the top level, the set of goals is dictated by the level above. These goals can potentially be any state: G i = S, 1 ≤ i < k−1. For the top level, the goal set is provided by the task: G k−1 = G. The action space is the state space as these policies will output subgoal states for the next level to achieve: A i = S. The state transition function consists of two transition functions: DISPLAYFORM1 The first transition function will be used to generate hindsight transitions. Note that the actionā passed to the first transition function must be generated by some hierarchical behavior policy Π i b. The second transition function will be used for subgoal testing and will follow the current lower level policy hierarchy exactly. The reward function is R i (a, s, g) = penalty, ∀s: s / ∈ a, R i (a, s, g) = −1, ∀s: (s ∈ a) ∧ (s / ∈ g), and R i (a, s, g) = 0, ∀s: (s ∈ a) ∧ (s ∈ g). The penalty reward is only issued during subgoal testing. γ i is set to 0 if a subgoal is tested and missed or if an action achieves the goal, but is otherwise γ from U original: γ i (a, s, g) = 0, ∀s: (s / ∈ a) ∨ (s ∈ g). The objective in each U i is to learn a policy π i: function EXECUTE-H-ACTIONS(s :: state, a :: action, i :: level, itr :: iteration) s = T i|Πi−1 (s, π i (s, a)) Execute 1 action using policy π i itr -= 1 Decrement iteration counter if itr == 0 or s ∈ g, ∀g ∈ {g i, ..., g k−1} then return s Return next state if out of iterations or goal achieved else return Execute − H − Actions(s, a, i, itr) Execute another action from π i end if end function DISPLAYFORM2 Both the qualitative and quantitative of the subgoal testing ablation studies support our implementation. 
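To make the subgoal-level reward R i defined above concrete, here is a small sketch with an illustrative signature; the penalty branch only applies to transitions stored during subgoal testing, since hindsight action transitions always use an achieved state as the action.

```python
def subgoal_level_reward(subgoal, next_state, goal, goal_test, penalty=-10.0):
    """Sketch of R_i(a, s', g): `penalty` when the proposed subgoal a is missed
    (only stored during subgoal testing), -1 when the subgoal is reached but the
    level's own goal g is not, and 0 when both are reached."""
    if not goal_test(next_state, subgoal):
        return penalty
    return 0.0 if goal_test(next_state, goal) else -1.0
```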
When no subgoal testing was used, the results were as expected. The subgoal policies would always learn to set unrealistic subgoals that could not be achieved within H actions by the level below. This led to certain levels of the hierarchy needing to learn very long sequences of actions that the level was not trained to do. When the Q-values of these unrealistic subgoal states were examined, they were high, likely because there were no transitions indicating that they should have low Q-values. The implementation that always penalizes missed subgoals, even when a noisy lower level policy hierarchy was used, also performed significantly worse than our implementation. One likely reason for this outcome is that the always-penalizing strategy incentivizes subgoal levels to output overly conservative subgoals, which means a subgoal level needs to learn longer sequences of subgoals that in turn take longer to learn. Subgoal levels are incentivized to set nearby subgoals because more distant ones are less likely to be achieved when noise is added to actions. Figure 6 (Subgoal Ablation Tests): Results from the ablation studies examining our subgoal testing procedure. We compare our implementation to two other options: (i) no subgoal testing and (ii) an implementation in which all missed subgoals are penalized even when lower levels use noisy policies to try to achieve the subgoal state that is being tested. 3-level agents were used in all options. Below we provide some of the important details of our implementation of HAC. For further detail, see the GitHub repository available at https://github.com/andrew-j-levy/Hierarchical-Actor-Critc-HAC-.
• DDPG Parameters:
- Bounded Q-Values: We bound the output of each critic function to the range [−H, 0] using a negative sigmoid function. An upper bound of 0 was chosen because all rewards in HAC are nonpositive, so positive Q-values are illegal. We believe the lower bound of −H is helpful for learning Q-values as the critic function does not need to learn precise Q-values for the large space of irrelevant actions in which the current state is far from the goal state. The bound of −H also ensures that subgoal states that were reached in hindsight should have higher Q-values than any subgoal state that is too distant and penalized during subgoal testing.
- DDPG Target Networks: For the 2- and 3-level agents, we removed the target networks used in DDPG because we saw in some of our early MuJoCo experiments that these hierarchical agents performed at least as well without target networks. We evaluated the flat agents with and without target networks. The flat agent generally performed better with target networks, but our results show the better of the two.
- Exploration: Each level uses the following exploration strategy when a level is not involved in subgoal testing. * 20% of actions are sampled uniformly at random from the level's action space * 80% of actions are the sum of actions sampled from the level's policy and Gaussian noise
- Neural Network Architectures: All actor and critic neural networks had 3 hidden layers, with 64 nodes in each hidden layer. ReLU activation functions were used.
• HAC Parameters:
- Maximum horizon of a subgoal, H: 1. For k=3-level agents in MuJoCo tasks, H = 10 | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | ryzECoAcY7 | We introduce the first Hierarchical RL approach to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces. |
Positive-unlabeled (PU) learning addresses the problem of learning a binary classifier from positive (P) and unlabeled (U) data. It is often applied to situations where negative (N) data are difficult to be fully labeled. However, collecting a non-representative N set that contains only a small portion of all possible N data can be much easier in many practical situations. This paper studies a novel classification framework which incorporates such biased N (bN) data in PU learning. The fact that the training N data are biased also makes our work very different from those of standard semi-supervised learning. We provide an empirical risk minimization-based method to address this PUbN classification problem. Our approach can be regarded as a variant of traditional example-reweighting algorithms, with the weight of each example computed through a preliminary step that draws inspiration from PU learning. We also derive an estimation error bound for the proposed method. Experimental demonstrate the effectiveness of our algorithm in not only PUbN learning scenarios but also ordinary PU leaning scenarios on several benchmark datasets. In conventional binary classification, examples are labeled as either positive (P) or negative (N), and we train a classifier on these labeled examples. On the contrary, positive-unlabeled (PU) learning addresses the problem of learning a classifier from P and unlabeled (U) data, without need of explicitly identifying N data BID6 BID42 ).PU learning finds its usefulness in many real-world problems. For example, in one-class remote sensing classification, we seek to extract a specific land-cover class from an image. While it is easy to label examples of this specific land-cover class of interest, examples not belonging to this class are too diverse to be exhaustively annotated. The same problem arises in text classification, as it is difficult or even impossible to compile a set of N samples that provides a comprehensive characterization of everything that is not in the P class BID24 BID8. Besides, PU learning has also been applied to other domains such as outlier detection BID13 BID36 ), medical diagnosis BID45, or time series classification BID28.By carefully examining the above examples, we find out that the most difficult step is often to collect a fully representative N set, whereas only labeling a small portion of all possible N data is relatively easy. Therefore, in this paper, we propose to study the problem of learning from P, U and biased N (bN) data, which we name PUbN learning hereinafter. We suppose that in addition to P and U data, we also gather a set of bN samples, governed by a distribution distinct from the true N distribution. As described previously, this can be viewed as an extension of PU learning, but such bias may also occur naturally in some real-world scenarios. For instance, let us presume that we would like to judge whether a subject is affected by a particular disease based on the of a physical examination. While the data collected from the patients represent rather well the P distribution, healthy subjects that request the examination are in general highly biased with respect to the whole healthy subject population. We are not the first to be interested in learning with bN data. In fact, both BID22 and BID7 attempted to solve similar problems in the context of text classification. BID22 simply discarded negative samples and performed ordinary PU classification. It was also mentioned in the paper that bN data could be harmful. 
BID7 adapted another strategy. The authors considered even gathering unbiased U data is difficult and learned the classifier from only P and bN data. However, their method is specific to text classification because it relies on the use of effective similarity measures to evaluate similarity between documents. Therefore, our work differs from these two in that the classifier is trained simultaneously on P, U and bN data, without resorting to domain-specific knowledge. The presence of U data allows us to address the problem from a statistical viewpoint, and thus the proposed method can be applied to any PUbN learning problem in principle. In this paper, we develop an empirical risk minimization-based algorithm that combines both PU learning and importance weighting to solve the PUbN classification problem, We first estimate the probability that an example is sampled into the P or the bN set. Based on this estimate, we regard bN and U data as N examples with instance-dependent weights. In particular, we assign larger weights to U examples that we believe to appear less often in the P and bN sets. P data are treated as P examples with unity weight but also as N examples with usually small or zero weight whose actual value depends on the same estimate. The contributions of the paper are three-fold:1. We formulate the PUbN learning problem as an extension of PU learning and propose an empirical risk minimization-based method to address the problem. We also theoretically establish an estimation error bound for the proposed method. 2. We experimentally demonstrate that the classification performance can be effectively improved thanks to the use of bN data during training. In other words, PUbN learning yields better performance than PU learning. 3. Our method can be easily adapted to ordinary PU learning. Experimentally we show that the ing algorithm allows us to obtain new state-of-the-art on several PU learning tasks. Relation with Semi-supervised Learning With P, N and U data available for training, our problem setup may seem similar to that of semi-supervised learning BID2 BID29. Nonetheless, in our case, N data are biased and often represent only a small portion of the whole N distribution. Therefore, most of the existing methods designed for the latter cannot be directly applied to the PUbN classification problem. Furthermore, our focus is on deducing a risk estimator using the three sets of data, whereas in semi-supervised learning the main concern is often how U data can be utilized for regularization BID10 BID1 BID20 BID25. The two should be compatible and we believe adding such regularization to our algorithm can be beneficial in many cases. Relation with Dataset Shift PUbN learning can also be viewed as a special case of dataset shift 1 BID31 ) if we consider that P and bN data are drawn from the training distribution while U data are drawn from the test distribution. Covariate shift BID38 BID39 ) is another special case of dataset shift that has been studied intensively. In the covariate shift problem setting, training and test distributions have the same class conditional distribution and only differ in the marginal distribution of the independent variable. One popular approach to tackle this problem is to reweight each training example according to the ratio of the test density to the training density BID15. 
Nevertheless, simply training a classifier on a reweighted version of the labeled set is not sufficient in our case since there may be examples with zero probability to be labeled. It is also important to notice that the problem of PUbN learning is intrinsically different from that of covariate shift and neither of the two is a special case of the other. In this section, we briefly review the formulations of PN, PU and PNU classification and introduce the problem of learning from P, U and bN data. Let x ∈ R d and y ∈ {+1, −1} be random variables following an unknown probability distribution with density p(x, y). Let g: R d → R be an arbitrary decision function for binary classification and ℓ: R → R + be a loss function of margin yg(x) that usually takes a small value for a large margin. The goal of binary classification is to find g that minimizes the classification risk: DISPLAYFORM0 where E (x,y)∼p (x,y) [·] denotes the expectation over the joint distribution p(x, y). When we care about classification accuracy, ℓ is the zero-one loss ℓ 01 (z) = (1 − sign(z))/2. However, for ease of optimization, ℓ 01 is often substituted with a surrogate loss such as the sigmoid loss ℓ sig (z) = 1/(1 + exp(z)) or the logistic loss ℓ log (z) = ln(1 + exp(−z)) during learning. In standard supervised learning scenarios (PN classification), we are given P and N data that are sampled independently from p(x | y = +1) and p(x | y = −1) as X P = {x [ℓ(−g(x) )] partial risks and π = p(y = 1) the P prior. We have the equality R(g) = πR DISPLAYFORM1 DISPLAYFORM2 The classification risk can then be empirically approximated from data bŷ DISPLAYFORM3. By minimizingR PN (g) we obtain the ordinary empirical risk minimizerĝ PN. In PU classification, instead of N data X N we have only access to X U = {x DISPLAYFORM0 a set of U samples drawn from the marginal density p(x). Several effective algorithms have been designed to address this problem. BID23 proposed the S-EM approach that first identifies reliable N data in the U set and then runs the Expectation-Maximization (EM) algorithm to build the final classifier. The biased support vector machine (Biased SVM) introduced in BID24 regards U samples as N samples with smaller weights. BID27 solved the PU problem by aggregating classifiers trained to discriminate P data from a small random subsample of U data. More recently, attention has been paid on the unbiased risk estimator proposed in du BID4 and du BID3. The key idea is to use the following equality: DISPLAYFORM1. As a , we can approximate the classification risk bŷ DISPLAYFORM2 DISPLAYFORM3 We then minimizê R PU (g) to obtain another empirical risk minimizerĝ PU. Note that as the loss is always positive, the classification risk thatR PU (g) approximates is also positive. However, BID19 pointed out that when the model of g is too flexible, that is, when the function class G is too large,R PU (ĝ PU) indeed goes negative and the model seriously overfits the training data. To alleviate overfitting, the authors observed that R DISPLAYFORM4 and proposed the non-negative risk estimator for PU learning: DISPLAYFORM5 In terms of implementation, stochastic optimization was used and when r =R − U (g) − πR − P (g) becomes negative for a mini-batch, they performed a step of gradient ascent along ∇r to make the mini-batch less overfitted. In semi-supervised learning (PNU classification), P, N and U data are all available. An abundance of works have been dedicated to solving this problem. 
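As a concrete illustration of the unbiased and non-negative PU estimators reviewed in this subsection, here is a minimal PyTorch-style sketch for one mini-batch; the function and tensor names are ours, and the mini-batch gradient-ascent correction mentioned above is not shown.

```python
import torch
import torch.nn.functional as F

def nnpu_risk(g_p, g_u, prior):
    """Non-negative PU risk for one mini-batch (sketch).

    g_p, g_u: raw model outputs on P and U examples; prior = pi = p(y = +1).
    The logistic loss l(z) = log(1 + exp(-z)) is written with softplus."""
    loss = lambda z: F.softplus(-z)
    positive_part = prior * loss(g_p).mean()                        # pi * R_P^+(g)
    negative_part = loss(-g_u).mean() - prior * loss(-g_p).mean()   # R_U^-(g) - pi * R_P^-(g)
    # clamping at zero gives the non-negative estimator; dropping the clamp gives uPU
    return positive_part + torch.clamp(negative_part, min=0.0)
```

When `negative_part` turns negative for a mini-batch, one can additionally take a gradient ascent step along its gradient, as described above for the original nnPU implementation.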
Here we in particular introduce the PNU risk estimator proposed in BID35. By directly leveraging U data for risk estimation, it is the most comparable to our method. The PNU risk is simply defined as a linear combination of PN and PU/NU risks. Let us just consider the case where PN and PU risks are combined, then for some γ ∈, the PNU risk estimator is expressed aŝ DISPLAYFORM0 We can again consider the non-negative correction by forcing the term DISPLAYFORM1 to be non-negative. In the rest of the paper, we refer to the ing algorithm as non-negative PNU (nnPNU) learning (see Appendix D.4 for an alternative definition of nnPNU and the corresponding ). In this paper, we study the problem of PUbN learning. It differs from usual semi-supervised learning in the fact that labeled N data are not fully representative of the underlying N distribution p(x | y = −1). To take this point into account, we introduce a latent random variable s and consider the joint distribution p (x, y, s) DISPLAYFORM0. Both π and ρ are assumed known throughout the paper. In practice they often need to be estimated from data BID17 BID32. In place of ordinary N data we collect a set of bN samples DISPLAYFORM1 The goal remains the same: we would like to minimize the classification risk. In this section, we propose a risk estimator for PUbN classification and establish an estimation error bound for the proposed method. Finally we show how our method can be applied to PU learning as a special case when no bN data are available. DISPLAYFORM0 The first two terms on the right-hand side of the equation can be approximated directly from data by writingR DISPLAYFORM1 We therefore focus on the third termR DISPLAYFORM2 Our approach is mainly based on the following theorem. We relegate all proofs to the appendix. DISPLAYFORM3 In the theorem,R − s=−1 (g) is decomposed into three terms, and when the expectation is substituted with the average over training samples, these three terms are approximated respectively using data from X U, X P and X bN. The choice of h and η is thus very crucial because it determines what each of the three terms tries to capture in practice. Ideally, we would like h to be an approximation of σ. Then, for x such that h(x) is close to 1, σ(x) is close to 1, so the last two terms on the righthand side of the equation can be reasonably evaluated using X P and X bN (i.e., samples drawn from p(x | s = +1)). On the contrary, if h(x) is small, σ(x) is small and such samples can be hardly found in X P or X bN. Consequently the first term appeared in the decomposition is approximated with the help of X U. Finally, in the empirical risk minimization paradigm, η becomes a hyperparameter that controls how important U data is against P and bN data when we evaluateR − s=−1 (g). The larger η is, the more attention we would pay to U data. One may be curious about why we do not simply approximate the whole risk using only U samples, that is, set η to 1. There are two main reasons. On one hand, if we have a very small U set, which means n U ≪ n P and n U ≪ n bN, approximating a part of the risk with labeled samples should help us reduce the estimation error. This may seem unrealistic but sometimes unbiased U samples can also be difficult to collect BID16. 
On the other hand, more importantly, we have empirically observed that when the model of g is highly flexible, even a sample regarded as N with small weight gets classified as N in the latter stage of training and performance of the ing classifier can thus be severely degraded. Introducing η alleviates this problem by avoiding treating all U data as N samples. As σ is not available in reality, we propose to replace σ by its estimateσ in. We further substitute h with the same estimate and obtain the following expression: DISPLAYFORM4 ].We notice thatR s=−1,η,σ depends both on η andσ. It can be directly approximated from data bŷ DISPLAYFORM5.We are now able to derive the empirical version of Equation FORMULA14 aŝ DISPLAYFORM6 Estimating σ If we regard s as a class label, the problem of estimating σ is then equivalent to training a probabilistic classifier separating the classes with s = +1 and s = −1. Observing that DISPLAYFORM7, it is straightforward to apply nnPU learning with availability of X P, X bN and X U to DISPLAYFORM8 In other words, here we regard X P and X bN as P and X U as U, and attempt to solve a PU learning problem by applying nnPU. Since we are interested in the classposterior probabilities, we minimize the risk with respect to the logistic loss and apply the sigmoid function to the output of the model to getσ(x). However, the above risk estimator accepts any reasonableσ and we are not limited to using nnPU for computingσ. For example, the least-squares fitting approach proposed in BID18 for direct density ratio estimation can also be adapted to solving the problem. Here we establish an estimation error bound for the proposed method. Let G be the function class from which we find a function. The Rademacher complexity of G for the samples of size n drawn from q(x) is defined as DISPLAYFORM0 where X = {x 1, . . ., x n} and θ = {θ 1, . . ., θ n} with each x i drawn from q(x) and θ i as a Rademacher variable BID26. In the following we will assume that R n,q (G) vanishes asymptotically as n → ∞. This holds for most of the common choices of G if proper regularization is considered BID0 BID9. Assume additionally the exis- DISPLAYFORM1 We also assume that ℓ is Lipschitz continuous on the interval DISPLAYFORM2 Theorem 2. Let g * = arg min g∈G R(g) be the true risk minimizer andĝ PUbN,η,σ = arg min g∈GRPUbN,η,σ (g) be the PUbN empirical risk minimizer. We suppose thatσ is a fixed function independent of data used to computeR PUbN,η,σ DISPLAYFORM3. Then for any δ > 0, with probability at least 1 − δ, DISPLAYFORM4 Theorem 2 shows that as DISPLAYFORM5 where O p denotes the order in probability. As for ϵ, knowing thatσ is also estimated from data in practice 3, apparently its value depends on both the estimation algorithm and the number of samples that are involved in the estimation process. For example, in our approach we applied nnPU with the logistic loss to obtainσ, so the excess risk can be written as E x∼p(x) KL(σ(x)∥σ(x)), where by abuse of notation KL(p∥q) = p ln(p/q)+(1−p) ln((1−p)/(1−q)) denotes the KL divergence between two Bernouilli distributions with parameters respectively p and q. It is known that BID44. The excess risk itself can be decomposed into the sum of the estimation error and the approximation error. BID19 showed that under mild assumptions the estimation error part converges to zero when the sample size increases to infinity in nnPU learning. 
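The displayed equations defining the empirical PUbN risk are elided in this extraction, so the following PyTorch-style sketch only reflects our reading of the text in Section 3.1: U examples with σ̂(x) ≤ η are treated as N data weighted by 1 − σ̂(x), while P and bN examples with σ̂(x) > η are treated as N data with importance weight (1 − σ̂(x))/σ̂(x); all names are illustrative and the exact weights should be checked against the original equations.

```python
import torch
import torch.nn.functional as F

def pubn_risk(g_p, g_bn, g_u, sig_p, sig_bn, sig_u, pi, rho, eta):
    """Sketch of the empirical PUbN risk given a fixed estimate sigma_hat.

    g_*: model outputs on the P, bN and U mini-batches; sig_*: sigma_hat(x) on the
    same examples (detached, in [0, 1]); pi = p(y = +1), rho = p(y = -1, s = +1)."""
    loss = lambda z: F.softplus(-z)                 # logistic loss
    risk_p = pi * loss(g_p).mean()                  # P data used as P examples
    risk_bn = rho * loss(-g_bn).mean()              # bN data used as N examples
    # remaining N partial risk, split by the threshold eta on sigma_hat
    u_part = ((sig_u <= eta).float() * (1.0 - sig_u) * loss(-g_u)).mean()
    w_p = (sig_p > eta).float() * (1.0 - sig_p) / sig_p.clamp(min=1e-8)
    w_bn = (sig_bn > eta).float() * (1.0 - sig_bn) / sig_bn.clamp(min=1e-8)
    risk_rest = u_part + pi * (w_p * loss(-g_p)).mean() + rho * (w_bn * loss(-g_bn)).mean()
    return risk_p + risk_bn + risk_rest
```

In this sketch σ̂ is assumed to have been obtained beforehand, for example by running nnPU on X_P ∪ X_bN versus X_U with the logistic loss and a sigmoid output, as described above.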
It is however impossible to get rid of the approximation error part which is fixed 2 For instance, this holds for linear-in-parameter model class DISPLAYFORM6 DISPLAYFORM7 where Cw and C ϕ are positive constants BID26.3 These data, according to theorem 2, must be different from those used to evaluateR PUbN,η,σ (g). This condition is however violated in most of our experiments. See Appendix D.3 for more discussion.once we fix the function class G. To circumvent this problem, we can either resort to kernel-based methods with universal kernels BID44 or simply enlarge the function class when we get more samples. In PU learning scenarios, we only have P and U data and bN data are not available. Nevertheless, if we let y play the role of s and ignore all the terms related to bN data, our algorithm is naturally applicable to PU learning. Let us name the ing algorithm PUbN\N, then DISPLAYFORM0 whereσ is an estimate of p(y = +1 | x) and DISPLAYFORM1 ].PUbN\N can be viewed as a variant of the traditional two-step approach in PU learning which first identifies possible N data in U data and then perform ordinary PN classification to distinguish P data from the identified N data. However, being based on state-of-the-art nnPU learning, our method is more promising than other similar algorithms. Moreover, by explicitly considering the posterior p(y = +1 | x), we attempt to correct the bias induced by the fact of only taking into account confident negative samples. The benefit of using an unbiased risk estimator is that the ing algorithm is always statistically consistent, i.e., the estimation error converges in probability to zero as the number of samples grows to infinity. In this section, we experimentally investigate the proposed method and compare its performance against several baseline methods. We focus on training neural networks with stochastic optimization. For simplicity, in an experiment, σ and g always use the same model and are trained for the same number of epochs. All models are learned using AMSGrad BID33 as the optimizer and the logistic loss as the surrogate loss unless otherwise specified. To determine the value of η, we introduce another hyperparameter τ and choose η such that #{x ∈ X U |σ(x) ≤ η} = τ (1 − π − ρ)n U. In all the experiments, an additional validation set, equally composed of P, U and bN data, is sampled for both hyperparameter tuning and choosing the model parameters with the lowest validation loss among those obtained after every epoch. Regarding the computation of the validation loss, we use the PU risk estimator with the sigmoid loss for g and an empirical approximation of DISPLAYFORM0 We assess the performance of the proposed method on three benchmark datasets: MNIST, CIFAR-10 and 20 Newsgroups. Experimental details are given in Appendix C. In particular, since all the three datasets are originally designed for multiclass classification, we group different categories together to form a binary classification problem. Baselines. When X bN is given, two baseline methods are considered. The first one is nnPNU adapted from. In the second method, named as PU→PN, we train two binary classifiers: one is learned with nnPU while we regard s as the class label, and the other is learned from X P and X bN to separate P samples from bN samples. A sample is classified in the P class only if it is so classified by the two classifiers. 
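The rule stated above for picking η from τ, namely choosing η such that #{x ∈ X_U | σ̂(x) ≤ η} = τ(1 − π − ρ)n_U, amounts to taking a quantile of σ̂ over the unlabeled set; a small sketch with illustrative names:

```python
import numpy as np

def eta_from_tau(sigma_u, tau, pi, rho):
    """Return eta such that roughly tau * (1 - pi - rho) * n_U unlabeled points
    satisfy sigma_hat(x) <= eta (ties and rounding are ignored in this sketch)."""
    sigma_u = np.sort(np.asarray(sigma_u, dtype=float))
    n_u = len(sigma_u)
    k = int(round(tau * (1.0 - pi - rho) * n_u))
    k = min(max(k, 1), n_u)
    return float(sigma_u[k - 1])
```

Intuitively, with τ close to 1 almost all of the expected unlabeled-negative mass is treated as N data, while smaller τ relies more on the P and bN sets for the negative part of the risk.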
When X bN is not available, nnPU is compared with the proposed PUbN\N.Sampling bN Data To sample X bN, we suppose that the bias of N data is caused by a latent prior probability change BID40 BID14 in the N class. Let z ∈ Z:= DISPLAYFORM0 In the experiments, the latent categories are the original class labels of the datasets. Concrete definitions of X bN with experimental are summarized in TAB0.Results. Overall, our proposed method consistently achieves the best or comparable performance in all the scenarios, including those of standard PU learning. Additionally, using bN data can effectively help improving classification performance. However, the choice of algorithm is essential. Both nnPNU and the naive PU→PN are able to leverage bN data to enhance classification accuracy in only relatively few tasks. In the contrast, the proposed PUbN successfully reduce the misclassification error most of the time. Clearly, the performance gain that we can benefit from the availability of bN data is case-dependent. On CIFAR-10, the greatest improvement is achieved when we regard mammals (i.e. cat, deer, dog and horse) as P class and drawn samples from latent categories bird and frog as labeled negative data. This is not surprising because birds and frogs are more similar to mammals than vehicles, which makes the classification harder specifically for samples from these two latent categories. By explicitly labeling these samples as N data, we allow the classifier to make better predictions for these difficult samples. Through experiments we have demonstrated that the presence of bN data effectively helps learning a better classifier. Here we would like to provide some intuition for the reason behind this. Let us consider the MNIST learning task where X bN is uniformly sampled from the latent categories 1, 3 and 5. We project the representations learned by the classifier (i.e., the activation values of the last layer of the neural network) into a 2D plane using PCA for both nnPU and PUbN algorithms. The are shown in FIG0. Since for both nnPU and PUbN classifiers, the first two principal components account around 90% of variance, we believe that this figure depicts fairly well the learned representations. Thanks to the use of bN data, in the high-level feature space 1, 3, 5 and P data are further pushed away when we employ the proposed PUbN learning algorithm, and we are always able to separate 7, 9 from P to some extent. This explains the better performance which is achieved by PUbN learning and the benefit of incorporating bN data into the learning process. This paper studied the PUbN classification problem, where a binary classifier is trained on P, U and bN data. The proposed method is a two-step approach inspired from both PU learning and importance weighting. The key idea is to attribute appropriate weights to each example to evaluate the classification risk using the three sets of data. We theoretically established an estimation error bound for the proposed risk estimator and experimentally showed that our approach successfully leveraged bN data to improve the classification performance on several real-world datasets. A variant of our algorithm was able to achieve state-of-the-art in PU learning. DISPLAYFORM0 We obtain Equation after replacing p(x, s = +1) by πp(x | y = +1)+ρp(x | y = −1, s = +1). Forσ and η given, let us define DISPLAYFORM0 The following lemma establishes the uniform deviation bound fromR PUbN,η,σ to R PUbN,η,σ. 
DISPLAYFORM1 ] be a fixed function independent of data used to computeR PUbN,η,σ and η ∈. For any δ > 0, with probability at least 1 − δ, DISPLAYFORM2 Proof. For ease of notation, let DISPLAYFORM3 ].From the sub-additivity of the supremum operator, we have DISPLAYFORM4 As a consequence, to conclude the proof, it suffices to prove that with probability at least 1 − δ/3, the following bounds hold separately: DISPLAYFORM5 DISPLAYFORM6 DISPLAYFORM7 Below we prove. FORMULA43 and FORMULA0 are proven similarly. Let ϕ x: R → R + be the function defined by DISPLAYFORM8 Following the proof of Theorem 3.1 in BID26, it is then straightforward to show that with probability at least 1 − δ/3, it holds that DISPLAYFORM9 where θ = {θ 1, . . ., θ n P} and each θ i is a Rademacher variable. Also notice that for all x, ϕ x is a (L l /η)-Lipschitz function on the interval [−C g, C g]. By using a modified version of Talagrad's concentration lemma (specifically, Lemma 26.9 in), we can show that, when the set X P is fixed, we have DISPLAYFORM10 After taking expectation over X P ∼ p np P, we obtain the Equation FORMULA42.However, what we really want to minimize is the true risk R(g). Therefore, we also need to bound the difference between R PUbN,η,σ (g) and R(g), or equivalently, the difference between DISPLAYFORM11 Proof. One one hand, we havē DISPLAYFORM12 On the other hand, we can expressR DISPLAYFORM13 The last equality follows from the fact p(DISPLAYFORM14 From the second to the third line we use the Cauchy-Schwarz inequality. |A 1 − A 2 | ≤ C l √ ζϵ can be proven similarly, which concludes the proof. Combining lemma 1 and lemma 2, we know that with probability at least 1 − δ, the following holds: DISPLAYFORM15 Finally, with probability at least 1 − δ, DISPLAYFORM16 The first inequality uses the definition ofĝ PUbN,η,σ . In terms of validation we want to choose the model forσ such that DISPLAYFORM0 The last term does not depend onσ and can be ignored if we want to identifyσ achieving the smallest J(σ). We denote by J(σ) the sum of the first two terms. The middle term can be further expanded using DISPLAYFORM1 The validation loss of an estimationσ is then defined aŝ DISPLAYFORM2 It is also possible to minimize this value directly to acquireσ. In our experiments we decide to learn σ by nnPU for a better comparison between different methods. In the experiments we work on multiclass classification datasets. Therefore it is necessary to define the P and N classes ourselves. MNIST is processed in such a way that pair numbers 0, 2, 4, 6, 8 form the P class and impair numbers 1, 3, 5, 7, 9 form the N class. Accordingly, π = 0.49. For CIFAR-10, we consider two definitions of the P class. The first one corresponds to a quite natural task that aims to distinguish vehicles from animals. Airplane, automobile, ship and truck are therefore defined to be the P class while the N class is formed by bird, cat, deer, dog, frog and horse. For the sake of diversity, we also study another task in which we attempt to distinguish the mammals from the non-mammals. The P class is then formed by cat, deer, dog, and horse while the N class consists of the other six classes. We have π = 0.4 in the two cases. As for 20 Newsgroups, alt., comp., misc. and rec. make up the P class whereas sci., soc. and talk. make up the N class. This gives π = 0.56. For the three datasets, we use the standard test examples as a held-out test set. The test set size is thus of 10000 for MNIST and CIFAR-10, and 7528 for 20 Newsgroups. 
Regarding the training set, we sample 500, 500 and 6000 P, bN and U training examples for MNIST and 20 Newsgroups, and 1000, 1000 and 10000 P, bN and U training examples for CIFAR-10. The validation set is always five times smaller than the training set. The original 20 Newsgroups dataset contains raw text data and needs to be preprocessed into text feature vectors for classification. In our experiments we borrow the pre-trained ELMo word embedding BID30 from https://allennlp.org/elmo. The used 5.5B model was, according to the website, trained on a dataset of 5.5B tokens consisting of Wikipedia (1.9B) and all of the monolingual news crawl data from WMT 2008 WMT -2012. For each word, we concatenate the features from the three layers of the ELMo model, and for each document, as suggested in BID34, we concatenate the average, minimum, and maximum computed along the word dimension. This in a 9216-dimensional feature vector for a single document. MNIST For MNIST, we use a standard ConvNet with ReLU. This model contains two 5x5 convolutional layers and one fully-connected layer, with each convolutional layer followed by a 2x2 max pooling. The channel sizes are 5-10-40. The model is trained for 100 epochs with a weight decay of 10 −4. Each minibatch is made up of 10 P, 10 bN (if available) and 120 U samples. The learning rate α ∈ {10 −2, 10 −3} and τ ∈ {0. 5, 0.7, 0.9}, γ ∈ {0.1, 0.3, 0.5, 0.7, 0 .9} are selected with validation data. CIFAR-10 For CIFAR-10, we train PreAct ResNet-18 BID11 for 200 epochs and the learning rate is divided by 10 after 80 epochs and 120 epochs. This is a common practice and similar adjustment can be found in BID11. The weight decay is set to 10 −4. The minibatch size is 1/100 of the number of training samples, and the initial learning rate is chosen from {10 −2, 10 −3}. We also have τ ∈ {0. 5, 0.7, 0.9} and γ ∈ {0.1, 0.3, 0.5, 0.7, 0.9}. 20 Newsgroups For 20 Newsgroups, with the extracted features, we simply train a multilayer perceptron with two hidden layers of 300 neurons for 50 epochs. We use basically the same hyperparameters as for MNIST except that the learning rate α is selected from {5 · 10 Our method, specifically designed for PUbN learning, naturally outperforms other baseline methods in this problem. Nonetheless, Table 1 equally shows that the proposed method when applied to PU learning, achieves significantly better performance than the state-of-the-art nnPU algorithm. Here we numerically investigate the reason behind this phenomenon. Besides nnPU and PUbN\N, we compare with unbiased PU (uPU) learning. Both uPU and nnPU are learned with the sigmoid loss, learning rate 10 −3 for MNIST, initial learning rate 10 −4 for CIFAR-10, and learning rate 10 −4 for 20 Newsgroups. This is because uPU learning is unstable with the logistic loss. The other parts of the experiments remain unchanged. On the test sets we compute the false positive rates, false negative rates and misclassification errors for the three methods and plot them in FIG3. We first notice that PUbN\N still outperforms nnPU trained with the sigmoid loss. In fact, the final performance of the nnPU classifier does not change much when we replace the logistic loss with the sigmoid loss. In BID19, the authors observed that uPU overfits training data with the risk going to negative. In other words, a large portion of U samples are classified to the N class. This is confirmed in our experiments by an increase of false negative rate and decrease of false positive rate. 
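The 20 Newsgroups feature construction described earlier in this appendix (concatenating the three ELMo layers per word, then taking the average, minimum and maximum over the word dimension) can be sketched as follows; obtaining the per-word ELMo vectors themselves is outside the scope of this snippet, and the per-word dimension of 3072 is inferred from the 9216-dimensional document vectors mentioned above.

```python
import numpy as np

def document_features(word_vectors):
    """Per-word ELMo features of shape (num_words, d) -> one document vector of
    size 3 * d, concatenating the average, minimum and maximum over the words
    (d = 3072 gives the 9216-dimensional vectors used in the experiments)."""
    w = np.asarray(word_vectors, dtype=float)
    return np.concatenate([w.mean(axis=0), w.min(axis=0), w.max(axis=0)])
```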
nnPU remedies the problem by introducing the non-negative risk estimator. While the non-negative correction successfully prevents false negative rate from going up, it also causes more N samples to be classified as P compared to uPU. However, since the gain in terms of false negative rate is enormous, at the end nnPU achieves a lower misclassification error. By further identifying possible N samples after nnPU learning, we expect that our algorithm can yield lower false positive rate than nnPU without misclassifying too many P samples as N as in the case of uPU. FIG3 suggests that this is effectively the case. In particular, we observe that on MNIST, our method achieves the same false positive rate than uPU whereas its false negative rate is comparable to nnPU. In the proposed algorithm we introduce η to control howR s=−1 (g) is approximated from data and assume that ρ = p(y = −1, s = +1) is given. Here we conduct experiments to see how our method is affected by these two factors. To assess the influence of η, from TAB0 we pick four learning tasks and we choose τ from {0.5, 0.7, 0.9, 2} while all the other hyperparameters are fixed. Similarly to simulate the case where ρ is misspecified, we replace it by ρ ′ ∈ {0.8ρ, ρ, 1.2ρ} in our learning method and run experiments with all hyperparameters being fixed to a certain value. However, we still use the true ρ to compute η from τ to ensure that we always use the same number of U samples in the second step of the algorithm independent of the choice of ρ ′.The are reported in TAB1 and TAB2. We can see that the performance of the algorithm is sensitive to the choice of τ. With larger value of τ, more U data are treated as N data in PUbN learning, and consequently it often leads to higher false negative rate and lower false positive rate. The trade-off between these two measures is a classic problem in binary classification. In particular, when τ = 2, a lot more U samples are involved in the computation of the PUbN risk, but this does not allow the classifier to achieve a better performance. We also observe that there is a positive correlation between the misclassification rate and the validation loss, which confirms that the optimal value of η can be chosen without need of unbiased N data. TAB2 shows that in general slight misspecification of ρ does not cause obvious degradation of the classification performance. In fact, misspecification of ρ mainly affect the weights of each sample when we computeR PUbN,η,σ (due to the direct presence of ρ in and influence on estimating σ). However, as long as the variation of these weights remain in a reasonable range, the learning algorithm should yield classifiers with similar performances. DISPLAYFORM0 Theorem 2 suggests thatσ should be independent from the data used to computeR PUbN,η,σ. Therefore, here we investigate the performance of our algorithm whenσ and g are optimized using different sets of data. We sample two training sets and two validation sets in such a way that they are all disjoint. The size of a single training set and a single validation set is as indicated in Appendix C.2, except for 20 Newsgroups we reduce the number of examples in a single set by half. We then use different pairs of training and validation sets to learnσ and g. For 20 Newsgroups we also conduct standard experiments whereσ and g are learned on the same data, whereas for MNIST and CIFAR-10 we resort to TAB0.The are presented in TAB3. 
Estimating σ from separate data does not seem to benefit much the final classification performance, despite the fact that it requires collecting twice more samples. In fact,R − s=−1,η,σ (g) is a good approximation ofR − s=−1,η,σ (g) as long as the functionσ is smooth enough and does not possess abrupt changes between data points. With the use of non-negative correction, validation data and L2 regularization, the ingσ does not overfit training data so this should always be the case. As a consequence, even ifσ and g are learned on the same data, we are still able to achieve small generalization error with sufficient number of samples. In subsection 2.3, we define the nnPNU algorithm by forcing the estimator of the whole N partial risk to be positive. However, notice that the term γ(1 − π)R − N (g) is always positive and the chances are that including it simply makes non-negative correction weaker and is thus harmful to the final classification performance. Therefore, here we consider an alternative definition of nnPNU where we only force the term (1 − γ)(R − U (g) − πR − P (g)) to be positive. We plug the ing algorithm in the experiments of subsection 4.2 and summarize the in TAB4 in which we denote the alternative version of nnPNU by nnPU+PN since it uses the same non-negative correction as nnPU. The table indicates that neither of the two definitions of nnPNU consistently outperforms the other. It also ensures that there is always a clear superiority of our proposed PUbN algorithm compared to nnPNU despite its possible variant that is considered here. 17.14 ± 1.87 15.80 ± 0.95 soc. > talk. > sci.15.93 ± 1.88 15.80 ± 1.91 sci. 14.69 ± 0.46 14.50 ± 1.32 talk.14.38 ± 0.74 14.71 ± 1.01 soc. > talk. > sci.14.41 ± 0.70 13.66 ± 0.72 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | H1ldNoC9tX | This paper studied the PUbN classification problem, where we incorporate biased negative (bN) data, i.e., negative data that is not fully representative of the true underlying negative distribution, into positive-unlabeled (PU) learning. |
Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve. Reinforcement learning (RL) is an appealing path for advancement in Machine Translation (MT), as it allows training systems to optimize non-differentiable score functions, common in MT evaluation, as well as tackling the "exposure bias" in standard training, namely that the model is not exposed during training to incorrectly generated tokens, and is thus unlikely to recover from generating such tokens at test time. These motivations have led to much interest in RL for text generation in general and MT in particular (see §2). Various policy gradient methods have been used, notably REINFORCE and variants thereof (e.g., ;) and Minimum Risk Training (MRT; e.g., ; . Another popular use of RL is for training GANs . Nevertheless, despite increasing interest and strong , little is known about what accounts for these performance gains, and the training dynamics involved. We present the following contributions. First, our theoretical analysis shows that commonly used approximation methods are theoretically ill-founded, and may converge to parameter values that do not minimize the risk, nor are local minima thereof (§2.2). Second, using both naturalistic experiments and carefully constructed simulations, we show that performance gains observed in the literature likely stem not from making target tokens the most probable, but from unrelated effects, such as increasing the peakiness of the output distribution (i.e., the probability mass of the most probable tokens). We do so by comparing a setting where the reward is informative, vs. one where it is constant. In §4 we discuss this peakiness effect (PKE). Third, we show that promoting the target token to be the mode is likely to take a prohibitively long time. The only case we find, where improvements are likely, is where the target token is among the first 2-3 most probable tokens according to the pretrained model. These findings suggest that REINFORCE (§5) and CMRT (§6) are likely to improve over the pre-trained model only under the best possible conditions, i.e., where the pre-trained model is "nearly" correct. We conclude by discussing other RL practices in MT which should be avoided for practical and theoretical reasons, and briefly discuss alternative RL approaches that will allow RL to tackle a larger class of errors in pre-trained models (§7). An MT system generates tokens y = (y 1, ..., y n) from a vocabulary V one token at a time. The probability of generating y i given preceding tokens y <i is given by P θ (·|x, y <i), where x is the source sentence and θ are the model parameters. 
For each generated token y i, we denote with r(y i ; y <i, x, y (ref) ) the score, or reward, for generating y i given y <i, x, and the reference sentence y (ref). For brevity, we omit parameters where they are fixed within context. For simplicity, we assume r does not depend on following tokens y >i. We also assume there is exactly one valid target token, as de facto, training is done against a single reference . In practice, either a token-level reward is approximated using MonteCarlo methods (e.g.,), or a sentence-level (sparse) reward is given at the end of the episode (sentence). The latter is equivalent to a uniform token-level reward. r is often the negative log-likelihood, or a standard MT metric, e.g., BLEU . RL's goal is to maximize the expected episode reward (denoted with R); i.e., to find 2.1 REINFORCE For a given source sentence, and past predictions y <i, REINFORCE samples k tokens (k is a hyperparameter) S = y,..., y (k) from P θ and updates θ according to this rule: The right-hand side of equation 2 is an unbiased estimator of the gradient of the objective function, i.e., E [∆θ] ∝ ∇ θ R (θ). Therefore, REINFORCE is performing a form of stochastic gradient ascent on R, and has similar formal guarantees. From here follows that if R is constant with respect to θ, then the expected ∆θ prescribed by REINFORCE is zero. We note that r may be shifted by a constant term (called a "baseline"), without affecting the optimal value for θ. REINFORCE is used in MT, text generation, and image-to-text tasks;;; ) -in isolation, or as a part of training . Lately, an especially prominent use for REINFORCE is adversarial training with discrete data, where another network predicts the reward (GAN). For some recent work on RL for NMT, see (; ; ; ;). The term Minimum Risk Training (MRT) is used ambiguously in MT to refer either to the application of REINFORCE to minimizing the risk (equivalently, to maximizing the expected reward, the negative loss), or more commonly to a somewhat different estimation method, which we term Contrastive MRT (CMRT) and turn now to analyzing. CMRT was proposed by , adapted to NMT by, and often used since (; ; ; ;). The method works as follows: at each iteration, sample k tokens S = {y 1, . . ., y k} from P θ, and update θ according to the gradient of where Q θ,S (y i) = P (y i) α yj ∈S P (y j) α Commonly (but not universally), deduplication is performed, so R sums over a set of unique values . This changes little in our empirical and theoretical analysis. Despite the resemblance in definitions of R (equation 1) and R (indeed, R is sometimes presented as an approximation of R), they differ in two important aspects. First, Q's support is S, so increasing Q(y i) for some y i necessarily comes at the expense of Q(y) for some y ∈ S. In contrast, increasing P (y i), as in REINFORCE, may come at the expense of P (y) for any y ∈ V. Second, α is a smoothness parameter: the closer α is to 0, the closer Q is to be uniform. We show in Appendix A.1 that despite its name, CMRT does not optimize R, nor does it optimize E[R]. That is, it may well converge to values that are not local maxima of R, making it theoretically ill-founded. 1 However, given CMRT popularity, the strong it yielded and the absence of theory for explaining it, we discuss it here. Given a sample S, the gradient of R is given by where Comparing Equations 2 and 3, the differences between REINFORCE and CMRT are reflected again. 
First, ∇ R has an additional term, proportional to ∇ log Z(S), which yields the contrastive effect. This contrast may improve the rate of convergence since it counters the decrease of probability mass for non-sampled tokens. Second, given S, the relative weighting of the gradients ∇ log P (y i) is proportional to r(y i)Q(y i), or equivalently to r(y i)P (y i) α. CMRT with deduplication sums over distinct values in S (equation 3), while REINFORCE sums over all values. This means that the relative weight of the unique value y i is r(yi)|{yi∈S}| k in REINFORCE. For α = 1 the expected value of these relative weights is the same, and so for α < 1 (as is commonly used), more weight is given to improbable tokens, which could also have a positive effect on the convergence rate. 2 However, if α is too close to 0, ∇ R vanishes, as it is not affected by θ. This tradeoff explains the importance of tuning α reported in the literature. In §6 we present simulations with CMRT, showing very similar trends as presented by REINFORCE. Implementing a stochastic gradient ascent, REINFORCE is guaranteed to converge to a stationary point of R under broad conditions. However, not much is known about its convergence rate under the prevailing conditions in NMT. We begin with a qualitative, motivating analysis of these questions. As work on language generation empirically showed, RNNs quickly learn to output very peaky distributions . This tendency is advantageous for generating fluent sentences with high probability, but may also entail slower convergence rates when using RL to fine-tune the model, because RL methods used in text generation sample from the (pretrained) policy distribution, which means they mostly sample what the pretrained model deems to be likely. Since the pretrained model (or policy) is peaky, exploration of other potentially more rewarding tokens will be limited, hampering convergence. Intuitively, REINFORCE increases the probabilities of successful (positively rewarding) observations, weighing updates by how rewarding they were. When sampling a handful of tokens in each context (source sentence x and generated prefix y <i), and where the number of epochs is not large, it is unlikely that more than a few unique tokens will be sampled from P θ (·|x, y <i). (In practice, k is typically between 1 and 20, and the number of epochs between 1 and 100.) It is thus unlikely that anything but the initially most probable candidates will be observed. Consequently, REINFORCE initially raises their probabilities, even if more rewarding tokens can be found down the list. We thus hypothesize the peakiness of the distribution, i.e., the probability mass allocated to the most probable tokens, will increase, at least in the first phase. We call this the peakiness-effect (PKE), and show it occurs both in simulations (§4.1) and in full-scale NMT experiments (§4.2). With more iterations, the most-rewarding tokens will be eventually sampled, and gradually gain probability mass. This discussion suggests that training will be extremely sample-inefficient. We assess the rate of convergence empirically in §5, finding this to be indeed the case. A histogram of the update size (x-axis) to the total predicted probability of the 10 most probable tokens (left) or the most probable token (right) in the Constant Reward setting. An update is overwhelmingly more probable to increase this probability than to decrease it. 
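Returning to the estimator of §2.2, the CMRT objective on a sample S can be sketched as follows. The closed form is reconstructed from the definition of Q above and should be read as an assumption rather than a quotation: R~(θ) = Σ_{y∈S} Q_{θ,S}(y) r(y), computed on the (optionally deduplicated) sample with smoothing parameter α; in practice the gradient of equation 3 is obtained by automatic differentiation, so only the forward computation is shown.

```python
# Sketch of the (assumed) CMRT objective on a sampled set S, with smoothing alpha.
import numpy as np

def cmrt_objective(theta, sample, reward_fn, alpha=0.005, dedup=True):
    """theta: vocabulary logits; sample: sampled token indices; reward_fn: token -> r."""
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    tokens = np.unique(sample) if dedup else np.asarray(sample)
    unnorm = probs[tokens] ** alpha                 # P(y)^alpha for y in S
    Q = unnorm / unnorm.sum()                       # Q_{theta,S}(y), supported only on S
    rewards = np.array([reward_fn(int(t)) for t in tokens])
    return float(np.sum(Q * rewards))
```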
We turn to demonstrate that the initially most probable tokens will initially gain probability mass, even if they are not the most rewarding, yielding a PKE. recently observed in the context of language modeling using GANs that performance gains similar to those GAN yield can be achieved by decreasing the temperature for the prediction softmax (i.e., making it peakier). However, they proposed no causes for this effect. Our findings propose an underlying mechanism leading to this trend. We return to this point in §7. Furthermore, given their findings, it is reasonable to assume that our are relevant for RL use in other generation tasks, whose output space too is discrete, high-dimensional and concentrated. We experiment with a 1-layer softmax model, that predicts a single token i ∈ V with probability. θ = {θ j} j∈V are the model's parameters. This model simulates the top of any MT decoder that ends with a softmax layer, as essentially all NMT decoders do. To make experiments realistic, we use similar parameters as those reported in the influential Transformer NMT system . Specifically, the size of V (distinct BPE tokens) is 30,715, and the initial values for θ were sampled from 1,000 sets of logits taken from decoding the standard newstest2013 development set, using a pretrained Transformer model. The model was pretrained on WMT2015 training data . Hyperparameters are reported in Appendix A.3. We define one of the tokens in V to be the target token and denote it with y best. We assign deterministic token reward, this makes learning easier than when relying on approximations and our predictions optimistic. We experiment with two reward functions: 1. Simulated Reward: r(y) = 2 for y = y best, r(y) = 1 if y is one of the 10 initially highest scoring tokens, and r(y) = 0 otherwise. This simulates a condition where the pretrained model is of decent but sub-optimal quality. r here is at the scale of popular rewards used in MT, such as GAN-based rewards or BLEU (which are between 0 and 1). 2. Constant Reward: r is constantly equal to 1, for all tokens. This setting is aimed to confirm that PKE is not a of the signal carried by the reward. Experiments with the first setting were run 100 times, each time for 50K steps, updating θ after each step. With the second setting, it is sufficient to take a single step at a time, as the expected update after each step is zero, and so any PKE seen in a single step is only accentuated in the next. It is, therefore, more telling to run more repetitions rather than more steps per initialization. We, therefore, sample 10,000 pretrained distributions, and perform a single REINFORCE step. As RL training lasts about 30 epochs before stopping, samples about 100K tokens per epoch, and as the network already predicts y best in about two thirds of the contexts, 3 we estimate the number of steps used in practice to be in the order of magnitude of 1M. For visual clarity, we present figures for 50K-100K steps. However, full experiments (with 1M steps) exhibit similar trends: where REINFORCE was not close to converging after 50K steps, the same was true after 1M steps. We evaluate the peakiness of a distribution in terms of the probability of the most probable token (the mode), the total probability of the ten most probable tokens, and the entropy of the distribution (lower entropy indicates more peakiness). Results. 
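A self-contained sketch of the Constant Reward simulation just described is given below (the observed results follow in the next paragraph). The random logits are only a stand-in for the pretrained Transformer logits used in the paper, scaled so that the starting distribution is peaky, so the exact numbers will differ; the three peakiness measures are the mode probability, the top-10 probability mass and the entropy. For a categorical softmax, ∇_θ log P(y) = onehot(y) − P, which is what the inner loop uses.

```python
# Sketch: one REINFORCE step with constant reward r(y) = 1 on a softmax over a
# 30,715-token vocabulary, comparing peakiness before and after the update.
import numpy as np

def softmax(theta):
    p = np.exp(theta - theta.max())
    return p / p.sum()

def peakiness(p):
    top = np.sort(p)[::-1]
    entropy = -np.sum(p * np.log(p + 1e-12))
    return top[0], top[:10].sum(), entropy          # mode prob, top-10 prob, entropy

rng = np.random.default_rng(0)
V, k, lr = 30715, 10, 0.1
theta = 6.0 * rng.standard_normal(V)                # peaky stand-in for pretrained logits

p_before = softmax(theta)
sample = rng.choice(V, size=k, p=p_before)
grad = np.zeros(V)
for y in sample:                                    # constant reward r(y) = 1
    onehot = np.zeros(V)
    onehot[y] = 1.0
    grad += (onehot - p_before)                     # grad of log P(y)
theta += lr * grad / k                              # stochastic gradient ascent step

print("before:", peakiness(p_before))
print("after: ", peakiness(softmax(theta)))         # mode / top-10 mass typically grow for a peaky start
```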
The distributions become peakier in terms of all three measures: on average, the mode's probability and the 10 most probable tokens increases, and the entropy decreases. Figure 1a presents the histogram of the update size, the difference in the probability of the 10 most probable tokens in the Constant Reward setting, after a single step. Figure 1b depicts similar statistics for the mode. The average entropy in the pretrained model is 2.9 is reduced to 2.85 after one REINFORCE step. Simulated Reward setting shows similar trends. For example, entropy decreases from 3 to about 0.001 in 100K steps. This extreme decrease suggests it is effectively a deterministic policy. PKE is achieved in a few hundred steps, usually before other effects become prominent (see Figure 2), and is stronger than for Constant Reward. Figure 3: The cumulative distribution of the probability of the most likely token in the NMT experiments. The green distribution corresponds to the pretrained model, and the blue corresponds to the reinforced model. The yaxis is the proportion of conditional probabilities with a mode of value ≤ x (the x-axis). Note that a lower cumulative percentage means a more peaked output distribution. A lower cumulative percentage means a more peaked output distribution. We turn to analyzing a real-world application of REINFORCE to NMT. Important differences between this and the previous simulations are: it is rare in NMT for REINFORCE to sample from the same conditional distribution more than a handful of times, given the number of source sentences x and sentence prefixes y <i (contexts); and in NMT P θ (·|x, y <i) shares parameters between contexts, which means that updating P θ for one context may influence P θ for another. We follow the same pretraining as in §4.1. We then follow in defining the reward function based on the expected BLEU score. Expected BLEU is computed by sampling suffixes for the sentence, and averaging the BLEU score of the sampled sentences against the reference. We use early stopping with a patience of 10 epochs, where each epoch consists of 5,000 sentences sampled from the WMT2015 German-English training data. We use k = 1. We retuned the learning-rate, and positive baseline settings against the development set. Other hyper-parameters were an exact replication of the experiments reported in . Results. Results indicate an increase in the peakiness of the conditional distributions. Our are based on a sample of 1,000 contexts from the pretrained model, and another (independent) sample from the reinforced model. The modes of the conditional distributions tend to increase. Figure 3 presents the distribution of the modes' probability in the reinforced conditional distributions compared with the pretrained model, showing a shift of probability mass towards higher probabilities for the mode, following RL. Another indication of the increased peakiness is the decrease in the average entropy of P θ, which was reduced from 3.45 in the pretrained model to an average of 2.82 following RL. This more modest reduction in entropy (compared to §4.1) might also suggest that the procedure did not converge to the optimal value for θ, as then we would have expected the entropy to substantially drop if not to 0 (overfit), then to the average entropy of valid next tokens (given the source and a prefix of the sentence). We now turn to assessing under what conditions it is likely that REINFORCE will lead to an improvement in the performance of an NMT system. 
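Before moving on, a hedged sketch of the expected-BLEU reward used in the NMT experiment above: given a generated prefix, sample a few suffixes from the current policy and average the sentence-level BLEU of the completed hypotheses against the reference (the appendix mentions 20 rolls per word). The `sample_suffix` function is a placeholder for the decoder's sampling routine, not an API of any particular NMT toolkit, and the smoothing function is added here only to avoid zero scores on short samples; the paper does not specify it.

```python
# Sketch of a Monte-Carlo expected-BLEU reward for a generated prefix.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def expected_bleu(prefix, reference, sample_suffix, n_rolls=20):
    """prefix, reference: token lists; sample_suffix(prefix) -> sampled suffix tokens."""
    smooth = SmoothingFunction().method1
    scores = []
    for _ in range(n_rolls):
        hypothesis = prefix + sample_suffix(prefix)
        scores.append(sentence_bleu([reference], hypothesis, smoothing_function=smooth))
    return sum(scores) / n_rolls
```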
As in the previous section, we use both controlled simulations and NMT experiments. We use the same model and experimental setup described in Section 4.1, this time only exploring the Simulated Reward setting, as a Constant Reward is not expected to converge to any meaningful θ. Results are averaged over 100 conditional distributions sampled from the pretrained model. Caution should be exercised when determining the learning rate (LR). Common LRs used in the NMT literature are of the scale of 10 −4. However, in our simulations, no LR smaller than 0.1 yielded any improvement in R. We thus set the LR to be 0.1. We note that in our simulations, a higher learning rate means faster convergence as our reward is noise-free: it is always highest for the best option. In practice, increasing the learning rate may deteriorate , as it may cause the system to overfit to the sampled instances. Indeed, when increasing the learning rate in our NMT experiments (see below) by an order of magnitude, early stopping caused the RL procedure to stop without any parameter updates. Figure 2 shows the change in P θ over the first 50K REINFORCE steps (probabilities are averaged over 100 repetitions), for a case where y best was initially the second, third and fourth most probable. Although these are the easiest settings, and despite the high learning rate, it fails to make y best the mode of the distribution within 100K steps, unless y best was initially the second most probable. In cases where y best is initially of a lower rank than four, it is hard to see any increase in its probability, even after 1M steps. We trained an NMT system, using the same procedure as in Section 4.2, and report BLEU scores over the news2014 test set. After training with an expected BLEU reward, we indeed see a minor improvement which is consistent between trials and pretrained models. While the pretrain BLEU score is 30.31, the reinforced one is 30.73. Analyzing what words were influenced by the RL procedure, we begin by computing the cumulative probability of the target token y best to be ranked lower than a given rank according to the pretrained model. Results (Figure 4) show that in about half of the cases, y best is not among the top three choices of the pretrained model, and we thus expect it not to gain substantial probability following REINFORCE, according to our simulations. We next turn to compare the ranks the reinforced model assigns to the target tokens, and their ranks according to the pretrained model. Figure 6 presents the difference in the probability that y best is ranked at a given rank following RL and the probability it is ranked there initially. Results indicate that indeed more target tokens are ranked first, and less second, but little consistent shift of probability mass occurs otherwise across the ten first ranks. It is possible that RL has managed to push y best in some cases between very low ranks (<1,000) to medium-low ranks (between 10 and 1,000). However, token probabilities in these ranks are so low that it is unlikely to affect the system outputs in any way. This fits well with the of our simulations that predicted that only the initially top-ranked tokens are likely to change. In an attempt to explain the improved BLEU score following RL with PKE, we repeat the NMT experiment this time using a constant reward of 1. Our present a nearly identical improvement in BLEU, achieving 30.72, and a similar pattern in the change of the target tokens' ranks (see Ap- pendix 8). 
Therefore, there is room to suspect that even in cases where RL yields an improvement in BLEU, it may partially from reward-independent factors, such as PKE. 6 EXPERIMENTS WITH CONTRASTIVE MRT Figure 6: Difference between the ranks of y best in the reinforced and the pretrained model. Each column x corresponds to the difference in the probability that y best is ranked in rank x in the reinforced model, and the same probability in the pretrained model. In §2.2 we showed that CMRT does not, in fact, maximize R, and so does not enjoy the same theoretical guarantees as REINFORCE and similar policy gradient methods. However, being the RL procedure of choice in much recent work we repeat the simulations described in §4 and §5, assessing CMRT's performance in these conditions. We experiment with α = 0.005 and k = 20, common settings in the literature, and average over 100 trials. Figure 5 shows how the distribution P θ changes over the course of 50K update steps to θ, where y best is taken to be the second and third initially most probable token (Simulated Reward setting). Results are similar in trends to those obtained with REINFORCE: MRT succeeds in pushing y best to be the highest ranked token if it was initially second, but struggles where it was initially ranked third or below. We only observe a small PKE in MRT. This is probably due to the contrastive effect, which means that tokens that were not sampled do not lose probability mass. All graphs we present here allow sampling the same token more than once in each batch (i.e., S is a sample with replacements). Simulations with deduplication show similar . In this paper, we showed that the type of distributions used in NMT entail that promoting the target token to be the mode is likely to take a prohibitively long times for existing RL practices, except under the best conditions (where the pretrained model is "nearly" correct). This leads us to conclude that observed improvements from using RL for NMT are likely due either to fine-tuning the most probable tokens in the pretrained model (an effect which may be more easily achieved using reranking methods, and uses but little of the power of RL methods), or to effects unrelated to the signal carried by the reward, such as PKE. Another contribution of this paper is in showing that CMRT does not optimize the expected reward and is thus theoretically unmotivated. A number of reasons lead us to believe that in our NMT experiments, improvements are not due to the reward function, but to artefacts such as PKE. First, reducing a constant baseline from r, so as to make the expected reward zero, disallows learning. This is surprising, as REINFORCE, generally and in our simulations, converges faster where the reward is centered around zero, and so the fact that this procedure here disallows learning hints that other factors are in play. As PKE can be observed even where the reward is constant (if the expected reward is positive; see §4.1), this suggests PKE may play a role here. Second, we observe more peakiness in the reinforced model and in such cases, we expect improvements in BLEU . Third, we achieve similar with a constant reward in our NMT experiments (§5.2). Fourth, our controlled simulations show that asymptotic convergence is not reached in any but the easiest conditions (§5.1). Our analysis further suggests that gradient clipping, sometimes used in NMT , is expected to hinder convergence further. It should be avoided when using REINFORCE as it violates REINFORCE's assumptions. 
The per-token sampling as done in our experiments is more exploratory than beam search , reducing PKE. Furthermore, the latter does not sample from the behavior policy, but does not properly account for being off-policy in the parameter updates. Adding the reference to the sample S, which some implementations allow may help reduce the problems of never sampling the target tokens. However, as point out, this practice may lower , as it may destabilize training by leading the model to improve over outputs it cannot generalize over, as they are very different from anything the model assigns a high probability to, at the cost of other outputs. The standard MT scenario poses several uncommon challenges for RL. First, the action space in MT problems is a high-dimensional discrete space (generally in the size of the vocabulary of the target language or the product thereof for sentences). This contrasts with the more common scenario studied by contemporary RL methods, which focuses mostly on much smaller discrete action spaces (e.g., video games (; 2016) ), or continuous action spaces of relatively low dimensions (e.g., simulation of robotic control tasks ). Second, reward for MT is naturally very sparse -almost all possible sentences are "wrong" (hence, not rewarding) in a given context. Finally, it is common in MT to use RL for tuning a pretrained model. Using a pretrained model ameliorates the last problem. But then, these pretrained models are in general quite peaky, and because training is done on-policy -that is, actions are being sampled from the same model being optimized -exploration is inherently limited. Here we argued that, taken together, these challenges in significant weaknesses for current RL practices for NMT, that may ultimately prevent them from being truly useful. At least some of these challenges have been widely studied in the RL literature, with numerous techniques developed to address them, but were not yet adopted in NLP. We turn to discuss some of them. Off-policy methods, in which observations are sampled from a different policy than the one being currently optimized, are prominent in RL , and were also studied in the context of policy gradient methods . In principle, such methods allow learning from a more "exploratory" policy. Moreover, a key motivation for using α in CMRT is smoothing; off-policy sampling allows smoothing while keeping convergence guarantees. In its basic form, exploration in REINFORCE relies on stochasticity in the action-selection (in MT, this is due to sampling). More sophisticated exploration methods have been extensively studied, for example using measures for the exploratory usefulness of states or actions , or relying on parameter-space noise rather than action-space noise . For MT, an additional challenge is that even effective exploration (sampling diverse sets of observations), may not be enough, since the state-action space is too large to be effectively covered, with almost all sentences being not rewarding. Recently, diversity-based and multi-goal methods for RL were proposed to tackle similar challenges (; ;). We believe the adoption of such methods is a promising path forward for the application of RL in NLP. Let θ be a real number in [0, 0.5], and let P θ be a family of distributions over three values a, b, c such that: Let r(a) = 1, r(b) = 0, r(c) = 0.5. The expected reward as a function of θ is: is uniquely maximized by θ * = 0.25. 
Table 1 details the possible samples of size k = 2, their probabilities, the corresponding R and its gradient. Standard numerical methods show that E[∇ R] over possible samples S is positive for θ ∈ (0, γ) and negative for θ ∈ (γ, 0.5], where γ ≈ 0.295. This means that for any initialization of θ ∈ (0, 0.5], Contrastive MRT will converge to γ if the learning rate is sufficiently small. For θ = 0, R ≡ 0.5, and there will be no gradient updates, so the method will converge to θ = 0. Neither of these values maximizes R(θ). We note that by using some g (θ) the γ could be arbitrarily far from θ *. g could also map to (−inf, inf) more often used in neural networks parameters. We further note that resorting to maximizing E[R] instead, does not maximize R(θ) either. Indeed, plotting E[R] as a function of θ for this example, yields a maximum at θ ≈ 0.32. (1-θ-2θ 2) 2 0.5 0 True casing and tokenization were used , including escaping html symbols and "-" that represents a compound was changed into a separate token of =. Some preprocessing used before us converted the latter to ##AT##-##AT## but standard tokenizers in use process that into 11 different tokens, which over-represents the significance of that character when BLEU is calculated. BPE extracted 30,715 tokens. For the MT experiments we used 6 layers in the encoder and the decoder. The size of the embeddings was 512. Gradient clipping was used with size of 5 for pre-training (see Discussion on why not to use it in training). We did not use attention dropout, but 0.1 residual dropout rate was used. In pretraining and training sentences of more than 50 tokens were discarded. Pretraining and training were considered finished when BLEU did not increase in the development set for 10 consecutive evaluations, and evaluation was done every 1,000 and 5,000 for batches of size 100 and 256 for pretraining and training respectively. Learning rate used for rmsprop was 0.01 in pretraining and for adam with decay was 0.005 for training. 4,000 learning rate warm up steps were used. Pretraining took about 7 days with 4 GPUs, afterwards, training took roughly the same time. Monte Carlo used 20 sentence rolls per word. We present graphs for the constant reward setting in Figures 8 and 7. Trends are similar to the ones obtained for the Simulated Reward setting. Each column x corresponds to the difference in the probability that y best is ranked in rank x in the reinforced model, and the same probability in the pretrained model. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | H1eCw3EKvH | Reinforcement practices for machine translation performance gains might not come from better predictions.
Significant advances have been made in Natural Language Processing (NLP) modelling since the beginning of 2018. The new approaches allow for accurate results, even when there is little labelled data, because these NLP models can benefit from training on both task-agnostic and task-specific unlabelled data. However, these advantages come with significant size and computational costs. This workshop paper outlines how our proposed convolutional student architecture, having been trained by a distillation process from a large-scale model, can achieve a 300x inference speedup and a 39x reduction in parameter count. In some cases, the student model performance surpasses its teacher on the studied tasks. The last year has seen several major advances in NLP modelling, stemming from previous innovations in embeddings BID0 BID2 and attention models BID3 BID5 that allow Language Models (LMs) to be trained on very large corpora: for instance ELMo BID6, the OpenAI Transformer BID7 and, recently, BERT BID8. In addition, the power of building on LM-enhanced contextualised embeddings, using a fine-tuning approach on task-specific unlabelled data BID9, has shown huge benefits for downstream tasks (such as text classification), especially in a typical industrial setting where labelled data is scarce. In order to make use of these advances, this work shows how a model distillation process BID10 can be used to train a novel 'student' CNN structure from a much larger 'teacher' Language Model. The teacher model can be fine-tuned on the specific task at hand, using both unlabelled data and the (small number of) labelled training examples available. The student network can then be trained using both labelled and unlabelled data, in a process akin to pseudo-labelling BID11. Our results show it is possible to achieve similar performance to (and in some cases surpass) large attention-based models with a novel, highly efficient student model that uses only convolutional layers. In this work, we used the OpenAI Transformer BID7 model as the 'teacher' in a model-distillation setting, with a variety of different 'student' networks (see FIG0). The OpenAI Transformer model consists of a Byte-Pair Encoded subword BID13 embedding layer followed by 12 layers of "decoder-only transformer with masked self-attention heads" BID3, pretrained on the standard language modelling objective on a corpus of 7,000 books. This LM's final-layer outputs were then coupled with classification modules and the entire model was discriminatively fine-tuned with an auxiliary language modelling objective, achieving excellent performance on various NLP tasks. To optimize for the speed and memory constraints of industrial deployment, a variety of different models were trained (a) on the classification task directly; and (b) via distillation BID10 of the logit layer output by the pretrained OpenAI classification model. To combat label scarcity and improve distillation quality, we inferred distillation logits for unlabelled samples in a pseudo-labelling manner BID11, while using transfer learning through pretrained GloVe embeddings BID1.
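As an illustration only, the distillation step described above can be sketched as follows: the student regresses the teacher's classification logits with a mean-absolute-error loss, and unlabelled examples enter the same loss through teacher-inferred logits in the pseudo-labelling manner mentioned above. The PyTorch function below uses placeholder model and batch objects and is not the implementation behind the reported results.

```python
# Sketch: one logit-distillation update for the student network.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, optimizer, x_labelled, x_unlabelled):
    # Teacher logits serve as regression targets; unlabelled examples contribute
    # through teacher-inferred ("pseudo") logits in the same loss.
    batch = torch.cat([x_labelled, x_unlabelled], dim=0)
    teacher.eval()
    with torch.no_grad():
        target_logits = teacher(batch)
    student_logits = student(batch)
    loss = F.l1_loss(student_logits, target_logits)   # mean absolute error on the logits
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```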
A number of common network structures were tested in the student role, specifically:• a two-layer BiLSTM network BID14 • a wide-but shallow CNN network BID15 • a novel CNN structure, dubbed here'BlendCNN'The BlendCNN architecture was inspired by the ELMo'something from every layer' paradigm, and aims to be capable of leveraging hierarchical representations for text classification BID5 BID16.The BlendCNN model is illustrated in FIG0, and comprises a number of CNN layers (with n_channels=100, kernel_width=5, activation=relu), each of which exposes a global pooling output as a'branch'. These branches are then concatenated together and "blended" through a dense network (width=100), followed by the usual classification logits layer. Each of the models was trained and tested on the 3 standard datasets described in BID17: AG News, DBpedia and Yahoo Answers. The experiment proceeded in two phases, the first being to evaluate the performance of two baseline methods (TFIDF+SVM BID18 and fastText BID19) along with that of the student networks (without the benefit of a LM teacher), and the large LM, with a classification'head' trained on the task. The second phase used the large LM in a'teacher' role, to train the other networks as students via distillation of the LM classifier logits layer (with a Mean Absolute Error loss function). Referring to TAB2, the 3-Layer and 8-Layer variants of the proposed BlendCNN architecture achieve the top scores across all studied datasets. However, the performance of the proposed architecture is lower without the'guidance' of the teacher teacher logits during training, implying the marked improvement is due to distillation. The additional given for BlendCNN quantifies the advantage of adding unlabelled data into the distillation phase of the student model training. Notably from TAB1, the 3-Layer BlendCNN student has 39× fewer parameters and performs inference 300× faster than the OpenAI Transformer which it empirically out-scores. For text classifications, mastery may require both high-level concepts gleaned from language under standing and fine-grained textual features such as key phrases. Similar to the larval-adult form analogy made in BID10, high-capacity models with task-agnostic pre-training may be well-suited for task mastery on small datasets (which are common in industry). On the other hand, convolutional student architectures may be more ideal for practical applications by taking advantage of massively parallel computation and a significantly reduced memory footprint. Our suggest that the proposed BlendCNN architecture can efficiently achieve higher scores on text classification tasks due to the direct leveraging of hierarchical representations, which are learnable (even in a label-sparse setting) from a strong teaching model. Further development of specialized student architectures could similarly surpass teacher performance if appropriately designed to leverage the knowledge gained from a pretrained, task-agnostic teacher model whilst optimizing for task-specific constraints. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJxM3hftiX | We train a small, efficient CNN with the same performance as the OpenAI Transformer on text classification tasks |
Combining multiple function approximators in machine learning models typically leads to better performance and robustness compared with a single function. In reinforcement learning, ensemble algorithms such as an averaging method and a majority voting method are not always optimal, because each function can learn fundamentally different optimal trajectories from exploration. In this paper, we propose a Temporal Difference Weighted (TDW) algorithm, an ensemble method that adjusts weights of each contribution based on accumulated temporal difference errors. The advantage of this algorithm is that it improves ensemble performance by reducing weights of Q-functions unfamiliar with current trajectories. We provide experimental for Gridworld tasks and Atari tasks that show significant performance improvements compared with baseline algorithms. Using ensemble methods that combine multiple function approximators can often achieve better performance than a single function by reducing the variance of estimation . Ensemble methods are effective in supervised learning, and also reinforcement learning . There are two situations where multiple function approximators are combined: combining and learning multiple functions during training and combining individually trained functions to jointly decide actions during testing . In this paper, we focus on the second setting of reinforcement learning wherein each function is trained individually and then combined them to achieve better test performance. Though there is a body of research on ensemble algorithms in reinforcement learning, it is not as sizeable as the research devoted to ensemble methods for supervised learning. investigated many ensemble approaches combining several agents with different valuebased algorithms in Gridworld settings. Faußer & Schwenker (2011; 2015a) have shown that combining value functions approximated by neural networks improves performance greater than using a single agent. Although previous work dealt with each agent equally contributing to the final output, weighting each contribution based on its accuracy is also a known and accepted approach in supervised learning . However, unlike supervised learning, reinforcement learning agents learn from trajectories ing from exploration, such that each agent learns from slightly different data. This characteristic is significant in tasks with high-dimensional state-space, where there are several possible optimal trajectories to maximize cumulative rewards. In such a situation, the final joint policy function ing from simple averaging or majority voting is not always optimal if each agent learned different optimal trajectories. Furthermore, it is difficult to decide constant weights of each contribution as it is possible that agents with poor episode rewards have better performance in specific areas. In this paper, we propose the temporal difference weighted (TDW) algorithm, an ensemble method for reinforcement learning at test time. The most important point of this algorithm is that confident agents are prioritized to participate in action selection while contributions of agents unfamiliar with the current trajectory are reduced. To do so in the TDW algorithm, the weights of the contributions at each Q-function are calculated as softmax probabilities based on accumulated TD errors. Extending an averaging method and a majority voting method, actions are determined by weighted average or voting methods according to the weights. 
The advantage of the TDW algorithm is that arbitrary training algorithms can use this algorithm without any modifications, because the TDW algorithm only cares about the joint decision problem, which could be easily adopted in competitions and development works using reinforcement learning. In our experiment, we demonstrate that the TDW retains performance in tabular representation Gridworld tasks with multiple possible trajectories, where simple ensemble methods are significantly degraded. Second, to demonstrate the effectiveness of our TDW algorithm in high-dimensional state-space, we also show that our TDW algorithm can achieve better performance than baseline algorithms in Atari tasks . Ensemble methods that combine multiple function approximators during training rather than evaluation have been studied in deep reinforcement learning. Bootstrapped deep Q-network (DQN) leverages multiple heads that are randomly initialized to improve exploration, because each head leads to slightly different states. Averaged-DQN reduces the variance of a target approximation by calculating an average value of last several learned Qnetworks. Using multiple value functions to reduce variance of target estimation is also utilized in the policy gradients methods . In contrast, there has been research focused on joint decision making in reinforcement learning. Using multiple agents to jointly select an action achieves better performance than a single agent (; Faußer & Schwenker (2015a;). However, such joint decision making has been limited to relatively small tasks such as Gridworld and Maze. Therefore, it is not known whether joint decision making with deep neural networks can improve performance in high-dimensional state-space tasks such as Atari 2600 . proposes a multiple mode-based reinforcement learning (MMRL), a weighted ensemble method for model-based reinforcement learning, which determines each weight based by using prediction models. The MMRL gives larger weights to reinforcement learning controllers with small errors of special responsibility predictors. Unlike MMRL, our method does not require additional components to calculate weights. Our method is not the first one to use TD errors in combining multiple agents. proposes a module selection mechanism that chooses the module with smallest TD errors to learn current states, which will eventually assign each module to a small area of a large task. As a joint decision making method, a selective ensemble method is proposed to eliminate agents with less confidence at the current state by measuring TD errors (Faußer & Schwenker (2015b) ), which is the closest approach to our method. This selection drops all outputs whose TD errors exceeds a threshold, which can be viewed as a hard version of our method that uses a softmax of all weighted outputs instead of elimination. The threshold is not intuitively determined. Because the range of TD errors varies by tasks and reward settings, setting the threshold requires sensitive tuning. We formulate standard reinforcement learning setting as follows. At time t, an agent receives a state s t ∈ S, and takes an action a t ∈ A based on a policy function a t = π(s t). The next state s t+1 is given to the agent along with a reward r t+1. The return is defined as a discounted cumulative reward, where γ ∈ is a discount factor. The true value of taking an action a t at a state s t is described as follows: where Q π (s t, a t) is an action-value under the policy π. 
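The two displayed equations referenced in this passage did not survive extraction. Under the standard conventions the surrounding text implies (a discount factor γ ∈ [0, 1)), they presumably read as follows; this is a reconstruction, not a quotation of the original paper:

R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}, \qquad Q^{\pi}(s_t, a_t) = \mathbb{E}_{\pi}\left[ R_t \mid s_t, a_t \right].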
The optimal value is Q * (s t, a t) = max π Q π (s t, a t). With such an optimal Q-function, optimal actions can be determined based on the highest action-values at each state. DQN is a deep reinforcement learning method that approximates an optimal Qfunction with deep neural networks. The Q-function Q(s t, a t |θ) with a parameter θ is approximated by a Q-learning style update . The parameter θ is learned to minimize squared temporal difference errors. where y t = r t+1 + γ max a Q(s t+1, a|θ) with a target network parameter θ. The target network parameter θ is synchronized to the parameter θ in a certain interval. DQN also introduces use of the experience replay , which randomly samples past state transitions from the replay buffer to compute the squared TD error. Assume there are N sets of trained Q-function Q(s, a|θ i) where i denotes an index of the function. The final policy π(s t) is determined by combining the N Q-functions. We formulate two baseline methods commonly used in ensemble algorithms: Average policy and Majority Voting (MV) policy (Faußer & Schwenker (2011; 2015a); ). Majority Voting (MV) policy is an approach to decide the action based on greedy selection according to the formula: where v i (s, a) is a binary function that outputs 1 for the most valued action and 0 for others: Contributions of each function to the final output are completely equal. Average policy is a method that averages all the outputs of the Q-functions, and the action is greedily determined: Averaging outputs from multiple approximated functions reduces variance of prediction. Unlike MV policy, Average policy leverages all estimated values as well as the highest values. In this section, we explain the TDW ensemble algorithm that adjusts weights of contributions based on accumulated TD errors. The TDW algorithm is especially powerful in the complex situation such as high-dimensional state-space where it is difficult to cover whole state-space with a single agent. Section 4.1 describes the error accumulation mechanism. Section 4.2 introduces joint action selection using the weights computed with the accumulated errors. We consider that a squared TD error δ 2 fundamentally consists of two kinds of errors: where δ p is a prediction error of approximated function, and δ u is an error at states where the agent rarely experienced. In a tabular-based value function, δ p will be near 0 at frequently visited states. In contrast, δ u will be extremely large at less visited states with both a tabular-based value function and a function approximator because TD errors are not sufficiently propagated such a state. There are two causes of unfamiliar states: states are difficult to visit due to hard exploration, and states are not optimal to the agent according to learned state transitions. For combining multiple agents at a joint decision, the second case is noteworthy because each agent may be optimized at different optimal trajectories. Thus, some of the agents will produce larger δ u when they face such states as a of an ensemble, and contributions of less confident agents can be reduced based on the TD error δ u. or. end for To measure uncertainty of less confident agents, we define u i t as a uncertainty of an agent: where α ∈ is a constant factor decaying the uncertainty at a previous step. With a large α, the uncertainty u i is extremely large during unfamiliar trajectories, which makes it possible to easily distinguish confident agents from the others. 
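The two baseline joint-decision rules defined earlier in this section (Average policy and MV policy) reduce to a few lines of NumPy; the sketch below assumes the N Q-value vectors for the current state have already been stacked into an (N, |A|) array, and the helper names are ours.

```python
# Sketch: the Average and Majority Voting joint-decision baselines.
import numpy as np

def average_policy(q_values):
    """q_values: (N, n_actions) array of Q_i(s, a) for the current state s."""
    return int(np.argmax(q_values.mean(axis=0)))

def majority_voting_policy(q_values):
    votes = np.zeros(q_values.shape[1])
    for q in q_values:                      # each agent votes for its greedy action
        votes[np.argmax(q)] += 1.0
    return int(np.argmax(votes))
```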
However, a trade-off arises when prediction error δ p is accumulated for a long horizon, which increases correlation between agents. To reduce contributions of less confident agents, each contribution at joint decision is weighted based on uncertainty u i t. Using the uncertainty u i t, a weight w i t of each agent is calculated as a probability by the softmax function: When the agent has a small uncertainty value u i t, the weight w i t becomes large. We consider two weighted ensemble methods corresponding to the Average policy and the MV policy based on the weights w i t. As a counterpart of the Average policy, our TDW Average policy is as follows: For the MV policy, TDW Voting policy is as follows: Unlike the averaging method, because TDW Voting policy directly uses probabilities calculated by, the correlation between agents can be increased significantly with large decay factor α, leading to worse performance. Although these weighted ensemble algorithms are simple enough to extend to arbitrary ensemble methods, we leave more advanced applications for future work so that we may demonstrate the effectiveness of our approach in a simpler setting. The complete TDW ensemble algorithm is described in Algorithm 1. In this section, we describe the experiments performed on the Gridworld tasks and Atari tasks in Section 5.2. To build the trained Q-functions, we used the table-based Qlearning algorithm and DQN with a standard model, respectively. In each experiment, we evaluated our algorithm to address performance improvements from there baselines as well as the effects of selecting the decay factor α. We first evaluated the TDW algorithms with a tabular representation scenario to show their effectiveness in the situation where it is difficult to cover a whole state-space with the single agent. We built two Gridworld environments as shown in Figure 1. Each environment is designed to induce bias of learned trajectories by setting multiple slits. As a of exploration, once an agent gets through one of the slits to the goal, the agent is easily biased to aim for the same slit due to the max operator of Q-learning. The state-representation is a discrete index of a table with size of 13 × 13. There are four actions corresponding to steps of up, down, left and right. If a wall exists where the agent tries to move, the next state remains the same as the current state. The agent always starts from S depicted in Figure 1. At every timestep, the agent receives a reward of −0.1 or +100 at goal states. The agent starts a new episode if either the agent arrives at the goal states or the timestep reaches 100 steps. We trained N = 10 agents with different random seeds for -greedy exploration with = 0.3. Each training continues until 1M steps have been simulated. We set the learning rate to 0.01 and γ = 0.95. After training, we evaluated TDW ensemble algorithms for 20K episodes. As baselines, we also evaluate each single agent as well as ensemble methods of Average policy and MV policy for 20K episodes each. The evaluation on the Gridworld environments are shown in Table 1. Four-slit Gridworld is significantly more difficult than Two-slit Gridworld because each Q-function is not only horizontally biased, but also vertically biased. In both the Two-slit Gridworld and Four-slit Gridworld environments, the TDW ensemble methods achieve better performance than their corresponding Average policy and the MV policy baselines. 
Additionally, the results of both the Average policy and the MV policy were worse than the single models. It should be noted that the Average policy degrades the original performance more than the MV policy. For the selection of the decay factor α, a larger α tends to increase performance for the TDW Average policy. In contrast, a larger α leads to poor performance for the TDW Voting policy, especially in Four-slit Gridworld. We believe that a large α significantly reduces the contributions of most Q-functions, which ignores votes for actions that would be the best under equal voting. In contrast, the TDW Average policy leverages the values of all actions, exploiting all contributions to select the best action. To demonstrate effectiveness in high-dimensional state-spaces, we evaluated the TDW algorithm on Atari tasks. We trained DQN agents across 6 Atari tasks (Asterix, Beamrider, Breakout, Enduro, MsPacman and SpaceInvaders) through OpenAI Gym. For each task, N = 10 agents were trained with different random seeds for neural network initialization, exploration and environments in order to vary the learned Q-functions. Training continued until 10M steps (40M game frames), with frame skipping and termination on loss of life enabled. The ε of ε-greedy exploration is linearly decayed from 1.0 to 0.1 over 1M steps. The hyperparameters of the neural networks are the same as in the standard DQN setup. After training, evaluation was conducted with each single Q-function, the TDW Average policy, the TDW Voting policy and the two baselines. We additionally evaluated weighted versions of the baselines whose Q-functions were weighted based on their evaluation performance. The evaluation continued for 1000 episodes with ε = 0.05. The experimental results are shown in Table 2. Interestingly, both the Average policy and the MV policy improved performance over the mean performance of the single agents, though such simple ensemble algorithms have not been investigated well in the domain of deep reinforcement learning. In the games of Asterix, Beamrider, Breakout and Enduro, the TDW algorithms achieve additional performance improvements as compared with the non-weighted and weighted baselines. Even in MsPacman and SpaceInvaders, the TDW algorithms perform significantly better than the non-weighted baselines and the best single models. In most of the cases, the globally weighted ensemble baselines performed worse than the non-weighted versions. We believe this is because these globally weighted ensemble methods ignore local performance, which matters in high-dimensional state-spaces because it is difficult to cover all possible states with single models. The TDW algorithms with a small α tend to achieve better performance than those with a large α, which suggests that a significantly large α can increase correlation between Q-functions and overly reduce the contributions of less confident Q-functions. To analyze the changes of the weights through an episode, we plot the entropies during a sample episode on Breakout (TDW Average policy, α = 0.8) in Figure 2 (a). If an entropy is low (high on the negative scale), some of the Q-functions have large weights, while others have extremely small weights. Extremely low entropies are observed when the ball comes close to the paddle, as shown in Figure 2 (b), where the value should be close to 0 for the non-optimal actions because missing the ball immediately ends a life. It is easy for sufficiently learned Q-functions to estimate such a terminal state, so that the entropy becomes low due to the gap between the Q-functions that are optimal for the current states and the others.
The entropies tend to be low during later steps, as there are many variations of the remaining blocks. We consider that the reason the TDW algorithms with α = 0.8 achieved the best performance in Breakout is that the large α value reduces the influence of the Q-functions which cannot correctly predict values for unseen variations of the remaining blocks. In contrast to Breakout, where living long leads to higher scores, in SpaceInvaders we observe that the low entropies appear at dodging beams rather than at shooting invaders, because shooting beams requires long-term value prediction, which does not induce large TD errors. Therefore, the performance improvements on SpaceInvaders are not significantly better than the weighted baselines. To analyze the correlation between the decay factor α and the entropies, plots of the number of observations with a certain entropy are shown in Figure 3. In most games, higher decay factors increase the presence of low-entropy states and decrease the presence of high-entropy states. In the games with frequent reward occurrences, such as Enduro and MsPacman, there are more low-entropy observations than in BeamRider and SpaceInvaders, where reward occurrences are less frequent. Especially with regard to MsPacman, we believe that the TDW Average policy with larger α values results in worse performance because the agent frequently receives positive rewards at almost every timestep, which often induces the prediction error δ_p and increases the uncertainty of all Q-functions. Thus, the globally weighted ensemble methods achieve better performance than the TDW algorithms because it is difficult to consistently accumulate uncertainties on MsPacman. In this paper, we have introduced the TDW algorithm: an ensemble method that accumulates temporal difference errors as uncertainties in order to adjust the weight of each Q-function, improving performance especially in high-dimensional state-spaces or in situations where there are multiple optimal trajectories. We have shown performance evaluations on Gridworld tasks and Atari tasks, wherein the TDW algorithms have achieved significantly better performance than non-weighted algorithms and globally weighted algorithms. However, it is difficult to correctly measure uncertainties under frequent reward occurrences because the intrinsic prediction errors are also accumulated. Thus, these types of games did not realize the same performance improvements. In future work, we intend to investigate an extension of this work to continuous action-space tasks, because only the joint decision problem of Q-functions is considered in this paper. We believe a similar algorithm can extend a conventional ensemble method of Deep Deterministic Policy Gradients by measuring uncertainties of pairs of a policy function and a Q-function. We will also consider a separate path, developing an algorithm that measures uncertainties without rewards, because reward information is not always available, especially in real-world applications. A Q-FUNCTION TABLES OBTAINED ON GRIDWORLDS (tables 1-5 are shown as figures; only axis ticks survived extraction and are omitted) | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rkej86VYvB | Ensemble method for reinforcement learning that weights Q-functions based on accumulated TD errors. |
This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source this http URL, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers. The reinforcement learning (RL) problem describes an agent interacting with an environment with the goal of maximizing cumulative reward through time . Unlike other branches of control, the dynamics of the environment are not fully known to the agent, but can be learned through experience. Unlike other branches of statistics and machine learning, an RL agent must consider the effects of its actions upon future experience. An efficient RL agent must address three challenges simultaneously: 1. Generalization: be able to learn efficiently from data it collects. 2. Exploration: prioritize the right experience to learn from. 3. Long-term consequences: consider effects beyond a single timestep. The great promise of reinforcement learning are agents that can learn to solve a wide range of important problems. According to some definitions, an agent that can perform at or above human level across a wide variety of tasks is an artificial general intelligence (AGI) . Interest in artificial intelligence has undergone a resurgence in recent years. Part of this interest is driven by the constant stream of innovation and success on high profile challenges previously deemed impossible for computer systems. Improvements in image recognition are a clear example of these accomplishments, progressing from individual digit recognition , to mastering ImageNet in only a few years . The advances in RL systems have been similarly impressive: from checkers (Samuel), to Backgammon , to Atari games (a), to competing with professional players at DOTA or StarCraft and beating the world champions at Go. Outside of playing games, decision systems are increasingly guided by AI systems . As we look towards the next great challenges for RL and AI, we need to understand our systems better . This includes the scalability of our RL algorithms, the environments where we expect them to perform well, and the key issues outstanding in the design of a general intelligence system. We have the existence proof that a single self-learning RL agent can master the game of Go purely from self-play . We do not have a clear picture of whether such a learning algorithm will perform well at driving a car, or managing a power plant. If we want to take the next leaps forward, we need to continue to enhance our understanding. The practical success of RL algorithms has built upon a base of theory including gradient descent , temporal difference learning and other foundational algorithms. 
Good theory provides insight into our algorithms beyond the particular, and a route towards general improvements beyond ad-hoc tinkering. As the psychologist Kurt Lewin said,'there is nothing as practical as good theory' . If we hope to use RL to tackle important problems, then we must continue to solidify these foundations. This need is particularly clear for RL with nonlinear function approximation, or'deep RL'. At the same time, theory often lags practice, particularly in difficult problems. We should not avoid practical progress that can be made before we reach a full theoretical understanding. The successful development of algorithms and theory typically moves in tandem, with each side enriched by the insights of the other. The evolution of neural network research, or deep learning, provides a poignant illustration of how theory and practice can develop together . Many of the key ideas for deep learning have been around, and with successful demonstrations, for many years before the modern deep learning explosion (; ;). However, most of these techniques remained outside the scope of developed learning theory, partly due to their complex and non-convex loss functions. Much of the field turned away from these techniques in a'neural network winter', focusing instead of function approximation under convex loss . These convex methods were almost completely dominant until the emergence of benchmark problems, mostly for image recognition, where deep learning methods were able to clearly and objectively demonstrate their superiority . It is only now, several years after these high profile successes, that learning theory has begun to turn its attention back to deep learning (; ;). We should not turn away from deep RL just because our current theory is not yet developed. In this paper we introduce the Behaviour Suite for Reinforcement Learning (or bsuite for short): a collection of experiments designed to highlight key aspects of agent scalability. Our aim is that these experiments can help provide a bridge between theory and practice, with benefits to both sides. These experiments embody fundamental issues, such as'exploration' or'memory' in a way that can be easily tested and iterated. For the development of theory, they force us to instantiate measurable and falsifiable hypotheses that we might later formalize into provable guarantees. While a full theory of RL may remain out of reach, the development of clear experiments that instantiate outstanding challenges for the field is a powerful driver for progress. We provide a description of the current suite of experiments and the key issues they identify in Section 2. Our work on bsuite is part of a research process, rather than a final offering. We do not claim to capture all, or even most, of the important issues in RL. Instead, we hope to provide a simple library that collects the best available experiments, and makes them easily accessible to the community. As part of an ongoing commitment, we are forming a bsuite committee that will periodically review the experiments included in the official bsuite release. We provide more details on what makes an'excellent' experiment in Section 2, and on how to engage in their construction for future iterations in Section 5. The Behaviour Suite for Reinforcement Learning is a not a replacement for'grand challenge' undertakings in artificial intelligence, or a leaderboard to climb. Instead it is a collection of diagnostic experiments designed to provide insight into key aspects of agent behaviour. 
Just as the MNIST dataset offers a clean, sanitised, test of image recognition as a stepping stone to advanced computer vision; so too bsuite aims to instantiate targeted experiments for the development of key RL capabilities. The successful use of illustrative benchmark problems is not unique to machine learning, and our work is similar in spirit to the Mixed Integer Programming Library (MIPLIB) (miplib2017). In mixed integer programming, and unlike linear programming, the majority of algorithmic advances have (so far) eluded theoretical analysis. In this field, MIPLIB serves to instantiate key properties of problems (or types of problems), and evaluation on MIPLIB is a typical component of any new algorithm. We hope that bsuite can grow to perform a similar role in RL research, at least for those parts that continue to elude a unified theory of artificial intelligence. We provide guidelines for how researchers can use bsuite effectively in Section 3. As part of this project we open source github.com/anon/bsuite, which instantiates all experiments in code and automates the evaluation and analysis of any RL agent on bsuite. This library serves to facilitate reproducible and accessible research on the core issues in reinforcement learning. It includes: • Canonical implementations of all experiments, as described in Section 2. • Reference implementations of several reinforcement learning algorithms. • Example usage of bsuite with alternative codebases, including'OpenAI Gym'. • Launch scripts for Google cloud that automate large scale compute at low cost. • A ready-made bsuite Jupyter notebook with analyses for all experiments. • Automated L A T E X appendix, suitable for inclusion in conference submission. We provide more details on code and usage in Section 4. We hope the Behaviour Suite for Reinforcement Learning, and its open source code, will provide significant value to the RL research community, and help to make key conceptual issues concrete and precise. bsuite can highlight bottlenecks in general algorithms that are not amenable to hacks, and reveal properties and scalings of algorithms outside the scope of current analytical techniques. We believe this offers an avenue towards great leaps on key issues, separate to the challenges of large-scale engineering . Further, bsuite facilitates clear, targeted and unified experiments across different code frameworks, something that can help to remedy issues of reproducibility in RL research . The Behaviour Suite for Reinforcement Learning fits into a long history of RL benchmarks. From the beginning, research into general learning algorithms has been grounded by the performance on specific environments . At first, these environments were typically motivated by small MDPs that instantiate the general learning problem.'CartPole' and'MountainCar' are examples of classic benchmarks that has provided a testing ground for RL algorithm development. Similarly, when studying specific capabilities of learning algorithms, it has often been helpful to design diagnostic environments with that capability in mind. Examples of this include'RiverSwim' for exploration or'Taxi' for temporal abstraction . Performance in these environments provide a targeted signal for particular aspects of algorithm development. As the capabilities or RL algorithms have advanced, so has the complexity of the benchmark problems. 
The Arcade Learning Environment (ALE) has been instrumental in driving progress in deep RL through surfacing dozens of Atari 2600 games as learning environments . Similar projects have been crucial to progress in continuous control , model-based RL and even rich 3D games . Performing well in these complex environments requires the integration of many core agent capabilities. We might think of these benchmarks as natural successors to'CartPole' or'MountainCar'. The Behaviour Suite for Reinforcement Learning offers a complementary approach to existing benchmarks in RL, with several novel components: 1. bsuite experiments enforce a specific methodology for agent evaluation beyond just the environment definition. This is crucial for scientific comparisons and something that has become a major problem for many benchmark suites (Section 2). 2. bsuite aims to isolate core capabilities with targeted'unit tests', rather than integrate general learning ability. Other benchmarks evolve by increasing complexity, bsuite aims to remove all confounds from the core agent capabilities of interest (Section 3). 3. bsuite experiments are designed with an emphasis on scalability rather than final performance. Previous'unit tests' (such as 'Taxi' or 'RiverSwim') are of fixed size, bsuite experiments are specifically designed to vary the complexity smoothly (Section 2). 4. github.com/anon/bsuite has an extraordinary emphasis on the ease of use, and compatibility with RL agents not specifically designed for bsuite. Evaluating an agent on bsuite is practical even for agents designed for a different benchmark (Section 4). This section outlines the experiments included in the Behaviour Suite for Reinforcement Learning 2019 release. In the context of bsuite, an experiment consists of three parts: 1. Environments: a fixed set of environments determined by some parameters. 2. Interaction: a fixed regime of agent/environment interaction (e.g. 100 episodes). 3. Analysis: a fixed procedure that maps agent behaviour to and plots. One crucial part of each bsuite analysis defines a'score' that maps agent performance on the task to. This score allows for agent comparison'at a glance', the Jupyter notebook includes further detailed analysis for each experiment. All experiments in bsuite only measure behavioural aspects of RL agents. This means that they only measure properties that can be observed in the environment, and are not internal to the agent. It is this choice that allows bsuite to easily generate and compare across different algorithms and codebases. Researchers may still find it useful to investigate internal aspects of their agents on bsuite environments, but it is not part of the standard analysis. Every current and future bsuite experiment should target some key issue in RL. We aim for simple behavioural experiments, where agents that implement some concept well score better than those that don't. For an experiment to be included in bsuite it should embody five key qualities: • Targeted: performance in this task corresponds to a key issue in RL. • Simple: strips away confounding/confusing factors in research. • Challenging: pushes agents beyond the normal range. • Scalable: provides insight on scalability, not performance on one environment. • Fast: iteration from launch to in under 30min on standard CPU. Where our current experiments fall short, we see this as an opportunity to improve the Behaviour Suite for Reinforcement Learning in future iterations. 
We can do this both through replacing experiments with improved variants, and through broadening the scope of issues that we consider. We maintain the full description of each of our experiments through the code and accompanying documentation at github.com/anon/bsuite. In the following subsections, we pick two bsuite experiments to review in detail: 'memory length' and 'deep sea'. By presenting these experiments as examples, we can emphasize what we think makes bsuite a valuable tool for investigating core RL issues. We do provide a high-level summary of all other current experiments in Appendix A. To accompany our experiment descriptions, we present results and analysis comparing three baseline algorithms on bsuite: DQN, A2C and Bootstrapped DQN. As part of our open source effort, we include full code for these agents and more at bsuite/baselines. All plots and analysis are generated through the automated bsuite Jupyter notebook, and give a flavour for the sort of agent comparisons that are made easy by bsuite. Almost everyone agrees that a competent learning system requires memory, and almost everyone finds the concept of memory intuitive. Nevertheless, it can be difficult to provide a rigorous definition for memory. Even in human minds, there is evidence for distinct types of 'memory' handled by distinct regions of the brain. The assessment of memory only becomes more difficult in the context of general learning algorithms, which may differ greatly from human models of cognition. Which types of memory should we analyse? How can we inspect belief models for arbitrary learning systems? Our approach in bsuite is to sidestep these debates through simple behavioural experiments. We refer to this experiment as memory length; it is designed to test the number of sequential steps an agent can remember a single bit. The underlying environment is based on a stylized T-maze, parameterized by a length N ∈ ℕ. Each episode lasts N steps with observation o_t = (c_t, t/N) for t = 1, ..., N and action space A = {−1, +1}. The context c_1 ∼ Unif(A) and c_t = 0 for all t ≥ 2. The reward is r_t = 0 for all t < N, but r_N = Sign(a_N = c_1). For the bsuite experiment we run the agent on sizes N = 1, ..., 100, exponentially spaced, and look at the average regret compared to optimal after 10k episodes. The summary 'score' is the percentage of runs for which the average regret is less than 75% of that achieved by a uniformly random policy. Memory length is a good bsuite experiment because it is targeted, simple, challenging, scalable and fast. By construction, an agent that performs well on this task has mastered some use of memory over multiple timesteps. Our summary 'score' provides a quick and dirty way to compare agent performance at a high level. Our sweep over different lengths N provides empirical evidence about the scaling properties of the algorithm beyond a simple pass/fail. Figure 2a gives a quick snapshot of the performance of the baseline algorithms. Unsurprisingly, the actor-critic with a recurrent neural network greatly outperforms the feedforward DQN and Bootstrapped DQN. Figure 2b gives us a more detailed analysis of the same underlying data. Both DQN and Bootstrapped DQN are unable to learn anything for length > 1; they lack functioning memory. A2C performs well for all N ≤ 30 and essentially at random for all N > 30, with quite a sharp cutoff.
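For concreteness, the following is a minimal sketch of the memory length environment as specified above. It follows the textual description rather than the released bsuite code, so the interface (a bare reset/step pair instead of the dm_env API) and the class name are illustrative.

```python
import numpy as np

class MemoryLength:
    """Stylised T-maze: remember a single bit (the context) for N steps."""

    def __init__(self, n, seed=None):
        self.n = n
        self.rng = np.random.RandomState(seed)

    def reset(self):
        self.t = 1
        self.context = int(self.rng.choice([-1, +1]))    # c_1 ~ Unif({-1, +1})
        # Observation o_t = (c_t, t/N); the context is only visible at t = 1.
        return np.array([self.context, self.t / self.n], dtype=np.float32)

    def step(self, action):
        assert action in (-1, +1)
        done = self.t == self.n
        # Reward only on the final step: +1 if the action matches the initial
        # context, -1 otherwise; zero reward for all earlier steps.
        reward = float(np.sign(action * self.context)) if done else 0.0
        self.t += 1
        obs = np.array([0.0, min(self.t, self.n) / self.n], dtype=np.float32)
        return obs, reward, done
```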
While it is not surprising that the recurrent agent outperforms feedforward architectures on a memory task, Figure 2b gives an excellent insight into the scaling properties of this architecture. In this case, we have a clear explanation for the observed performance: the RNN agent was trained via backprop-through-time with length 30. bsuite recovers an empirical evaluation of the scaling properties we would expect from theory. Reinforcement learning calls for a sophisticated form of exploration called deep exploration. Just as an agent seeking to'exploit' must consider the long term consequences of its actions towards cumulative rewards, an agent seeking to'explore' must consider how its actions can position it to learn more effectively in future timesteps. The literature on efficient exploration broadly states that only agents that perform deep exploration can expect polynomial sample complexity in learning . This literature has focused, for the most part, on bounding the scaling properties of particular algorithms in tabular MDPs through analysis . Our approach in bsuite is to complement this understanding through a series of behavioural experiments that highlight the need for efficient exploration. The deep sea problem is implemented as an N × N grid with a one-hot encoding for state. The agent begins each episode in the top left corner of the grid and descends one row per timestep. Each episode terminates after N steps, when the agent reaches the bottom row. In each state there is a random but fixed mapping between actions A = {0, 1} and the transitions'left' and'right'. At each timestep there is a small cost r = −0.01/N of moving right, and r = 0 for moving left. However, should the agent transition right at every timestep of the episode it will be rewarded with an additional reward of +1. This presents a particularly challenging exploration problem for two reasons. First, following the'gradient' of small intermediate rewards leads the agent away from the optimal policy. Second, a policy that explores with actions uniformly at random has probability 2 −N of reaching the rewarding state in any episode. For the bsuite experiment we run the agent on sizes N = 10, 12,.., 50 and look at the average regret compared to optimal after 10k episodes. The summary'score' computes the percentage of runs for which the average regret drops below 0.9 faster than the 2 N episodes expected by dithering. Deep Sea is a good bsuite experiment because it is targeted, simple, challenging, scalable and fast. By construction, an agent that performs well on this task has mastered some key properties of deep exploration. Our summary score provides a'quick and dirty' way to compare agent performance at a high level. Our sweep over different sizes N can help to provide empirical evidence of the scaling properties of an algorithm beyond a simple pass/fail. Figure 3 presents example output comparing A2C, DQN and Bootstrapped DQN on this task. Figure 4a gives a quick snapshot of performance. As expected, only Bootstrapped DQN, which was developed for efficient exploration, scores well. Figure 4b gives a more detailed analysis of the same underlying data. When we compare the scaling of learning with problem size N it is clear that only Bootstrapped DQN scales gracefully to large problem sizes. Although our experiment was only run to size 50, the regular progression of learning times suggest we might expect this algorithm to scale towards N > 50. 
This section describes some of the ways you can use bsuite in your research and development of RL algorithms. Our aim is to present a high-level description of some research and engineering use cases, rather than a tutorial for the code installation and use. We provide examples of specific investigations using bsuite in Appendixes C, D and E. Section 4 provides an outline of our code and implementation. Full details and tutorials are available at github.com/anon/bsuite. A bsuite experiment is defined by a set of environments and a number of episodes of interaction. Since loading the environment via bsuite handles the logging automatically, any agent interacting with that environment will generate the data required for analysis through the Jupyter notebook we provide (Pérez & Granger). Generating plots and analysis via the notebook only requires users to provide the path to the logged data. The 'radar plot' (Figure 5) at the start of the notebook provides a snapshot of agent behaviour, based on summary scores. The notebook also contains a complete description of every experiment, the summary scoring and an in-depth analysis of each experiment. You can interact with the full report at bit.ly/bsuite-agents. If you are developing an algorithm to make progress on fundamental issues in RL, running on bsuite provides a simple way to replicate benchmark experiments in the field. Although many of these problems are 'small', in the sense that their solution does not necessarily require a large neural architecture, they are designed to highlight key challenges in RL. Further, although these experiments do offer a summary 'score', the plots and analysis are designed to provide much more information than just a leaderboard ranking. By using this common code and analysis, it is easy to benchmark your agents and provide reproducible and verifiable research. If you are using RL as a tool to crack a 'grand challenge' in AI, such as beating a world champion at Go, then taking on bsuite gridworlds might seem like small fry. We argue that one of the most valuable uses of bsuite is as a diagnostic 'unit test' for large-scale algorithm development. Imagine you believe that 'better exploration' is key to improving your performance on some challenge, but when you try your 'improved' agent, the performance does not improve. Does this mean your agent does not do good exploration? Or maybe that exploration is not the bottleneck in this problem? Worse still, these experiments might take days and thousands of dollars of compute to run, and even then the information you get might not be targeted to the key RL issues. Running on bsuite, you can test key capabilities of your agent and diagnose potential improvements much faster, and more cheaply. For example, you might see that your algorithm completely fails at credit assignment beyond n = 20 steps. If this is the case, maybe this lack of credit assignment over long horizons is the bottleneck, and not necessarily exploration. This can allow for much faster, and much better informed, agent development - just like a good suite of tests for software development. Another benefit of bsuite is to disseminate your results more easily and engage with the research community. For example, if you write a conference paper targeting some improvement to hierarchical reinforcement learning, you will likely provide some justification for your results in terms of theorems or experiments targeted to this setting.
However, it is typically a large amount of work to evaluate your algorithm according to alternative metrics, such as exploration. This means that some fields may evolve without realising the connections and distinctions between related concepts. If you run on bsuite, you can automatically generate a one-page appendix, with a link to a notebook report hosted online. This can help provide a scientific evaluation of your algorithmic changes, and help to share your results in an easily digestible format, compatible with ICML, ICLR and NeurIPS formatting. We provide examples of these experiment reports in Appendices B, C, D and E. To avoid discrepancies between this paper and the source code, we suggest that you take practical tutorials directly from github.com/anon/bsuite. A good starting point is bit.ly/bsuite-tutorial: a Jupyter notebook where you can play with the code right from your browser, without installing anything. The purpose of this section is to provide a high-level overview of the code that we open source. In particular, we want to stress that bsuite is designed to be a library for RL research, not a framework. We provide implementations for all the environments, the analysis, the run loop and even baseline agents. However, it is not necessary that you make use of them all in order to make use of bsuite. The recommended method is to implement your RL agent as a class that implements a policy method for action selection, and an update method for learning from transitions and rewards. Then, simply pass your agent to our run loop, which enumerates all the necessary bsuite experiments and logs all the data automatically. If you do this, then all the experiments and analysis will be handled automatically, and your results are generated via the included Jupyter notebook. We provide examples of running these scripts locally, and via Google cloud, through our tutorials. If you have an existing codebase, you can still use bsuite without migrating to our run loop or agent structure. Simply replace your environment with environment = bsuite.load_and_record(bsuite_id) and add the flag bsuite_id to your code. You can then complete a full bsuite evaluation by iterating over the bsuite_ids defined in sweep.SWEEP. Since the environments handle the logging themselves, you don't need any additional logging for the standard analysis. Although the full bsuite includes many separate evaluations, no single bsuite environment takes more than 30 minutes to run, and the sweep is naturally parallel. As such, we recommend launching in parallel using multiple processes or multiple machines. Our examples include a simple approach using Python's multiprocessing module with Google cloud compute. We also provide examples of running bsuite from OpenAI Baselines and Dopamine. Designing a single RL agent compatible with diverse environments can cause problems, particularly for specialized neural networks. bsuite alleviates this problem by specifying an observation spec that surfaces the necessary information for adaptive network creation. By default, bsuite environments implement the dm_env standards, but we also include a wrapper for use through OpenAI Gym. However, if your agent is hard-coded for a particular format, bsuite offers the option to output each environment with the observation spec of your choosing via linear interpolation. This means that, if you are developing a network suitable for Atari and a particular observation spec, you can choose to swap in bsuite without any changes to your agent.
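The integration path described above amounts to only a few lines around an existing interaction loop. In the sketch below, RandomAgent is a stand-in for your own agent, and keyword arguments such as save_path and logging_mode, as well as the bsuite_num_episodes attribute, are assumptions about the library interface that may differ between releases.

```python
import numpy as np
import bsuite
from bsuite import sweep

class RandomAgent:
    """Placeholder agent with the policy/update interface described above."""
    def __init__(self, num_actions, seed=0):
        self.num_actions = num_actions
        self.rng = np.random.RandomState(seed)
    def policy(self, timestep):
        return int(self.rng.randint(self.num_actions))
    def update(self, timestep, action, next_timestep):
        pass                                   # no learning in this placeholder

for bsuite_id in sweep.SWEEP:
    # Loading through bsuite handles all logging needed by the analysis notebook.
    env = bsuite.load_and_record(bsuite_id, save_path='/tmp/bsuite', logging_mode='csv')
    agent = RandomAgent(num_actions=env.action_spec().num_values)
    for _ in range(env.bsuite_num_episodes):
        timestep = env.reset()
        while not timestep.last():
            action = agent.policy(timestep)
            next_timestep = env.step(action)
            agent.update(timestep, action, next_timestep)
            timestep = next_timestep
```

Since each bsuite_id is independent, the outer loop can be distributed over processes or machines without any coordination, which is how the parallel launch scripts mentioned above operate.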
This paper introduces the Behaviour Suite for Reinforcement Learning, and marks the start of its ongoing development. With our opensource effort, we chose a specific collection of experiments as the bsuite2019 release, but expect this collection to evolve in future iterations. We are reaching out to researchers and practitioners to help collate the most informative, targeted, scalable and clear experiments possible for reinforcement learning. To do this, submissions should implement a sweep that determines the selection of environments to include and logs the necessary data, together with an analysis that parses this data. In order to review and collate these submissions we will be forming a bsuite committee. The committee will meet annually during the NeurIPS conference to decide which experiments will be included in the bsuite release. We are reaching out to a select group of researchers, and hope to build a strong core formed across industry and academia. If you would like to submit an experiment to bsuite or propose a committee member, you can do this via github pull request, or via email to bsuite.committee@gmail.com. This appendix outlines the experiments that make up the bsuite 2019 release. In the interests of brevity, we provide only an outline of each experiment here. Full documentation for the environments, interaction and analysis are kept with code at github.com/anon/bsuite. We begin with a collection of very simple decision problems, and standard analysis that confirms an agent's competence at learning a rewarding policy within them. We call these experiments'basic', since they are not particularly targeted at specific core issues in RL, but instead test a general base level of competence we expect all general agents to attain. To investigate the robustness of RL agents to noisy rewards, we repeat the experiments from Section A.1 under differing levels of Gaussian noise. This time we allocate the 20 different seeds across 5 levels of Gaussian noise N (0, σ 2) for σ = [0.1, 0.3, 1, 3, 10] with 4 seeds each. To investigate the robustness of RL agents to problem scale, we repeat the experiments from Section A.1 under differing reward scales. This time we allocate the 20 different seeds across 5 levels of reward scaling, where we multiply the observed rewards by λ = [0.01, 0.1, 1, 10, 100] with 4 seeds each. As an agent interacts with its environment, it observes the outcomes that from previous states and actions, and learns about the system dynamics. This leads to a fundamental tradeoff: by exploring poorly-understood states and actions the agent can learn to improve future performance, but it may attain better short-run performance by exploiting its existing knowledge. Exploration is the challenge of prioritizing useful information for learning, and the experiments in this section are designed to necessitate efficient exploration for good performance. A.4. If you run an agent on bsuite, and you want to share these as part of a conference submission, we make it easy to share a single-page'bsuite report' as part of your appendix. We provide a simple L A T E Xfile that you can copy/paste into your paper, and is compatible out-the-box with ICLR, ICML and NeurIPS style files. This single page summary displays the summary scores for experiment evaluations for one or more agents, with plots generated automatically from the included ipython notebook. 
In each report, two sections are left for the authors to fill in: one describing the variants of the agents examined and another to give some brief commentary on the . We suggest that authors promote more in-depth analysis to their main papers, or simply link to a hosted version of the full bsuite analysis online. You can find more details on our automated reports at github.com/anon/bsuite. The sections that follow are example bsuite reports, that give some example of how these report appendixes might be used. We believe that these simple reports can be a good complement to conference submissions in RL research, that'sanity check' the elementary properties of algorithmic implementations. An added bonus of bsuite is that it is easy to set up a like for like experiment between agents from different'frameworks' in a way that would be extremely laborious for an individual researcher. If you are writing a conference paper on a new RL algorithm, we believe that it makes sense for you to include a bsuite report in the appendix by default. The Behaviour Suite for Reinforcement Learning, or bsuite for short, is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent. The aim of the bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks. This report provides a snapshot of agent performance on bsuite2019, obtained by running the experiments from github.com/anon/bsuite. In this experiment all implementations are taken from bsuite/baselines with default configurations. We provide a brief summary of the agents run on bsuite2019: • random: selects action uniformly at random each timestep. • dqn: Deep Q-networks Mnih et al. (2015b). • boot dqn: bootstrapped DQN with prior networks Osband et al. (2016; 2018). • actor critic rnn: an actor critic with recurrent neural network. Each bsuite experiment outputs a summary score in. We aggregate these scores by according to key experiment type, according to the standard analysis notebook. A detailed analysis of each of these experiments may be found in a notebook hosted on Colaboratory bit.ly/bsuite-agents. • random performs uniformly poorly, confirming the scores are working as intended. • dqn performs well on basic tasks, and quite well on credit assignment, generalization, noise and scale. DQN performs extremely poorly across memory and exploration tasks. The feedforward MLP has no mechanism for memory, and =5%-greedy action selection is inefficient exploration. • boot dqn is mostly identically to DQN, except for exploration where it greatly outperforms. This matches our understanding of Bootstrapped DQN as a variant of DQN designed to estimate uncertainty and use this to guide deep exploration. • actor critic rnn typically performs worse than either DQN or Bootstrapped DQN on all tasks apart from memory. This agent is the only one able to perform better than random due to its recurrent network architecture. The Behaviour Suite for Reinforcement Learning, or bsuite for short, is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent. 
The aim of the bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks. This report provides a snapshot of agent performance on bsuite2019, obtained by running the experiments from github.com/anon/bsuite. All agents correspond to different instantiations of the DQN agent (b), as implemented in bsuite/baselines but with differnet optimizers from. In each case we tune a learning rate to optimize performance on'basic' tasks from {1e-1, 1e-2, 1e-3}, keeping all other parameters constant at default value. • sgd: vanilla stochastic gradient descent with learning rate 1e-2. • rmsprop: RMSProp with learning rate 1e-3. • adam: Adam with learning rate 1e-3. Each bsuite experiment outputs a summary score in. We aggregate these scores by according to key experiment type, according to the standard analysis notebook. A detailed analysis of each of these experiments may be found in a notebook hosted on Colaboratory: bit.ly/bsuite-optim. Both RMSProp and Adam perform better than SGD in every category. In most categories, Adam slightly outperforms RMSprop, although this difference is much more minor. SGD performs particularly badly on environments that require generalization and/or scale. This is not particularly surprising, since we expect the non-adaptive SGD may be more sensitive to learning rate optimization or annealing. In Figure 11 we can see that the differences are particularly pronounced on the cartpole domains. We hypothesize that this task requires more efficient neural network optimization, and the nonadaptive SGD is prone to numerical issues. The Behaviour Suite for Reinforcement Learning, or bsuite for short, is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent. The aim of the bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks. This report provides a snapshot of agent performance on bsuite2019, obtained by running the experiments from github.com/anon/bsuite. In this experiment, all agents correspond to different instantiations of a Bootstrapped DQN with prior networks Osband et al. (2016; 2018). We take the default implementation from bsuite/baselines. We investigate the effect of the number of models used in the ensemble, sweeping over {1, 3, 10, 30}. Each bsuite experiment outputs a summary score in. We aggregate these scores by according to key experiment type, according to the standard analysis notebook. A detailed analysis of each of these experiments may be found in a notebook hosted on Colaboratory: bit.ly/bsuite-ensemble. Generally, increasing the size of the ensemble improves bsuite performance across the board. However, we do see signficantly decreasing returns to ensemble size, so that ensemble 30 does not perform much better than size 10. These are not predicted by the theoretical scaling of proven bounds , but are consistent with previous empirical findings;. The gains are most extreme in the exploration tasks, where ensemble sizes less than 10 are not able to solve large'deep sea' tasks, but larger ensembles solve them reliably. Even for large ensemble sizes, our implementation does not completely solve every cartpole swingup instance. 
Further examination learning curves suggests this may be due to some instability issues, which might be helped by using Double DQN to combat value overestimation van. | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rygf-kSYwH | Bsuite is a collection of carefully-designed experiments that investigate the core capabilities of RL agents. |
Sequence-to-sequence attention-based models are a promising approach for end-to-end speech recognition. The increased model power makes the training procedure more difficult, and analyzing failure modes of these models becomes harder because of the end-to-end nature. In this work, we present various analyses to better understand training and model properties. We investigate on pretraining variants such as growing in depth and width, and their impact on the final performance, which leads to over 8% relative improvement in word error rate. For a better understanding of how the attention process works, we study the encoder output and the attention energies and weights. Our experiments were performed on Switchboard, LibriSpeech and Wall Street Journal. The encoder-decoder framework with attention BID34 BID60 has been successfully applied to automatic speech recognition (ASR) BID26 BID61 BID58 BID47 and is a promising end-to-end approach. The model outputs are words, sub-words or characters, and training the model can be done from scratch without any prerequisites except the training data in terms of audio features with corresponding transcriptions. In contrast to the conventional hybrid hidden Markov models (HMM) / neural network (NN) approach BID8 , ], the encoder-decoder model does not model the alignment explicitly. In the hybrid HMM/NN approach, a latent variable of hidden states is introduced, which model the phone state for any given time position. Thus by searching for the most probable sequence of hidden states, we get an explicit alignment. There is no such hidden latent variable in the encoder decoder model. Instead there is the attention process which can be interpreted as an implicit soft alignment. As this is only implicit and soft, it is harder to enforce constraints such as monotonicity, i.e. that the attention of future label outputs will focus also only to future time frames. Also, the interpretation of the attention weights as a soft alignment might not be completely valid, as the encoder itself can shift around and reorder evidence, i.e. the neural network could learn to pass over information in any possible way. E.g. the encoder could compress all the information of the input into a single frame and the decoder can learn to just attend on this single frame. We observed this behavior in early stages of the training. Thus, studying the temporal "alignment" behavior of the attention model becomes more difficult. Other end-to-end models such as connectionist temporal classification BID21 has often been applied to ASR in the past BID20 BID23 BID35 BID1 BID51 BID2 BID26 BID63 BID67. Other approaches are e.g. the inverted hidden Markov / segmental encoder-decoder model BID5, the recurrent transducer BID4 BID41, or the recurrent neural aligner. Depending on the interpretation, these can all be seen as variants of the encoder decoder approach. In some of these models, the attention process is not soft, but a hard decision. This hard decision can also become a latent variable such that we include several choices in the beam search. This is also referred to as hard attention. Examples of directly applying this idea on the usual attention approach are given by BID43, BID0,, BID33, BID27.We study recurrent NN (RNN) encoder decoder models in this work, which use long short-term memory (LSTM) units BID24. Recently the transformer model BID57 gained attention, which only uses feed-forward and self-attention layers, and the only recurrence is the label feedback in the decoder. 
As this does not include any temporal information, some positional encoding is added. This is not necessary for a RNN model, as it can learn such encoding by itself, which we demonstrate later for our attention encoder. We study attention models in more detail here. We are interested in when, why and how they fail and do an analysis on the search errors and relative error positions. We study the implicit alignment behavior via the attention weights and energies. We also analyze the encoder output representation and find that it contains information about the relative position and that it specially marks frames which should not be attended to, which correspond to silence.2 Related work BID25 analyzes individual neuron activations of a RNN language model and finds a neuron which becomes sensitive to the position in line. BID7 analyzed the hidden activations of the DeepSpeech 2 BID1 ] CTC end-to-end system and shows their correlation to a phoneme frame alignment. BID36 analyzed the encoder state and the attention weights of an attention model and makes similar observations as we do. Attention plots were used before to understand the behaviour of the model BID15. BID6 performed a comparison of the alignment behavior between hybrid HMM/NN models, the inverted HMM and attention models. BID42 investigate the effects of varying block sizes, attention types, and sub-word units. Understanding the inner working of a speech recognition system is also subject in, where the authors examine activation distribution and temporal patterns, focussing on the comparison between LSTM and GRU systems. A number of saliency methods BID50 BID32 BID52 are used for interpreting model decisions. In all cases, we use the RETURNN framework BID65 for neural network training and inference, which is based on TensorFlow BID54 and contains some custom CUDA kernels. In case of the attention models, we also use RETURNN for decoding. All experiments are performed on single GPUs, we did not take advantage of multi-GPU training. In some cases, the feature extraction, and in the hybrid case the decoding, is performed with RASR BID59. All used configs as well as used source code are published. The Switchboard corpus BID19 ] consists of English telephone speech. We use the 300h train dataset (LDC97S62), and a 90% subset for training, and a small part for cross validation, which is used for learning rate scheduling and to select a few models for decoding. We decode and report WER on Hub5'00 and Hub5'01. We use Hub5'00 to select the best model which we report the numbers on. Our hybrid HMM/NN model uses a deep bidirectional LSTM as described by BID64. Our baseline has 6 layers with 500 nodes in each direction. It uses dropout of 10% on the non-recurrent input of each LSTM layer, gradient noise with standard deviation of 0.3, Adam with Nesterov momentum (Nadam) BID18, Newbob learning rate scheduling BID64, and focal loss BID28 ].Our attention model uses byte pair encoding BID49 as subword units. We follow the baseline with about 1000 BPE units as described by. All our baselines and a comparison to from the literature are summarized in TAB0. The LibriSpeech dataset BID37 are read audio books and consists of about 1000h of speech. A subset of the training data is used for cross-validation, to perform learning rate scheduling and to select a number of models for full decoding. We use the dev-other set for selecting the final best model. 
The end-to-end attention model uses byte pair encoding (BPE) BID49 as subword units with a vocabulary of 10k BPE units. We follow the baseline as described previously. A comparison of our baselines and other models is in TAB1. The Wall Street Journal (WSJ) dataset BID39 is read text from the WSJ. We use 90% of si284 for training, the remaining part for cross validation and learning rate scheduling, dev93 for validation and selection of the final model, and eval92 for the final evaluation. We trained an end-to-end attention model using BPE subword units, with a vocabulary size of about 1000 BPE units. Our preliminary results are shown in TAB2. Our attention model is based on the improved pretraining scheme as described in Section 5. We analyzed the search errors, i.e. the cases where the model's recognized sentence (via beam search) has a worse model score than the ground truth sentence. We observe that we make only very few search errors, and the amount of search errors seems independent of the final WER performance. Thus we conclude that we mostly have a problem in the model. We were also interested in the score difference between the best recognized sentence and the ground truth sentence. The results are in Fig. 2. We can see that they concentrate on the lower side, around 10%, which is an indicator of why a low beam size seems to be sufficient. It has been observed that pretraining can be substantial for good performance, and sometimes necessary to get a converging model at all [a,b]. We provide a study on cases with attention-based models where pretraining benefits convergence, and compare the performance with and without pretraining. The pretraining variant of the Switchboard baseline (6 layers, time reduction 8 after pretraining) consists of these steps: 1. start with 2 layers (layer 1 and 6), time reduction 32, and dropout as well as label smoothing disabled; 2. enable dropout; 3. 3 layers (layer 1, 2 and 6); 4. 4 layers (layer 1, 2, 3 and 6); 5. 5 layers (layer 1, 2, 3, 4 and 6); 6. all 6 layers; 7. decrease time reduction to 8; 8. final model, enable label smoothing. Each pretrain step is repeated for 5 epochs, where one epoch corresponds to 1/6 of the whole train corpus. In addition, a linear learning rate warm-up is performed from 1e-4 to 1e-3 over 10 epochs. We have to start with 2 layers as we want to have the time pooling in between the LSTM layers. In TAB3, performed on Switchboard, we varied the number of encoder layers and encoder LSTM units, both with and without pretraining. We observe that the overall best model is the one with 4 layers without the pretraining variant. I.e. we showed that we can directly start with 4 layers and time reduction 8 and yield very good results. We can even start directly with 6 layers with a reduced learning rate. This was surprising to us, as this was not possible in earlier experiments. This might be due to a reduced and improved BPE vocabulary. We note that overall all the pretraining experiments seem to run more stably. We can also see that with 6 layers (and also more), pretraining yields better results than no pretraining. These results motivated us to perform further investigations into different variants of pretraining. It seems that pretraining allows training deeper models, however using too much pretraining can also hurt. We showed that we can directly start with a deeper encoder and lower time reduction. In TAB4, we analyzed the optimal initial number of layers, and the initial time reduction.
We observed that starting with a deeper network improves the overall performance, but it still helps to then go deeper during pretraining, and starting too deep does not work well. We also observed that directly starting with time reduction 8 also works and further improves the final performance, but it seems that this makes the training slightly more unstable. In further experiments, we directly start with 4 layers and time reduction 8. We were also interested in the optimal number of repetitions of each pretrain step, i.e. how many epochs to train with each pretrain step; the baseline had 5 repetitions. We collected the results in TAB5. In further experiments, we keep 5 repetitions as the default. It has already been shown that a lower final time reduction performed better. So far, the lowest time reduction was 8 in our experiments. By having a pool size of 3 in the first time max pooling layer, we achieve a better-performing model with a time reduction factor of 6, as shown in TAB6. So far we always kept the top layer (layer 6) during pretraining, as our intuition was that it might help to always get the same time reduction factor as an input to this layer. When directly starting with the low time reduction, we do not need this scheme anymore, and we can always add a new layer on top. Comparisons are collected in TAB6. We can conclude that this simpler scheme of adding layers on top performs better. We also did experiments with growing the encoder width / number of LSTM units during pretraining. We do this orthogonally to the growing in depth / number of layers. As before, our final number of LSTM units in each direction of the bidirectional deep LSTM encoder is 1024. Initially, we start with 50% of the final width, i.e. with 512 units. In each step, we linearly increase the number of units such that we have the final number of units in the last step. We keep the weights of existing units, and weights from/to newly added units are randomly initialized. We also decrease the dropout rate by the same factor. We can see that this width growing scheme performs better. This leads us to our current best model. Our findings are that pretraining is in general more stable, especially for deep models. However, the pretraining scheme is important, and less pretraining can improve the performance, although it becomes more unstable. We also used the same improved pretraining scheme and time reduction 6 for WSJ as well as LibriSpeech and observed similar improvements, compare TAB1. We have observed that training attention models can be unstable, and careful tuning of the initial learning rate, warm-up and pretraining is important. Related to that, we observe a high training variance, i.e. with the same configuration but different random seeds, we get some variance in the final WER performance. We observed this even for the same random seed, which we suspect stems from non-deterministic behaviour in TensorFlow operations such as tf.reduce_sum based on kernels. [Residue of TAB7; WER ranges (min-max, mean, std) per evaluation set: attention, 5 seeds: 3-19.9, 19.6, 0.24; 25.6-26.6, 26.1, 0.38; 12.8-13.3, 13.1, 0.17; 19.0-19.7, 19.4, 0.20. attention, 5 runs: 19.1-19.6, 19.3, 0.22; 25.3-26.3, 25.8, 0.40; 12.7-13.0, 12.9, 0.12; 18.9-19.6, 19.2, 0.27. hybrid, 5 seeds: 14.3-14.5, 14.4, 0.08; 19.0-19.3, 19.1, 0.12; 9.6-9.8, 9.7, 0.06; 14.3-14.7, 14.5, 0.16. hybrid, 5 runs: 14.3-14.5, 14.4, 0.07; 19.0-19.2, 19.1, 0.09; 9.6-9.8, 9.7, 0.08; 14.4-14.6, 14.5.] This training variance seems to be about the same as the variance due to random seeds, which is higher than we expected. Note that it also depends a lot on other hyperparameters.
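Returning to the width-growing scheme described above (start with 50% of the final 1024 units per direction, keep existing weights, randomly initialize connections from/to new units), the sketch below shows the idea for a single dense weight matrix. Real LSTM parameters consist of several gate and recurrent matrices, so this is a simplification, and the initialization scale is an arbitrary assumption.

```python
import numpy as np

def grow_weight_matrix(old_w: np.ndarray, new_in: int, new_out: int,
                       rng: np.random.Generator) -> np.ndarray:
    """Embed an existing weight matrix into a larger, randomly initialized one.

    Existing connections are kept; connections from/to newly added units are
    drawn from a small random initialization (illustrative choice).
    """
    new_w = rng.normal(scale=0.01, size=(new_in, new_out))
    old_in, old_out = old_w.shape
    new_w[:old_in, :old_out] = old_w
    return new_w

# Hypothetical usage for one pretrain step: grow from 512 to 640 units per direction.
rng = np.random.default_rng(0)
w_small = rng.normal(size=(512, 512))
w_large = grow_weight_matrix(w_small, 640, 640, rng)
assert np.allclose(w_large[:512, :512], w_small)  # old weights are preserved
```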
For example, in a previous iteration of the model using a larger BPE vocabulary, we have observed more unstable training with higher variance, and even sometimes some models diverge while others converge with the same settings. We also compare that to hybrid HMM/LSTM models. It can be observed that it is lower compared to the attention model. We argue that is due to the more difficult optimization problem, and also due to the much bigger model. All the can be seen in TAB7. The encoder creates a high-level representation of the input. It also arguably represents further information needed for the decoder to know where to attend to. We try to analyze the output of the encoder and identify and examine the learned function. In FIG6, we plotted the encoder output and the attention weights, as well as the word positions in the audio. One hypothesis for an important function of the encoder is the detection of frames which should not be attended on by the decoder, e.g. which are silent or non-speech. Such a pattern can be observed in FIG6. By performing a dimensionality reduction (PCA) on the encoder output, we can identify the most important distinct information, which we identify as silence detection and encoder time position, compare FIG3. Similar behavior was shown by BID36. We further try to identify individual cells in the LSTM which encodes the positional information. By qualitatively inspecting the different neurons activations, we have identified multiple neurons which perform the hypothesized function as shown in FIG4 We also observed that the attention weights are always very local in the encoder frames, and often focus mostly on a single encoder frame, compare FIG6. The sharp behavior in the converged attention weight distribution has been observed before BID6 BID36. We conclude that the information about the label also needs to be well-localized in the encoder output. To support this observation, we performed experiments where we explicitly allowed only a local fixed-size window of non-zero attention weights around the arg max of the attention energies, to understand how much we can restrict the local context. The can be seen in TAB9. This confirms the hypothesis that the information is localized in the encoder. We explain the gap in performance with decoder frames where the model is unsure to attend, and where a global attention helps the decoder to gather information from multiple frames at once. We observed that in such case, there is sometimes some relatively large attention weight on the very first and/or very last frame. We provided an overview of our recent attention models on Switchboard, LibriSpeech and WSJ. We performed an analysis on the beam search errors. By our improved pretraining scheme, we improved our Switchboard baseline by over 8% relative in WER. We pointed out the high training variance of attention models compared to hybrid HMM/NN models. We analyzed the encoder output and identified the representation of the relative input position, both clearly visible in the PCA reduction of the encoder but even represented by individual neurons. Also we found indications that the encoder marks frames which can be skipped by decoder, which correlate to silence. | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1gp9v_jsm | improved pretraining, and analysing encoder output and attention |
Validation is a key challenge in the search for safe autonomy. Simulations are often either too simple to provide robust validation, or too complex to tractably compute. Therefore, approximate validation methods are needed to tractably find failures without unsafe simplifications. This paper presents the theory behind one such black-box approach: adaptive stress testing (AST). We also provide three examples of validation problems formulated to work with AST. An open question when robots operate autonomously in uncertain, real-world environments is how to tractably validate that the agent will act safely. Autonomous robotic systems may be expected to interact with a number of other actors, including humans, while handling uncertainty in perception, prediction and control. Consequently, scenarios are often too high-dimensional to tractably simulate in an exhaustive manner. As such, a common approach is to simplify the scenario by constraining the number of non-agent actors and the range of actions they can take. However, simulating simplified scenarios may compromise safety by eliminating the complexity needed to find rare, but important failures. Instead, approximate validation methods are needed to elicit agent failures while maintaining the full complexity of the simulation. One possible approach to approximate validation is adaptive stress testing (AST) BID6. In AST, the validation problem is cast as a Markov decision process (MDP). A specific reward function structure is then used with reinforcement learning algorithms in order to identify the most-likely failure of a system in a scenario. Knowing the most-likely failure is useful for two reasons: 1) all other failures are at most as-likely, so it provides a bound on the likelihood of failures, and 2) it uncovers possible failure modes of an autonomous system so they can be addressed. AST is not a silver bullet: it requires accurate models of all actors in the scenario and is susceptible to local convergence. However, it allows failures to be identified tractably in simulation for complicated autonomous systems acting in high-dimensional spaces. This paper briefly presents the latest methodology for using AST and includes example validation scenarios formulated as AST problems. Adaptive stress testing formulates the problem of finding the most-likely failure of a system as a Markov decision process (MDP) BID1. Reinforcement learning (RL) algorithms can then be applied to efficiently find a solution in simulation. The process is shown in FIG0. An RL-based solver outputs Environment Actions, which are the control input to the simulator. The simulator resolves the next time-step by executing the environment actions and then allowing the system-undertest (SUT) to act. The simulator returns the likelihood of the environment actions and whether an event of interest, such as a failure, has occurred. The reward function, covered in Section II-C, uses these to calculate the reward at each time-step. The solver uses these rewards to find the mostlikely failure using reinforcement learning algorithms such as Monte Carlo tree search (MCTS) BID3 or trust region policy optimization (TRPO) BID9. Finding the most-likely failure of a system is a sequential decision-making problem. Given a simulator S and a subset of the state space E where the events of interest (e.g. a collision) occur, we want to find the most-likely trajectory s 0,..., s t that ends in our subset E. Given (S, E), the formal problem is maximize a0,...,at P (s 0, a 0, . . 
., s t, a t) subject to s t ∈ E where P (s 0, a 0, . . ., s t, a t) is the probability of a trajectory in simulator S and s t = f (a t, s t−1).AST requires the following three functions to interact with the simulator:• INITIALIZE(S, s 0): Resets S to a given initial state s 0.• STEP(S, E, a): Steps the simulation in time by drawing the next state s after taking action a. The function returns the probability of the transition and an indicator showing whether s is in E or not.• ISTERMINAL(S, E): Returns true if the current state of the simulation is in E or if the horizon of the simulation T has been reached. In order to find the most-likely failure, the reward function must be structured as follows: DISPLAYFORM0 where the parameters are:• α: A large number, to heavily penalize trajectories that do not end in the target set. • βf (s): An optional heuristic. For example, in the autonomous vehicle experiment, we use the distance between the pedestrian and the car at the end of a trajectory. Consequently, the network takes actions that move the pedestrian close to the car early in training, allowing collisions to be found more quickly.• g(a): The action reward. A function recommended to be something proportional to log P (a). Adding logprobabilities is equivalent to multiplying probabilities and then taking the log, so this constraint ensures that summing the rewards from each time-step in a total reward that is proportional to the log-probability of a trajectory. • ηh(s): An optional training heuristic given at each timestep. Looking at Equation FORMULA0, there are three cases:• s ∈ E: The trajectory has terminated because an event has been found. This is the goal, so the reward at this step is as large as possible. DISPLAYFORM1 The trajectory has terminated by reaching the horizon T without reaching an event. This is the leastuseful outcome, so the user should set a large penalty.• s / ∈ E, t < T: A time-step that was non-terminal, which is the most common case. The reward is generally proportional to the negative log-likelihood of the environment action, which promotes likely actions. Ignoring heuristics for now, it is clear that the reward will be better for even a highly-unlikely trajectory that terminates in an event compared to a trajectory that fails to find an event. However, among trajectories that find an event, the more-likely trajectory will have a better reward. Consequently, optimizing to maximize reward will in maximizing the probability of a trajectory that terminates with an event. We present three scenarios in which an autonomous system needs to be validated. For each scenario, we provide an example of how it could be formulated as an AST problem. Further details available in Appendix A. Cartpole is a classic test environment for continuous control algorithms BID0. The system under test (SUT) is a neural network control policy trained by TRPO. The control policy controls the horizontal force F applied to the cart, and the goal is to prevent the bar on top of the cart from falling over.2) Formulation: We define an event as the pole reaching some maximum rotation or the cart reaching some maximum horizontal distance from the start position. The environment action is δ F, the disturbance force applied to the cart at each time-step. The reward function uses α = 1 × 10 4, β = 1 × 10 3, and f (s) as the normalized distance of the final state to failure states. The choice of f (s) encourages the solver to push the SUT closer to failure. 
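Before turning to the remaining scenario-specific reward terms below, here is a compact sketch of the generic per-step reward of Eq. 1. The three cases follow the description above; the sign convention (treating α, β, η as positive penalty and heuristic weights) and the function signature are our assumptions, since the scenario-specific values quoted for the examples differ in sign.

```python
def ast_reward(is_event: bool, is_terminal: bool, log_prob_action: float,
               alpha: float, beta: float, eta: float,
               heuristic_terminal: float = 0.0, heuristic_step: float = 0.0) -> float:
    """Per-step AST reward following the three cases of Eq. 1.

    alpha heavily penalizes trajectories that reach the horizon without an event,
    beta weights an optional terminal heuristic (e.g. remaining distance to failure),
    eta weights an optional per-step heuristic, and log_prob_action plays the role of
    g(a), proportional to log P(a).
    """
    if is_event:                                   # s in E: failure found
        return 0.0
    if is_terminal:                                # horizon reached without an event
        return -alpha - beta * heuristic_terminal
    return log_prob_action + eta * heuristic_step  # ordinary step: reward likely actions
```

Summing these rewards over a trajectory therefore yields a quantity proportional to the log-probability of the trajectory whenever an event is found, which is what the solver maximizes.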
The action reward, g(a) is set to the log of the probability density function of the natural disturbance force distribution. See Ma et al. BID7.B. Autonomous Vehicle at a Crosswalk 1) Problem: Autonomous vehicles must be able to safely interact with pedestrians. Consider an autonomous vehicle approaching a crosswalk on a neighborhood road. There is a single pedestrian who is free to move in any direction. The autonomous vehicle has imperfect sensors.2) Formulation: A collision between the car and pedestrian is the event we are looking for. The environment action vector controls both the motion of the pedestrian as well as the scale and direction of the sensor noise. The reward function for this scenario uses α = −1 × 10 5 and β = −1 × 10 4, with f (s) = DIST p v, p p as the distance between the pedestrian and the SUT at the end of a trajectory. This heuristic encourages the solver to move the pedestrian closer to the car in early iterations, which can significantly increase training speeds. The reward function also uses g(a) = M (a, µ a | s), which is the Mahalanobis distance function BID8. Mahalanobis distance is a generalization of distance to the mean for multivariate distributions. See Koren et al. BID4.C. Aircraft Collision Avoidance Software 1) Problem: The next-generation Airborne Collision Avoidance System (ACASX) BID2 gives instructions to pilots when multiple planes are approaching each other. We want to identify system failures in simulation to ensure the system is robust enough to replace the Traffic Alert and Collision Avoidance System (TCAS) BID5. We are interested in a number of different scenarios in which two or three planes are in the same airspace.2) Formulation: The event will be a near mid-air collision (NMAC), which is when two planes pass within 100 vertical feet and 500 horizontal feet of each other. The simulator is quite complicated, involving sensor, aircraft, and pilot models. Instead of trying to control everything explicitly, our environment actions will output seeds to the random number generators in the simulator. The reward function for this scenario uses α = ∞ and no heuristics. The reward function also uses g(a) = log P (s t | s t+1), the log of the known transition probability at each time-step. See Lee et al. BID6. This paper presents the latest formulation of adaptive stress testing, and examples of how it can be applied. AST is an approach to validation that can tractably find failures in autonomous systems in simulation without reducing scenario complexity. Autonomous systems are difficult to validate because they interact with many other actors in high-dimensional spaces according to complicated policies. However, validation is essential for producing autonomous systems that are safe, robust, and reliable. The cartpole scenario from Ma et al. BID7 is shown in Figure 2. The state s = [x,ẋ, θ,θ] represents the cart's horizontal position and speed as well as the bar's angle and angular velocity. The control policy, a neural network trained by TRPO, controls the horizontal force F applied to the cart. The failure of the system is defined as |x| > x max or |θ| > θ max. The initial state is at s 0 =. Fig. 2.Layout of the cartpole environment. A control policy applies horizontal force on the cart to prevent the bar falling over. The autonomous vehicle scenario from Koren et al. BID4 is shown in FIG1. The x-axis is aligned with the edge of the road, with East being the positive x-direction. 
The y-axis is aligned with the center of the cross-walk, with North being the positive y-direction. The pedestrian is crossing from South to North. The vehicle starts 35 m from the crosswalk, with an initial velocity of 11.20 m/s East. The pedestrian starts 2 m away, with an initial velocity of 1 m/s North. The autonomous vehicle policy is a modified version of the intelligent driver model BID10. An example from Lee et al. BID6 is shown in Figure 4. The planes need to cross paths, and the validation method was able to find a rollout where pilot responses to the ACASX system lead to an NMAC. AST was used to find a variety of different failures in ACASX. Fig. 4. An example from Lee et al. BID6, showing an NMAC identified by AST. Note that the planes must be both vertically and horizontally near to each other to register as an NMAC. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rJgoNK-oaE | A formulation for a black-box, reinforcement learning method to find the most-likely failure of a system acting in complex scenarios. |
Multi-step greedy policies have been extensively used in model-based Reinforcement Learning (RL) and in the case when a model of the environment is available (e.g., in the game of Go). In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration. These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one. By using model-free algorithms as solvers of the short-horizon problems we derive fully model-free algorithms which are instances of the multi-step DP framework. As model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help in mitigating these instabilities and result in improved model-free algorithms. We test this approach and show results on both discrete and continuous control problems. The field of Reinforcement Learning (RL) spans a wide variety of algorithms for solving decision-making problems through repeated interaction with the environment. By incorporating deep neural networks into RL algorithms, the field of RL has recently witnessed remarkable empirical success (e.g., Mnih et al. 2015; Lillicrap et al. 2015; Silver et al. 2017). Much of this success has been achieved by model-free RL algorithms, such as Q-learning and policy gradient. These algorithms are known to suffer from high variance in their estimations and to have difficulties handling function approximation (e.g., Thrun & Schwartz 1993; Baird 1995; Van Hasselt et al. 2016; Lu et al. 2018). These problems are intensified in decision problems with long horizon, i.e., when the discount factor, γ, is large. Although using smaller values of γ addresses the γ-dependent issues and leads to more stable algorithms, it comes with a cost, as the algorithm may return a biased solution, i.e., it may not converge to an optimal solution of the original decision problem (the one with large value of γ). Efroni et al. (2018a) recently proposed another approach to mitigate the γ-dependent instabilities in RL in which they study multi-step greedy versions of the well-known dynamic programming (DP) algorithms policy iteration (PI) and value iteration (VI). Efroni et al. (2018a) also proposed an alternative formulation of the multi-step greedy policy, called the κ-greedy policy, and studied the convergence of the resulting PI and VI algorithms: κ-PI and κ-VI. These two algorithms iteratively solve γκ-discounted decision problems, whose reward has been shaped by the solution of the decision problem at the previous iteration. Unlike the biased solution obtained by solving the decision problem with a smaller value of γ, by iteratively solving decision problems with a smaller γκ horizon, the κ-PI and κ-VI algorithms could converge to an optimal policy of the original decision problem. In this work, we derive and empirically validate model-free deep RL (DRL) implementations of κ-PI and κ-VI. In these implementations, we use DQN and TRPO for (approximately) solving γκ-discounted decision problems (with shaped reward), which is the main component of the κ-PI and κ-VI algorithms. The experiments illustrate that the performance of model-free algorithms can be improved by using them as solvers of multi-step greedy PI and VI schemes, as well as emphasize important implementation details while doing so.
In this paper, we assume that the agent's interaction with the environment is modeled as a discrete time γ-discounted Markov Decision Process (MDP), defined by M γ = (S, A, P, R, γ, µ), where S and A are the state and action spaces; P ≡ P (s |s, a) is the transition kernel; R ≡ r(s, a) is the reward function with the maximum value of R max; γ ∈ is the discount factor; and µ is the initial state distribution. Let π: S → P(A) be a stationary Markovian policy, where P(A) is a probability distribution on the set A. The value of π in any state s ∈ S is defined as V π (s) ≡ E[t≥0 γ t r(s t, π(s t))|s 0 = s, π], where the expectation is over all the randomness in policy, dynamics, and rewards. Similarly, the action-value function of π is defined as Q π (s, a) = E[t≥0 γ t r(s t, π(s t))|s 0 = s, a 0 = a, π]. Since the rewards have the maximum value of R max, both V and Q functions have the maximum value of V max = R max /(1 − γ). An optimal policy π * is the policy with maximum value at every state. We call the value of π * the optimal value, and define it as V * (s) = max π E[t≥0 γ t r(s t, π(s t))|s 0 = s, π], ∀s ∈ S. Furthermore, we denote the stateaction value of π * as Q * (s, a) and remind the following relation holds V * (s) = max a Q * (s, a) for all s. The algorithms by which an is be solved (obtain an optimal policy) are mainly based on two popular DP algorithms: Policy Iteration (PI) and Value Iteration (VI). While VI relies on iteratively computing the optimal Bellman operator T applied to the current value function V (Eq. 1), PI relies on (iteratively) calculating a 1-step greedy policy π 1-step w.r.t. to the value function of the current policy V (Eq. 2): It is known that T is a γ-contraction w.r.t. the max norm and its unique fixed point is V *, and the 1-step greedy policy w.r.t. V * is an optimal policy π *. In practice, the state space is often large, and thus, we can only approximately compute Eqs. 1 and 2, which in approximate PI (API) and VI (AVI) algorithms. These approximation errors then propagate through the iterations of the API and AVI algorithms. However, it has been shown that this (propagated) error can be controlled (; 2005;) and after N steps, the algorithms approximately converge to a solution π N whose difference with the optimal value is bounded (see e.g., Scherrer 2014 for API): is the expected value function at the initial state, 1 δ represents the per-iteration error, and C upper-bounds the mismatch between the sampling distribution and the distribution according to which the final value function is evaluated (µ in Eq. 3), and depends heavily on the dynamics. Finally, the second term on the RHS of Eq. 3 is the error due to initial values of policy/value, and decays with the number of iterations N. The optimal Bellman operator T (Eq. 1) and 1-step greedy policy π 1-step (Eq. 2) can be generalized to multi-step. The most straightforward form of this generalization is by replacing T and π 1-step with h-optimal Bellman operator and h-step greedy policy (i.e., a lookahead of horizon h) that are defined by substituting the 1-step return in Eqs. 1 and 2, r(s 0, a) + γV (s 1), with h-step return, h−1 t=0 r(s t, a t) + γ h V (s h), and computing the maximum over actions a 0,..., a h−1, instead of just a 0 . Efroni et al. (2018a) proposed an alternative form of multi-step optimal Bellman operator and multi-step greedy policy, called κ-optimal Bellman operator, T κ, and κ-greedy policy, π κ, for κ ∈, i.e., 1 Note that the LHS of Eq. 
3 is the 1-norm of (V where the shaped reward r t (κ, V) w.r.t. the value function V is defined as It can be shown that the κ-greedy policy w.r.t. the value function V is the optimal policy w.r.t. a κ-weighted geometric average of all future h-step returns (from h = 0 to ∞). This can be interpreted as TD(λ) for policy improvement (see a, Sec. 6). The important difference is that TD(λ) is used for policy evaluation and not for policy improvement. From Eqs. 4 and 5, it is easy to see that solving these equations is equivalent to solving a surrogate γκ-discounted MDP with the shaped reward r t (κ, V), which we denote by M γκ (V) throughout the paper. The optimal value of M γκ (V) (the surrogate MDP) is T κ V and its optimal policy is the κ-greedy policy, π κ. Using the notions of κ-optimal Bellman operator, T κ, and κ-greedy policy, π κ, Efroni et al. (2018a) derived κ-PI and κ-VI algorithms, whose pseudocode is shown in Algorithms 1 and 2. κ-PI iteratively (i) evaluates the value of the current policy π i, and (ii) set the new policy, π i+1, to the κ-greedy policy w.r.t. the value of the current policy V πi, by solving Eq. 5. On the other hand, κ-VI repeatedly applies the T κ operator to the current value function V i (solves Eq. 4) to obtain the next value function, V i+1, and returns the κ-greedy policy w.r.t. the final value V N (κ). Note that for κ = 0, the κ-greedy policy and κ-optimal Bellman operator are equivalent to their 1-step counterparts, defined by Eqs. 1 and 2, which indicates that κ-PI and κ-VI are generalizations of the seminal PI and VI algorithms. It has been shown that both PI and VI converge to the optimal value with an exponential rate that depends on the discount factor γ, i.e., g.,; ). Analogously, Efroni et al. (2018a) showed that κ-PI and κ-VI converge with faster exponential rate of ξ(κ) =, with the cost that each iteration of these algorithms is computationally more expensive than that of PI and VI. Finally, we state the following two properties of κ-PI and κ-greedy policies that we use in our RL implementations of κ-PI and κ-VI algorithms in Sections 4 and 5: 1) Asymptotic performance depends on κ. The following bound that is similar to the one reported in Eq. 3 was proved by Efroni et al. (2018b, Thm. 5) for the performance of κ-PI: where δ(κ) and C(κ) are quantities similar to δ and C in Eq. 3. Note that the first term on the RHS of Eq. 7 is independent of N (κ), while the second one decays with N (κ). 2) Soft updates w.r.t. a κ-greedy policy does not necessarily improve the performance. Let π κ be the κ-greedy policy w.r.t. V π. Then, unlike for 1-step greedy policies, the performance of (1−α)π+απ κ (soft update) is not necessarily better than that of π (b, Thm. 1). This hints that it would be advantages to use κ-greedy policies with'hard' updates (using π κ as the new policy). 4 RL IMPLEMENTATIONS OF κ-PI AND κ-VI As described in Sec. 3, implementing κ-PI and κ-VI requires iteratively solving a γκ-discounted surrogate MDP with a shaped reward. If a model of the environment is given, the surrogate MDP can be solved using a DP algorithm (see a, Sec. 7). When the model is not available, it can be approximately solved by any model-free RL algorithm. In this paper, we focus on the case that the model is not available and propose RL implementations of κ-PI and κ-VI. The main question we investigate in this work is how model-free RL algorithms should be implemented to efficiently solve the surrogate MDP in κ-PI and κ-VI. 
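To make the surrogate MDP M_γκ(V) concrete, the sketch below computes its shaped reward and the corresponding discounted return of a trajectory. The form r_t(κ, V) = r_t + (1 - κ)γ V(s_{t+1}) is our reconstruction of the shaped reward in Eq. 6 (the equation itself did not survive extraction), and the helper names are ours; with κ = 1 this reduces to the plain γ-discounted return.

```python
import numpy as np

def shaped_reward(reward: float, next_value: float, gamma: float, kappa: float) -> float:
    """Shaped reward r_t(kappa, V) of the surrogate gamma*kappa-discounted MDP."""
    return reward + (1.0 - kappa) * gamma * next_value

def kappa_return(rewards, next_values, gamma: float, kappa: float) -> float:
    """Monte-Carlo return of one trajectory inside the surrogate MDP M_{gamma*kappa}(V)."""
    g = 0.0
    for r, v in zip(reversed(list(rewards)), reversed(list(next_values))):
        g = shaped_reward(r, v, gamma, kappa) + (gamma * kappa) * g
    return g

# Tiny sanity check: kappa = 1 recovers the plain gamma-discounted return.
rs = [1.0, 0.0, 2.0]
vs = [0.5, 0.3, 0.0]
assert abs(kappa_return(rs, vs, 0.99, 1.0) - (1.0 + 0.99 * 0.0 + 0.99 ** 2 * 2.0)) < 1e-9
```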
In this paper, we use DQN and TRPO as subroutines for estimating a κ-greedy policy (Line 4 in κ-PI, Alg. 1 and Line 5 in κ-VI, Alg. 2) or for estimating an optimal value of the surrogate MDP (Line 3 in κ-VI, Alg. 2). For estimating the value of the current policy (Line 3, in κ-PI, Alg. 1), we use standard policy evaluation deep RL (DRL) algorithms. To implement κ-PI and κ-VI, we shall set the value of N (κ) ∈ N, i.e., the total number of iterations of these algorithms, and determine the number of samples for each iteration. Since N (κ) only appears in the second term of Eq. 7, an appropriate choice of Note that setting N (κ) to a higher value would not dramatically improve the 1: Initialize replay buffer D, Q-networks Q θ, Q φ, and target networks Q θ, Q φ; 2: for i = 0,..., N (κ) − 1 do 3: # Policy Improvement 4: Act by an -greedy policy w.r.t. Q θ (st, a), observe rt, st+1, and store (st, at, rt, st+1) in D; 6: Sample a batch {(sj, aj, rj, sj+1)} N j=1 from D; 7: Update θ by DQN rule with {(sj, aj, rj(κ, V φ), sj+1)} N j=1, where 8: Copy θ to θ occasionally (θ ← θ); 10: end for 11: # Policy Evaluation of πi(s) ∈ arg maxa Q θ (s, a) 12: Update φ by TD off-policy rule with {(sj, aj, rj, sj+1)} N j=1, and πi(s) ∈ arg maxa Q θ (s, a); 15: Copy φ to φ occasionally (φ ← φ); 16: end for 17: end for performance, because the asymptotic term in Eq. 7 is independent of N (κ). In practice, since δ(κ) and C(κ) are unknown, we set N (κ) to satisfy the following equality: where C F A is a hyper-parameter that depends on the final-accuracy we are aiming for. For example, if we expect the final accuracy being 90%, we would set C F A = 0.1. Our suggest that this approach leads to a reasonable choice for N (κ), e.g., N (κ = 0.99) 4 and N (κ = 0.5) 115, for C F A = 0.1 and γ = 0.99. As we increase κ, we expect less iterations are needed for κ-PI and κ-VI to converge to a good policy. Another important observation is that since the discount factor of the surrogate MDP that κ-PI and κ-VI solve at each iteration is γκ, the effective horizon (the effective horizon of a γκ-discounted MDP is 1/(1 − γκ)) of the surrogate MDP increases with κ. Lastly, we need to determine the number of samples for each iteration of κ-PI and κ-VI. We allocate equal number of samples per iteration, denoted by T (κ). Since the total number of samples, T, is known beforehand, we set the number of samples per iteration to 5 DQN AND TRPO IMPLEMENTATIONS OF κ-PI AND κ-VI In this section, we study the use of DQN and TRPO in κ-PI and κ-VI algorithms. We first derive our DQN and TRPO implementations of κ-PI and κ-VI in Sections 5.1 and 5.2. We refer to the ing algorithms as κ-PI-DQN, κ-VI-DQN, κ-PI-TRPO, and κ-VI-TRPO. It is important to note that for κ = 1, κ-PI-DQN and κ-VI-DQN are reduced to DQN, and κ-PI-TRPO and κ-VI-TRPO are reduced to TRPO. We then conduct a set of experiments with these algorithms, in Sections 5.1.1 and 5.2.1, in which we carefully study the effect of κ and N (κ) (or equivalently the hyper-parameter C F A, defined by Eq. 8) on their performance. In these experiments, we specifically focus on answering the following questions: 1. Is the performance of DQN and TRPO improve when using them as κ-greedy solvers in κ-PI and κ-VI? Is there a performance tradeoff w.r.t. to κ? 2. Following κ-PI and κ-VI, our DQN and TRPO implementations of these algorithms devote a significant number of sample T (κ) to each iteration. 
Is this needed or a'naive' choice of T (κ) = 1, or equivalently N (κ) = T, works just well, for all values of κ? Algorithm 3 contains the pseudo-code of κ-PI-DQN. Due to space constraints, we report its detailed pseudo-code in Appendix A.1 (Alg. 5). In the policy improvement stage of κ-PI-DQN, we use DQN to solve the γκ-discounted surrogate MDP with the shaped reward r t (κ, V φ V πi−1), i.e., at the end of this stage M γκ (V φ). The output of the DQN is approximately the optimal Qfunction of M γκ (V φ), and thus, the κ-greedy policy w.r.t. V φ is equal to arg max a Q θ (·, a). At the policy evaluation stage, we use off-policy TD to evaluate the Q-function of the current policy Although what is needed on Line 8 is an estimate of the value function of the current policy, V φ V πi−1, we chose to evaluate the Q-function of π i: the data in our disposal (the transitions stored in the replay buffer) is an off-policy data and the Q-function of a fixed policy can be easily evaluated with this type of a data using off-policy TD, unlike the value function. Remark 1 In order for V φ to be an accurate estimate of the value function of π i−1 on Line 8, we should use an additional target network, Q θ, that remains unchanged during the policy improvement stage. This network should be used in π i−1 (·) = arg max a Q θ (·, a) on Line 8, and be only updated right after the improvement stage on Line 11. However, to reduce the space complexity of the algorithm, we do not use this additional target network and compute π i−1 on Line 8 as arg max Q θ, despite the fact that Q θ changes during the improvement stage. We report the pseudo-code of κ-VI-DQN in Appendix A.1 (Alg. 6). Note that κ-VI simply repeats V ← T κ V and computes T κ V, which is the optimal value of the surrogate MDP M γκ (V). In κ-VI-DQN, we repeatedly solve M γκ (V) by DQN, and use its optimal Q-function to shape the reward of the next iteration. Let Q * γκ,V and V * γκ,V be the optimal Q and V functions of M γκ (V)., where the first equality is by definition (Sec. 2) and the second one holds since T κ V is the optimal value of M γκ (V) (Sec. 3). Therefore, in κ-VI-DQN, we shape the reward of each iteration by max a Q φ (s, a), where Q φ is the output of the DQN from the previous iteration, i.e., max a Q φ (s, a) T κ V i−1. In this section, we empirically analyze the performance of the κ-PI-DQN and κ-VI-DQN algorithms on the Atari domains: Breakout, Seaquest, SpaceInvaders, and Enduro . We start by performing an ablation test on three values of parameter C F A = {0.001, 0.05, 0.2} on the Breakout domain. The value of C F A sets the number of samples per iteration T (κ) (Eq. 8) and the total number of iterations N (κ) (Eq. 9). Aside from C F A, we set the total number of samples to T 10 6. This value represents the number of samples after which our DQN-based algorithms approximately converge. For each value of C F A, we test κ-PI-DQN and κ-VI-DQN for several κ values. In both algorithms, the best performance was obtained with C F A = 0.05, thus, we set C F A = 0.05 in our experiments with other Atari domains. Alg. Table 1 shows the final training performance of κ-PI-DQN and κ-VI-DQN on the Atari domains with C F A = 0.05. Note that the scores reported in Table 1 are the actual returns of the Atari domains, while the vertical axis in the plots of Figure 1 corresponds to a scaled return. We plot the scaled return, since this way it would be easier to reproduce our using the OpenAI Baselines codebase . The of Fig. 
1 and Table 1, as well as those in Appendix A.2, exhibit that both κ-PI-DQN and κ-VI-DQN improve the performance of DQN (κ = 1). Moreover, they show that setting N (κ) = T leads to a clear degradation of the final training performance on all of the domains expect Enduro, which attains better performance for N (κ) = T. Although the performance degrades, the for N (κ) = T are still better than for DQN. Algorithm 4 contains the pseudo-code of κ-PI-TRPO (detailed pseudo-code in Appendix A.1). TRPO iteratively updates the current policy using its return and an estimate of its value function. In our κ-PI-TRPO, at each iteration i: 1) we use the estimate of the current policy V φ V πi−1 (computed in the previous iteration) to calculate the return R(κ, V φ) and an estimate of the value function V θ of the surrogate MDP M γκ (V φ), 2) we use the return R(κ, V φ) and V θ to compute the new policy π i, and 3) we estimate the value of the new policy V φ V πi on the original, γ discounted, MDP. In Appendix B.1 we provide the pseudocode of κ-VI-TRPO derived by the κ-VI meta algorithm. As previously noted, κ-VI iteratively solves the γκ discounted surrogate MDP and uses its optimal value T κ V i−1 to shape the reward of the surrogated MDP in the i'th iteration. With that in mind, consider κ-PI-TRPO. Notice that as π θ converges to the optimal policy of the surrogate γκ discounted MDP, Vθ converges to the optimal value of the surrogate MDP, i.e., it converges to Thus, κ-PI-TRPO can be turn to κ-VI-TRPO by eliminating the policy evaluation stage, and simply copy φ ←θ, meaning, V φ ← Vθ = T κ V φ. In this section, we empirically analyze the performance of the κ-PI-TRPO and κ-VI-TRPO algorithms on the MuJoCo domains: Walker2d-v2, Ant-v2, HalfCheetah-v2, HumanoidStandup-v2, and Swimmer-v2, . As in Section 5.1.1, we start by performing an ablation test on the parameter C F A = {0.001, 0.05, 0.2} on the Walker domain. We set the total number of iterations to 2000, with each iteration consisting 1000 samples. Thus, the total number of samples is T 2 × 10 6. This is the number of samples after which our TRPO-based algorithms approximately converge. For each value of C F A, we test κ-PI-TRPO and κ-VI-TRPO for several κ values. In both algorithms, the best performance was obtained with C F A = 0.2, thus, we set C F A = 0.2 in our experiments with other MuJoCo domains. 1: Initialize V -networks V θ and V φ, policy network π ψ, and target network V φ; 2: for i = 0,..., N (κ) − 1 do 3: for t = 1,..., T (κ) do 4: Simulate the current policy π ψ for M steps and calculate the following two returns for all steps j: 5: Rj(κ, V φ) = M t=j (γκ) t−j rt(κ, V φ) and ρj = M t=j γ t−j rt; Update θ by minimizing the batch loss function: # Policy Improvement 8: Update ψ using TRPO by the batch {(Rj(κ, V φ), V θ (sj))} N j=1; 9: # Policy Evaluation 10: Update φ by minimizing the batch loss function: end for 12: Copy φ to φ (φ ← φ); 13: end for Table 2 shows the final training performance of κ-PI-TRPO and κ-VI-TRPO on the MuJoCo domains with C F A = 0.2. The of Figure 2 and Table 2, as well as those in Appendix B.3, exhibit that both κ-PI-TRPO and κ-VI-TRPO yield better performance than TRPO (κ = 1). Furthermore, they show that the algorithms with C F A = 0.2 perform better than with N (κ) = T. However, the improvement is less significant relative to the DQN-based in Section 5.1.1. There is an intimate relation between κ-PI and the GAE algorithm which we elaborate on in this section. 
In GAE the policy is updated by the gradient: which can be interpreted as a gradient step in a γλ discounted MDP with rewards δ(V), which we refer here as M δ(V) γλ. As noted in Efroni et al. (2018a), Section 6, the optimal policy of the MDP M δ(V) γλ is the optimal policy of M γκ (V) with κ = λ, i.e., the κ-greedy policy w.r.t. V: thus, the Domain Alg. is the κ-greedy policy w.r.t. V. GAE, instead of solving the κ-greedy policy while keeping V fixed, changes the policy and updates V by the return concurrently. Thus, this approach is conceptually similar to κ-PI-TRPO with N (κ) = T. There, the value and policy are concurrently updated as well, without clear separation between the update of the policy and the value. In Figure 2 and Table 2 the performance of GAE is compared to the one of κ-PI-TRPO and κ-VI-TRPO. The performance of the latter two is slightly better than the one of GAE. Remark 2 (Implementation of GAE) We used the OpenAI baseline implementation of GAE with a small modification. In the baseline code, the value network is updated w.r.t. to the target t (γλ) t r t, whereas in the authors used the target t γ t r t (see , Eq.28). We chose the latter form in our implementation to be in accord with. To supply with a more complete view on our experiments, we tested the performance of the "vanilla" DQN and TRPO when trained with different γ values than the previously used one (γ = 0.99). As evident in Figure 3, only for the Ant domain this approach ed in improved performance when for TRPO trained with γ = 0.68. It is interesting to observe that for the Ant domain the performance of κ-PI-TRPO and especially of κ-VI-TRPO (Table 2) significantly surpassed the one of TRPO trained with γ = 0.68. The performance of DQN and TRPO on the Breakout, SpaceInvaders and Walker domains decreased or remained unchanged in the tested γ values. Thus, on these domains, changing the discount factor does not improve the DQN and TRPO algorithms, as using κ-PI or κ-VI with smaller κ value do. It is interesting to observe that the performance on the Mujoco domains for small γ, e.g., γ = 0.68, achieved good performance, whereas for the Atari domains the performance degraded with lowering γ. This fits the nature of these domains: in the Mujoco domains the decision problem inherently has much shorter horizon than in the Atari domains. Furthermore, it is important to stress that γ and κ are two different parameters an algorithm designer may use. For example, one can perform a scan of γ value, fix γ to the one with optimal performance, and then test the performance of different κ values. In this work we formulated and empirically tested simple generalizations of DQN and TRPO derived by the theory of multi-step DP and, specifically, of κ-PI and κ-VI algorithms. The empirical investigation reveals several points worth emphasizing. 1. κ-PI is better than κ-VI for the Atari domains.. In most of the experiments on the Atari domains κ-PI-DQN has better performance than κ-VI-DQN. This might be expected as the former uses extra information not used by the latter: κ-PI estimates the value of current policy whereas κ-VI ignores this information. 2. For the Gym domains κ-VI performs slightly better than κ-PI. For the Gym domains κ-VI-TRPO performs slightly better than κ-PI-TRPO. We conjecture that the reason for the discrepancy relatively to the Atari domains lies in the inherent structure of the tasks of the Gym domains: they are inherently short horizon decision problems. 
For this reason, the problems can be solved with smaller discount factor (as empirically demonstrated in Section 5.3) and information on the policy's value is not needed. 3. Non trivial κ value improves the performance. In the vast majority of our experiments both κ-PI and κ-VI improves over the performance of their vanilla counterparts (i.e., κ = 1), except for the Swimmer and BeamRider domains from Mujoco and Atari suites. Importantly, the performance of the algorithms was shown to be'smooth' in the parameter κ. This suggests careful hyperparameter tuning of κ is not of great necessity. 4. Using the'naive' choice of N (κ) = T deteriorates the performance. Choosing the number of iteration by Eq. 8 improves the performance on the tested domains. An interesting future work would be to test model-free algorithms which use other variants of greedy policies (; ; a; ;). Furthermore, and although in this work we focused on model-free DRL, it is arguably more natural to use multi-step DP in model-based DRL (e.g., ; ; ;). Taking this approach, the multi-step greedy policy would be solved with an approximate model. We conjecture that in this case one may set κ -or more generally, the planning horizon -as a function of the approximate model's'quality': as the approximate model gets closer to the real model larger κ can be used. We leave investigating such relation in theory and practice to future work. Lastly, an important next step in continuation to our work is to study algorithms with an adaptive κ parameter. This, we believe, would greatly improve the ing methods, and possibly be done by studying the relation between the different approximation errors (i.e., errors in gradient and value estimation,), the performance and the κ value that should be used by the algorithm. A DQN IMPLEMENTATION OF κ-PI AND κ-VI In this section, we report the detailed pseudo-codes of the κ-PI-DQN and κ-VI-DQN algorithms, described in Section 5.1, side-by-side. Algorithm 5 κ-PI-DQN 1: Initialize replay buffer D, and Q-networks Q θ and Q φ with random weights θ and φ; 2: Initialize target networks Q θ and Q φ with weights θ ← θ and φ ← φ; # Policy Improvement 5: Select a t as an -greedy action w.r.t. Q θ (s t, a); Execute a t, observe r t and s t+1, and store the tuple (s t, a t, r t, s t+1) in D; 8: Sample a random mini-batch {(s j, a j, r j, s j+1)} N j=1 from D; Update θ by minimizing the following loss function: 10: 11: Copy θ to θ occasionally (θ ← θ); Set π i (s) ∈ arg max a Q θ (s, a); 16: Update φ by minimizing the following loss function: 19: Copy φ to φ occasionally (φ ← φ); end for 22: end for Algorithm 6 κ-VI-DQN 1: Initialize replay buffer D, and Q-networks Q θ and Q φ with random weights θ and φ; 2: Initialize target network Q θ with weights θ ← θ; # Evaluate T κ V φ and the κ-greedy policy w.r.t. V φ 5: Select a t as an -greedy action w.r.t. Q θ (s t, a); Execute a t, observe r t and s t+1, and store the tuple (s t, a t, r t, s t+1) in D; 8: Update θ by minimizing the following loss function: 10: Copy θ to θ occasionally (θ ← θ); In this section, we report additional of the application of κ-PI-DQN and κ-VI-DQN on the Atari domains. A summary of these has been reported in Table 1 in the main paper. B TRPO IMPLEMENTATION OF κ-PI AND κ-VI In this section, we report the detailed pseudo-codes of the κ-PI-TRPO and κ-VI-TRPO algorithms, described in Section 5.2, side-by-side. 
Algorithm 7 κ-PI-TRPO 1: Initialize V -networks V θ and V φ, and policy network π ψ with random weights θ, φ, and ψ 2: Initialize target network V φ with weights φ ← φ 3: for i = 0,..., N (κ) − 1 do 4: Simulate the current policy π ψ for M time-steps; 6: end for Sample a random mini-batch {(s j, a j, r j, s j+1)} N j=1 from the simulated M time-steps; 10: Update θ by minimizing the loss function: 11: # Policy Improvement 12: Sample a random mini-batch {(s j, a j, r j, s j+1)} N j=1 from the simulated M time-steps; 13: Update ψ using TRPO with advantage function computed by Update φ by minimizing the loss function: end for # Evaluate T κ V φ and the κ-greedy policy w.r.t. V φ 5: Simulate the current policy π ψ for M time-steps; 7: end for 10: Sample a random mini-batch {(s j, a j, r j, s j+1)} N j=1 from the simulated M time-steps 11: Update θ by minimizing the loss function: Sample a random mini-batch {(s j, a j, r j, s j+1)} N j=1 from the simulated M time-steps 13: Update ψ using TRPO with advantage function computed by In this section, we report additional of the application of κ-PI-TRPO and κ-VI-TRPO on the MuJoCo domains. A summary of these has been reported in Table 2 in the main paper. C REBUTTAL In this section, we analyze the role κ plays in the proposed methods by reporting on the simple CartPole environment for κ-PI TRPO. For all experiments, we use a single layered value function network and a linear policy network. Each hyperparameter configuration is run for 10 different random seeds and plots are shown for a 50% confidence interval. Note that since the CartPole is extremely simple, we do not see a clear difference between the κ values that are closer to 1.0 (see Figure 16). Below, we observe the performance when the discount factor γ is lowered (see Figure 17). Since, there is a ceiling of R = 200 on the maximum achievable return, it makes intuitive sense that observing the κ effect for a lower gamma value such as γ = 0.36 will allow us to see a clearer trade-off between κ values. To this end, we also plot the for when the discount factor is set to 0.36. The intuitive idea behind κ-PI, and κ-VI similarly, is that at every time step, we wish to solve a simpler sub-problem, i.e. the γκ discounted MDP. Although, we are solving an easier/shorter horizon problem, in doing so, the bias induced is taken care of by the modified reward in this new MDP. Therefore, it becomes interesting to look at how κ affects its two contributions, one being the discounting, the other being the weighting of the shaped reward (see eq. 11). Below we look at what happens when each of these terms are made κ independent, one at a time, while varying κ for the other term. To make this clear, we introduce different notations for both such κ instances, one being κ d (responsible for discounting) and the other being κ s (responsible for shaping). We see something interesting here. For the CartPole domain, the shaping term does not seem to have any effect on the performance (Figure 18(b) ), while the discounting term does. This implies that the problem does not suffer from any bias issues. Thus, the correction provided by the shaped term is not needed. However, this is not true for other more complex problems. This is also why we see a similar when lowering γ in this case, but not for more complex problems. In this section, we report for the Mountain Car environment. Contrary to the CartPole , where lowering the κ values degraded the performance, we observe that performance deteriorates when κ is increased. 
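Returning to the κ_d / κ_s decomposition used in the CartPole ablation above, the two roles of κ can be written down directly: κ_s only enters the value-shaping term of the reward, while κ_d only enters the discount of the surrogate problem. The sketch below assumes the same shaped-reward form as in the earlier sections and is illustrative rather than the exact ablation code.

```python
def decoupled_kappa_return(rewards, next_values, gamma, kappa_d, kappa_s):
    """Surrogate return of a trajectory when the two roles of kappa are decoupled.

    kappa_s weights the shaping term of the reward; kappa_d shortens the horizon via
    the surrogate discount gamma * kappa_d. Setting kappa_d = kappa_s recovers the
    usual kappa-greedy surrogate return.
    """
    g = 0.0
    for r, v in zip(reversed(list(rewards)), reversed(list(next_values))):
        shaped = r + (1.0 - kappa_s) * gamma * v   # kappa_s: shaping only
        g = shaped + gamma * kappa_d * g           # kappa_d: discounting only
    return g
```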
We also plot a bar graph, with the cumulative score on the y axis and different κ values on the x axis. We use the continuous Mountain Car domain here, which has been shown to create exploration issues. Therefore, without receiving any positive reward, using a κ value of 0 in the case of discounting (solving the 1 step problem has the least negative reward) and of 1 in the case of shaping results in the best performance. In this section, we move to the Pendulum environment, a domain where we see a non-trivial best κ value. This is due to there not being a ceiling on the maximum possible return, unlike in CartPole where such a ceiling exists. Choosing the best γ value and running κ-PI on it results in an improved performance for all κ values (see Figure 23). To summarize, we believe that in inherently short horizon domains (dense, per time step reward), such as the Mujoco continuous control tasks, the discounting produced by κ-PI and VI is shown to cause major improvement in performance over the TRPO baselines. This is reinforced by the results of the experiments with a lowered discount factor. On the other hand, in inherently long horizon domains (sparse, end of trajectory reward), such as in Atari, the shaping produced by κ-PI and VI is supposed to cause the major improvement over the DQN baselines. Again, this is supported by the fact that the experiments with a lowered discount factor actually result in a deterioration in performance. Figure 25: Cumulative training performance of κ-PI-TRPO on HalfCheetah (Left, corresponds to Figure 12) and Ant (Right, corresponds to Figure 11) environments. | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | r1l7E1HFPH | Use model free algorithms like DQN/TRPO to solve short horizon problems (model free) iteratively in a Policy/Value Iteration fashion. |
The adversarial training procedure proposed by is one of the most effective methods to defend against adversarial examples in deep neural net- works (DNNs). In our paper, we shed some lights on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks. Consequentially, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” (low density regions) of the empirical distri- bution of training data but is still on the ground-truth data manifold. For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values. Most importantly, for large datasets with high dimensional and complex data manifold (CIFAR, ImageNet, etc), the existence of blind-spots in adversarial training makes defending on any valid test examples difficult due to the curse of dimensionality and the scarcity of training data. Additionally, we find that blind-spots also exist on provable defenses including and because these trainable robustness certificates can only be practically optimized on a limited set of training data. Since the discovery of adversarial examples in deep neural networks (DNNs) BID28, adversarial training under the robustness optimization framework BID24 has become one of the most effective methods to defend against adversarial examples. A recent study by BID1 showed that adversarial training does not rely on obfuscated gradients and delivers promising for defending adversarial examples on small datasets. Adversarial training approximately solves the following min-max optimization problem: where X is the set of training data, L is the loss function, θ is the parameter of the network, and S is usually a norm constrained p ball centered at 0. propose to use projected gradient descent (PGD) to approximately solve the maximization problem within S = {δ | δ ∞ ≤}, where = 0.3 for MNIST dataset on a 0-1 pixel scale, and = 8 for CIFAR-10 dataset on a 0-255 pixel scale. This approach achieves impressive defending on the MNIST test set: so far the best available white-box attacks by BID37 can only decrease the test accuracy from approximately 98% to 88% 1. However, on CIFAR-10 dataset, a simple 20-step PGD can decrease the test accuracy from 87% to less than 50% 2.The effectiveness of adversarial training is measured by the robustness on the test set. However, the adversarial training process itself is done on the training set. Suppose we can optimize perfectly, then certified robustness may be obtained on those training data points. However, if the empirical distribution of training dataset differs from the true data distribution, a test point drawn from the true data distribution might lie in a low probability region in the empirical distribution of training dataset and is not "covered" by the adversarial training procedure. For datasets that are relatively simple and have low intrinsic dimensions (MNIST, Fashion MNIST, etc), we can obtain enough training examples to make sure adversarial training covers most part of the data distribution. 
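The inner maximization of the min-max problem in Eq. 1 is solved with projected gradient descent (PGD) inside an L-infinity ball, as described above. The following is a generic sketch of that inner loop, not the exact implementation of the defenses discussed here; `loss_grad`, the step size `alpha`, and the random-start choice are illustrative assumptions.

```python
import numpy as np

def pgd_linf_attack(loss_grad, x, epsilon, alpha, num_steps, rng=None):
    """Approximate max over the L-infinity ball of radius epsilon around x.

    loss_grad(x_adv) is assumed to return the gradient of the training loss with
    respect to the input; pixels are assumed to live in [0, 1].
    """
    rng = rng or np.random.default_rng(0)
    # random start inside the ball, as commonly used in PGD-based adversarial training
    x_adv = np.clip(x + rng.uniform(-epsilon, epsilon, size=x.shape), 0.0, 1.0)
    for _ in range(num_steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))   # ascent step on the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)     # project back into the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                     # stay in the valid pixel range
    return x_adv
```

Adversarial training then minimizes the outer loss on the perturbed inputs returned by this inner loop.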
For high dimensional datasets (CIFAR, ImageNet), adversarial training have been shown difficult (; BID29 and only limited success was obtained. A recent attack proposed by shows that adversarial training can be defeated when the input image is produced by a generative model (for example, a generative adversarial network) rather than selected directly from the test examples. The generated images are well recognized by humans and thus valid images in the ground-truth data distribution. In our interpretation, this attack effective finds the "blind-spots" in the input space that the training data do not well cover. For higher dimensional datasets, we hypothesize that many test images already fall into these blindspots of training data and thus adversarial training only obtains a moderate level of robustness. It is interesting to see that for those test images that adversarial training fails to defend, if their distances (in some metrics) to the training dataset are indeed larger. In our paper, we try to explain the success of robust optimization based adversarial training and show the limitations of this approach when the test points are slightly off the empirical distribution of training data. Our main contributions are:• We show that on the original set of test images, the effectiveness of adversarial training is highly correlated with the distance (in some distance metrics) from the test image to the manifold of training images. For MNIST and Fashion MNIST datasets, most test images are close to the training data and very good robustness is observed on these points. For CIFAR, there is a clear trend that the adversarially trained network gradually loses its robustness property when the test images are further away from training data.• We identify a new class of attacks, "blind-spot attacks", where the input image resides in a "blind-spot" of the empirical distribution of training data (far enough from any training examples in some embedding space) but is still in the ground-truth data distribution (well recognized by humans and correctly classified by the model). Adversarial training cannot provide good robustness on these blind-spots and their adversarial examples have small distortions.• We show that blind-spots can be easily found on a few strong defense models including, BID32 and BID24. We propose a few simple transformations (slightly changing contrast and ), that do not noticeably affect the accuracy of adversarially trained MNIST and Fashion MNIST models, but these models become vulnerable to adversarial attacks on these sets of transformed input images. These transformations effectively move the test images slightly out of the manifold of training images, which does not affect generalization but poses a challenge for robust learning. Our imply that current adversarial training procedures cannot scale to datasets with a large (intrinsic) dimension, where any practical amount of training data cannot cover all the blind-spots. This explains the limited success for applying adversarial training on ImageNet dataset, where many test images can be sufficiently far away from the empirical distribution of training dataset. Adversarial examples in DNNs have brought great threats to the deep learning-based AI applications such as autonomous driving and face recognition. Therefore, defending against adversarial examples is an urgent task before we can safely deploy deep learning models to a wider range of applications. 
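The simple scale-and-shift transformation mentioned in the contributions above can be sketched in a few lines. The particular values of the scaling and shift below are illustrative assumptions, not the exact settings used in the experiments; the point is only that the transformed images remain valid, human-recognizable inputs while moving slightly away from the empirical training distribution.

```python
import numpy as np

def scale_shift(images: np.ndarray, alpha: float = 0.7, beta: float = 0.15) -> np.ndarray:
    """Blind-spot style transformation: scale pixel values and shift the background.

    alpha < 1 reduces contrast and beta adds a constant offset; pixels are assumed
    to be in [0, 1].
    """
    return np.clip(alpha * images + beta, 0.0, 1.0)

# Hypothetical usage on a batch of MNIST-like images:
batch = np.random.rand(8, 28, 28, 1)
transformed = scale_shift(batch)
```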
Following the emergence of adversarial examples, various defense methods have been proposed, such as defensive distillation by BID18 and feature squeezing by BID35. Some of these defense methods have been proven vulnerable or ineffective under strong attack methods such as C&W in BID5. Another category of recent defense methods is based on gradient masking or obfuscated gradients (BID4; BID10; BID25; BID20), but these methods are also successfully evaded by the stronger BPDA attack (BID1). Randomization in DNNs (BID34; BID33) is also used to reduce the success rate of adversarial attacks; however, it usually incurs additional computational costs and still cannot fully defend against an adaptive attacker (BID1; BID0). An effective defense method is adversarial training, which trains the model with adversarial examples freshly generated during the entire training process. First introduced by Goodfellow et al., adversarial training demonstrates state-of-the-art defending performance. formulated the adversarial training procedure into a min-max robust optimization problem and achieved state-of-the-art defending performance on MNIST and CIFAR datasets. Several attacks have been proposed to attack the model released by. On the MNIST test set, so far the best attack by BID37 can only reduce the test accuracy from 98% to 88%. Analysis by BID1 shows that this adversarial training framework does not rely on obfuscated gradients and truly increases model robustness; gradient based attacks with random starts can only achieve less than 10% success rate with given distortion constraints and are unable to penetrate this defense. On the other hand, attacking adversarial training using generative models has also been investigated; both BID33 and propose to use GANs to produce adversarial examples in black-box and white-box settings, respectively. Finally, a few certified defense methods (BID19; BID24; BID32) were proposed, which are able to provably increase model robustness. Besides adversarial training, in our paper we also consider several certified defenses which can achieve relatively good performance (i.e., test accuracy on natural images does not drop significantly and training is computationally feasible), and can be applied to medium-sized networks with multiple layers. Notably, BID24 analyzes adversarial training using distributional robust optimization techniques. BID32 and BID32 proposed a robustness certificate based on the dual of a convex relaxation for ReLU networks, and used it for training to provably increase robustness. During training, certified defense methods can provably guarantee that the model is robust on training examples; however, on unseen test examples a non-vacuous robustness generalization guarantee is hard to obtain. Along with the attack-defense arms race, some insightful findings have been discovered to understand the nature of adversarial examples, both theoretically and experimentally. show that even for a simple data distribution of two class-conditional Gaussians, robust generalization requires a significantly larger number of samples than standard generalization. BID6 extend the well-known PAC learning theory to the case with adversaries, and derive the adversarial VC-dimension which can be either larger or smaller than the standard VC-dimension. BID3 conjecture that a robust classifier can be computationally intractable to find, and give a proof of the computational hardness under the statistical query (SQ) model. 
Recently, BID2 prove a computational hardness under a standard cryptographic assumption. Additionally, finding the safe area approximately is computationally hard according to and. BID16 explain the prevalence of adversarial examples by making a connection to the "concentration of measure" phenomenon in metric measure spaces. BID27 conduct large scale experiments on ImageNet and find a negative correlation between robustness and accuracy. BID30 discover that data examples consist of robust and non-robust features and adversarial training tends to find robust features that have strongly-correlations with the labels. Both adversarial training and certified defenses significantly improve robustness on training data, but it is still unknown if the trained model has good robust generalization property. Typically, we evaluate the robustness of a model by computing an upper bound of error on the test set; specifically, given a norm bounded distortion, we verify if each image in test set has a robustness certificate BID8 BID23. There might exist test images that are still within the capability of standard generalization (i.e., correctly classified by DNNs with high confidence, and well recognized by humans), but behaves badly in robust generalization (i.e., adversarial examples can be easily found with small distortions). Our paper complements those existing findings by showing the strong correlation between the effectiveness of adversarial defenses (both adversarial training and some certified defenses) and the distance between training data and test points. Additionally, we show that a tiny shift in input distribution (which may or may not be detectable in embedding space) can easily destroy the robustness property of an robust model. To verify the correlation between the effectiveness of adversarial training and how close a test point is to the manifold of training dataset, we need to propose a reasonable distance metric between a test example and a set of training examples. However, defining a meaningful distance metric for high dimensional image data is a challenging problem. Naively using an Euclidean distance metric in the input space of images works poorly as it does not reflect the true distance between the images on their ground-truth manifold. One strategy is to use (kernel-)PCA, t-SNE BID14, or UMAP BID17 to reduce the dimension of training data to a low dimensional space, and then define distance in that space. These methods are sufficient for small and simple datasets like MNIST, but for more general and complicated dataset like CIFAR, extracting a meaningful low-dimensional manifold directly on the input space can be really challenging. On the other hand, using a DNN to extract features of input images and measuring the distance in the deep feature embedding space has demonstrated better performance in many applications BID13 BID22, since DNN models can capture the manifold of image data much better than simple methods such as PCA or t-SNE. Although we can form an empirical distribution using kernel density estimation (KDE) on the deep feature embedding space and then obtain probability densities for test points, our experience showed that KDE work poorly in this case because the features extracted by DNNs are still high dimensional (hundreds or thousands dimensions).Taking the above considerations into account, we propose a simple and intuitive distance metric using deep feature embeddings and k-nearest neighbour. 
Given a feature extraction neural network h(x), a set of n training data points X_train = {x_train^1, ..., x_train^n}, and a test example x_test^j, we define the distance between the test example and the training set as the average embedding-space distance to its k nearest training points, D(x_test^j, X_train) = (1/k) Σ_{i=1..k} ||h(x_test^j) − h(x_train^{π(i)})||_p, where π is an ascending ordering of the training data based on the ℓp distance between x_test^j and x_train^i in the deep embedding space, i.e., ||h(x_test^j) − h(x_train^{π(1)})||_p ≤ ... ≤ ||h(x_test^j) − h(x_train^{π(n)})||_p. In other words, we average the embedding-space distances of the k nearest neighbors of x_test^j in the training dataset. This simple metric is non-parametric and we found that the results are not sensitive to the selection of k; also, for naturally trained and adversarially trained feature extractors, the distance metrics obtained by different feature extractors reveal very similar correlations with the effectiveness of adversarial training. We are also interested in investigating the "distance" between the training dataset and the test dataset to gain some insights on how adversarial training performs on the entire test set. Unlike the setting in Section 3.1, this requires computing a divergence between two empirical data distributions. Given n training data points X_train = {x_train^i} and m test data points X_test = {x_test^j}, we first apply a neural feature extractor h to them, which is the same as in Section 3.1. Then, we apply a non-linear projection (in our case, we use t-SNE) to project both h(x_train^i) and h(x_test^j) to a low dimensional space, obtaining projected points x̄_train^i and x̄_test^j. We then compute the K-L divergence between the kernel density estimates (KDE) fitted to the projected training and test points over a region V, where p(x̄_train; H) and p(x̄_test; H) are the KDE density functions. K is the kernel function (specifically, we use the Gaussian kernel) and H is the bandwidth parameter automatically selected by Scott's rule BID22. V is chosen as a box bounding all training and test data points. For a multi-class dataset, we compute the aforementioned KDE and K-L divergence for each class separately. Inspired by our findings of the negative correlation between the effectiveness of adversarial training and the distance between a test image and the training dataset, we identify a new class of adversarial attacks called "blind-spot attacks", where we find input images that are "far enough" from any existing training examples such that: • They are still drawn from the ground-truth data distribution (i.e., well recognized by humans) and classified correctly by the model (within the generalization capability of the model); • Adversarial training cannot provide good robustness properties on these images, and we can easily find their adversarial examples with small distortions using a simple gradient based attack. Importantly, blind-spot images are not adversarial images themselves. However, after performing adversarial attacks, we can find their adversarial examples with small distortions, despite adversarial training. In other words, we exploit the weakness in a model's robust generalization capability. We find that these blind-spots are prevalent and can be easily found without resorting to complex generative models like in. For the MNIST dataset, on which, BID32 and BID24 demonstrate the strongest defenses so far, we propose a simple transformation to find the blind-spots in these models. We simply scale and shift each pixel value. Suppose the input image x ∈ [−0.5, 0.5]^d; we scale and shift each test data example x element-wise to form a new example x′ = αx + β, where α is a constant close to 1 and β is a constant close to 0. We make sure that the selection of α and β results in an x′ that is still in the valid input range [−0.5, 0.5]^d. This transformation effectively adjusts the contrast of the image, and/or adds a gray background to the image. 
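As a small, hedged illustration of the scale-and-shift transformation just described: the sketch below assumes images normalized to [−0.5, 0.5], uses α = 0.7 and β = 0 only as example values (the paper's actual grids appear later), and uses random pixels as a placeholder image; the follow-up C&W attack is not shown.

    import numpy as np

    def blind_spot_transform(x, alpha, beta, lo=-0.5, hi=0.5):
        """Form x' = alpha * x + beta from an image x in [lo, hi]^d."""
        x_new = alpha * x + beta
        assert x_new.min() >= lo and x_new.max() <= hi, "alpha/beta must keep x' in the valid range"
        return x_new

    # e.g. a slightly darker MNIST-like image (placeholder pixel data)
    x = np.random.default_rng(0).uniform(-0.5, 0.5, size=(28, 28))
    x_prime = blind_spot_transform(x, alpha=0.7, beta=0.0)
    # x_prime would then be attacked with a C&W l_inf attack to obtain x_adv (not shown)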
We then perform Carlini & Wagner's attacks on these transformed images x to find their adversarial examples x adv. It is important that the blind-spot images x are still undoubtedly valid images; for example, a digit that is slightly darker than the one in test set is still considered as a valid digit and can be well recognized by humans. Also, we found that with appropriate α and β the accuracy of MNIST and Fashion-MNIST models barely decreases; the model has enough generalization capability for this set of slightly transformed images, yet their adversarial examples can be easily found. Although the blind-spot attack is beyond the threat model considered in adversarial training (e.g. ∞ norm constrained perturbations), our argument is that adversarial training (and some other defense methods with certifications only on training examples such as BID32) are unlikely to scale well to datasets that lie in a high dimensional manifold, as the limited training data only guarantees robustness near these training examples. The blind-spots are almost inevitable in high dimensional case. For example, in CIFAR-10, about 50% of test images are already in blind-spots and their adversarial examples with small distortions can be trivially found despite adversarial training. Using data augmentation may eliminate some blind-spots, however for high dimensional data it is impossible to enumerate all possible inputs due to the curse of dimensionality. In this section we present our experimental on adversarially trained models by. Results on certified defense models by BID32; BID32 and BID24 are very similar and are demonstrated in Section 6.4 in the Appendix. We conduct experiments on adversarially trained models by on four datasets: MNIST, Fashion MNIST, and CIFAR-10. For MNIST, we use the "secret" model release for the MNIST attack challenge 3. For CIFAR-10, we use the public "adversarially trained" model 4. For Fashion MNIST, we train our own model with the same model structure and parameters as the robust MNIST model, except that the iterative adversary is allowed to perturb each pixel by at most = 0.1 as a larger will significantly reduce model accuracy. We use our presented simple blind-spot attack in Section 3.3 to find blind-spot images, and use Carlini & Wagner's (C&W's) ∞ attack BID5 ) to find their adversarial examples. We found that C&W's attacks generally find adversarial examples with smaller perturbations than projected gradient descent (PGD). To avoid gradient masking, we initial our attacks using two schemes: from the original image plus a random Gaussian noise with a standard deviation of 0.2; from a blank gray image where all pixels are initialized as 0. A successful attack is defined as finding an perturbed example that changes the model's classification and the ∞ distortion is less than a given used for robust training. For MNIST, = 0.3; for Fashion-MNIST, = 0.1; and for CIFAR, = 8/255. All input images are normalized to [−0.5, 0.5]. In this set of experiments, we build a connection between attack success rate on adversarially trained models and the distance between a test example and the whole training set. We use the metric defined in Section 3.1 to measure this distance. For MNIST and Fashion-MNIST, the outputs of the first fully connected layer (after all convolutional layers) are used as the neural feature extractor h(x); for CIFAR, we use the outputs of the last average pooling layer. 
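For concreteness, here is a minimal sketch of the average k-nearest-neighbour embedding distance from Section 3.1, with random vectors standing in for the extracted features h(x); p = 2 and k = 5 mirror the setting used in these experiments.

    import numpy as np

    def knn_embedding_distance(test_emb, train_embs, k=5, p=2):
        """Average l_p distance from one test embedding to its k nearest training embeddings."""
        dists = np.linalg.norm(train_embs - test_emb, ord=p, axis=1)  # distance to every training point
        return np.sort(dists)[:k].mean()                              # mean over the k nearest neighbours

    # random placeholders standing in for DNN features h(x)
    rng = np.random.default_rng(0)
    train_embs = rng.normal(size=(1000, 128))
    test_emb = rng.normal(size=128)
    print(knn_embedding_distance(test_emb, train_embs, k=5, p=2))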
We consider both naturally and adversarially trained networks as the neural feature extractor, with p = 2 and k = 5. The are shown in FIG2, 2 and 3. For each test set, after obtaining the distance of each test point, we bin the test data points based on their distances to the training set and show them in the histogram at the bottom half of each figure (red). The top half of each figure (blue) represents the attack success rates for the test images in the corresponding bins. Some bars on the right are missing because there are too few points in the corresponding bins. We only attack correctly classified images and only calculate success rate on those images. Note that we should not compare the distances shown between the left and right columns of Figures 1, 2 and 3 because they are obtained using different embeddings, however the overall trends are very similar. The distribution of the average 2 (embedding space) distance between the images in test set and the top-5 nearest images in training set. As we can observe in all three figures, most successful attacks in test sets for adversarially trained networks concentrate on the right hand side of the distance distribution, and the success rates tend to grow when the distance is increasing. The trend is independent of the feature extractor being used (naturally or adversarially trained). The strong correlation between attack success rates and the distance from a test point to the training dataset supports our hypothesis that adversarial training tends to fail on test points that are far enough from the training data distribution. To quantify the overall distance between the training and the test set, we calculate the K-L divergence between the KDE distributions of training set and test set for each class according to Eq.. Then, for each dataset we take the average K-L divergence across all classes, as shown in TAB1. We use both adversarially trained networks and naturally trained networks as our feature extractors h(x). Additionally, we also calculate the average normalized distance by calculating the 2 distance between each test point and the training set as in Section 4.2, and taking the average over all test points. To compare between different datasets, we normalize each element of the feature representation h(x) to mean 0 and variance 1. We average this distance among all test points and divide it by √ d t to normalize the dimension, where d t is the dimension of the feature representation h(·).Clearly, Fashion-MNIST is the dataset with the strongest defense as measured by the attack success rates on test set, and its K-L divergence is also the smallest. For CIFAR, the divergence between training and test sets is significantly larger, and adversarial training only has limited success. The hardness of training a robust model for MNIST is in between Fashion-MNIST and CIFAR. Another important observation is that the effectiveness of adversarial training does not depend on the accuracy; for Fashion-MNIST, classification is harder as the data is more complicated than MNIST, but training a robust Fashion-MNIST model is easier as the data distribution is more concentrated and adversarial training has less "blind-spots". In this section we focus on applying the proposed blind-spot attack to MNIST and Fashion MNIST. 
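A sketch of the per-class train/test divergence computation discussed above, assuming 2-D t-SNE projections of the deep features are already available (random points are used below as stand-ins). It uses SciPy's Gaussian KDE, whose bandwidth defaults to Scott's rule, and approximates the integral on a grid over a bounding box; the exact form and direction of the divergence in the paper's equation may differ.

    import numpy as np
    from scipy.stats import gaussian_kde

    def kl_between_kdes(train_2d, test_2d, grid_size=100):
        """Approximate KL divergence between Gaussian KDEs fit to projected train/test embeddings."""
        p_train = gaussian_kde(train_2d.T)                       # Scott's rule bandwidth by default
        p_test = gaussian_kde(test_2d.T)
        both = np.vstack([train_2d, test_2d])                    # bounding box over all points
        xs = np.linspace(both[:, 0].min(), both[:, 0].max(), grid_size)
        ys = np.linspace(both[:, 1].min(), both[:, 1].max(), grid_size)
        xx, yy = np.meshgrid(xs, ys)
        pts = np.vstack([xx.ravel(), yy.ravel()])
        pt, pq = p_test(pts), p_train(pts)
        cell = (xs[1] - xs[0]) * (ys[1] - ys[0])                 # grid cell area
        return float(np.sum(pt * np.log((pt + 1e-12) / (pq + 1e-12))) * cell)  # KL(test || train)

    rng = np.random.default_rng(0)
    kl = kl_between_kdes(rng.normal(size=(500, 2)), rng.normal(0.3, 1.0, size=(300, 2)))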
As mentioned in Section 3.3, for an image x from the test set, the blind-spot image x = αx + β obtained by scaling and shifting is considered as a new natural image, and we use the C&W ∞ attack to craft an adversarial image x adv for x. The attack distortion is calculated as the ∞ distance between x and x adv. For MNIST, = 0.3 so we set the scaling factor to α = {1.0, 0.9, 0.8, 0.7}. For Fashion-MNIST, = 0.1 so we set the scaling factor to α = {1.0, 0.95, 0.9}. We set β to either 0 or a small constant. The case α = 1.0, β = 0.0 represents the original test set images. We report the model's accuracy and attack success rates for each choice of α and β in Table 2 and Table 3. Because we scale the image by a factor of α, we also set a stricter criterion of success -the ∞ perturbation must be less than α to be counted as a successful attack. For MNIST, = 0.3 and for Fashion-MNIST, = 0.1. We report both success criterion, and α in Tables 2 and 3. (a) α = 1.0 β = 0.0 dist= 0.218 DISPLAYFORM0 Figure 4: Blind-spot attacks on Fashion-MNIST and MNIST data with scaling and shifting on adversarially trained models. First row contains input images after scaling and shifting and the second row contains the found adversarial examples. "dist" represents the ∞ distortion of adversarial perturbations. The first rows of figures (a) and (d) represent the original test set images (α = 1.0, β = 0.0); first rows of figures (b), (c), (e), and (f) illustrate the images after transformation. Adversarial examples for these transformed images have small distortions. We first observe that for all pairs of α and β the transformation does not affect the models' test accuracy at all. The adversarially trained model classifies these slightly scaled and shifted images very well, with test accuracy equivalent to the original test set. Visual comparisons in Figure 4 show that when α is close to 1 and β is close to 0, it is hard to distinguish the transformed images from the original images. On the other hand, according to Tables 2 and 3, the attack success rates for those transformed test images are significantly higher than the original test images, for both the original criterion and the stricter criterion α. In Figure 4, we can see that the ∞ adversarial perturbation Table 2: Attack success rate (suc. rate) and test accuracy (acc) of scaled and shifted MNIST. An attack is considered successful if its ∞ distortion is less than thresholds (th.) 0.3 or 0.3α. Table 3: Attack success rate (suc. rate) and test accuracy (acc) of scaled and shifted Fashion-MNIST. An attack is considered as successful if its ∞ distortion is less than threshold (th.) 0.1 or 0.1α. required is much smaller than the original image after the transformation. Thus, the proposed scale and shift transformations indeed move test images into blind-spots. More figures are in Appendix. DISPLAYFORM1 DISPLAYFORM2 One might think that we can generally detect blind-spot attacks by observing their distances to the training dataset, using a metric similar to Eq.. Thus, we plot histograms for the distances between tests points and training dataset, for both original test images and those slightly transformed ones in FIG4. We set α = 0.7, β = 0 for MNIST and α = 0.9, β = 0 for Fashion-MNIST. Unfortunately, the differences in distance histograms for these blind-spot images are so tiny that we cannot reliably detect the change, yet the robustness property drastically changes on these transformed images. 
In this paper, we observe that the effectiveness of adversarial training is highly correlated with the characteristics of the dataset, and data points that are far enough from the distribution of training data are prone to adversarial attacks despite adversarial training. Following this observation, we defined a new class of attacks called "blind-spot attack" and proposed a simple scale-and-shift scheme for conducting blind-spot attacks on adversarially trained MNIST and Fashion MNIST datasets with high success rates. Our findings suggest that adversarial training can be challenging due to the prevalence of blind-spots in high dimensional datasets. As discussed in Section 3.1, we use k-nearest neighbour in embedding space to measure the distance between a test example and the training set. In Section 4.2 we use k = 5. In this section we show that the choice of k does not have much influence on our . We use the adversarially trained model on the CIFAR dataset as an example. In Figures 6, 7 and 8 we choose k = 10, 100, 1000, respectively. The are similar to those we have shown in Figure 3: a strong correlation between attack success rates and the distance from a test point to the training dataset. Figure 6: Attack success rates and distance distribution of the adversarially trained CIFAR model by. Upper: C&W ∞ attack success rate, = 8/255. Lower: distribution of the average 2 (embedding space) distance between the images in test set and the top-10 (k = 10) nearest images in training set. Figure 7: Attack success rates and distance distribution of the adversarially trained CIFAR model by. Upper: C&W ∞ attack success rate, = 8/255. Lower: distribution of the average 2 (embedding space) distance between the images in test set and the top-100 (k = 100) nearest images in training set. We also studied the German Traffic Sign (GTS) BID12 dataset. For GTS, we train our own model with the same model structure and parameters as the adversarially trained CIFAR model. We set = 8/255 for adversarial training with PGD, and also use the same as the threshold of success. The are shown in Figure 9. The GTS model behaves similarly to the CIFAR model: attack success rates are much higher when the distances between the test example and the training dataset are larger. We demonstrate more MNIST and Fashion-MNIST visualizations in FIG2. Figure 9: Attack success rate and distance distribution of GTS in. Upper: C&W ∞ attack success rate, = 8/255. Lower: distribution of the average 2 (embedding space) distance between the images in test set and the top-5 nearest images in training set. In this section we demonstrate our experimental on two other state-of-the-art certified defense methods, including convex adversarial polytope by BID32 and BID32, and distributional robust optimization based adversarial training by BID24. Different from the adversarial training by, these two methods can provide a formal certification on the robustness of the model and provably improve robustness on the training dataset. However, they cannot practically guarantee non-trivial robustness on test data. We did not include other certified defenses like BID19 and BID11 because they are not applicable to multi-layer networks. For all defenses, we use their official implementations and pretrained models (if available). FIG2 shows the on CIFAR using the defenses in BID32. TAB5 show the blind-spot attack on MNIST and Fashion-MNIST for robust models in BID32 and BID24, respectively. 
FIG2 shows the blind-spot attack examples on, BID32 and BID24.(a) α = 1.0 β = 0.0 dist= 0.363 Table 5: Blind-spot attack on MNIST and Fashion-MNIST for robust models by BID24. Note that we use 2 distortion for this model as it is the threat model under study in their work. For each group, the first row contains input images transformed with different scaling and shifting parameter α, β (α = 1.0, β = 0.0 is the original image) and the second row contains the found adversarial examples. d represents the distortion of adversarial perturbations. For models from and BID32 we use ∞ norm and for models from BID24 we use 2 norm. Adversarial examples for these transformed images can be found with small distortions d. DISPLAYFORM0 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HylTBhA5tQ | We show that even the strongest adversarial training methods cannot defend against adversarial examples crafted on slightly scaled and shifted test images. |
Edge intelligence especially binary neural network (BNN) has attracted considerable attention of the artificial intelligence community recently. BNNs significantly reduce the computational cost, model size, and memory footprint. However, there is still a performance gap between the successful full-precision neural network with ReLU activation and BNNs. We argue that the accuracy drop of BNNs is due to their geometry. We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart. This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards an improved BNN training. Our numerical experiments confirm our geometric intuition. Convolutional neural network has become one of the most powerful tools for solving computer vision, natural language processing, speech recognition, machine translation, and many other complex tasks. The most successful and widely-used recipe for deep neural network is ReLU-style activation function with MSRA style weight initialization . The standard sigmoid and the hyperbolic tangent were the most common activation functions, before the introductio of ReLU. ReLU-like activation functions are widely proved to be superior in terms of accuracy and convergence speed. It is more common to use low-bit quantized networks such as Binary Neural Networks (BNNs) to implement such deep neural networks on edge devices such as cell phones, smart wearables, etc. BNNs only keeps the sign of weights and compute the sign of activations {−1, +1} by applying Sign function in the forward pass. In backward propagation, BNN uses Straight-Through-Estimator (STE) to estimate the backward gradient through the Sign function and update on full-precision weights. The forward and backward loop of a BNN, therefore, becomes similar to the full-precision neural network with hard hyperbolic tangent htanh activation. The htanh function is a piece-wise linear version of the nonlinear hyper-bolic tangent, and is known to be inferior in terms of accuracy compared to ReLU-like activation function. We examine a full-precision network with htanh activation to provide a new look in improving BNN performance. We conclude that the bias initialization is the key to mimic ReLU geometric behavior in networks with htanh activation. This challenges the common practice of deterministic bias initialization for neural networks. Although the analysis is based on htanh function, this equally applies to BNNs that use STE, a htanh-like, back propagation scheme. Other saturating activations like hyperbolic tangent and sigmoid commonly applied in recurrent neural networks may benefit from this resolution as well. Our novelties can be summarized in four items i) we analyze the geometric properties of ReLU and htanh activation. This provides an insight into the training efficiency of the unbounded asymmetric activation functions such as ReLU. ii) we propose bias initialization strategy as a remedy to the bounded activations such as htanh. iii) We back up our findings with experiments on full-precision to reduce the performance gap between htanh and ReLU activations. iv) We show this strategy also improves BNNs, whose geometric behavior is similar to the full-precision neural network with htanh activation. There are very few works that focus on the initialization strategy of the bias term of the neural network. 
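As a quick illustration of the forward/backward behaviour described above — the sign activation in the forward pass and the straight-through estimator (STE) in the backward pass — here is a minimal NumPy sketch. It is not the paper's training code; the gradient clipping window of 1 is the standard htanh-like BNN choice.

    import numpy as np

    def sign_forward(x):
        """Forward pass: binarize pre-activations to {-1, +1}."""
        return np.where(x >= 0, 1.0, -1.0)

    def ste_backward(x, grad_out, clip=1.0):
        """Straight-through estimator: pass the gradient where |x| <= clip, zero it elsewhere."""
        return grad_out * (np.abs(x) <= clip)

    x = np.array([-1.7, -0.3, 0.2, 2.4])
    print(sign_forward(x))                       # [-1. -1.  1.  1.]
    print(ste_backward(x, np.ones_like(x)))      # [ 0.  1.  1.  0.]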
To the best of our knowledge, we are the first to propose random bias initialization as a remedy to the saturating full-precision neural network, and also as a method to improve BNN training. 2 RELATED WORKS proposed training deep neural networks with ReLU activation, and argued that ReLU activation alleviates the vanishing gradient problem and encourages sparsity in the model. The hyperbolic tangent only allowed training of shallow neural networks. Since AlexNet, almost every successful neural network architecture uses ReLU activation or its variants, such as adaptive ReLU, leaky ReLU, etc. Although many works reported that ReLU activation outperforms the traditional saturating activation functions, the reason for its superior performance remains an open question. utilized automatic search techniques to search for different activation functions. Most top novel activation functions found by the searches have an asymmetric saturating regime, which is similar to ReLU. adapts ReLU and sigmoid while training. To improve the performance of saturating activations, proposed penalized tanh activation, which introduces an asymmetric saturating regime to tanh by inserting leaky ReLU before tanh. The penalized tanh could achieve the same level of performance as a ReLU-activated CNN. It is worth mentioning that similar ideas also appear in related works on binarized neural networks. improved the performance of saturating activations by adding random noise when the neuron is saturated, so the backward signal can easily pass through the whole model, and the model becomes easier to optimize. In this work, we propose to randomize the non-saturated regime by using random bias initialization. This initialization guarantees that all backward signals can pass through the whole model equally. The initial work on BNN appeared in , which limits both weights and activations to −1 and +1, so the weighted sum can be computed by bit-wise XNOR and PopCount instructions. This solution reduces memory usage and computational cost up to 32X compared with its full-precision counterpart. In the original paper, BNN was tested on the VGG-7 architecture. Although it is an over-parameterized architecture for the CIFAR-10 dataset, there is a performance gap between BNN and a full-precision network with ReLU activation. We believe the difference between the two activations, BNN using the sign and full-precision using ReLU, is partially responsible for this gap. XNOR-Net developed the idea of BNN and proposed to approximate the full-precision neural network by using scaling factors. They suggest inserting a non-binary activation (like ReLU) after the binary convolution layer. This modification helps training considerably. Later, Tang et al. replaced ReLU activation with PReLU in XNOR-Net to improve the accuracy. Note that XNOR-Net and many related works require storing the full-precision activation map during the inference stage; therefore their memory occupation is significantly larger than a pure 1-bit solution like the vanilla BNN. 
The j th neuron response in the (i + 1) th layer are computed as First, the input data points x i are projected to the j th row vector of the weight matrix. The dot product of W During backward propagation, the backward gradient update on W i j and x i are computed using For the case of ReLU activation The activation function only allows the gradients from data point on the activated region to backward propagate and update the hyper-plane (equation 4). From the hyper-plane analysis, we realize that ReLU activation has three ideal properties that are distinguishing it from the others i) the diversity of activated regions at initialization, ii) The equality of data points at initialization, iii) The equality of hyper-planes at initialization. These may explain why ReLU activation outperforms the traditional Hyperbolic tangent or sigmoid activations. To argue each property, let us suppose that the distribution of the dot products is zero-centered. This assumption is automatically preserved in the batch normalization. Weight initialization techniques, like Xavier and MSRA, randomly initialize the weights to maintain the output variance. i) Region diversity: the activated regions of hyper-planes solely depend on the direction of the weight vector, which is randomly initialized. This allows different hyper-planes to learn from a different subset of data points, and ultimately diversifies the backward gradient signal. ii) Data equality: an arbitrary data point x i, is located on the activated regions of approximately half of the total hyper-planes in layer i. In other words, the backward gradients from all data points can pass through the approximately same amount of activation function, update hyper-planes, and propagate the gradient. iii) Hyperplane equality: an arbitrary hyper-plane W i j, is affected by the backward gradients from approximately 50% of the total data points. All hyper-planes on average receive the same amount of backward gradients. Hyper-plane equality speeds up the convergence and facilitates model optimization, see Figure 1 (right panel). Similar to the ReLU activation, only the backward gradients from data points located in the activated region of a hyper-plane can backward propagate through the activation function and update this hyper-plane. This analysis also applies to htanh activation. The performance gap between ReLU activation and htanh activation is caused by their different activated region distribution, see Figure 3. Clearly, htanh activation is not as good as ReLU in defining balanced and fair activated regions. However, we analyze each property for htanh as well. i) Region diversity: activated regions of htanh are not as diverse as ReLU. Activated regions of htanh cover only the area close to the origin. Assuming Gaussian data, this is a dense area that the majority of data points are located in. ii) Data equality: data points are not treated fairly htanh activation function. Data points that closer to the origin can activate more hyper-planes than the data points far from the origin. If the magnitude of a data point x i is small enough, it can activate all hyper-planes in the same layer, see the deepred region of Figure 3 (right panel). As a consequence, in backward gradients, few data instances affect all hyper-planes. In other words, the backward gradients from a part of the training data points have a larger impact on the model than the others. 
We consider that this imbalance ultimately affects model generalization, since the model training focuses on a subset of the training data points close to the origin. iii) Hyperplane equality: The initial activated regions should cover a similar-sized subset of the training data points overall, and this property is shared by both ReLU and htanh activations. A similar analysis also applies to other activation functions with zero-centered activated regions, like the sigmoid or hyperbolic tangent. Here we propose a simple initialization strategy to alleviate the data inequality issue and improve activated region diversity for the htanh activation, relying on our geometric insight. We argue that bias initialization with a uniform distribution on [−λ, λ], where λ is a hyper-parameter, is a solution to region diversity. With random bias initialization, data points that are far from the origin can activate more hyper-planes. If λ > max(x) + 1, all data points activate approximately the same number of hyper-planes during backward propagation, so data equality can be achieved. Also, with the diverse initial activated regions, different hyper-planes learn from different subsets of training data points. However, this initialization strategy comes with a drawback. Hyper-plane equality no longer holds when the biases are not set to zero. Hyper-planes with a larger initial bias have fewer activated data points. Therefore, choosing the optimal value of λ is a trade-off between hyper-plane equality and data equality. Experiments below show that the validation curve becomes unsteady if the λ value is set too high. Empirically, with a batch normalization layer, λ ≈ 2 provides a good initial estimate. In this case, the activated regions cover the range from −3 to +3, which allows the gradients from almost all data points to propagate. It is worth mentioning that the experiments showed that a small λ also helps to improve the performance of the ResNet architecture. The proposed bias initialization method is evaluated on CIFAR-10. The network architectures are based on the original implementation of the BNN. We choose the VGG-7 architecture and the ResNet architecture. The VGG-7 architecture is a simple and over-parameterized model for CIFAR-10. This is an ideal architecture to compare the performance between different activations. In the original implementation, this full-precision architecture is designed to compare with BNN, so the BatchNorm layers are inserted after each ReLU activation to match the pattern of BNN. (Table 2: Validation error rate % for binary training; λ = 0 coincides with common deterministic initialization.) This arrangement has a negative impact on the performance of the full-precision neural network. In our full-precision experiments, we put the BatchNorm layers back to their original position, so the error rate of the baseline model is improved from the originally reported 9.0% to 6.98%. This is close to 6.50%, which is the best CIFAR-10 error rate reported for a VGG-9 architecture that includes BatchNorm. Considering VGG-9 has many more parameters than this architecture, we keep it as the baseline for full-precision VGG. We also followed the training recipe from the original implementation: SGD optimizer with 0.9 momentum and weight decay set to 5 × 10^-4. We use more training epochs to compensate for the slow convergence caused by htanh activation. Figure 4 confirms that the random bias initialization strategy helps to reduce the performance gap between htanh and ReLU activation. 
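A minimal sketch of the initialization proposed above for one fully connected layer: MSRA-style weights with biases drawn uniformly from [−λ, λ] instead of zeros. The layer sizes are arbitrary example values; λ = 2 follows the empirical suggestion above for networks with batch normalization.

    import numpy as np

    def init_layer(n_in, n_out, lam=2.0, rng=None):
        """He/MSRA weight initialization with random (rather than zero) bias initialization."""
        rng = rng or np.random.default_rng(0)
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))   # MSRA-style weights
        b = rng.uniform(-lam, lam, size=n_out)                         # proposed uniform bias init
        return W, b

    W, b = init_layer(256, 128, lam=2.0)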
A similar effect is observed for ResNet-type architectures. We also tested the proposed bias initialization on a ResNet-like architecture. The results, depicted in Figure 4, reassure us that bias initialization improves htanh and pushes it toward ReLU accuracy; see Table 1. Binary training that uses STE is similar to training with htanh activation. We expect to observe a similar effect in BNN training with the STE gradient approximator. The validation error rates are summarized in Table 2. In the binary VGG-7 experiments, we reduced the accuracy gap between the full-precision network with ReLU activation and BNN from 4% to 1.5%. The bias initialization strategy is effective in closing the gap on the binary ResNet architecture by almost 1%, even though the full-precision model under-fits on CIFAR-10 data. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJx4Ogrtvr | Improve saturating activations (sigmoid, tanh, htanh etc.) and Binarized Neural Network with Bias Initialization |
Understanding how people represent categories is a core problem in cognitive science, with the flexibility of human learning remaining a gold standard to which modern artificial intelligence and machine learning aspire. Decades of psychological research have yielded a variety of formal theories of categories, yet validating these theories with naturalistic stimuli remains a challenge. The problem is that human category representations cannot be directly observed and running informative experiments with naturalistic stimuli such as images requires having a workable representation of these stimuli. Deep neural networks have recently been successful in a range of computer vision tasks and provide a way to represent the features of images. In this paper, we introduce a method for estimating the structure of human categories that draws on ideas from both cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep representation learners. We provide qualitative and quantitative as a proof of concept for the feasibility of the method. Samples drawn from human distributions rival the quality of current state-of-the-art generative models and outperform alternative methods for estimating the structure of human categories. Categorization (or classification) is a central problem in cognitive science BID1, artificial intelligence, and machine learning BID3. In its most general form, the categorization problem concerns why and how we divide the world into discrete units (and various levels of abstraction), and what we do with this information. The biggest challenge for studying human categorization is that the content of mental category representations cannot be directly observed, which has led to development of laboratory methods for estimating this content from human behavior. Because these methods rely on small artificial stimulus sets with handcrafted or lowdimensional feature sets, they are ill-suited to the study of categorization as an intelligent process, which is principally motivated by people's robust categorization performance in complex ecological settings. One of the challenges of applying psychological methods to realistic stimuli such as natural images is finding a way to represent them. Recent work in machine learning has shown that deep learning models, such as convolutional neural networks, perform well on a range of computer vision tasks BID10. The features discovered by these models provide a way to represent complex images compactly. It may be possible to express human category structure using these features, an idea supported by recent work in cognitive science BID8 BID12.Ideally, experimental methods could be combined with state-of-the-art deep learning models to estimate the structure of human categories with as few assumptions as possible and while avoiding the problem of dataset bias. In what follows, we propose a method that uses a human in the loop to directly estimate arbitrary distributions over complex feature spaces, adapting a framework that can exploit advances in deep architectures and computing power to increasingly capture and sharpen the precise structure of human category representations. Such knowledge is crucial to forming an ecological theory of intelligent categorization behavior and to providing a ground-truth benchmark to guide and inspire future work in machine learning. Methods for estimating human category templates have existed for some time. 
In psychophysics, the most popular and well-understood method is known as classification images BID0. In this experimental procedure, a human participant is shown stimuli from two classes, A and B, each with white noise overlaid, and asked to indicate the correct label. On most trials, the participant will select the exemplar generated from the category in question. However, if the added white noise significantly perturbs features of the image important to making that distinction, they may fail. Exploiting this, we can estimate the decision boundary from a number of these trials using the simple formula: DISPLAYFORM0 where n XY is the average of the noise across trials where the correct class is X and the observer chooses Y. BID15 used a variation on classification images using invertible deep feature spaces. In order to avoid dataset bias introduced through perturbing real class exemplars, white noise in the feature space was used to generate stimuli. In this special case, category templates reduce to n A − n B. On each trial of the experiment, participants were asked to select which of two images (inverted from feature noise) most resembled a particular category. Because the feature vectors were random, thousands of images could be pre-inverted offline using methods that require access to large datasets. This early inversion method was applied to mean feature vectors for thousands of positive choices in the experiments and yielded qualitatively decipherable category template images, as well as better machine classification decision boundaries that were regularized by human bias. Under the assumption that human category distributions are Gaussian with equal variance, this method yields the vector that aligns with the line between means, although a massive number of pairs of random vectors (trials) are required. BID5 conducted similar experiments in which a low-dimensional multi-scale gabor PCA basis was used to represent black-and-white images of scenes. Participants indicated which of two images was more similar to a seen image or a mental image of a scene. These judgments were used in an online genetic algorithm to lead participants to converge to their mental image. This method allowed for relatively efficient estimation of mental templates, although it is limited to scenes, lacks even the few theoretical guarantees of the BID15 method, and yields only a single meaningful image. Finally, an alternative to classification images, Markov Chain Monte Carlo with People (MCMCP; BID14, constructs an experimental procedure by which humans act as a valid acceptance function in the Metropolis-Hastings algorithm, exploiting the fact that Luce's choice axiom, a well-known model of human choice behavior, is equivalent to the Barker acceptance function (see equation in FIG0). On the first trial, a stimulus is drawn arbitrarily from the parameter space and compared to a new proposed stimulus that is nearby in that parameter space. The participant makes a forced choice of the better exemplar of some category (e.g., dog). If the initial stimulus is chosen, the Markov chain remains in that state. If the proposed stimulus is chosen, the chain moves to the proposed state. The process then repeats for as long as one wishes to run the sampling algorithm. 
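The display equation above is missing from the extracted text; the standard classification-image estimator consistent with the n_XY definition given here (and with the later statement that the template reduces to n_A − n_B in the feature-noise case) is (n_AA + n_BA) − (n_AB + n_BB). A small sketch of that computation on toy random data follows — treat the equation in the original source as authoritative.

    import numpy as np

    def classification_image(noises, stimuli, responses):
        """(n_AA + n_BA) - (n_AB + n_BB): n_XY is the mean noise over trials with true class X and response Y."""
        noises = np.asarray(noises)
        stimuli, responses = np.asarray(stimuli), np.asarray(responses)
        def mean_noise(X, Y):
            sel = (stimuli == X) & (responses == Y)
            return noises[sel].mean(axis=0) if sel.any() else np.zeros(noises.shape[1])
        return (mean_noise("A", "A") + mean_noise("B", "A")) - (mean_noise("A", "B") + mean_noise("B", "B"))

    rng = np.random.default_rng(0)
    template = classification_image(rng.normal(size=(1000, 64)),      # per-trial white noise
                                    rng.choice(["A", "B"], 1000),     # true class per trial
                                    rng.choice(["A", "B"], 1000))     # observer's response per trial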
MCMCP has been successfully employed to capture a number of different mental categories BID14 BID11, and though these spaces are higher-dimensional than those in previous laboratory experiments, they are still relatively small and artificial compared to real images. Unlike classification images, this method makes no assumptions about the structure of the category distributions and thus can estimate means, variances, and higher order moments. Therefore we take it as a basic starting point for this paper. Deep convolutional neural networks (CNN; BID9 like AlexNet BID7 are excellent for high-accuracy natural image classification and learn feature and category representations that are generally useful for a number of perceptual tasks. However, they can be difficult to interpret, especially in how they relate to human categorization behavior. * for the two states are presented to human raters on a computer screen (leftmost arrow and bottom left). Human raters then view the images in an experiment (bottom middle arrow) and act as part of an MCMC sampling loop, choosing between the two states/images in accordance with the Barker acceptance function (bottom right). The chosen image can then be sent to the inference network (rightmost arrow) and decoded in order to select the state for the next trial, however this step is unnecessary when we know exactly which states corresponds to which images. The inference network will, however, be needed later to characterize new images that we did not generate. Generative Adversarial Networks (GANs; BID4 and Variational Autoencoders (VAEs;) provide a generative approach to understanding image categories. In particular, these generative models provide an opportunity to examine how inference over a set of images and categories is performed. Generative models frame their approach theoretically on a Bayesian decomposition of image-label joint density, p(z, x) = p(z)p(x|z), where z and x are random variables that represent a natural image distribution and its feature representation. Therfore p(z) represents the distribution of natural images, and p(x|z) is the distribution of feature vectors given an image. Together, p(z) and p(x|z) produce a high-dimensional constraint on image generation. This allows for easy visualization of the learned latent space, a property we exploit presently. In the typical MCMCP experiment, as explained above, the participant judges pairs of stimuli, and the chosen stimuli serve as samples from the category distribution of interest. This method is effective as long as noise can be added to dimensions in the stimulus parameter space to create meaningful changes in content. When viewing natural images for example, in the space of all pixel intensities, added noise is very unlikely to modify the image in interesting ways. We propose to instead perturb images in a deep feature space that captures only essential variation. We can then show participants images decoded from these feature representations in order to relate human judgments to the underlying latent space of interest, where category distributions are easier to learn. The ing judgments (samples) approximate distributions that both derive arbitrary human category boundaries for natural images and can be sampled from to create new images, yielding new human-like generative image models. A schematic of this procedure is illustrated in FIG0. 
Note that the method we propose does not require an inference network in practice, since we always know which latent code produced the images shown to the subjects. This is true as long as we initialize our MCMC chain states as random points in the latent space as opposed to in pixel space. More specifically, popular deep generative networks such as GANs and VAEs learn a probability distribution p(z) over some latent variable z, from which the objective of the network encourages of the mapping (generator or decoder) M that p(f (z; M)) = p(x), where p(x) is the distribution over pixels in the dataset and f is the family of deterministic functions s.t. f: M × z → x. If we assume that human psychological feature space y from some invertible transformation T of x (it is simply some learned transformation of raw input), then correspondingly p(g(y; T −1) = p(x), where g is the family of functions s.t. g: T −1 × y → x. Because people in our experiments see only actual images x, perceived as y, the judgments they make are related to the deep latent space DISPLAYFORM0 Equation 2 assumes that humans approximate p(x) analogously to an unsupervised learner, however, even if T is a more specific feature representation supervised by classification pressure in the environment, we can assume information is only lost and not added, so that our deep generative models, if successful in their goal, will ultimately encode this information as a subset. Further, the equation above entails that p(y) need not be identical to p(z), and M need not equal T −1. However, for the practical sake of convergence using human participants, we would prefer that psychological space y be similar to the deep latent space z -we hope they contain the same relevant featural content for most images. While this assumption is hard to verify beyond face validity at this time, recent work suggests that some deep features spaces can be linearly transformed to approximate human psychological space BID12, and so we assume the deep latent space is relevant enough to human feature representations to act as a surrogate. There are several theoretical advantages to our method over previous efforts. First, MCMCP can capture arbitrary distributions, so it is not as sensitive to the structure of the underlying low-dimensional feature space and should provide better category boundaries than classification images when required. This is important when using various deep features spaces that were learned with different constraints. MCMC inherently spends less time in low probability regions and should in theory waste fewer trials. Having generated the images online and as a function of the participant's decisions, there is no dataset or sampling bias, and auto-correlation can be addressed by removing temporally adjacent samples from the chain. Finally, using a deep generator provides drastically clearer samples than shallow reconstruction methods, and can be potentially be trained end-to-end with an inference network that allows us to categorize new images using the learned distribution. For our experiments, we explored two image generator networks trained on various datasets. Since even relatively low-dimensional deep image embeddings are large compared to controlled laboratory stimulus parameter spaces, we use a hybrid proposal distribution in which a Gaussian with a low variance is used with probability P and a Gaussian with a high variance is used with probability 1 − P. 
This allows participants to both refine and escape nearby modes, but is simple enough to avoid excessive experimental piloting that more advanced proposal methods often require. Participants in all experiments completed exactly 64 trials (image comparisons), collectively taking about 5 minutes, containing segments of several chains for multiple categories. The order of the categories and chains within those categories were always interleaved. All experiments were conducted on Amazon Mechanical Turk. If a single image did not load for a single trial, the data for the subject undergoing that trial was completely discarded, and a new subject was recruited to continue on from the original chain state. We first test our method using DCGAN BID13 trained on the Asian Faces Dataset. We chose this dataset because it requires a deep architecture to produce reasonable samples (unlike MNIST, for example), yet it is constrained enough to test-drive our method using a relatively simple latent space. Four chains for each of four categories (male, female, happy, and sad) were used. Proposals were generated from an isometric Gaussian with an SD of 0.25 50% of the time, and 2 otherwise. In addition, we conducted a baseline in which two new initial state proposals were drawn on every trial, and were independent of previous trials (classification images). The final dataset contained 50 participants and over 3, 200 trials (samples) in total for all chains. The baseline classification images (CI) dataset contained the same number of trials and participants. MCMCP chains are visualized using Fisher Linear Discriminant Analysis in FIG1, along with the ing averages for each chain and each category. Chain means within a category show interesting variation, yet converge to very similar regions in the latent space as expected. Also on the right of FIG1 are visualizations of the mean faces for both methods in the final two columns. MCMCP means appear to have converged quickly, whereas CI means only show a moderate resemblance to their corresponding category (e.g., the MCMCP mean for "happy" is fully smiling, while the CI mean only barely reveals teeth). All four CI means appear closer to a mean face, which is what one would expect from averages of noise. We validated this improvement with a human experiment in which 30 participants made forced choices between CI and MCMCP means. The are reported in FIG2. MCMCP means are consistently highly preferred as representations of each category as compared to CI. This remained true even when an additional 50 participants (total of 100) were run on the CI task, obtaining twice as many image comparison trials as MCMCP. The in the previous section show that reasonable category templates can be obtained using our method, yet the complexity of the stimulus space used does not rival that of large object classification networks. In this section, we tackle a more challenging (and interesting) form of the problem. To do this, we employ a bidirectional generative adversarial network (BiGAN; BID2 trained on the entire 1.2 million-image ILSVRC12 dataset (64×64 center-cropped). BiGAN includes an inference network, which regularizes the rest of the model and produces unconditional samples competitive with the state-of-the-art. This also allows for the later possibility of comparing human distributions with other networks as well as assessing machine classification performance with new images based on the granular human biases captured. 
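A sketch of a single MCMCP trial in the latent space with the hybrid proposal just described. Everything here is a placeholder except the structure: the participant's choice (which plays the role of the Barker acceptance step) is simulated by a toy preference function, and the decoder is the identity instead of a trained DCGAN/BiGAN generator.

    import numpy as np

    def hybrid_proposal(z, sd_small=0.25, sd_large=2.0, p_small=0.5, rng=None):
        """Mixture-of-Gaussians proposal: small refinements most of the time, occasional large jumps."""
        rng = rng or np.random.default_rng()
        sd = sd_small if rng.random() < p_small else sd_large
        return z + rng.normal(0.0, sd, size=z.shape)

    def mcmcp_trial(z, propose, decode, human_prefers_proposal):
        """One trial: decode both latent states to images and let a person pick the better category exemplar."""
        z_star = propose(z)
        img, img_star = decode(z), decode(z_star)          # the two images shown side by side
        return z_star if human_prefers_proposal(img, img_star) else z

    # toy stand-ins: identity decoder, simulated chooser preferring states near the origin
    rng = np.random.default_rng(0)
    chain = [rng.normal(size=100)]
    prefers = lambda a, b: np.linalg.norm(b) < np.linalg.norm(a)
    for _ in range(200):
        chain.append(mcmcp_trial(chain[-1], hybrid_proposal, lambda z: z, prefers))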
Our generator network was trained given uniform rather than Gaussian noise, which allows us to guarantee that participants cannot get lost in extremely improbable regions. Additionally, we avoid proposing states outside of this hypercube by forcing z to wrap around (proposals that travel outside of z are injected back in from the opposite direction by the amount originally exceeded). In particular, we run our MCMC chains through an unbounded state space by redefining each bounded dimension z k as DISPLAYFORM0 Proposals were generated from an isometric Gaussian with an SD of 0.1 for 60% of the time, and an SD of 0.7 otherwise. We use this network to obtain large chains for two groups of five categories. Group 1 included bottle, car, fire hydrant, person, and television, following BID15. Group 2 included bird, body of water, fish, flower, and landscape. Each chain was approximately 1,040 states long, and four of these chains were used for each category (approximately 4,160 samples per category). In total, across both groups of categories, we obtained exactly 41,600 samples from 650 participants. To demonstrate the efficiency and flexibility of our method compared to alternatives, we obtained an equivalent number of trials for all categories using the variant of classification images introduced in BID15, with the exception that we used our BiGAN generator instead of the offline inversion previously used. This also serves as an important baseline against which to quantitatively evaluate our method because it estimates the simplest possible template. The acceptance rate was approximately 50% for both category groups, which is near the common goal for MCMCP experiments. The samples for all ten categories are shown in FIG4 and D using Fisher Linear Discriminant Analysis. Similar to the face chains, the four chains for each category converge to similar regions in space, largely away from other categories. In contrast, classification images show little separation with so few trials (5C and D). Previous work suggests that at least an order of magnitude more comparisons may be needed for satisfactory estimation of category means. Our method estimates well-separated category means in a manageable number of trials, allowing the method to scale greatly. This makes sense given that CI proposes comparisons between arbitrary images, potentially wasting many trials, and clearly suffers from a great deal of noise. Beyond yielding our decision rule, our method additionally produces a density estimate of the entire category distribution. In classification images, only mean template images can be viewed, while we are able to visualize several modes in the category distribution. FIG3 visualizes these modes using the means of each component in a mixture of Gaussians density estimate. This produces realistic-looking multi-modal mental category templates, which to our knowledge has never been accomplished with respect to natural image categories. We also provide a quantitative assessment of the samples we obtained and compare them to classification images (CI) using an external classification task. To do this, we scraped approximately 500 images from Flickr for each of our ten categories, which were used for a classification task. To classify the images using our human-derived samples, we used a nearest-mean decision rule, and a decision rule based on the highest log-probability given by our ten density estimates. For classification images, only a nearest-mean decision rule can be tested.
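The two decision rules just described can be made concrete with a short sketch: fit a mixture-of-Gaussians density and a mean template per category from the MCMCP samples, then classify a latent code either by highest log-probability or by nearest mean. The number of mixture components and the dictionary-based interface are assumptions for illustration; new images would first be mapped into the latent space, e.g. with the BiGAN encoder.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_category_models(samples_by_category, n_components=4, seed=0):
        """Fit one Gaussian-mixture density and one mean template per category
        from MCMCP samples (arrays of shape [n_samples, latent_dim])."""
        gmms, means = {}, {}
        for name, samples in samples_by_category.items():
            gmms[name] = GaussianMixture(n_components=n_components,
                                         random_state=seed).fit(samples)
            means[name] = samples.mean(axis=0)
        return gmms, means

    def classify(z, gmms, means, rule="density"):
        """Assign latent code z to the category with the highest log-probability
        (density rule) or the nearest mean (nearest-mean rule)."""
        if rule == "density":
            return max(gmms, key=lambda c: gmms[c].score_samples(z[None, :])[0])
        return min(means, key=lambda c: np.linalg.norm(z - means[c]))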
In all cases, decision rules based on our MCMCP-obtained samples overall outperform a nearest-mean decision rule using classification images (see TAB0). In category group 1, the MCMCP density rule performed best and was more even across classes. In category group 2, nearest-mean using our MCMCP samples did much better than a density estimate or CI-based nearest-mean. Our results demonstrate the potential of our method, which leverages both psychological methods and deep surrogate representations to make the problem of capturing human category representations tractable. The flexibility of our method in fitting arbitrary generative models allows us to visualize multi-modal category templates for the first time, and to improve on human-based classification performance benchmarks. It is difficult to guarantee that our chains explored enough of the relevant space to actually capture the concepts in their entirety, but the diversity in the modes visualized and the improvement in class separation achieved are positive indications that we are on the right track. Further, the framework we present can be straightforwardly improved as generative image models advance, and a number of known methods for improving the speed, reach, and accuracy of MCMC algorithms can be applied to MCMCP to make better use of costly human trials. There are several obvious limitations of our method. First, the structure of the underlying feature spaces used may either lack the expressiveness (some features may be missing) or the constraints (too many irrelevant features or possible images waste too many trials) needed to map all characteristics of human mental categories in a practical number of trials. Even well-behaved spaces are very large and require many trials to reach convergence. Addressing this will require continuing exploration of a variety of generative image models. We see our work as part of an iterative refinement process that can yield more granular human observations and inform new deep network | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJy0fcgRZ | using deep neural networks and clever algorithms to capture human mental visual concepts |
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity after 3000 generations while maintaining 94.6% audio file similarity. Figure 1: Example of a targeted adversarial attack on speech-to-text systems in practice. Our attack uses a combination of genetic algorithms and gradient estimation to solve this task. The first phase of the attack is carried out by genetic algorithms, which are a gradient-free method of optimization that iterates over populations of candidates until a suitable sample is produced. In order to limit excess mutations and thus excess noise, we improve the standard genetic algorithm with a new momentum mutation update. The second phase of the attack utilizes gradient estimation, where the gradients of individual audio points are estimated, thus allowing for more careful noise placement when the adversarial example is nearing its target. The combination of these two approaches provides an 89.25% average targeted attack similarity with a 94.6% audio file similarity after 3000 generations. Adversarial attacks can be created given a variety of information about the neural network, such as the loss function or the output probabilities. However, in a natural setting, the neural network behind such a voice control system will usually not be publicly released, so an adversary will only have access to an API which provides the text the system interprets given a continuous waveform. Given this constraint, we use the open-sourced Mozilla DeepSpeech implementation as a black-box system, without using any information on how the transcription is done. We perform our black-box targeted attack on a model M given a benign input x and a target t by perturbing x to form the adversarial input x' = x + δ, such that M(x') = t. To minimize the audible noise added to the input, so that a human cannot notice the target, we maximize the cross-correlation between x and x'. A sufficient value of δ is determined using our novel black-box approach, so we do not need access to the gradients of M to perform the attack. Compared to images, audio presents a much more significant challenge for models to deal with. While convolutional networks can operate directly on the pixel values of images, ASR systems typically require heavy pre-processing of the input audio. Most commonly, the Mel-Frequency Cepstrum (MFC) transform, essentially a Fourier transform of the sampled audio file, is used to convert the input audio into a spectrogram which shows frequencies over time. Models such as DeepSpeech (Fig. 2) use this spectrogram as the initial input. Extending the research done by BID0, we propose a genetic algorithm and gradient estimation approach to create targeted adversarial audio, but on the more complex DeepSpeech system.
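The excerpt reports "correlation similarity" between the benign and adversarial waveforms without spelling out the formula; the sketch below shows one standard way to compute it, as a normalized (Pearson-style) cross-correlation at zero lag, which is an assumption rather than the paper's exact definition.

    import numpy as np

    def correlation_similarity(x, x_adv):
        """Normalized cross-correlation between the benign waveform x and the
        adversarial waveform x_adv (1-D arrays of equal length). A value near 1
        means the perturbation is small relative to the signal."""
        x = x - x.mean()
        x_adv = x_adv - x_adv.mean()
        return float(np.dot(x, x_adv) / (np.linalg.norm(x) * np.linalg.norm(x_adv)))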
The difficulty of this task comes in attempting to apply black-box optimization to a deeply-layered, highly nonlinear decoder model that has the ability to decode phrases of arbitrary length. Nevertheless, the combination of two differing approaches as well as the momentum mutation update brings new success to this task. DeepSpeech outputs a probability distribution over all characters at every frame, for 50 frames per second of audio. Our two-phase attack proceeds as follows (body of Algorithm 1):

    if EditDistance(t, Decode(best)) > 2 then    // phase 1 - do genetic algorithm
        while populationSize children have not been made do
            Select parent1 from topk(population) according to softmax(their score)
            Select parent2 from topk(population) according to softmax(their score)
            child ← Mutate(Crossover(parent1, parent2), p)
        end while
        newScores ← −CTCLoss(newPopulation, t)
        p ← MomentumUpdate(p, newScores, scores)
    else    // phase 2 - do gradient estimation
        top-element ← top(population)
        grad-pop ← n copies of top-element, each mutated slightly at one index
        grad ← (−CTCLoss(grad-pop) − scores) / mutation-delta
        pop ← top-element + grad
    end if
    end while
    return best

As mentioned previously, Alzantot et al. BID0 demonstrated the success of a black-box adversarial attack on speech-to-text systems using a standard genetic algorithm. The basic premise of our algorithm is that it takes in the benign audio sample and, through trial and error, adds noise to the sample such that the perturbed adversarial audio is similar to the benign input yet is decoded as the target. Working with this loss, we make modifications to the genetic algorithm and introduce our novel momentum mutation. The CTC-Loss, as mentioned previously, is used to determine the similarity between an input audio sequence and a given phrase. We then form our elite population by selecting the best-scoring samples from our population. The elite population contains samples with desirable traits that we want to carry over into future generations. We then select parents from the elite population and perform Crossover, which creates a child by taking around half of the elements from parent1 and the other half from parent2. The probability that we select a sample as a parent is a function of the sample's score. With some probability, we then add a mutation to our new child. Finally, we update our mutation probabilities according to our momentum update, and move to the next iteration. The population will continue to improve over time as only the best traits of the previous generations as well as the best mutations will remain. Eventually, either the algorithm will reach the max number of iterations, or one of the samples is exactly decoded as the target, and the best sample is returned.

    Algorithm 2 Mutation
    Input: Audio Sample x, Mutation Probability p
    Output: Mutated Audio Sample x
    for all e in x do
        noise ← Sample(N(µ, σ²))
        if Sample(Unif) < p then
            e ← e + filter_highpass(noise)
        end if
    end for
    return x

The mutation step is arguably the most crucial component of the genetic algorithm and is our only source of noise in the algorithm. In the mutation step, with some probability, we randomly add noise to our sample. Random mutations are critical because they may cause a trait to appear that is beneficial for the population, which can then be proliferated through crossover. Without mutation, very similar samples will start to appear across generations; thus, the way out of this local maximum is to nudge it in a different direction in order to reach higher scores.
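As a hedged sketch of the genetic-algorithm phase just outlined (elite selection, softmax-weighted parent sampling, crossover that mixes roughly half of each parent, and mutation), one generation could look like the following. The elite size, the uniform crossover, and the mutate callback are assumptions; scores are taken to be negative CTC losses so that higher is better.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def ga_generation(population, scores, mutate, elite_size=10, rng=None):
        """One generation: keep the best candidates as the elite, sample parent
        pairs with softmax-weighted probabilities, mix them with a uniform
        crossover, and mutate each child."""
        rng = np.random.default_rng() if rng is None else rng
        elite_idx = np.argsort(scores)[-elite_size:]        # best-scoring samples
        probs = softmax(scores[elite_idx])                   # parent selection weights
        children = []
        for _ in range(len(population)):
            p1, p2 = rng.choice(elite_idx, size=2, p=probs)  # pick two parents
            mask = rng.random(population.shape[1]) < 0.5     # ~half from each parent
            child = np.where(mask, population[p1], population[p2])
            children.append(mutate(child))
        return np.stack(children)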
Furthermore, since this added noise can itself be audible, we apply a filter to the noise before adding it onto the audio sample. The audio is sampled at a rate of f_s = 16 kHz, which means that the maximum frequency response is f_max = 8 kHz. As seen by Reichenbach and Hudspeth, given that the human ear is more sensitive to lower frequencies than higher ones, we apply a highpass filter at a cutoff frequency of f_cutoff = 7 kHz. This limits the noise to only being in the high-frequency range, which is less audible and thus less detectable by the human ear. While mutation helps the algorithm overcome local maxima, the effect of mutation is limited by the mutation probability. Much like the step size in SGD, a low mutation probability may not provide enough randomness to get past a local maximum. If mutations are rare, they are very unlikely to occur in sequence and add on to each other. Therefore, while a mutation might be beneficial when accumulated with other mutations, due to the low mutation probability it is deemed not beneficial by the algorithm in the short term, and will disappear within a few iterations. This parallels the step size in SGD, because a small step size will eventually converge back at the local minimum/maximum. However, too large of a mutation probability, or step size, will add an excess of variability and prevent the algorithm from finding the global maximum/minimum. To combat these issues, we propose Momentum Mutation, which is inspired by the Momentum Update for Gradient Descent. With this update, our mutation probability changes in each iteration according to the following exponentially weighted moving average update: DISPLAYFORM0 With this update equation, the probability of a mutation increases as our population fails to adapt, meaning the current score is close to the previous score. The momentum update adds acceleration to the mutation probability, allowing mutations to accumulate and add onto each other by keeping the mutation probability high when the algorithm is stuck at a local maximum. By using a moving average, the mutation probability becomes a smooth function and is less susceptible to outliers in the population. While the momentum update may overshoot the target phrase by adding random noise, overall it converges faster than a constant mutation probability by allowing for more acceleration early in the search. Genetic algorithms work well when the target space is large and a relatively large number of mutation directions are potentially beneficial; the strength of these algorithms lies in being able to search large amounts of space efficiently BID7. When an adversarial sample nears its target perturbation, however, this strength of genetic algorithms turns into a weakness. Close to the end, adversarial audio samples only need a few perturbations in a few key areas to get the correct decoding. In this case, gradient estimation techniques tend to be more effective. Specifically, when the edit distance between the current decoding and the target decoding drops below some threshold, we switch to phase 2. When approximating the gradient of a black-box system, we can use the technique proposed by Nitin et al.: DISPLAYFORM0 Here, x refers to the vector of inputs representing the audio file.
δ_i refers to a vector of all zeros with a small perturbation at index i. Of the audio samples on which we ran our algorithm, we achieved an 89.25% similarity between the final decoded phrase and the target using Levenshtein distance, with an average of 94.6% correlation similarity between the final adversarial sample and the original sample. The average final Levenshtein distance after 3000 iterations is 2.3, with 35% of the adversarial samples achieving an exact decoding in less than 3000 generations, and 22% of the adversarial samples achieving an exact decoding in less than 1000 generations. One thing to note is that our algorithm was 35% successful in getting the decoded phrase to match the target exactly; however, as noted from Figure 5, the vast majority of failure cases are only a few edit distances away from the target. This suggests that running the algorithm for a few more iterations could produce a higher success rate, although at the cost of correlation similarity. Indeed, it becomes apparent that there is a tradeoff between success rate and audio similarity, such that this threshold could be altered for the attacker's needs. One helpful visualization of the similarity between the original audio sample and the adversarial audio sample is the overlapping of both waveforms, as shown in Figure 4. As the visualization shows, the audio is largely unchanged, and the majority of the changes to the audio are in the relatively low-volume noise applied uniformly around the audio sample. This results in an audio sample that still appears to transcribe to the original intended phrase when heard by humans, but is decoded as the target adversarial phrase by the DeepSpeech model. That 35% of random attacks were successful in this respect highlights the fact that black-box adversarial attacks are definitely possible and highly effective at the same time. 4 Conclusion In combining genetic algorithms and gradient estimation, we are able to achieve a black-box adversarial example for audio that produces better samples than each algorithm would produce individually. By initially using a genetic algorithm as a means of exploring more space through encouragement of random mutations and ending with a more guided search with gradient estimation, we are not only able to achieve perfect or near-perfect target transcriptions on most of the audio samples, but are able to do so while retaining a high degree of similarity. While this remains largely a proof-of-concept demonstration, this paper shows that targeted adversarial attacks are achievable on black-box models using straightforward methods. Furthermore, the inclusion of momentum mutation and adding noise exclusively to high frequencies improved the effectiveness of our approach. Momentum mutation exaggerated the exploration at the beginning of the algorithm and annealed it at the end, emphasizing the benefits intended by combining genetic algorithms and gradient estimation. Restricting noise to the high-frequency domain improved our similarity both subjectively, by keeping it from interfering with human voice, and objectively, in our audio sample correlations. By combining all of these methods, we are able to achieve our top results. In conclusion, we introduce a new domain for black-box attacks, specifically on deep, nonlinear ASR systems that can output arbitrary length translations.
Using a combination of existing and novel methods, we are able to exhibit the feasibility of our approach and open new doors for future research. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HklaBGxoo7 | We present a novel black-box targeted attack that is able to fool state of the art speech to text transcription. |
Recent progress on physics-based character animation has shown impressive breakthroughs on human motion synthesis, through imitating motion capture data via deep reinforcement learning. However, results have mostly been demonstrated on imitating a single distinct motion pattern, and do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations. To bridge this gap, we focus on one class of interactive tasks---sitting onto a chair. We propose a hierarchical reinforcement learning framework which relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task. We experimentally demonstrate the strength of our approach over different single level and hierarchical baselines. We also show that our approach can be applied to motion prediction given an image input. A video highlight can be found at https://youtu.be/XWU3wzz1ip8/. The capability of synthesizing realistic human-scene interactions is an important basis for simulating human living space, where robots can be trained to collaborate with humans, e.g. avoiding collisions or expediting the completion of assistive tasks. Motion capture (mocap) data, by offering high quality recordings of articulated human pose, has provided a crucial resource for human motion synthesis. With large mocap datasets and deep learning algorithms, kinematics-based approaches have recently made rapid progress on motion synthesis and prediction (; ; ; ; Bütepage et al., 2017; ; ; ; a; b;). However, the lack of physical interpretability in their synthesized motion has been a major limitation of these approaches. The problem becomes especially clear when it comes to motions that involve substantial human-object or human-human interactions. Without modeling the physics, the synthesized interactions are often physically unrealistic, e.g. body parts penetrating obstacles or not reacting to collision. This generally limits the use of these approaches to either non-interactive motions, or a carefully set up virtual scene with high fidelity to the captured one. The graphics community has recently witnessed impressive progress on physics-based character animation (; b). These approaches, through imitating mocap examples via deep reinforcement learning, can synthesize realistic motions in physics simulated environments. Consequently, they can adapt to different physical contexts and thus attain a better generalization performance for interaction-based motions, e.g. walking on uneven terrain or stunt performance under obstacle disturbance. Nonetheless, these approaches still suffer from a drawback: a single model is trained for performing a single task with a distinct motion pattern (oftentimes from a single mocap clip). As a result, they might not generalize to higher-level interactive tasks that require flexible motion patterns. Take the example of a person sitting down on a chair. A person can start in any location and orientation relative to the chair (Fig. 1). A fixed motion pattern (e.g. turn left and sit) will be incapable of handling such variations. In this paper, we focus on one class of high-level interactive tasks: sitting onto a chair. As earlier mentioned, there are many possible human-chair configurations and different configurations may require different sequences of actions to accomplish the goal.
For example, if the human is facing the chair, it needs to walk, turn either left or right, and sit; if the human is behind the chair, it needs to walk, side-walk and sit. To this end, we propose a hierarchical reinforcement learning (RL) method to address the challenge of generalization. Our key idea is the use of hierarchical control: we assume the main task (e.g. sitting onto a chair) can be decomposed into several subtasks (e.g. walk, turn, sit, etc.), where the motion of each subtask can be reliably learned from mocap data, and we train a meta controller using RL which can execute the subtasks properly to "complete" the main task from a given configuration. Such strategy is in line with the observation that humans have a repertoire of motion skills, and different subset of skills is selected and executed for different high-level tasks. Our contributions are three folds: we extend the prior work on physics-based motion imitation to the context of higher-level interactive tasks using a hierarchical approach; we experimentally demonstrate the strength of our hierarchical approach over different single level and hierarchical baselines; we show at the end that our approach can be applied to motion synthesis in human living space with the help of 3D scene reconstruction. Kinematics-based Models Kinematic modeling of human motions has a substantial literature in both vision and graphics domains. Conventional methods such as motion graphs require a large corpus of mocap data and face challenges in generalizing to new behaviors in new context. Recent progress in deep learning enables researchers to explore more efficient algorithms to model human motions, again, from large-scale mocap data. The focus in the vision community is often motion prediction (; ; ; Bütepage et al., 2017; ; ; a; b; ;, where a sequence of mocap poses is given as historical observation and the goal is to predict future poses. Recent work has even started to predict motions directly from a static image (; ;). In the graphics community, the focus has been primarily on motion synthesis, which aims to synthesis realistic motions from mocap examples (; Agrawal & van de ; ; . Regardless of the focus, this class of approaches still faces the challenge of generalization due to the lack of physical plausibility in the synthesized motion, e.g. foot sliding and obstacle penetrations. Physics-based Models Physics simulated character animation has a long history in computer graphics (; ; ; ; a; ; b). Our work is most related to the recent work by Peng et al. (;, which trained a virtual character to imitate mocap data using deep reinforcement learning. They demonstrated robust and realistic looking motions on a wide array of skills including locomotion and acrobatic motions. Notably, they have used a hierarchical model for the task of navigating on irregular terrain . However, their meta task only requires a single subtask (i.e. walk), and the meta controller focuses solely on steering. We address a more complex task (i.e. sitting onto a chair) which requires the execution of diverse subtasks (e.g. walk, turn, and sit). Another recent work that is closely related to ours is that of , which addressed the task of dressing also with a hierarchical model. However, their subtasks are executed in a pre-defined order, and the completion of subtasks is determined by handcoded rules. In contrast, our meta controller is trained and is free to select any subtask at any time point. 
This is crucial when the main task cannot always be completed by a fixed order of subtasks. Note that humanoid control in physics simulated environments is also a widely-used benchmark task in the RL community, for example, to investigate how to ease the design of the reward function. However, work in this domain focuses less on realistic motions. Hierarchical Reinforcement Learning Our model is inspired by a series of recent work on hierarchical control in deep reinforcement learning (; ;). Although in different contexts, they share the same attribute that the tasks of concern have high-dimensional action space, but can be decomposed into simpler, reusable subtasks. Such decomposition may even help in generalizing to new high-level tasks due to the shared subtasks. Object Affordances Our work is connected to the learning of object affordances in the vision domain. Affordances express the functionality of objects and how humans can interact with them. Prior work attempted to detect affordances of a scene, represented as a set of plausible human poses, by training on large video corpora (; ; . Instead, we learn the motion in a physics simulated environment using limited mocap examples and reinforcement learning. Another relevant work also detected affordances using mocap data , but focused only on static pose rather than motion. Our main task is the following: given a chair and a skeletal pose of a human in the 3D space, generate a sequence of skeletal poses that describes the motion of the human sitting onto the chair from the given pose (Fig. 1). Our system builds upon a physics simulated environment which contains an articulated structured humanoid and a rigid body chair model. Each joint of the humanoid (except the root) can receive a control signal and produce dynamics from the physics simulation. The goal is to learn a policy that controls the humanoid to successfully sit on the chair. Fig. 2 (left) illustrates the hierarchical architecture of our policy. At the lower level is a set of subtask controllers, each responsible for generating the control input of a particular subtask. As illustrated in Fig. 2 (right), we consider four subtasks: walk, left turn, right turn, and sit. 1 To synthesize realistic motions, the subtask policies are trained on mocap data to imitate real human motions. At the higher level, a meta controller is responsible for controlling the execution of subtasks to ultimately accomplish the main task. The subtask controllers and meta controller generate control input at different timescales-60 Hz for the former and 2Hz for the latter. The physics simulation runs at 240 Hz. Each subtask as well as the meta controlling task is formulated as an independent reinforcement learning problem. We leverage recent progress in deep RL and approximate each policy using a neural network. A subtask controller is a policy network π(a t |s t) that maps a state vector s t to an action a t at each timestep t. The state representation s is extracted from the current configuration of the simulation environment, and may vary for different subtasks. For example, turn requires only proprioceptive information of the humanoid, while sit requires not only such information, but also the pose of the chair relative to the humanoid. The action a is the signal for controlling the humanoid joints for each subtask. We use a humanoid model with 21 degrees of freedom, i.e. a ∈ R 21. 
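A minimal sketch of one such subtask policy is given below, anticipating the architecture detailed next: a small multi-layer perceptron whose output parameterizes a Gaussian with a fixed diagonal covariance over the 21-d action. The tanh activations and the particular log-std value are assumptions rather than values taken from the paper.

    import torch
    import torch.nn as nn

    class SubtaskPolicy(nn.Module):
        """Gaussian policy pi(a|s): an MLP outputs the action mean and a fixed
        (state-independent) diagonal log-std provides the exploration noise."""
        def __init__(self, state_dim, action_dim=21, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, action_dim),
            )
            self.log_std = nn.Parameter(torch.full((action_dim,), -1.0),
                                        requires_grad=False)  # fixed covariance

        def forward(self, state):
            mean = self.net(state)
            return torch.distributions.Normal(mean, self.log_std.exp())

    # sampling an action for one timestep
    policy = SubtaskPolicy(state_dim=52)        # e.g. the 52-d walk state
    action = policy(torch.randn(52)).sample()   # a_t ~ pi(a_t | s_t)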
The network architecture is fixed across the subtasks: we use a multi-layer perceptron with two hidden layers of size 64. The output of the network parameterizes the probability distribution of a, modeled by a Gaussian distribution with a fixed diagonal covariance matrix, i.e. π(a|s) = N (µ(s), Σ) and Σ = diag({σ i}). We can generate a t at each timestep by sampling from π(a t |s t). Each subtask is formulated as an independent RL problem. At timestep t, the state s t given by the simulation environment is fed into the policy network to output an action a t. The action a t is then fed back to the simulation environment to generates the state s t+1 at the next timestep and a reward signal r t. The design of the reward function is crucial and plays a key role in shaping the style of the humanoid's motion. A heuristically crafted reward may yield a task achieving policy, but may in unnatural looking motions and behaviors. Inspired by Peng et al. (2018a), we set the reward function of each subtask by a sum of two terms that simultaneously encourages the imitation of the mocap reference and the achievement of the task objectives: r S and r G account for the similarly to the reference motion and the achievement of the subtask goals, respectively. We use a consistent similarity reward r S across all subtasks: where r p and r v encourage the similarity of local joint angles q j and velocitiesq j between the humanoid and the reference motion, and ω p and ω v are the respective weights. Specifically, where d(·, ·) computes the angular difference between two angles. We empirically set ω p = 0.5, ω v = 0.05, α p = 1, and α v = 10. Next, we detail the state representation s and task objective reward r G for each subtask. The state s walk ∈ R 52 consists of a 50-d proprioceptive feature and a 2-d goal feature that specifies an intermediate walking target. The proprioceptive feature includes the local joint angles and velocities, the height and linear velocity of the root (i.e. torso) as well as its pitch and roll angles, and a 2-d binary vector indicating the contact of each foot with the ground (Fig. 3). Rather than walking in random directions, target-directed locomotion (Agrawal & van de) is necessary for accomplishing high-level tasks. Assuming a target is given, represented by a 2D point on the ground plane, the 2-d goal feature is given by [sin(ψ), cos(ψ)], where ψ is the azimuth angle to the target in the humanoid centric coordinates. The generation of targets will be detailed in the meta controller section (Sec. 5). We observe that it is challenging to directly train a target-directed walking policy with mocap examples. Therefore we adopt a two-stage training strategy where each stage uses a distinct task objective reward. In the first stage, we encourage similar steering patterns to the reference motion, i.e. the linear velocity of the root v ∈ R 3 should be similar between the humanoid and reference motion: In the second stage, we reward motion towards the target: where denotes the horizontal distance between the root and the target, and δt is the length of the timestep. 2) Left/Right Turn The states s lturn, s rturn ∈ R 50 reuse the 50-d proprioceptive feature from the walk subtask. The task objective reward encourages the rotation of the root to be matched between the humanoid and reference motion: where θ ∈ R 3 consists of the root's pitch, yaw, and roll. 3) Sit The sit subtask assumes that the humanoid is initially standing roughly in front of the chair and facing away. 
The task is simply to lower the body and be seated. Different from walk and turn, the state for sit should capture the pose information of the chair. Our state s sit ∈ R 57 consists of the same 50-d proprioceptive feature used in walk and turn, and additionally a 7-d feature describing the state of the chair in the humanoid centric coordinates. The 7-d chair state includes the displacement vector from the pelvis to the center of the seat surface, and the rotation of the chair in the humanoid centric coordinates represented as a quaternion (Fig. 3). The task objective reward encourages the pelvis to move towards the center of the seat surface: where t is the 3D distance between the pelvis and the center of the seat surface. The meta controller is also a policy network and shares the same architecture as the subtask controllers. As the goal now is to navigate the humanoid to sit on the chair, the input state s meta should encode the pose information of the chair. We reuse the 57-d state representation from the sit subtask which contains both the proprioceptive and chair information. Rather than directly controlling the humanoid joints, the output action a meta now controls the execution of subtasks. Specifically, a meta = {a switch, a target} consists of two components. a switch ∈ {walk, left turn, right turn, sit} is a discrete output which at each timestep picks a single subtask out of the four to execute. a target ∈ R 2 specifies the 2D target for the walk subtask, which is used to compute the goal state in s walk. Note that a target is only used when the walk subtask is picked for execution. The output of the policy network parameterizes the probability distributions of both a switch and a target, where a switch is modeled by a categorical distribution as in standard classification problems, and a target is modeled by a Gaussian distribution following the subtask controllers. The meta task is also formulated as an independent RL problem. At timestep t, the policy network takes the state s. Rather than evaluating the similarity to a mocap reference, the reward now should be providing feedback on the main task. We adopt a reward function that encourages the pelvis to move towards and be in contact with the seat surface: z contact indicates whether the pelvis is in contact with the seat surface, which can be detected by the physics simulator. V sit is defined as in Eq. 7. Since the subtasks and meta task are formulated as independent RL problems, they can be trained independently using standard RL algorithms. We first train each subtask controllers separately, and then train the meta controller using the trained subtask controllers. All controllers are trained in a standard actor-critic framework using the proximal policy optimization (PPO) algorithm. The training of the subtasks is also divided into two stages. First, in each episode, we initialize the pose of the humanoid to the first frame of the reference motion, and train the humanoid to execute the subtask by imitating the following frames. We apply the early termination strategy (a): an episode is terminated immediately if the height of the root falls below 0.78 meters for walk and turn, and 0.54 meters for sit. These thresholds are chosen according to the height of the humanoid. For turn, the episode is also terminated when the root's yaw angle differs from the reference motion for more than 45 •. For walk, we adopt the two-stage training strategy described in Sec. 4. 
In target-directed walking, we randomly sample a new 2D target in the front of the humanoid every 2.5 seconds or when the target is reached. For sit, the chair is placed at a fixed location behind the humanoid, and we use reference state initialization (a) to facilitate training. The training above enables the humanoid to perform the subtasks from the initial pose of the reference motion. However, this does not guarantee successful transitions between subtasks (e.g. walk→turn), which is required for the main task. Therefore in the second stage, we fine-tune the controllers by setting the initial pose to a sampled ending pose of another subtask, similar to the policy sequencing method in. For turn and sit, the initial pose is sampled from the ending pose of walk and turn, respectively. 2) Meta Controller Recall that the task is to have the humanoid sit down regardless of where it starts in the environment. The task's difficulty highly depends on the initial state: if it is already facing the seat, it only needs to turn and sit, while if it is behind the chair, it needs to first walk to the front and then sit down. Training can be challenging when starting from a difficult state, since the humanoid needs to by chance execute a long sequence of correct actions to receive the reward for sitting down. To facilitate training, we propose a multi-stage training strategy inspired by curriculum learning . The idea is to begin the training from easier states, and progressively increase the difficulty when the training converges. As illustrated in Fig. 4, we begin by only spawning the humanoid on the front side of the chair (Zone 1). Once trained, we change the initial position to the lateral sides (Zone 2) and continue the training. Finally, we train the humanoid to start from the rear side (Zone 3). Reference Motion We collect mocap data from the CMU Graphics Lab Motion Capture Database (CMU). Tab. 1 shows the mocap clips we used for each subtask. We extract relevant motion segments and retarget the motion to our humanoid model. We use a 21-DoF humanoid model provided by the Bullet Physics SDK (Bullet). Motion retargeting is performed using a Jacobianbased inverse kinematics method . Implementation Details Our simulation environment is based on OpenAI Roboschool, which uses the Bullet physics engine (Bullet). We use a randomly selected chair model from ShapeNet . The PPO algorithm for training is based on the implementation from OpenAI Baselines. Tab. 2 shows the hyerparamters we used for the PPO training. Subtask First we show qualitative of the individual subtask controllers trained using their corresponding reference motions. Each row in Fig. 5 shows the humanoid performance of one particular subtask: walk in one direction (row 1), following a target (row 2), turn in place both left (row 3) and right (row 4), and sit on a chair (row 5). We adopt two different metrics to quantitatively evaluate the main task: success rate and minimum distance. We declare a success whenever the pelvis of the humanoid has been continuously in contact with the seat surface for 3.0 seconds. We report the success rate over 10,000 trials by spawning the humanoid at random locations. Note that the success rate evaluates task completion with a hard constraint and does not reveal the progress when the humanoid fails. Therefore we also compute the per-trial minimum distance (in meters) between the pelvis and the center of the seat surface, and report the mean and standard deviation over the 10,000 trials. As noted in Sec. 
6, the task can be challenging when the initial position of the humanoid is unconstrained. To better analyze the performance, we consider two different initialization settings: Easy and Hard. In the Easy setting, the humanoid is initialized from roughly 2 meters away on the front half plane of the chair (i.e. Zone 1 in Fig. 4), with an orientation roughly towards the chair. The task is expected to be completed by simply walking forward, turning around, and sitting down. In the Hard setting, humanoid is initialized again from roughly 2 meters away but on the lateral and rear sides of the chair (i.e. Zone 2 and 3 in Fig. 4). It needs to walk around the chair to sit down successfully. Easy Setting We benchmark our approach against various baselines in this setting. We start with two non-hierarchical (i.e. single-level) baselines. The first is a kinematics-based method: we select a mocap clip with a holistic motion sequence that successively performs walking, turning, and sitting on a chair. When a trial begins, we align the first frame of the sequence to the humanoid's initial pose by aligning the yaw of the root. Once aligned, we simply use the following frames of the sequence as the kinematic trajectory of the trial. Note that this method is purely kinematic and cannot reflect any physical interactions between the humanoid and chair. The second method extends the first one to a physics-based approach: we use the same kinematic sequence but now train a controller to imitate the motion. This is equivalent to training a subtask controller except the subtask is holistic (i.e. containing walk, turn, and sit in one reference motion). Both methods are considered nonhierarchical as neither performs task decomposition. Tab. 3 shows the quantitative . For the kinematics baseline, the success rate is not reported since we are unable to detect physical contact between the pelvis and chair. However, the 1.2656 mean minimum distance suggests that the humanoid on average remains far from the chair. For the physics baseline, we observe a similar mean minimum distance (i.e. 1.3316). The zero success rate is unsurprising given that the humanoid is unable to get close to the chair in most trials. As shown in the qualitative examples (Fig. 6), the motion generated by the kinematics baseline (row 1) is not physics realistic (e.g. sitting in air). The physics baseline (row 2), while following physics rules (e.g. falling on the ground eventually), still fails in approaching the chair. These holistic baselines perform poorly since they simply imitate the mocap example and repeat the same motion pattern regardless of their starting position. We now turn to a set of hierarchical baselines and our approach. We also consider two baselines. The first one always executes the subtasks in a pre-defined order, and the meta controller is only used to trigger transitions (i.e. a binary classification). Note that this is in similar spirit to. We consider two particular orders: walk→left turn→sit and walk→right turn→sit. The second one is a degenerated version of our approach that uses either only left turn or right turn: walk / left turn / sit and walk / right turn / sit. As shown in Tab. 3, hierarchical approaches outperform single level approaches, validating our hypothesis that hierarchical models, by breaking a task into reusable subtasks, can attain better generalization. Besides, our approach outperforms the pre-defined order baselines. 
This is because: the main task cannot always be completed by a fixed order of subtasks, and different scenarios, e.g. in Fig. 6, walk→right turn→sit when starting from the chair's right side (row 3), and walk→left turn→sit when starting from the chair's left side (row 4). Analysis As can be seen in Tab. 3, the success rate is still low even with the full model (i.e. 31.61%). This can be attributed to three factors: failures of subtask execution, failures due to subtask transitions, and an insufficient subtask repertoire. First, Tab. 4 (top) shows the success rate of individual subtasks, where the initial pose is set to the first frame of the reference motion (i.e. as in stage one of subtask training). We can see the execution does not always succeed (e.g. 67.59% for right turn). Second, Tab. 4 (bottom) shows the success rate for the same subtasks, but with the initial pose set to the last frame of the execution of another subtask (i.e. as in stage two of subtask training). With fine-tuning the success rate after transitions can be significantly improved, although still not perfect. Finally, Fig. 6 (row 5) shows a failure case where the humanoid needs a "back up" move when it is stuck in the state of directly confronting the chair. Building a more diverse subtask skill set is an interesting future research problem. To analyze the meta controller's behavior, we look at the statistics on the switching between subtasks. Fig. 7 shows the subtask transition matrices when the humanoid is started either from the right or left side of the chair. We can see that certain transitions are more favored in certain starting areas, e.g. walk→left turn is favored over walk→right turn when started from the left side. This is in line with the earlier observation that the two turning subtasks are complementary. Hard Setting We now increase the task's difficulty by initializing the humanoid in Zone 2 and 3 (Fig. 4), and show the effect of the proposed curriculum learning (CL) strategy. Tab. 6 shows the from different initialization zones. First, we observe a severe drop in the success rate when the humanoid is spawned in Zone 2 and 3 (e.g. from 31.61% to 4.05% for "Zone 3 w/o CL"). However, the success rate is higher in both zones when the proposed curriculum learning strategy is applied (e.g. from 4.05% to 7.05% in Zone 3). This suggests that a carefully tailored curriculum can improve the training outcome of a challenging task. Note that the difference in the minimum distance is less significant (e.g. 0.5549 for "Zone 2 w/o CL' versus 0.5526 for "Zone 2"), since without CL the humanoid can still approach the chair, but will fail to turn and sit due to the difficulty in learning. Fig. 8 shows two successful examples when the humanoid is spawned from the rear side of the chair. Interestingly, the humanoid learns a slightly different behavior (e.g. walk→sit without turn) compared to when starting from the front side (row 3 and 4 in Fig. 6). We show a vision-based application of our approach by synthesizing sitting motions from a single RGB image that depicts human living space with chairs. First, we recover the 3D scene configuration using the method of. We then align the observed scene with the simulated environment using the detected chair and its estimated 3D position and orientation. This enables us to transfer the synthesized sitting motion to the observed scene. Fig. 9 shows two images rendered with synthesized humanoid motion. 
While the motion looks physically plausible in these examples, this is not always the case in general, since we do not model the other objects (e.g. tables) in the scene. An interesting future direction is to learn the motion by simulating scenes with cluttered objects. It is also possible to synthesize motions based on the humans observed in the image, given the recent advance on extracting 3D human pose from a single image (b). | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HylvlaVtwr | Synthesizing human motions on interactive tasks using mocap data and hierarchical RL. |
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions. Generative modeling involves taking a set of samples drawn from an unknown data generating distribution P_real and finding an estimate P_model that closely resembles it. Generative adversarial networks (GANs) BID6 are a powerful framework used for fitting implicit generative models. The basic setup consists of two networks, the generator and the discriminator, playing against each other in a repeated zero-sum game setting. The goal here is to reach an equilibrium where P_real and P_model are close, and the alternating gradient updates procedure (AGD) is used to achieve this. However, this process is highly unstable and often results in mode collapse BID7. This calls for a deeper investigation into the training dynamics of GANs. In this paper, we propose studying GAN training dynamics as a repeated game in which both the players are using no-regret algorithms BID2 and discuss how AGD 1 falls under this paradigm. In contrast, much of the theory BID6 BID0 and recent developments BID15 BID8 are based on the unrealistic assumption that the discriminator is playing optimally (in the function space) at each step and, as a result, there is consistent minimization of a divergence between real and generated distributions. This corresponds to at least one player using the best-response algorithm (in the function space), and the resulting game dynamics can be completely different in both these cases BID14. Thus, there is a clear disconnect between theoretical arguments used as motivation in recent literature and what actually happens in practice. We would like to point out that the latter view can still be useful for reasoning about the asymptotic equilibrium situation, but we argue that regret minimization is the more appropriate way to think about GAN training dynamics. So, we analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We start with a short analysis of the artificial convex-concave case of the GAN game in section 2.2. This setting has a unique solution, and guaranteed convergence (of averaged iterates) using no-regret algorithms can be shown with standard arguments from the game theory literature. Here, we make explicit the critical (previously not widely known) connection between AGD used in GAN training and regret minimization. This immediately yields a novel proof for the asymptotic convergence of GAN training, in the non-parametric limit. Prior to our work, such a result BID6 required a strong assumption that the discriminator is optimal at each step.
However, these convergence results do not hold when the game objective function is non-convex, which is the practical case when deep neural networks are used. In non-convex games, global regret minimization and equilibrium computation are computationally hard in general. Recent game-theoretic literature indicates that AGD can end up cycling BID11 or converging to a (potentially bad) local equilibrium, under some conditions BID9. We hypothesize these to be the reasons for cycling and mode collapse observed during GAN training, respectively (section 2.3). In this work, we do not explore the cycling issue but focus our attention on the mode collapse problem. In contrast to our hypothesis, the prevalent view of mode collapse and instability BID0 is that it results from attempting to minimize a strong divergence during training. However, as we argued earlier, GAN training with AGD does not consistently minimize a divergence and therefore, such a theory is not suitable to discuss convergence or to address the stability issue. Next, if mode collapse is indeed the result of an undesirable local equilibrium, a natural question then is how we can avoid it. We make a simple observation that, in the GAN game, mode collapse situations are often accompanied by sharp gradients of the discriminator function around some real data points (section 2.4). Therefore, a simple strategy to mitigate mode collapse is to regularize the discriminator so as to constrain its gradients in the ambient data space. We demonstrate that this improves the stability using a toy experiment with one-hidden-layer neural networks. This gives rise to a new explanation for why WGAN and gradient penalties might be improving the stability of GAN training: they are mitigating the mode collapse problem by keeping the gradients of the discriminator function small in data space. From this motivation, we propose a training algorithm involving a novel gradient penalty scheme called DRAGAN (Deep Regret Analytic Generative Adversarial Networks), which enables faster training, achieves improved stability and modeling performance (over WGAN-GP BID8, which is the state-of-the-art stable training procedure) across a variety of architectures and objective functions. Below, we provide a short literature review. Several recent works focus on stabilizing the training of GANs. While some solutions BID17 BID18 require the usage of specific architectures or modeling objectives, some BID4 BID20 significantly deviate from the original GAN framework. Other promising works in this direction BID12 BID16 BID8 impose a significant computational overhead. Thus, a fast and versatile method for consistent stable training of GANs is still missing in the literature. Our work is aimed at addressing this. To summarize, our contributions are as follows:
• We propose a new way of reasoning about GAN training dynamics, by viewing AGD as regret minimization.
• We provide a novel proof for the asymptotic convergence of GAN training in the non-parametric limit, which does not require the discriminator to be optimal at each step.
• We discuss how AGD can converge to a potentially bad local equilibrium in non-convex games and hypothesize this to be responsible for mode collapse during GAN training.
• We characterize mode collapse situations with sharp gradients of the discriminator function around some real data points.
• A novel gradient penalty scheme called DRAGAN is introduced based on this observation, and we demonstrate that it mitigates the mode collapse issue.
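The penalty itself is specified later in the paper; as a hedged illustration of the kind of discriminator regularization the last bullet refers to (constraining discriminator gradients around perturbed real data points), the sketch below shows one common way such a penalty is implemented in PyTorch. The perturbation scale, target gradient norm, and penalty weight are assumptions, not values taken from this excerpt.

    import torch

    def discriminator_gradient_penalty(discriminator, real_x, noise_std=0.5,
                                       target_norm=1.0, weight=10.0):
        """Penalize sharp discriminator gradients near real data: perturb real
        samples, then push the gradient norm of D at the perturbed points
        towards a target value."""
        perturbed = real_x + noise_std * torch.randn_like(real_x)
        perturbed.requires_grad_(True)
        d_out = discriminator(perturbed)
        grads = torch.autograd.grad(outputs=d_out.sum(), inputs=perturbed,
                                    create_graph=True)[0]
        grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
        return weight * ((grad_norm - target_norm) ** 2).mean()

In use, the returned term would simply be added to the discriminator loss at each update step.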
We start with a brief description of the GAN framework (section 2.1). We discuss guaranteed convergence in the artificial convex-concave case using no-regret algorithms, and make a critical connection between GAN training process (AGD) and regret minimization (section 2.2). This immediately yields a novel proof for the asymptotic convergence of GAN training in the nonparametric limit. Then, we consider the practical non-convex case and discuss how AGD can converge to a potentially bad local equilibrium here (section 2.3). We characterize mode collapse situations with sharp gradients of the discriminator function around real samples and this provides an effective strategy to avoid them. This naturally leads to the introduction of our gradient penalty scheme DRAGAN (section 2.4). We end with a discussion and comparison with other gradient penalties in the literature (section 2.5). The GAN framework can be viewed as a repeated zero-sum game, consisting of two players -the generator, which produces synthetic data given some noise source and the discriminator, which is trained to distinguish generator's samples from the real data. The generator model G is parameterized by φ, takes a noise vector z as input, and produces a synthetic sample G φ (z). The discriminator model D is parameterized by θ, takes a sample x as input and computes D θ (x), which can be interpreted as the probability that x is real. The models G, D can be selected from any arbitrary class of functions -in practice, GANs typical rely on deep networks for both. Their cost functions are defined as DISPLAYFORM0, and DISPLAYFORM1 And the complete game can be specified as - DISPLAYFORM2 The generator distribution P model asymptotically converges to the real distribution P real if updates are made in the function space and the discriminator is optimal at each step BID6. According to Sion's theorem BID19, if Φ ⊂ R m, Θ ⊂ R n such that they are compact and convex sets, and the function J: Φ × Θ → R is convex in its first argument and concave in its second, then we havemin DISPLAYFORM0 That is, an equilibrium is guaranteed to exist in this setting where players' payoffs correspond to the unique value of the game BID13.A natural question then is how we can find such an equilibrium. A simple procedure that players can use is best-response algorithms (BRD). In each round, best-responding players play their optimal strategy given their opponent's current strategy. Despite its simplicity, BRD are often computationally intractable and they don't lead to convergence even in simple games. In contrast, a technique that is both efficient and provably works is regret minimization. If both players update their parameters using no-regret algorithms, then it is easy to show that their averaged iterates will converge to an equilibrium pair BID14. Let us first define no-regret algorithms. DISPLAYFORM1, where we define DISPLAYFORM2 We can apply no-regret learning to our problem of equilibrium finding in the GAN game J(·, ·) as follows. The generator imagines the function J(·, θ t) as its loss function on round t, and similarly the discriminator imagines −J(φ t, ·) as its loss function at t. After T rounds of play, each player computes the average iteratesφ DISPLAYFORM3 If V * is the equilibrium value of the game, and the players suffer regret R 1 (T) and R 2 (T) respectively, then one can show using standard arguments BID5 ) that - DISPLAYFORM4 T. 
In other words,θ T andφ T are "almost optimal" solutions to the game, where the "almost" approximation factor is given by the average regret terms DISPLAYFORM5. Under the no-regret condition, the former will vanish, and hence we can guarantee convergence in the limit. Next, we define a popular family of no-regret algorithms. Definition 2.2 (Follow The Regularized Leader). FTRL BID10 selects k t on round t by solving for arg min DISPLAYFORM6 where Ω(·) is some convex regularization function and η is a learning rate. Remark: Roughly speaking, if you select the regularization as Ω(·) = 1 2 · 2, then FTRL becomes the well-known online gradient descent or OGD BID21. Ignoring the case of constraint violations, OGD can be written in a simple iterative form: DISPLAYFORM7 The typical GAN training procedure using alternating gradient updates (or simultaneous gradient updates) is almost this -both the players applying online gradient descent. Notice that the min/max objective function in GANs involves a stochastic component, with two randomized inputs given on each round, x and z which are sampled from the data distribution and a standard multivariate normal, respectively. Let us write DISPLAYFORM8 Taking expectations with respect to x and z, we define the full (non-stochastic) game as DISPLAYFORM9 But the above online training procedure is still valid with stochastic inputs. That is, the equilibrium computation would proceed similarly, where on each round we sample x t and z t, and follow the updates DISPLAYFORM10 On a side note, a benefit of this stochastic perspective is that we can get a generalization bound on the mean parametersφ T after T rounds of optimization. The celebrated "online-to-batch conversion" BID3 implies that E x,z [J x,z (φ T, θ)], for any θ, is no more than the optimal value DISPLAYFORM11, where the expectation is taken with respect to the sequence of samples observed along the way, and any randomness in the algorithm. Analogously, this applies toθ T as well. A limitation of this , however, is that it requires a fresh sample x t to be used on every round. To summarize, we discussed in this subsection about how the artificial convex-concave case is easy to solve through regret minimization. While this is a standard in game theory and online learning literature, it is not widely known in the GAN literature. For instance, BID18 and BID7 discuss a toy game which is convex-concave and show cycling behavior. But, the simple solution in that case is to just average the iterates. Further, we made explicit, the critical connection between regret minimization and alternating gradient updates procedure used for GAN training. Now, BID6 argue that, if G and D have enough capacity (in the non-parametric limit) and updates are made in the function space, then the GAN game can be considered convex-concave. Thus, our analysis based on regret minimization immediately yields a novel proof for the asymptotic convergence of GANs, without requiring that the discriminator be optimal at each step. Moreover, the connection between regret minimization and GAN training process gives a novel way to reason about its dynamics. In contrast, the popular view of GAN training as consistently minimizing a divergence arises if the discriminator uses BRD (in the function space) and thus, it has little to do with the actual training process of GANs. 
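To make the averaging argument above concrete, here is a minimal sketch (not from the paper) of alternating gradient updates on the bilinear convex-concave toy game J(φ, θ) = φθ; the learning rate and iteration count are assumed values. The raw iterates cycle around the equilibrium at the origin, while the averaged iterates recover it, which is the regret-minimization point made in this subsection.

```python
import numpy as np

# Alternating gradient updates on the bilinear convex-concave game
# J(phi, theta) = phi * theta.  The raw iterates cycle around the
# equilibrium (0, 0); averaging the iterates recovers it.
eta = 0.05                       # learning rate (assumed value)
phi, theta = 1.0, 1.0            # generator / discriminator parameters
history = []

for t in range(20000):
    phi = phi - eta * theta      # generator descends on J
    theta = theta + eta * phi    # discriminator ascends on J (alternating update)
    history.append((phi, theta))

avg_phi, avg_theta = np.mean(history, axis=0)
print(f"last iterate:     ({phi:+.3f}, {theta:+.3f})")        # still cycling
print(f"averaged iterate: ({avg_phi:+.3f}, {avg_theta:+.3f})")  # close to (0, 0)
```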
As a result, this calls into question the motivation behind many recent developments like WGAN and gradient penalties among others, which improve the training stability of GANs. In the next subsection, we discuss the practical non-convex case and why training instability arises. This provides the necessary ideas to investigate mode collapse from our new perspective. In practice, we choose G, D to be deep neural networks and the function J(φ, θ) need not be convex-concave anymore. The nice properties we had in the convex-concave case like the existence of a unique solution and guaranteed convergence through regret minimization no longer hold. In fact, regret minimization and equilibrium computation are computationally hard in general non-convex settings. However, analogous to the case of non-convex optimization (also intractable) where we focus on finding local minima, we can look for tractable solution concepts in non-convex games. Recent work by BID9 introduces the notion of local regret and shows that if both the players use a smoothed variant of OGD to minimize this quantity, then the non-convex game converges to some form of local equilibrium, under mild assumptions. The usual training procedure of GANs (AGD) corresponds to using a window size of 1 in their formulation. Thus, GAN training will eventually converge (approximately) to a local equilibrium which is described below or the updates will cycle. We leave it to future works to explore the equally important cycling issue and focus here on the former case. DISPLAYFORM0 That is, in a local equilibrium, both the players do not have much of an incentive to switch to any other strategy within a small neighborhood of their current strategies. Now, we turn our attention to the mode collapse issue which poses a significant challenge to the GAN training process. The training is said to have resulted in mode collapse if the generator ends up mapping multiple z vectors to the same output x, which is assigned a high probability of being real by the discriminator BID7. We hypothesize this to be the result of the game converging to bad local equilibria. The prevalent view of mode collapse and instability in GAN training BID0 is that it is caused by the supports of real and model distributions being disjoint or lying on low-dimensional manifolds. The argument is that this would result in strong distance measures like KL-divergence or JS-divergence getting maxed out, and the generator cannot get useful gradients to learn. In fact, this is the motivation for the introduction of WGAN. But, as we argued earlier, GAN training does not consistently minimize a divergence as that would require using intractable best-response algorithms. Hence, such a theory is not suitable to discuss convergence or to address the instability of GAN training. Our new view of GAN training process as regret minimization is closer to what is used in practice and provides an alternate explanation for mode collapse - the existence of undesirable local equilibria. The natural question now is: how can we avoid them? The problem of dealing with multiple equilibria in games and how to avoid undesirable ones is an important question in algorithmic game theory BID14. In this work, we constrain ourselves to the GAN game and aim to characterize the undesirable local equilibria (mode collapse) in an effort to avoid them.
In this direction, after empirically studying multiple mode collapse cases, we found that it is often accompanied by the discriminator function having sharp gradients around some real data points (see FIG0). This intuitively makes sense from the definition of mode collapse discussed earlier. Such sharp gradients encourage the generator to map multiple z vectors to a single output x and lead the game towards a degenerate equilibrium. Now, a simple strategy to mitigate this failure case would be to regularize the discriminator using the following penalty - DISPLAYFORM0 This strategy indeed improves the stability of GAN training. We show the results of a toy experiment with one hidden layer neural networks in FIG0 and FIG1 to demonstrate this. This partly explains the success of WGAN and gradient penalties in the recent literature BID8 BID16, and why they improve the training stability of GANs, despite being motivated by reasoning based on unrealistic assumptions. However, we noticed that this scheme in its current form can be brittle and, if over-penalized, the discriminator can end up assigning both a real point x and a noisy point x + δ the same probability of being real. Thus, a better choice of penalty is - DISPLAYFORM1 Finally, due to practical optimization considerations (this has also been observed in BID8), we instead use the penalty shown below in all our experiments. DISPLAYFORM2

(Footnote 2: At times, stochasticity seems to help in getting out of the basin of attraction of a bad equilibrium.)

Figure 1: One hidden layer networks as G and D (MNIST). On the left, we plot inception score against time for vanilla GAN training and on the right, we plot the squared norm of discriminator's gradients around real data points for the same experiment. Notice how this quantity changes before, during and after mode collapse events.

This still works as long as small perturbations of real data, x + δ, are likely to lie off the data-manifold, which is true in the case of the image domain and some other settings. Because, in these cases, we do want our discriminator to assign different probabilities of being real to training data and noisy samples. We caution practitioners to keep this important point in mind while making their choice of penalty. All of the above schemes have the same effect of constraining the norm of the discriminator's gradients around real points to be small and can therefore mitigate the mode collapse situation. We refer to GAN training using these penalty schemes or heuristics as the DRAGAN algorithm. Additional details:

• We use the vanilla GAN objective in our experiments, but our penalty improves stability using other objective functions as well. This is demonstrated in section 3.3.
• The penalty scheme used in our experiments is the one shown in equation 1.
• We use small pixel-level noise but it is possible to find better ways of imposing this penalty. However, this exploration is beyond the scope of our paper.
• The optimal configuration of the hyperparameters for DRAGAN depends on the architecture, dataset and data domain. We set them to be λ ∼ 10, k = 1 and c ∼ 10 in most of our experiments.

Several recent works have also proposed regularization schemes which constrain the discriminator's gradients in the ambient data space, so as to improve the stability of GAN training. Despite arising from different motivations, WGAN-GP and LS-GAN are closely related approaches to ours. First, we show that these two approaches are very similar, which is not widely known in the literature.
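Before turning to that comparison, the sketch below illustrates a local gradient penalty in the spirit of the scheme just described. It is an illustrative reconstruction rather than the authors' reference code: the noise scheme and the names lambda_, k and c are assumptions, chosen to match the hyperparameters quoted above.

```python
import torch

def dragan_penalty(discriminator, real_batch, lambda_=10.0, k=1.0, c=10.0):
    """Sketch of a local gradient penalty around perturbed real points.

    Perturbs real samples with small noise and pushes the norm of the
    discriminator's gradient at those points toward k.  This is an
    illustrative reconstruction, not the reference implementation.
    """
    # Perturb real data; c scales the noise relative to the data's spread.
    noise = c * real_batch.std() * torch.rand_like(real_batch)
    x_hat = (real_batch + noise).detach().requires_grad_(True)

    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_ * ((grad_norm - k) ** 2).mean()

# Usage sketch: total_d_loss = gan_loss + dragan_penalty(D, real_images)
```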
BID16 introduced LS-GAN with the idea of maintaining a margin between losses assigned to real and fake samples. Further, they also impose Lipschitz constraint on D and the two conditions together in a situation where the following holds for any real and fake sample pair (roughly) - DISPLAYFORM0 The authors argue that the ing discriminator function would have non-vanishing gradients almost everywhere between real and fake samples (section 6 of Qi FORMULA18). Next, BID8 proposed an extension to address various shortcomings of the original WGAN and they impose the following condition on D - DISPLAYFORM1 is some point on the line between a real and a fake sample, both chosen independently at random. This leads to D having norm-1 gradients almost everywhere between real and fake samples. Notice that this behavior is very similar to that of LS-GAN's discriminator function. Thus, WGAN-GP is a slight variation of the original LS-GAN algorithm and we refer to these methods as "coupled penalties".On a side note, we also want to point out that WGAN-GP's penalty doesn't actually follow from KR-duality as claimed in their paper. By Lemma 1 of BID8, the optimal discriminator D * will have norm-1 gradients (almost everywhere) only between those x and G φ (z) pairs which are sampled from the optimal coupling or joint distribution π *. Therefore, there is no basis for WGAN-GP's penalty (equation 3) where arbitrary pairs of real and fake samples are used. This fact adds more credence to our theory regarding why gradient penalties might be mitigating mode collapse. The most important distinction between coupled penalties and our methods is that we only impose gradient constraints in local regions around real samples. We refer to these penalty schemes as "local penalties". Coupled penalties impose gradient constraints between real and generated samples and we point out some potential issues that arise from this:• With adversarial training finding applications beyond fitting implicit generative models, penalties which depend on generated samples can be prohibitive.• The ing class of functions when coupled penalties are used will be highly restricted compared to our method and this affects modeling performance. We refer the reader to FIG2 and appendix section 5.2.2 to see this effect.• Our algorithm works with AGD, while WGAN-GP needs multiple inner iterations to optimize D. This is because the generated samples can be anywhere in the data space and they change from one iteration to the next. In contrast, we consistently regularize D θ (x) only along the real data manifold. To conclude, appropriate constraining of the discriminator's gradients can mitigate mode collapse but we should be careful so that it doesn't have any negative effects. We pointed out some issues with coupled penalties and how local penalties can help. We refer the reader to section 3 for further experimental . In section 3.1, we compare the modeling performance of our algorithm against vanilla GAN and WGAN variants in the standard DCGAN/CIFAR-10 setup. Section 3.2 demonstrates DRAGAN's improved stability across a variety of architectures. In section 3.3, we show that our method also works with other objective functions. Appendix contains samples for inspection, some of the missing plots and additional . Throughout, we use inception score BID18 which is a well-studied and reliable metric in the literature, and sample quality to measure the performance. 
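To make the contrast between coupled and local penalties in section 2.5 concrete before moving to the experiments, here is a minimal sketch of where each scheme places its penalty points; the helper names and the noise scale are assumptions for illustration only.

```python
import torch

def coupled_penalty_points(real, fake):
    # WGAN-GP-style "coupled" scheme: penalty points are random
    # interpolations between real and generated samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)))
    return eps * real + (1.0 - eps) * fake

def local_penalty_points(real, c=10.0):
    # DRAGAN-style "local" scheme: penalty points lie in a small
    # neighborhood of the real data, independent of the generator.
    return real + c * real.std() * torch.rand_like(real)
```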
DCGAN is a family of architectures designed to perform well with the vanilla training procedure. They are ubiquitous in the GAN literature owing to the instability of vanilla GAN in general settings. We use this architecture to model CIFAR-10 and compare against vanilla GAN, WGAN and WGAN-GP. As WGANs need 5 discriminator iterations for every generator iteration, comparing the modeling performance can be tricky. To address this, we report two scores for vanilla GAN and DRAGANone using the same number of generator iterations as WGANs and one using the same number of discriminator iterations. The are shown in FIG3 and samples are included in the appendix (Figure 8). Notice that DRAGAN beats WGAN variants in both the configurations, while vanilla GAN is only slightly better. A key point to note here is that our algorithm is fast compared to WGANs, so in practice, the performance will be closer to the DRAGAN d case. In the next section, we will show that if we move away from this specific architecture family, vanilla GAN training can become highly unstable and that DRAGAN penalty mitigates this issue. Ideally, we would want our training procedure to perform well in a stable fashion across a variety of architectures (other than DCGANs). Similar to and BID8, we remove the stabilizing components of DCGAN architecture and demonstrate improved stability & modeling performance compared to vanilla GAN training (see appendix section 5.2.3). However, this is a small set of architectures and it is not clear if there is an improvement in general. To address this, we introduce a metric termed the BogoNet score to compare the stability & performance of different GAN training procedures. The basic idea is to choose random architectures for players G and D independently, and evaluate the performance of different algorithms in the ing games. A good algorithm should achieve stable performance without failing to learn or ing in mode collapse, despite the potentially imbalanced architectures. In our experiment, each player is assigned a network from a diverse pool of architectures belonging to three different families (MLP, ResNet, DCGAN). To demonstrate that our algorithm performs better compared to vanilla GAN training and WGAN-GP, we created 100 such instances of hard games. Each instance is trained using these algorithms on CIFAR-10 (under similar conditions for a fixed number of generator iterations, which gives a slight advantage to WGAN-GP) and we plot how inception score changes over time. For each algorithm, we calculated the average of final inception scores and area under the curve (AUC) over all 100 instances. The are shown in TAB0. Notice that we beat the other algorithms in both metrics, which indicates some improvement in stability and modeling performance. Further, we perform some qualitative analysis to verify that BogoNet score indeed captures the improvements in stability. We create another set of 50 hard architectures and compare DRAGAN against vanilla GAN training. Each instance is allotted 5 points and we split this bounty between the two algorithms depending on their performance. If both perform well or perform poorly, they get 2.5 points each, so that we nullify the effect of such non-differentiating architectures. However, if one algorithm achieves stable performance compared to the other (in terms of failure to learn or mode collapses), we assign it higher portions of the bounty. 
Results were judged by two of the authors in a blind manner: The curves were shown side-by-side with the choice of algorithm for each side being randomized and unlabeled. The vanilla GAN received an average score of 92.5 while our algorithm achieved an average score of 157.5 and this correlates with BogoNet score from earlier. See appendix section 5.3 for some additional details regarding this experiment. Our algorithm improves stability across a variety of objective functions and we demonstrate this using the following experiment. BID15 show that we can interpret GAN training as minimizing various f -divergences when an appropriate game objective function is used. We show experiments using the objective functions developed for Forward KL, Reverse KL, Pearson χ 2, Squared Hellinger, and Total Variation divergence minimization. We use a hard architecture from the previous subsection to demonstrate the improvements in stability. Our algorithm is stable in all cases except for the total variation case, while the vanilla algorithm failed in all the cases (see FIG4 for two examples and FIG3 in appendix for all five). Thus, practitioners can now choose their game objective from a larger set of functions and use DRAGAN (unlike WGANs which requires a specific objective function). In this paper, we propose to study GAN training process as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view and hypothesize that mode collapse occurs due to the existence of undesirable local equilibria. A simple observation is made about how the mode collapse situation often exhibits sharp gradients of the discriminator function around some real data points. This characterization partly explains the workings of previously proposed WGAN and gradient penalties, and motivates our novel penalty scheme. We show evidence of improved stability using DRAGAN and the ing improvements in modeling performance across a variety of settings. We leave it to future works to explore our ideas in more depth and come up with improved training algorithms. In this section, we provide samples from an additional experiment run on CelebA dataset (Figure 7). The samples from the experiment in section 3.1 are shown in Figure 8. Further, BID17 suggest that walking on the manifold learned by the generator can expose signs of memorization. We use DCGAN architecture to model MNIST and CelebA datasets using DRAGAN penalty, and the latent space walks of the learned models are shown in Figure 9 and Figure 10. The demonstrate that the generator is indeed learning smooth transitions between different images, when our algorithm is used. We design a simple experiment where G and D are both fully connected networks with just one hidden layer. Vanilla GAN performs poorly even in this simple case and we observe severe mode collapses. In contrast, our algorithm is stable throughout and obtains decent quality samples despite the constrained setup. We analyze the performance of WGAN-GP and DRAGAN on the 8-Gaussians dataset. As it can be seen in FIG1, both of them approximately converge to the real distribution but notice that in the case of WGAN-GP, D θ (x) seems overly constrained in the data space. In contrast, DRAGAN's discriminator is more flexible. Orange is real samples, green is generated samples. The level sets of D θ (x) are shown in the , with yellow as high and purple as low. 
DCGAN architecture has been designed following specific guidelines to make it stable BID17. We restate the suggested rules here.1. Use all-convolutional networks which learn their own spatial downsampling (discriminator) or upsampling (generator) 2. Remove fully connected hidden layers for deeper architectures 3. Use batch normalization in both the generator and the discriminator 4. Use ReLU activation in the generator for all layers except the output layer, which uses tanh We show below that such constraints can be relaxed when using our algorithm and still maintain training stability. Below, we present a series of experiments in which we remove different stabilizing components from the DCGAN architecture and analyze the performance of our algorithm. Specifically, we choose the following four architectures which are difficult to train (in each case, we start with base DCGAN architecture and apply the changes) -• No BN and a constant number of filters in the generator• 4-layer 512-dim ReLU MLP generator• tanh nonlinearities everywhere• tanh nonlinearity in the generator and 4-layer 512-dim LeakyReLU MLP discriminator Notice that, in each case, our algorithm is stable while the vanilla GAN training fails. A similar approach is used to demonstrate the stability of training procedures in and BID8. Due to space limitations, we only showed plots for two cases in section 3.3. Below we show the for all five cases. We used three families of architectures with probabilities -DCGAN (0.6), ResNet (0.2), MLP (0.2). Next, we further parameterized each family to create additional variation. For instance, the DCGAN family can in networks with or without batch normalization, have LeakyReLU or Tanh nonlinearities. The number and width of filters, latent space dimensionality are some other possible variations in our experiment. Similarly, the number of layers and hidden units in each layer for MLP are chosen randomly. For ResNets, we chose their depth randomly. This creates a set of hard games which test the stability of a given training algorithm. We showed qualitative analysis of the inception score plots in section 3.2 to verify that BogoNet score indeed captures the improvements in stability. Below, we show some examples of how the bounty splits were done. The plots in FIG2 were scored as (averages are shown in DRAGAN, Vanilla GAN order):A -, B -(3.5, 1.5), C -(2.25, 2.75), D - | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | ryepFJbA- | Analysis of convergence and mode collapse by studying GAN training process as regret minimization |
Deep neural networks (DNNs) have attained surprising achievement during the last decade due to the advantages of automatic feature learning and freedom of expressiveness. However, their interpretability remains mysterious because DNNs are complex combinations of linear and nonlinear transformations. Even though many models have been proposed to explore the interpretability of DNNs, several challenges remain unsolved: 1) The lack of interpretability quantity measures for DNNs, 2) the lack of theory for stability of DNNs, and 3) the difficulty to solve nonconvex DNN problems with interpretability constraints. To address these challenges simultaneously, this paper presents a novel intrinsic interpretability evaluation framework for DNNs. Specifically, Four independent properties of interpretability are defined based on existing works. Moreover, we investigate the theory for the stability of DNNs, which is an important aspect of interpretability, and prove that DNNs are generally stable given different activation functions. Finally, an extended version of deep learning Alternating Direction Method of Multipliers (dlADMM) are proposed to solve DNN problems with interpretability constraints efficiently and accurately. Extensive experiments on several benchmark datasets validate several DNNs by our proposed interpretability framework. | [
0,
0,
0,
1,
0,
0,
0,
0
] | BylssnVFwH | We propose a novel framework to evaluate the interpretability of neural network. |
Reinforcement learning (RL) typically defines a discount factor as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation. However, evidence from psychology, economics and neuroscience suggests that humans and animals instead have hyperbolic time-preferences. Here we extend earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independent of hyperbolic discounting, we make a surprising discovery that simultaneously learning value functions over multiple time-horizons is an effective auxiliary task which often improves over state-of-the-art methods. The standard treatment of the reinforcement learning (RL) problem is the Markov Decision Process (MDP) which includes a discount factor 0 ≤ γ ≤ 1 that exponentially reduces the present value of future rewards . A reward r t received in t-time steps is devalued to γ t r t, a discounted utility model introduced by. This establishes a timepreference for rewards realized sooner rather than later. The decision to exponentially discount future rewards by γ leads to value functions that satisfy theoretical convergence properties . The magnitude of γ also plays a role in stabilizing learning dynamics of RL algorithms and has recently been treated as a hyperparameter of the optimization . However, both the magnitude and the functional form of this discounting function establish priors over the solutions learned. The magnitude of γ chosen establishes an effective horizon for the agent of 1/(1 − γ), far beyond which rewards are neglected . This effectively imposes a time-scale of the environment, which may not be accurate. Further, the exponential discounting of future rewards is consistent with a prior belief that there is a known constant per-time-step hazard rate or probability of dying of 1 − γ . Additionally, discounting future values exponentially and according to a single discount factor γ does not harmonize with the measured value preferences in humans 1 and animals (; ; ;). A wealth of empirical evidence has been amassed that humans, monkeys, rats and pigeons instead discount future returns hyperbolically, where d k (t) = 1 1+kt, for some positive k > 0 (; 1992; ; ; ;). This discrepancy between the time-preferences of animals from the exponential discounted measure of value might be presumed irrational. showed that hyperbolic time-preferences is mathematically consistent with the agent maintaining some uncertainty over the prior belief of the hazard rate in the environment. Hazard rate h(t) measures the per-time-step risk the agent incurs as it acts in the environment due to a potential early death. Precisely, if s(t) is the probability that the agent is alive at time t then the hazard rate is h(t) = − d dt lns(t). We consider the case where there is a fixed, but potentially unknown hazard rate h(t) = λ ≥ 0. The prior belief of the hazard rate p(λ) implies a specific discount function. Under this formalism, the canonical case in RL of discounting future rewards according to d(t) = γ t is consistent with the belief that there exists a single hazard rate λ = e −γ known with certainty. Further details are available in Appendix A. 
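A quick numerical comparison of the two discount schemes makes the difference in Figure 1 concrete; the values of k and γ below are assumed for illustration only.

```python
import numpy as np

k, gamma = 0.01, 0.99            # assumed values for illustration
t = np.array([1, 10, 100, 1000])
hyperbolic = 1.0 / (1.0 + k * t)
exponential = gamma ** t
for ti, h, e in zip(t, hyperbolic, exponential):
    print(f"t={ti:5d}  hyperbolic={h:.4f}  exponential={e:.6f}")
# For large t the exponential discount vanishes while the hyperbolic
# discount declines much more slowly, as in Figure 1.
```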
Figure 1: Hyperbolic versus exponential discounting. Humans and animals often exhibit hyperbolic discounts (blue curve) which have shallower discount declines for large horizons. In contrast, RL agents often optimize exponential discounts (orange curve) which drop at a constant rate regardless of how distant the return. Common RL environments are also characterized by risk, but often in a narrower sense. In deterministic environments like the original Arcade Learning Environment (ALE) stochasticity is often introduced through techniques like no-ops and sticky actions where the action execution is noisy. Physics simulators may have noise and the randomness of the policy itself induces risk. But even with these stochastic injections the risk to reward emerges in a more restricted sense. In Section 2 we show that a prior distribution reflecting the uncertainty over the hazard rate, has an associated discount function in the sense that an MDP with either this hazard distribution or the discount function, has the same value function for all policies. This equivalence implies that learning policies with a discount function can be interpreted as making them robust to the associated hazard distribution. Thus, discounting serves as a tool to ensure that policies deployed in the real world perform well even under risks they were not trained under. We propose an algorithm that approximates hyperbolic discounting while building on successful Qlearning tools and their associated theoretical guarantees. We show learning many Q-values, each discounting exponentially with a different discount factor γ, can be aggregated to approximate hyperbolic (and other non-exponential) discount factors. We demonstrate the efficacy of our approximation scheme in our proposed Pathworld environment which is characterized both by an uncertain per-time-step risk to the agent. Conceptually, Pathworld emulates a foraging environment where an agent must balance easily realizable, small meals versus more distant, fruitful meals. We then consider higher-dimensional deep RL agents in the ALE, where we measure the benefits of hyperbolic discounting. This approximation mirrors the work of; which empirically demonstrates that modeling a finite set of µAgents simultaneously can approximate hyperbolic discounting function. Our method then generalizes to other non-hyperbolic discount functions and uses deep neural networks to model the different Q-values from a shared representation. Surprisingly and in addition to enabling new non-exponential discounting schemes, we observe that learning a set of Q-values is beneficial as an auxiliary task . Adding this multi-horizon auxiliary task often improves over a state-of-the-art baseline, Rainbow in the ALE . This work questions the RL paradigm of learning policies through a single discount function which exponentially discounts future rewards through the following contributions: 1. Hazardous MDPs. We formulate MDPs with hazard present and demonstrate an equivalence between undiscounted values learned under hazards and (potentially nonexponentially) discounted values without hazard. 2. Hyperbolic (and other non-exponential)-agent. A practical approach for training an agent which discounts future rewards by a hyperbolic (or other non-exponential) discount function and acts according to this. 3. Multi-horizon auxiliary task. A demonstration of multi-horizon learning over many γ simultaneously as an effective auxiliary task. 
To study MDPs with hazard distributions and general discount functions we introduce two modifications. The hazardous MDP now is defined by the tuple < S, A, R, P, H, d >. In standard form, the state space S and the action space A may be discrete or continuous. The learner observes samples from the environment transition probability P (s t+1 |s t, a t) for going from s t ∈ S to s t+1 ∈ S given a t ∈ A. We will consider the case where P is a sub-stochastic transition function, which defines an episodic MDP. The environment emits a bounded reward r: S × A → [r min, r max] on each transition. In this work we consider non-infinite episodic MDPs. The first difference is that at the beginning of each episode, a hazard λ ∈ [0, ∞) is sampled from the hazard distribution H. This is equivalent to sampling a continuing probability γ = e −λ. During the episode, the hazard modified transition function will be P λ, in that P λ (s |s, a) = e −λ P (s |s, a). The second difference is that we now consider a general discount function d(t). This differs from the standard approach of exponential discounting in RL with γ according to d(t) = γ t, which is a special case. This setting makes a close connection to partially observable Markov Decision Process (POMDP) where one might consider λ as an unobserved variable. However, the classic POMDP definition contains an explicit discount function γ as part of its definition which does not appear here. A policy π: S → A is a mapping from states to actions. The state action value function Q is the expected discounted rewards after taking action a in state s and then following policy π until termination. where λ ∼ H and E π,P λ implies that s t+1 ∼ P λ (·|s t, a t) and a t ∼ π(·|s t). In the hazardous MDP setting we observe the same connections between hazard and discount functions delineated in Appendix A. This expresses an equivalence between the value function of an MDP with a discount and MDP with a hazard distribution. For example, there exists an equivalence between the exponential discount function d(t) = γ t to the undiscounted case where the agent is subject to a (1 − γ) per time-step of dying . The typical Q-value (left side of Equation 2) is when the agent acts in an environment without hazard λ = 0 or H = δ and discounts future rewards according to d(t) = γ t = e −λt which we denote as Q The alternative Q-value (right side of Equation 2) is when the agent acts under hazard rate λ = − ln γ but does not discount future rewards which we denote as Q where δ(x) denotes the Dirac delta distribution at x. This follows from P λ (s |s, a) = e −λ P (s |s, a) We also show a similar equivalence between hyperbolic discounting and the specific hazard distribu- For notational brevity later in the paper, we will omit the explicit hazard distribution H-superscript if the environment is not hazardous. This formulation builds upon's relate of hazard rate and discount functions and shows that this holds for generalized Q-values in reinforcement learning. We now show how one can re-purpose exponentially-discounted Q-values to compute hyperbolic (and other-non-exponential) discounted Q-values. The central challenge with using non-exponential discount strategies is that most RL algorithms use some form of TD learning . This family of algorithms exploits the Bellman equation which, when using exponential discounting, relates the value function at one state with the value at the following state. where expectation E π,P denotes sampling a ∼ π(·|s), s ∼ P (·|s, a), and a ∼ π(·|s). 
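A small Monte-Carlo check of the equivalence in Equation 2 can be run on a toy chain MDP: undiscounted returns under a per-step survival probability e^{−λ} match exponentially discounted returns with γ = e^{−λ}. The chain length, hazard rate and episode count below are assumptions for illustration.

```python
import numpy as np

# Check that discounting with gamma = exp(-lam) matches undiscounted returns
# when each transition succeeds (agent survives) with probability exp(-lam).
# Toy deterministic chain: reward 1.0 at every step for T steps.
rng = np.random.default_rng(0)
lam, T, episodes = 0.05, 200, 20000
gamma = np.exp(-lam)

q_discounted = sum(gamma ** t * 1.0 for t in range(T))

returns = []
for _ in range(episodes):
    total = 0.0
    for t in range(T):
        total += 1.0                  # reward collected at the current state
        if rng.random() > gamma:      # transition fails: the agent "dies"
            break
    returns.append(total)

print(q_discounted, np.mean(returns))   # the two estimates roughly agree
```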
Being able to reuse TD methods without being constrained to exponential discounting is thus an important challenge. We propose here a scheme to deduce hyperbolic as well as other non-exponentially discounted Q-values when our discount function has a particular form. which we will refer to as the exponential weighting condition, then Proof. Applying the condition on d, The exchange in the above proof is valid if The exponential weighting condition is satisfied for hyperbolic discounting and other discounting that we might want to consider (see Appendix F for examples). As an example, the hyperbolic discount can be expressed as the integral of a function f (γ, t) for γ = in Equation 9. This equationn tells us an integral over a function f (γ, t) = 1 k γ 1/k+t−1 = w(γ)γ t yields the desired hyperbolic discount factor Γ k (t) = This prescription gives us a tool to produce general forms of non-exponentially discounted Q-values using our familiar exponentially discounted Q-values traditionally learned in RL . Section 3 describes an equivalence between hyperbolically-discounted Q-values and integrals of exponentially-discounted Q-values, however, the method required evaluating an infinite set of value functions. We therefore present a practical approach to approximate discounting Γ(t) = 1 1+kt using a finite set of functions learned via standard Q-learning . To avoid estimating an infinite number of Q γ π -values we introduce a free hyperparameter (n γ) which is the total number of Q γ π -values to consider, each with their own γ. We use a practically-minded approach to choose G that emphasizes evaluating larger values of γ rather than uniformly choosing points and empirically performs well as seen in Section 5. Our approach is described in Appendix G. Each Q γi π computes the discounted sum of returns according to that specific discount factor We previously proposed two equivalent approaches for computing hyperbolic Q-values, but for simplicity we consider the one presented in Lemma 3.1. The set of Q-values permits us to estimate the integral through a Riemann sum (Equation 11) which is described in further detail in Appendix I. where we estimate the integral through a lower bound. We consolidate this entire process in Figure 11 where we show the full process of rewriting the hyperbolic discount rate, hyperbolically-discounted Q-value, the approximation and the instantiated agent. This approach is similar to that of where each µAgent models a specific discount factor γ. However, this differs in that our final agent computes a weighted average over each Q-value rather than a sampling operation of each agent based on a γ-distribution. The benefits of hyperbolic discounting will be greatest under two conditions: uncertain hazard and non-trivial intertemporal decisions. The first condition can arise under a unobserved hazard-rate variable λ drawn independently at the beginning of each episode from H = p(λ). The second condition emerges with a choice between a smaller nearby rewards versus larger distant rewards. In the absence of both properties we would not expect any advantage to discounting hyperbolically. To see why, if there is a single-true hazard rate λ env, than an optimal γ * = e −λenv exists and future rewards should be discounted exponentially according to it. Further, if there is a single path through the environment with perfect alignment of short-and long-term objectives, all discounting schemes yield the same optimal policy. 
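The exponential weighting condition of Lemma 3.1 can be checked numerically for the hyperbolic case: integrating (1/k) γ^{1/k+t−1} over γ ∈ [0, 1] recovers 1/(1 + kt). The sketch below uses a midpoint rule and an assumed value of k purely for illustration.

```python
import numpy as np

k = 0.1                                   # assumed hyperbolic coefficient
gammas = np.linspace(0.0, 1.0, 100001)
mid = 0.5 * (gammas[1:] + gammas[:-1])    # midpoint rule
dg = np.diff(gammas)

for t in [0, 1, 10, 100]:
    integrand = (1.0 / k) * mid ** (1.0 / k + t - 1.0)
    approx = np.sum(integrand * dg)
    exact = 1.0 / (1.0 + k * t)
    print(f"t={t:3d}  integral={approx:.6f}  1/(1+kt)={exact:.6f}")
```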
We note two sources for discounting rewards in the future: time delay and survival probability (Section 2). In Pathworld we train to maximize hyperbolically discounted returns (t Γ k (t)R(s t, a t)) under no hazard (H = δ(λ − 0)) but then evaluate the undiscounted returns d(t) = 1.0 ∀ t with the paths subject to hazard H = 1 k exp(−λ/k). Through this procedure, we are able to train an agent that is robust to hazards in the environment. The agent makes one decision in Pathworld (Figure 2): which of the N paths to investigate. Once a path is chosen, the agent continues until it reaches the end or until it dies. This is similar to a multi-armed bandit, with each action subject to dynamic risk. The paths vary quadratically in length with the index d(i) = i 2 but the rewards increase linearly with the path index r(i) = i. This presents... Figure 2: The Pathworld. Each state (white circle) indicates the accompanying reward r and the distance from the starting state d. From the start state, the agent makes a single action: which which path to follow to the end. Longer paths have a larger rewards at the end, but the agent incurs a higher risk on a longer path. a non-trivial decision for the agent. At deployment, an unobserved hazard λ ∼ H is drawn and the agent is subject to a per-time-step risk of dying of (1 − e −λ). This environment differs from the adjusting-delay procedure presented by and then later modified by. Rather then determining time-preferences through variable-timing of rewards, we determine time-preferences through risk to the reward. Figure 3: In each episode of Pathworld an unobserved hazard λ ∼ p(λ) is drawn and the agent is subject to a total risk of the reward not being realized of Figure 3 showing that our approximation scheme well-approximates the true valueprofile. Figure 3 validates that our approach well-approximates the true hyperbolic value of each path when the hazard prior matches the true distribution. Agents that discount exponentially according to a single γ (the typical case in RL) incorrectly value the paths. We examine further the failure of exponential discounting in this hazardous setting. For this environment, the true hazard parameter in the prior was k = 0.05 (i.e. λ ∼ 20exp(−λ/0.05)). Therefore, at deployment, the agent must deal with dynamic levels of risk and faces a non-trivial decision of which path to follow. Even if we tune an agent's γ = 0.975 such that it chooses the correct arg-max path, it still fails to capture the functional form (Figure 3) and it achieves a high error over all paths (Table 1). If the arg-max action was not available or if the agent was proposed to evaluate non-trivial intertemporal decisions, it would act sub-optimally. In Appendix B we consider additional experiments where the agent's prior over hazard more realistically does not exactly match the environment true hazard rate and demonstrate the benefit of appropriate priors. With our approach validated in Pathworld, we now move to the high-dimensional environment of Atari 2600, specifically, ALE. We use the Rainbow variant from Dopamine which implements three of the six considered improvements from the original paper: distributional RL, predicting n-step returns and prioritized replay buffers. The agent (Figure 4) maintains a shared representation h(s) of state, but computes Q-value logits for each of the N γ i via Q π (s, a) = W i h(s) + b i where W i and b i are the learnable parameters of the affine transformation for that head. 
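A minimal sketch of this head layout is shown below. It only reproduces the structure described in the text (a shared body h(s) and one affine head per γ_i); the layer sizes and class name are assumptions and do not correspond to the Rainbow/Dopamine configuration.

```python
import torch
import torch.nn as nn

class MultiHorizonQNet(nn.Module):
    """Shared body h(s) with one affine Q-value head per discount factor."""

    def __init__(self, obs_dim, n_actions, n_gammas):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                  nn.Linear(256, 256), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(256, n_actions) for _ in range(n_gammas)])

    def forward(self, obs):
        h = self.body(obs)
        # Shape: (n_gammas, batch, n_actions); one set of Q-values per gamma_i.
        return torch.stack([head(h) for head in self.heads])

q_net = MultiHorizonQNet(obs_dim=8, n_actions=4, n_gammas=10)
q_all = q_net(torch.randn(32, 8))
print(q_all.shape)  # torch.Size([10, 32, 4])
```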
A ReLU-nonlinearity is used within the body of the network .: Multi-horizon model predicts Q-values for n γ separate discount functions thereby modeling different effective horizons. Each Q-value is a lightweight computation, an affine transformation off a shared representation. By modeling over multiple time-horizons, we now have the option to construct policies that act according to a particular value or a weighted combination. Hyperparameter details are provided in Appendix K and when applicable, they default to the standard Dopamine values. We find strong performance improvements of the hyperbolic agent built on Rainbow (Hyper-Rainbow; blue bars) on a random subset of Atari 2600 games in Figure 5. To dissect the Hyper-Rainbow improvements, recognize that two properties from the base Rainbow agent have changed: 1. Behavior policy, µ. The agent acts according to hyperbolic Q-values computed by our approximation described in Section 4 2. Learn over multiple horizons. The agent simultaneously learns Q-values over many γ rather than a Q-value for a single γ On this subset of 19 games, Hyper-Rainbow improves upon 14 games and in some cases, by large margins. But we seek here a more complete understanding of the underlying driver of this improvement in ALE through an ablation study. The second modification can be regarded as introducing an auxiliary task . Therefore, to attribute the performance of each properly we construct a Rainbow agent augmented with the multi-horizon auxiliary task (referred to as Multi-Rainbow and shown in orange) but have it still act according to the original policy. That is, Multi-Rainbow acts to maximize expected rewards discounted by a fixed γ action but now learns over multiple horizons as shown in Figure 4. Figure 5: We compare the Hyper-Rainbow (in blue) agent versus the Multi-Rainbow (orange) agent on a random subset of 19 games from ALE (3 seeds each). For each game, the percentage performance improvement for each algorithm against Rainbow is recorded. There is no significant difference whether the agent acts according to hyperbolically-discounted (Hyper-Rainbow) or exponentiallydiscounted (Multi-Rainbow) Q-values suggesting the performance improvement in ALE emerges from the multi-horizon auxiliary task. We find that the Multi-Rainbow agent performs nearly as well on these games, suggesting the effectiveness of this as a stand-alone auxiliary task. This is not entirely unexpected given the rather special-case of hazard exhibited in ALE through sticky-actions . We examine further and investigate the performance of this auxiliary task across the full Arcade Learning Environment using the recommended evaluation by . Doing so we find strong empirical benefits of the multi-horizon auxiliary task over the state-of-the-art Rainbow agent as shown in Figure 6. Game Name Auxiliary Task Improvement for Rainbow Agent Figure 6: Performance improvement over Rainbow using the multi-horizon auxiliary task in Atari Learning Environment (3 seeds each). To understand the interplay of the multi-horizon auxiliary task with other improvements in deep RL, we test a random subset of 10 Atari 2600 games against improvements in Rainbow . On this set of games we measure a consistent improvement with multi-horizon C51 (Multi-C51) in 9 out of the 10 games over the base C51 agent in Figure 7. Figure 7 indicates that the current implementation of Multi-Rainbow does not generally build successfully on the prioritized replay buffer. 
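The ablation between the two behavior policies can be sketched as follows, given stacked per-horizon Q-values like those produced by a multi-head network as in Figure 4. The uniform weights are a stand-in: in the hyperbolic agent they would be the Riemann-sum weights over the γ_i grid, and the choice of which single head the Multi-Rainbow-style agent acts from is likewise an assumption for illustration.

```python
import torch

n_gammas, batch, n_actions = 10, 32, 4
q_all = torch.randn(n_gammas, batch, n_actions)   # stand-in for the heads' output
# Stand-in weights; the hyperbolic agent would use Riemann-sum weights instead.
weights = torch.full((n_gammas,), 1.0 / n_gammas)

q_hyper = (weights.view(-1, 1, 1) * q_all).sum(dim=0)   # aggregate over horizons
act_hyper = q_hyper.argmax(dim=-1)          # act on the combined (hyperbolic) value
act_multi = q_all[-1].argmax(dim=-1)        # act from a single fixed-gamma head
print(act_hyper.shape, act_multi.shape)
```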
On the subset of ten games considered, we find that four out of ten games (Pong, Venture, Gravitar and Zaxxon) are negatively impacted despite finding it to be of considerable benefit and specifically beneficial in three out of these Hyperbolic discounting in economics. Hyperbolic discounting is well-studied in the field of economics . proposes a softer interpretation than (which produces a per-time-step of death via the hazard rate) and demonstrates that uncertainty over the timing of rewards can also give rise to hyperbolic discounting and preference reversals, a hallmark of hyperbolic discounting. Hyperbolic discounting was initially presumed to not lend itself to TD-based solutions but the field has evolved on this point. proposes solution directions that find models that discount quasi-hyperbolically even though each learns with exponential discounting but reaffirms the difficulty. proposes hyperbolically discounted temporal difference (HDTD) learning by making connections to hazard. Behavior RL and hyperbolic discounting in neuroscience. TD-learning has long been used for modeling behavioral reinforcement learning (; ;). TD-learning computes the error as the difference between the expected value and actual value where the error signal emerges from unexpected rewards. However, these computations traditionally rely on exponential discounting as part of the estimate of the value which disagrees with empirical evidence in humans and animals (; ; ; 1992). Hyperbolic discounting has been proposed as an alternative to exponential discounting though it has been debated as an accurate model . Naive modifications to TD-learning to discount hyperbolically present issues since the demonstrated that distributed exponential discount factors can directly model hyperbolic discounting. This work proposes the µAgent, an agent that models the value function with a specific discount factor γ. When the distributed set of µAgent's votes on the action, this was shown to approximate hyperbolic discounting well in the adjusting-delay assay experiments . Using the hazard formulation established in , we demonstrate how to extend this to other non-hyperbolic discount functions and demonstrate the efficacy of using a deep neural network to model the different Q-values from a shared representation. Towards more flexible discounting in reinforcement learning. RL researchers have recently adopted more flexible versions beyond a fixed discount factor (; ; ;). Optimal policies are studied in where two value functions with different discount factors are used. Introducing the discount factor as an argument to be queried for a set of timescales is considered in both Horde and γ-nets . proposes the Average Reward Independent Gamma Ensemble framework which imitates the average return estimator. generalizes the original discounting model through discount functions that vary with the age of the agent, expressing time-inconsistent preferences as in hyperbolic discounting. The need to increase training stability via effective horizon was addressed in François- who proposed dynamic strategies for the discount factor γ. Meta-learning approaches to deal with the discount factor have been proposed in Xu, van. characterizes rational decision making in sequential processes, formalizing a process that admits a state-action dependent discount rates. Operating over multiple time scales has a long history in RL. generalizes the work of and to formalize a multi-time scale TD learning model theory. 
Previous work has been explored on solving MDPs with multiple reward functions and multiple discount factors though these relied on separate transition models . considers decomposing a reward function into separate components each with its own discount factor. In our work, we continue to model the same rewards, but now model the value over different horizons. Recent work in difficult exploration games demonstrates the efficacy of two different discount factors one for intrinsic rewards and one for extrinsic rewards. Finally, and concurrent with this work, proposes the TD(∆)-algorithm which breaks a value function into a series of value functions with smaller discount factors. Auxiliary tasks in reinforcement learning. Finally, auxiliary tasks have been successfully employed and found to be of considerable benefit in RL. used auxiliary tasks to facilitate representation learning. Building upon this, work in RL has consistently demonstrated benefits of auxiliary tasks to augment the low-information coming from the environment through extrinsic rewards (; ; ; ;) 8 DISCUSSION AND FUTURE WORK This work builds on a body of work that questions one of the basic premises of RL: one should maximize the exponentially discounted returns via a single discount factor. By learning over multiple horizons simultaneously, we have broadened the scope of our learning algorithms. Through this we have shown that we can enable acting according to new discounting schemes and that learning multiple horizons is a powerful stand-alone auxiliary task. Our method well-approximates hyperbolic discounting and performs better in hazardous MDP distributions. This may be viewed as part of an algorithmic toolkit to model alternative discount functions. However, this work still does not fully capture more general aspects of risk since the hazard rate may be a function of time. Further, hazard may not be an intrinsic property of the environment but a joint property of both the policy and the environment. If an agent purses a policy leading to dangerous state distributions then it will naturally be subject to higher hazards and vice-versathis creates a complicated circular dependency. We would therefore expect an interplay between time-preferences and policy. This is not simple to deal with but recent work proposing state-action dependent discounting formalizes time preferences in which future rewards are discounted based on the probability that the agent will not survive to collect them due to an encountered risk or hazard. Definition A.1. Survival s(t) is the probability of the agent surviving until time t. s(t) = P (agent is alive|at time t) A future reward r t is less valuable presently if the agent is unlikely to survive to collect it. If the agent is risk-neutral, the present value of a future reward r t received at time-t should be discounted by the probability that the agent will survive until time t to collect it, s(t). Consequently, if the agent is certain to survive, s(t) = 1, then the reward is not discounted per Equation 14. From this it is then convenient to define the hazard rate. Definition A.2. Hazard rate h(t) is the negative rate of change of the log-survival at time t or equivalently expressed as h(t) = − ds(t) dt 1 s(t). Therefore the environment is considered hazardous at time t if the log survival is decreasing sharply. demonstrates that the prior belief of the risk in the environment implies a specific discounting function. 
When the risk occurs at a known constant rate, then the agent should discount future rewards exponentially. However, when the agent holds uncertainty over the hazard rate, then hyperbolic and alternative discounting rates arise. We recover the familiar exponential discount function in RL based on a prior assumption that the environment has a known constant hazard. Consider a known hazard rate of h(t) = λ ≥ 0. Definition A.2 sets a first order differential equation. The solution for the survival rate is s(t) = e^{−λt}, which can be related to the RL discount factor γ via γ = e^{−λ}. This interprets γ as the per-time-step probability of the episode continuing. This also allows us to connect the hazard rate λ ∈ [0, ∞) to the discount factor γ ∈ [0, 1]. As the hazard increases, λ → ∞, the corresponding discount factor becomes increasingly myopic, γ → 0. Conversely, as the environment hazard vanishes, λ → 0, the corresponding agent becomes increasingly far-sighted, γ → 1. In RL we commonly choose a single γ, which is consistent with the prior belief that there exists a known constant hazard rate λ = −ln(γ). We now relax the assumption that the agent holds this strong prior that it exactly knows the true hazard rate. From a Bayesian perspective, a looser prior allows for some uncertainty in the underlying hazard rate of the environment, which we will see in the following section. We may not always be so confident of the true risk in the environment and instead reflect this underlying uncertainty in the hazard rate through a hazard prior p(λ). Our survival rate is then computed by weighting specific exponential survival rates defined by a given λ over our prior p(λ). Prior work shows that under an exponential prior of hazard, p(λ) = (1/k) exp(−λ/k), the expected survival rate for the agent is hyperbolic. We denote the hyperbolic discount by Γ_k(t) to make the connection to γ in reinforcement learning explicit. The same work shows that different priors over hazard correspond to different discount functions. We reproduce two figures in Figure 8 showing the correspondence between different hazard rate priors and the resultant discount functions. (Figure 8 caption: The common approach in RL is to maintain a delta-hazard (black line), which leads to exponential discounting of future rewards. Different priors lead to non-exponential discount functions. There is a correspondence between hazard rate priors and the resulting discount function. In RL, we typically discount future rewards exponentially, which is consistent with a Dirac delta prior (black line) on the hazard rate, indicating no uncertainty about the hazard rate. However, this is a special case, and priors with uncertainty over the hazard rate imply new discount functions. All priors have the same mean hazard rate E[p(λ)] = 1.) In Figure 9 we consider the case where the agent still holds an exponential prior but has the wrong coefficient k, and in Figure 10 we consider the case where the agent still holds an exponential prior but the true hazard is actually drawn from a uniform distribution with the same mean. Through these two validating experiments, we demonstrate the robustness of estimating hyperbolically discounted Q-values in the case when the environment presents dynamic levels of risk and the agent faces non-trivial decisions. Hyperbolic discounting is preferable to exponential discounting even when the agent's prior does not precisely match the true environment hazard rate distribution, by coefficient (Figure 9) or by functional form (Figure 10).
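The exponential-prior case above can be checked by simple Monte-Carlo: averaging the exponential survival e^{−λt} over λ drawn from an exponential prior with mean k matches 1/(1 + kt). The sample size and value of k are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 1.0                                        # mean hazard E[p(lambda)] = 1
lams = rng.exponential(scale=k, size=200000)   # p(lambda) = (1/k) exp(-lambda/k)

for t in [0.5, 1.0, 2.0, 5.0]:
    mc_survival = np.exp(-lams * t).mean()     # E_lambda[ exp(-lambda * t) ]
    print(f"t={t}: Monte-Carlo survival={mc_survival:.4f}  1/(1+kt)={1/(1+k*t):.4f}")
```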
Predictably, the mismatched priors result in a higher prediction error of value, but the hyperbolic estimate performs more reliably than exponential discounting, resulting in a lower cumulative error. Numerical results are in Table 2. Table 2: The average mean squared error (MSE) over each of the paths in Figure 9. As the prior is further away from the true value of k = 0.05, the error increases. However, notice that the errors for large factor-of-2 changes in k are generally lower than if the agent had considered only a single exponential discount factor γ as in Table 1. Figure 10: If the true hazard rate is now drawn according to a uniform distribution (with the same mean as before) the original hyperbolic discount matches the functional form better than exponential discounting. Numerical results are in Table 3. Table 3: The average mean squared error (MSE) over each of the paths in Figure 10 when the underlying hazard is drawn according to a uniform distribution: hyperbolic value 0.235; γ = 0.975: 0.266; γ = 0.95: 0.470; γ = 0.99: 4.029. We find that hyperbolic discounting is more robust to hazards drawn from a uniform distribution than exponential discounting. Let's start with the case where we would like to estimate the value function where rewards are discounted hyperbolically instead of the common exponential scheme. We refer to the hyperbolic Q-values as Q Γ π below in Equation 21. We may relate the hyperbolic Q Γ π -value to the values learned through standard Q-learning. To do so, notice that the hyperbolic discount Γ t can be expressed as the integral of a certain function f (γ, t) for γ ∈ [0, 1) in Equation 22. The integral over this specific function f (γ, t) = γ kt yields the desired hyperbolic discount factor Γ k (t) by considering an infinite set of exponential discount factors γ over its domain γ ∈ [0, 1). Recognize that the integrand γ kt is the standard exponential discount factor, which suggests a connection to standard Q-learning. This suggests that if we could consider an infinite set of γ then we can combine them to yield hyperbolic discounts for the corresponding time-step t. We build on this idea of modeling many γ throughout this work. where Γ k (t) has been replaced on the first line by as a weighting over exponentially-discounted Q-values using the same weights: can be expressed as a weighting over exponential discount functions with weights (see Table 1). 3. The integral in box 2 can be approximated with a Riemann sum over the discrete intervals: Following Section A we also show a similar equivalence between hyperbolic discounting and the specific hazard distribution, where the first step uses Equation 19. This equivalence implies that discount factors can be used to learn policies that are robust to hazards. We expand upon three special cases to see how functions f (γ, t) = w(γ)γ t may be related to different discount functions d(t). We summarize in Table 4 how a particular hazard prior p(λ) can be computed via integrating over specific weightings w(γ) and the corresponding discount function. Table 4: Different hazard priors H = p(λ) can be alternatively expressed through weighting exponential discount functions γ t by w(γ). This table matches different hazard distributions to their associated discounting function and the weighting function per Lemma 3.1. The typical case in RL is a Dirac Delta Prior over hazard rate δ(λ − k). We only show this in detail for completeness; one would not follow such a convoluted path to arrive back at an exponential discount, but this approach holds for richer priors.
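To make the two views of the hyperbolic discount concrete, the short sketch below numerically checks that Γ_k(t) = 1/(1 + kt) arises both as the average exponential survival e^(−λt) under an exponential hazard prior p(λ) = (1/k) exp(−λ/k) and as a Riemann-sum approximation of the integral of γ^(kt) over γ ∈ [0, 1). This is an illustrative sketch, not code from the paper; k = 0.05 matches the value quoted in Table 2, while the horizon grid, sample size, and number of γ values are arbitrary choices.

```python
import numpy as np

k = 0.05            # hyperbolic coefficient; value quoted in the excerpt's Table 2
ts = np.arange(0, 50)

# Closed form: Gamma_k(t) = 1 / (1 + k t)
hyperbolic = 1.0 / (1.0 + k * ts)

# View 1: average exponential survival e^{-lambda t} under an exponential
# hazard prior p(lambda) = (1/k) exp(-lambda/k), estimated by Monte Carlo.
rng = np.random.default_rng(0)
lambdas = rng.exponential(scale=k, size=100_000)
mc_estimate = np.exp(-np.outer(ts, lambdas)).mean(axis=1)

# View 2: integral of gamma^{k t} over gamma in [0, 1), approximated by a
# lower Riemann sum over a discrete set of exponential discount factors.
n_gamma = 1000
gammas = np.linspace(0.0, 1.0, n_gamma, endpoint=False)   # left endpoints
riemann = (gammas[None, :] ** (k * ts[:, None])).sum(axis=1) / n_gamma

for t in (1, 10, 40):
    print(t, hyperbolic[t], mc_estimate[t], riemann[t])
```

Running this prints three nearly identical numbers per horizon, which is the equivalence the derivation above relies on: a discrete set of exponentially discounted quantities, weighted appropriately, reproduces the hyperbolic discount.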
The derivations can be found in Appendix F. Three cases: For the three cases we begin with the Laplace transform of the prior, ∫_0^∞ p(λ) e −λt dλ, and then change the variables according to the relation γ = e −λ, Equation 17. A delta prior p(λ) = δ(λ − k) on the hazard rate is consistent with exponential discounting. where δ(λ − k) is a Dirac delta function defined over variable λ with value k. The change of variable γ = e −λ (equivalently λ = − ln γ) yields differentials dλ = −(1/γ) dγ and the limits λ = 0 → γ = 1 and λ = ∞ → γ = 0. Additionally, the hazard rate value λ = k is equivalent to γ = e −k. where we define a γ k = e −k to make the connection to standard RL discounting explicit. Additionally and reiterating, the use of a single discount factor, in this case γ k, is equivalent to the prior that a single hazard exists in the environment. Again, the change of variable γ = e −λ yields differentials dλ = −(1/γ) dγ and the limits λ = 0 → γ = 1 and λ = ∞ → γ = 0. where p(·) is the prior. With the exponential prior p(λ) = (1/k) exp(−λ/k) and by substituting λ = −ln γ we verify Equation 9. Finally, if we hold a uniform prior over hazard, shows the Laplace transform yields. Use the same change of variables to relate this to γ. The bounds of the integral become λ = 0 → γ = 1 and λ = k → γ = e −k, which recovers the discounting scheme. We provide further detail for which γ we choose to model and motivation why. We choose a γ max which is the largest γ to learn through Bellman updates. If we are using k as the hyperbolic coefficient in Equation 19 and we are approximating the integral with n γ, our γ max would be. However, allowing γ max to get arbitrarily close to 1 may result in learning instabilities. Therefore we compute an exponentiation base of b = exp(ln(1 − γ max^(1/k))/n γ) which bounds our γ max at a known stable value. This induces an approximation error which is described more in Appendix H. Instead of evaluating the upper bound of Equation 9 at 1, we evaluate at γ max, which yields γ max^(kt) /(1 + kt). Our approximation induces an error in the approximation of the hyperbolic discount. This approximation error in the Riemann sum increases as γ max decreases, as evidenced by Figure 12. When the maximum value γ max → 1, the approximation becomes more accurate, as supported in Table 5 up to small random errors. As discussed, we can estimate the hyperbolic discount in two different ways. We illustrate the resulting estimates here and the resulting approximations. We use lower-bound Riemann sums in both cases for simplicity but more sophisticated integral estimates exist. As noted earlier, we considered two different integrals for computing the hyperbolic coefficients. Under the form derived by the Laplace transform, the integrals are sharply peaked as γ → 1. The difference in integrals is visually apparent when comparing them in Figure 13. Figure 12: By instead evaluating our integral up to γ max rather than to 1, we induce an approximation error which increases with t. Numerical results are in Table 5. Figure 12. (a) We approximate the integral of the function γ kt via a lower estimate of rectangles at specific γ-values. The sum of these rectangles approximates the hyperbolic discounting scheme 1/(1 + kt) for time t. (b) Alternative form for approximating hyperbolic coefficients which is sharply peaked as γ → 1, which led to larger errors in estimation under our initial techniques.
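The effect of truncating the integral at γ max rather than 1 can be checked numerically as well. The sketch below compares a lower Riemann sum over [0, γ max] against the exact hyperbolic discount 1/(1 + kt); it is an illustration of the behavior discussed above (the approximation tightening as γ max → 1), not the paper's actual choice of γ values, and the specific γ max settings and grid sizes are assumptions.

```python
import numpy as np

k = 0.05
ts = np.arange(0, 50)
target = 1.0 / (1.0 + k * ts)            # exact hyperbolic discount

def truncated_estimate(gamma_max, n_gamma=1000):
    # Lower Riemann sum of gamma^{k t}, integrating only up to gamma_max.
    gammas = np.linspace(0.0, gamma_max, n_gamma, endpoint=False)
    width = gamma_max / n_gamma
    return (gammas[None, :] ** (k * ts[:, None])).sum(axis=1) * width

for gamma_max in (0.9, 0.99, 0.999):
    approx = truncated_estimate(gamma_max)
    print(gamma_max, np.abs(approx - target).max())
```

The printed maximum absolute error shrinks as γ max approaches 1, matching the qualitative claim made for Table 5.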
J PERFORMANCE OF DIFFERENT REPLAY BUFFER PRIORITIZATION SCHEME As found through our ablation study in Figure 7, the Multi-Rainbow auxiliary task interacted poorly with the prioritized replay buffer when the TD-errors were averaged evenly across all heads. As an alternative scheme, we considered prioritizing according to the largest γ, which is also the γ defining the Q-values by which the agent acts. The (preliminary) results of this new prioritization scheme are in Figure 14. Figure 14: The (preliminary) performance improvement over Rainbow using the multi-horizon auxiliary task in the Atari Learning Environment when we instead prioritize according to the TD-errors computed from the largest γ (3 seeds each); per-game percent improvement of Multi-Rainbow over Rainbow (prioritize-largest) is shown on a log scale. To this point, there is evidence that prioritizing according to the TD-errors generated by the largest gamma is a better strategy than averaging. Final results of the multi-horizon auxiliary task on Rainbow (Multi-Rainbow) are in Table 7. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rkezdaEtvH | A deep RL agent that learns hyperbolic (and other non-exponential) Q-values and a new multi-horizon auxiliary task. |
Emotion plays a great role in our daily lives. The necessity and importance of automatic emotion recognition systems are increasing. Traditional approaches to emotion recognition are based on facial images, measurements of heart rate, blood pressure, temperature, tone of voice/speech, etc. However, these features can potentially be faked. Hidden and genuine features that are not controlled by the person can instead be detected from data measured from brain signals. There are various ways of measuring brain waves: EEG, MEG, fMRI, etc. On the basis of cost-effectiveness and performance trade-offs, EEG is chosen for emotion recognition in this work. The main aim of this study is to detect emotion based on EEG signal analysis recorded from the brain in response to visual stimuli. The approach used was as follows: the selected visual stimuli were presented to 11 healthy target subjects, and EEG signals were recorded in a controlled situation to minimize artefacts (muscle and/or eye movements). The signals were filtered and the type of frequency band was computed and detected. The proposed method predicts an emotion type (positive/negative) in response to the presented stimuli. Finally, the performance of the proposed approach was tested. The average accuracies of the machine learning algorithms (i.e., J48, Bayes Net, AdaBoost and Random Forest) are 78.86, 74.76, 77.82 and 82.46, respectively. In this study, we also applied EEG in the context of neuro-marketing. The results empirically demonstrated detection of the favourite colour preference of customers in response to the logo colour of an organization or service. Emotion plays a great role in our daily lives. The necessity and importance of automatic emotion recognition systems are increasing. Traditional approaches to emotion recognition are based on facial images, measurements of heart rate, blood pressure, temperature, tone of voice/speech, etc. However, these features can potentially be faked. Thus, to simple and portable device. The brainwave activity is broadly divided into five frequency bands. The boundary between the frequency bands is not strict but does not vary much. The frequency bands include delta (0.5-4 Hz), theta (5-8 Hz), alpha (9-12 Hz), beta (13-30 Hz) and gamma (above 30 Hz). For this study, EEG data is collected using the Emotiv EPOC device with 14 electrodes located at AF3, gamma band is also responsible for arousal. In other words, the EEG brain activity from the parietal and frontal lobes of the brain is more emotionally informative, whereas gamma, alpha and beta waves We collected and prepared three image data sets for stimuli presentation and classification. These To answer research questions 1 and 2, the top ranked features for each subject are extracted using | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | rygpmmF8IS | This paper presents EEG based emotion detection of a person towards an image stimuli and its applicability on neuromarketing. |
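The EEG excerpt in the row above lists five standard frequency bands (delta 0.5-4 Hz, theta 5-8 Hz, alpha 9-12 Hz, beta 13-30 Hz, gamma above 30 Hz) and states that signals were filtered before band-type features were computed. A minimal band-power sketch of that kind of preprocessing is shown below. It is not the study's actual pipeline: the 128 Hz sampling rate (the Emotiv EPOC's nominal rate), the 45 Hz upper gamma edge, and the choice of a Butterworth filter are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 128  # Hz; assumed sampling rate for this sketch
BANDS = {"delta": (0.5, 4), "theta": (5, 8), "alpha": (9, 12),
         "beta": (13, 30), "gamma": (30, 45)}  # upper gamma edge assumed

def band_powers(signal, fs=FS):
    """Mean power of one EEG channel in each band, via band-pass filtering."""
    powers = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        filtered = sosfiltfilt(sos, signal)
        powers[name] = float(np.mean(filtered ** 2))
    return powers

# Example with a synthetic 10 s channel (noise plus a 10 Hz alpha component).
t = np.arange(0, 10, 1 / FS)
channel = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(band_powers(channel))
```

On the synthetic channel the alpha band dominates, which is the kind of band-level feature that classifiers such as those listed in the excerpt would consume.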
We present a probabilistic framework for session based recommendation. A latent variable for the user state is updated as the user views more items and we learn more about their interests. We provide computational solutions using both the re-parameterization trick and using the Bouchard bound for the softmax function, we further explore employing a variational auto-encoder and a variational Expectation-Maximization algorithm for tightening the variational bound. Finally we show that the Bouchard bound causes the denominator of the softmax to decompose into a sum enabling fast noisy gradients of the bound giving a fully probabilistic algorithm reminiscent of word2vec and a fast online EM algorithm. Our model describes a generative process for the types of products that user's co-view in sessions. We use u to denote a user or a session, we use t time to denote sequential time and v to denote which product they viewed from 1 to P where P is the number of products, the user's interest is described by a K dimensional latent variable ω u which can be interpreted as the user's interest in K topics. The session length of user u is given by T u. We then assume the following generative process for the views in each session: ω u ∼ N (0 K, I K), v u,1,.., v u,Tu ∼ categorical(softmax(Ψω u + ρ)). This model is a linear version of the model presented in. Consider the case where we have estimated that Ψ and ρ. In production we have observed a user's online viewing history v u,1,.., v u,Tu and we would like to produce a representation of the user's interests. Our proposal is to use Bayesian inference in order to infer p(ω|v u,1, .., v u,Tu, Ψ, ρ) as a representation of their interests. This representation of interests can then be used as a feature for training a recommender system. If we have a recommender system that has just seven products and the products have embeddings: We now consider how different user histories affect p(ω|v u,1, .., v u,Tu, Ψ, ρ). Approximation of this quantity can be made accurately and easily using the Stan probabilistic programming language or using variational approximations. In Figure 1 -2 the intuitive behavior of this simple model is demonstrated. The of the three approximate methods are presented and shown to be in good agreement. A single product view indicates interest in that class of products, but considerable uncertainty remains. Many product views in the same class represent high certainty that the user is interested in that class. For next item prediction we consider both taking the plug-in predictive based on the posterior mean VB (approx) and using Monte Carlo samples to approximate the true predictive distribution MCMC and VB (MC). The model we introduce has the form: If we use a normal distribution ω ∼ N (µ q, Σ q), then variational bound has the form: but we still need to be able to integrate under the denominator of the softmax. The Bouchard bound introduces a further approximation and additional variational parameters a, ξ but produces an analytical bound: Where λ JJ (·) is the Jaakola and Jordan function:. Alternatively the re-parameterization trick proceeds by simulating: (s) ∼ N (0 K, I K), and then computing:, and then optimizing the noisy lower bound: In order to prevent the variational parameters growing with the amount of data we employ a variational auto-encoder. This involves using a flexible function i.e. µ q, Σ q = f Ξ (v 1, ...v T), or in the case of the Bouchard bound: Where any function (e.g. 
a deep net) can be used for f Ξ (·) and f Bouch Ξ (·). We demonstrate that our method using the RecoGym simulation environment. We fit the model to the training set, we then evaluate by providing the model v 1,..v Tu−1 events and testing the model's ability to predict v Tu. A further approximate algorithm which is useful when P is large is to note that the bound can be written as a sum that decomposes not only in data but also over the denominator of the softmax, The noisy lower bound becomes: where v 1,..., v T are the items associated with the session and n 1,...n S are S < P negative items randomly sampled. Similar to the word2vec algorithm but without any non-probabilistic heuristics. We consider two alternative methods for training the model: Bouch/AE -A linear variational auto-encoder using the Bouchard bound; RT/Deep AE -A deep auto-encoder again using the re-parameterization trick. The deep auto-encoder consists of mapping an input of size P to three linear rectifier layers of K units each. Results showing recall@5 and discounted cumulative gain at 5 are shown in Table 1. The EM algorithm allows an approximation to be made of q(ω) assuming (Ψ, ρ) and a user history v 1,.., v T are known and can be used in place of a variational auto-encoder. The algorithm here is the dual of the one presented in as we assume the embedding Ψ is fixed and ω is updated where the algorithm they present does the opposite. The EM algorithm consists of cycling the following update equations: We further note that the EM algorithm is (with the exception of the a variational parameter) a fixed point update (of the natural parameters) that decomposes into a sum. The terms in the sum come from the softmax in the denominator. After substituting a co-ordinate descent update of a with a gradient descent step update, then the entire fixed point update becomes a sum: That is the EM algorithm can be written: As noted in Cappé and when an EM algorithm can be written as a fixed point update over a sum, then the Robbins Monro algorithm can be applied. Allowing updates of the form (p is chosen randomly): where ∆ is a slowly decaying Robbins Monro sequence with ∆ 1 = 1 (meaning no initial value of (Σ −1) ) is needed. For large P this algorithm is many times faster than the generic EM algorithm. What is distinct about both this online EM algorithm and the negative sampling SGD approach is that it is the denominator of the softmax that may be sub-sampled rather than individual records. The Bouchard bound is also used for decomposing the softmax into a sum in but they do per-batch optimization of the variational parameters, instead we use an auto-encoder allowing direct SGD. Our method also differs from again in using an auto-encoder allowing the more flexible SGD algorithm in place of stochastic variational inference which requires complete data exponential family assumptions. Finally unlike those methods we are considering variational inference of a latent variable model as well as using variational bounds to approximate the softmax. | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BkegKynEKH | Fast variational approximations for approximating a user state and learning product embeddings |
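The session-recommendation excerpt in the row above specifies a simple generative process: a user topic vector ω_u drawn from N(0, I_K) and item views drawn from categorical(softmax(Ψ ω_u + ρ)). The sketch below simulates that process; the product count, topic dimension, and the random Ψ and ρ are illustrative stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
P, K = 50, 4                      # number of products and topics (illustrative)
Psi = rng.normal(size=(P, K))     # product embeddings (stand-in values)
rho = rng.normal(size=P)          # product popularity offsets (stand-in)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def sample_session(T):
    """Draw one session: a user topic vector omega_u and T viewed products."""
    omega = rng.normal(size=K)                    # omega_u ~ N(0, I_K)
    probs = softmax(Psi @ omega + rho)            # categorical parameters
    views = rng.choice(P, size=T, p=probs)        # v_{u,1..T}
    return omega, views

omega, views = sample_session(T=8)
print(views)
```

Fitting Ψ and ρ, and inferring a posterior over ω from an observed session, is where the variational machinery (the Bouchard bound, the re-parameterization trick, or the EM updates) described in the excerpt comes in.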
An important question in task transfer learning is to determine task transferability, i.e. given a common input domain, estimating to what extent representations learned from a source task can help in learning a target task. Typically, transferability is either measured experimentally or inferred through task relatedness, which is often defined without a clear operational meaning. In this paper, we present a novel metric, H-score, an easily-computable evaluation function that estimates the performance of transferred representations from one task to another in classification problems. Inspired by a principled information theoretic approach, H-score has a direct connection to the asymptotic error probability of the decision function based on the transferred feature. This formulation of transferability can further be used to select a suitable set of source tasks in task transfer learning problems or to devise efficient transfer learning policies. Experiments using both synthetic and real image data show that not only our formulation of transferability is meaningful in practice, but also it can generalize to inference problems beyond classification, such as recognition tasks for 3D indoor-scene understanding. Transfer learning is a learning paradigm that exploits relatedness between different learning tasks in order to gain certain benefits, e.g. reducing the demand for supervision BID22 ). In task transfer learning, we assume that the input domain of the different tasks are the same. Then for a target task T T, instead of learning a model from scratch, we can initialize the parameters from a previously trained model for some related source task T S. For example, deep convolutional neural networks trained for the ImageNet classification task have been used as the source network in transfer learning for target tasks with fewer labeled data BID7 ), such as medical image analysis BID24 ) and structural damage recognition in buildings (Gao & Mosalam). An imperative question in task transfer learning is transferability, i.e. when a transfer may work and to what extent. Given a metric capable of efficiently and accurately measuring transferability across arbitrary tasks, the problem of task transfer learning, to a large extent, is simplified to search procedures over potential transfer sources and targets as quantified by the metric. Traditionally, transferability is measured purely empirically using model loss or accuracy on the validation set (; ; BID5). There have been theoretical studies that focus on task relatedness BID1; BID19; BID21; BID2 ). However, they either cannot be computed explicitly from data or do not directly explain task transfer performance. In this study, we aim to estimate transferability analytically, directly from the training data. We quantify the transferability of feature representations across tasks via an approach grounded in statistics and information theory. The key idea of our method is to show that the error probability of using a feature of the input data to solve a learning task can be characterized by a linear projection of this feature between the input and output domains. Hence we adopt the projection length as a metric of the feature's effectiveness for the given task, and refer to it as the H-score of the feature. More generally, H-score can be applied to evaluate the performance of features in different tasks, and is particularly useful to quantify feature transferability among tasks. 
Using this idea, we define task transferability as the normalized H-score of the optimal source feature with respect to the target task. As we demonstrate in this paper, the advantage of our transferability metric is threefold. (i) it has a strong operational meaning rooted in statistics and information theory; (ii) it can be computed directly and efficiently from the input data, with fewer samples than those needed for empirical learning; (iii) it can be shown to be strongly consistent with empirical transferability measurements. In this paper, we will first present the theoretical of the proposed transferability metric in Section 2-4. Section 5 presents several experiments on real image data, including image classificaton tasks using the Cifar 100 dataset and 3D indoor scene understanding tasks using the Taskonomy dataset created by. A brief review of the related works is included in Section 6. In this section, we will introduce the notations used throughout this paper, as well as some related concepts in Euclidean information theory and statistics. X, x, X and P X represent a random variable, a value, the alphabet and the probability distribution respectively. √ P X denotes the vector with entries P X (x) and [√ P X] the diagonal matrix of P X (X). For joint distribution P Y X, P Y X represents the |Y| × |X | probability matrix. Depending on the context, f(X) is either a |X |-dimensional vector whose entries are f (x), or a |X | × k feature matrix. Further, we define a task to be a tuple T = (X, Y, P XY), where X is the training features and Y is the training label, and P XY the joint probability (possibly unknown). Subscripts S and T are used to distinguish the source task from the target task. Our definiton of transferability uses concepts in local information geometry developed by BID18, which characterizes probability distributions as vectors in the information space. Consider the following binary hypothesis testing problem: test whether i.i.d. samples DISPLAYFORM0 are drawn from distribution P 1 or distribution P 2, where P 1, P 2 belong to an -neighborhood N (P 0) {P | x∈X DISPLAYFORM1 as the information vector corresponding to P i for i = 1, 2. DISPLAYFORM2 for the binary hypothesis testing problem. Let E f be the error exponent of decision region {x m | l(x m) > T } for T ≥ 0, which characterizes the asymptotic error probability P e of l (i.e. lim m→∞ − 1 m log(P e) = E k f ). E f can be written as the squared length of a projection: DISPLAYFORM3 When f (x) = log DISPLAYFORM4 P2(x) is the log likelihood ratio, l is the minimum sufficient statistics that achieves the largest error exponent DISPLAYFORM5 by the Chernoff theorem. (See Appendix A for details.) In the rest of this paper, we assume is small. Definition 1. MatrixB is the Divergence Transition Matrix (DTM) of a joint probability DISPLAYFORM0 The singular values ofB satisfy that DISPLAYFORM1 be the left and right singular vectors ofB. Define functions DISPLAYFORM2 for each i = 1,..., K − 1. BID18 further proved that f * i and g * i are solutions to the maximal HGR correlation problem studied by BID12 BID10; BID23, defined as follows: DISPLAYFORM3 The maximal HGR problem finds the K strongest, independent modes in P XY from data. It can be solved efficiently using the Alternating Conditional Expectation (ACE) algorithm with provable error bound (see Appendix B). 
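The exact expressions for the DTM and for the H-score are elided as DISPLAYFORM placeholders in this extraction, so the sketch below assumes the form commonly associated with this line of work, H(f) = tr(cov(f(X))^{-1} cov(E[f(X)|Y])), which matches the verbal description given after Definition 2 in the next section (large inter-class variance, small feature redundancy). The toy data, feature dimensions, and sample counts are illustrative only.

```python
import numpy as np

def h_score(f, y):
    """H-score of features f (m x k) for labels y (m,), assuming the
    commonly cited form tr(cov(f)^{-1} cov(E[f|Y]))."""
    f = f - f.mean(axis=0)                                  # zero-mean features
    cov_f = np.cov(f, rowvar=False, ddof=0)
    classes = np.unique(y)
    cond_means = np.stack([f[y == c].mean(axis=0) for c in classes])
    counts = np.array([(y == c).sum() for c in classes])
    # Class-probability-weighted covariance of the conditional means.
    cov_cond = np.cov(cond_means, rowvar=False, fweights=counts, ddof=0)
    return float(np.trace(np.linalg.pinv(cov_f) @ cov_cond))

# Toy check: class-informative features score higher than pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=3000)
informative = np.eye(3)[y] + 0.5 * rng.normal(size=(3000, 3))
noise = rng.normal(size=(3000, 3))
print(h_score(informative, y), h_score(noise, y))
```

The computation only needs sample covariances and conditional means, which is what makes this style of score cheap relative to retraining a transfer network.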
BID13 further showed that f * and g * are the universal minimum error probability features in the sense that they can achieve the smallest error probability over all possible inference tasks. In this section, we present a performance metric of a given feature representation for a learning task. For a classification task involving input variable X and label Y, most learning algorithms work by finding a k-dimensional functional representation f (x) of X that is most discriminative for the classification. To measure how effective f (x) is in predicting Y, rather than train the model via gradient descent and evaluate its accuracy, we present an analytical approach based on the definition below: Definition 2. Given data matrix X ∈ R m×d and label Y ∈ {1, . . ., |Y|}. Let f (x) be a kdimensional, zero-mean feature function. The H-Score of f (x) with respect to the learning task represented by P Y X is: DISPLAYFORM0 This definition is intuitive from a nearest neighbor search perspective. i.e. a high H-score implies the inter-class variance cov(E P X|Y [f (X)|Y ]) of f is large, while feature redundancy tr(cov(f (X))) is small. Such a feature is discriminative and efficient for learning label Y. More importantly, H(f) has a deeper operational meaning related to the asymptotic error probability for a decision rule based on f in the hypothesis testing context, discussed in the next section. Without loss of generality, we consider the binary classification task as a hypothesis testing problem defined in Section 2.1, with P 1 = P X|Y =0, and P 2 = P X|Y =1. For any k-dimensional feature representation f (x), we can quantify its performance with respect to the learning task using its error DISPLAYFORM0 See Appendix C for the proof. The above theorem shows that H-score H(f) is proportional to the error exponent of the decision region based on f (x) when f (x) is zero-mean with identity covariance. To compute the H-score of arbitrary f, we can center the features f S (x) − E[f S (x)], and incorporate normalization into the computation of the error exponent, which in Definition 2. The details are presented in Appendix D.The proof for Theorem 1 uses the fact that H(f) = B Ξ 2 F, whereB is the DTM matrix, Ξ [ξ 1 · · · ξ k] is the matrix composed of information vectors ξ i and c is a constant. This allows us to infer an upper bound for the H-score of a given learning task: The first inequality is achieved when Ξ is composed of the right singular vectors ofB, i.e. DISPLAYFORM1 DISPLAYFORM2 The corresponding feature functions f opt (X)is in fact the same as the universal minimum error probability features from the maximum HGR correlation problem. The final inequality in Corollary 1 is due to the fact all singular values ofB are less than or equal to 1. Next, we apply H-score to efficiently measure the effectiveness of task transfer learning. We will also discuss how this approach can be used to solve the source task selection problem. A typical way to transfer knowledge from the source task T S to target task T T is to train the target task using source feature representation f S (x). In a neural network setting, this idea can be implemented by copying the parameters from the first N layers in the trained source model to the target model, assuming the model architecture on those layers are the same. The target classifier then can be trained while freezing parameters in the copied layers FIG1 ). Under this model, a natural way to quantify transferability is as follows: Definition 3 (Task transferability). 
Given source task T S and target task T T, and trained source feature representation f S (x), the transferability from T S to T T is T(S, T) DISPLAYFORM0, where f Topt (x) is the minimum error probability feature of the target task. The statement T(S, T) = r means the error exponent of transfering from T S via feature representation f S is 1 r of the optimal error exponent for predicting the target label Y T. This definition also implies 0 ≤ T(S, T) ≤ 1, which satisfies the data processing inequality if we consider the transferred feature f S (X) as post-processing of input X for solving the target task. And it can not increase information about predicting the target task T. A common technique in task transfer learning is fine-tuning, which adds before the target classifier additional free layers, whose parameters are optimized with respect to the target label. For the operational meaning of transferability to hold exactly, we require the fine tuning layers consist of only linear transformations, such as the model illustrated in Figure 2.a. It can be shown that under the local assumption, H-score is equivalent to the log-loss of the linear transfer model up to a constant offset (Appendix E). Nevertheless, later we will demonstrate empirically that this transferability metric can still be used for comparing the relative task transferability with fine-tuning. With a known f, computing H-score from m sample data only takes O(mk 2) time, where k is the dimension of f (x) for k < m. The majority of the computation time is spent on computing the sample covariance matrix cov(f (X)).The remaining question is how to obtain f Topt efficiently. We use the fact that DISPLAYFORM0, where f and g are the solutions of the HGR-Maximum Correlation problem. This problem can be solved efficiently using the ACE algorithm for discrete variable X. For a continuous random variable X, we can obtain f opt through a different formulation of the HGR maximal correlation problem: DISPLAYFORM1 DISPLAYFORM2 This is also known as the soft HGR problem studied by BID26, who reformulated the original maximal HGR correlation objective to eliminate the whitening constraints while having theoretically equivalent solution. In practice, we can utilize neural network layers to model functions f and g, as shown in Figure 2.b. Given two branches of k output units for both f and g, the loss function can be evaluated in O(mk 2), where m is the batch size. BID18 showed that the sample complexity of ACE is only 1/k of the complexity of estimating P Y,X directly. This also applies to the soft HGR problem due to their theoretical equivalence. Hence transferability can be computed with much less samples than actually training the transfer network. It's also worth noting that, when f is fixed, maximizing the objective in Equation 3 with respect to zero-mean function g in the definition of H-score. In many cases though, the computation of H T (f opt) can even be skipped entirely, such as the problem below: Definition 4 (Source task selection). Given N source tasks T S1,..., T S N with labels Y S1,..., Y S N and a target task T T with label Y T. Let f S1,..., f S N be the minimum error probability feature functions of the source tasks. Find the source task T Si that maximizes the testing accuracy of predicting Y T with feature f Si.We can solve this problem by selecting the source task with the largest transferability to T T. 
In fact, we only need to compute the numerator in the transferability definition since the denominator is the same for all source tasks, i.e. DISPLAYFORM3 Under the local assumption that P X|Y ∈ N (P X), we can show that mutual information DISPLAYFORM0 (See Appendix F for details.) Hence H-score is related to mutual information by H(f (x)) ≤ 2I(X; Y) for any zero-mean features f (x) satisfying the aforementioned conditions. Figure 3 compares the optimal H-score of a synthesized task when |Y| = 6 with the mutual information between input and output variables, when the feature dimension k changes. The value of H-score increases as k increases, but reaches the upper bound when k ≥ 6, since the rank of the joint probability between X and Y T, as well as the rank of its DTM is 6. As expected, the H-score values are below 2I(X; Y), with a gap due to the constant o. This relationship shows that H-score is consistent with mutual information with a sufficiently large feature dimension k. In practice, H-score is much easier to compute than mutual information when the input variable X (or f S (X)) is continous, as mutual information are either computed based on binning, which has extra bias due to bin size, or more sophisticated methods such as kernel density estimation or neural networks BID8 ). On the other hand, H-score only needs to estimate conditional expectations, which requires less samples. In this section, we validate and analyze our transferability metric through experiments on real image data.1 The tasks considered cover both object classification and non-classification tasks in computer vision, such as depth estimation and 3D (occlusion) edge detection. To demonstrate that our transferability metric is indeed a suitable measurement for task transfer performance, we compare it with the empirical performance of transfering features learned from ImageNet 1000-class classification BID16 ) to Cifar 100-class classification BID15 ), using a network similar to FIG1. Comparing to ImageNet-1000, Cifar-100 has smaller sample size and its images have lower resolution. Therefore it is considered to be a more challenging task than ImageNet, making it a suitable case for transfer learning. In addition, we use a pretrained ResNet-50 as the source model due to its high performance and regular structure. Validation of H-score. The training data for the target task in this experiemnt consists of 20, 000 images randomly sampled from Cifar-100. It is further split 9:1 into a training set and a testing set. The transfer network was trained using stochastic gradient descent with batch size 20, 000 for 100 epochs. FIG3.a compares the H-score of transferring from five different layers (4a-4f) in the source network with the target log-loss and test accuracy of the respective features. As H-score increases, log-loss of the target network decreases almost linearly while the training and testing accuracy increase. Such behavior is consistent with our expectation that H-score reflects the learning performance of the target task. We also demonstrated that target sample size does not affect the relationship between H-score and log-loss FIG3.b).This experiment also showed another potential application of H-score for selecting the most suitable layer for fine-tuning in transfer learning. In the example, transfer performance is better when an upper layer of the source networks is transferred. 
This could be because the target task and the source task are inherently similar such that the representation learned for one task can still be discriminative for the other. Validation of Transferability. We further tested our transferability metric for selecting the best target task for a given source task. In particular, we constructed 4 target classification tasks with 3, 5, 10, and 20 object categories from the Cifar-100 dataset. We then computed the transferability from ImageNet-1000 (using the feature representation of layer 4f) to the target tasks. The are compared to the empirical transfer performance trained with batch size 64 for 50 epochs in Figure4.c. We observe a similar behavior as the H-score in the case of a single target task in FIG3.a, showing that transferability can directly predict the empirical transfer performance. In this experiment, we solve the source task selection problem for a collection of 3D sceneunderstanding tasks using the Taskonomy dataset from. In the following, we will introduce the experiment setting and explain how we adapt the transferability metric to pixel-to-pixel recognition tasks. Then we compare transferability with task affinity, an empirical transferability metric proposed by.Data and Tasks. The Taskonomy dataset contains 4,000,000 images of indoor scenes of 600 buildings. Every image has annotations for 26 computer vision tasks. We randomly sampled 20, 000 images as training data. Eight tasks were chosen for this experiment, covering both classifications and lower-level pixel-to-pixel tasks. Table 6 summaries the specifications of these tasks and sample outputs are shown in FIG4. For classification tasks, H-score can be easily calculated given the source features. But for pixel-topixel tasks such as Edges and Depth, their ground truths are represented as images, which can not be quantized easily. As a workaround, we cluster the pixel values in the ground truth images into a palette of 16 colors. Then compute the H-score of the source features with respect to each pixel, before aggregating them into a final score by averaging over the whole image. We ran the experiment on a workstation with 3.40 GHz ×8 CPU and 16 GB memory. Each pairwise H-score computation finished in less than one hour including preprocessing. Then we rank the source tasks according to their H-scores of a given target task and compare the ranking with that in. Pairwise Transfer Results. Source task ranking using transferability and affinity are visualized side by side in Figure 7, with columns representing source tasks and rows representing target tasks. For classification tasks (the bottom two rows in the transferability matrix), the top two transferable source tasks are identical for both methods. The best source task is the target task itself, as the encoder is trained on a task-specific network with much larger sample size. Scene Class. and Object Class. are ranked second for each other, as they are semantically related. Similar observations can be found in 2D pixel-to-pixel tasks (top two rows). The on lower rankings are noisier. A slightly larger difference between the two rankings can be found in 3D pixel-to-pixel tasks, especially 3D Occlusion Edges and 3D Keypoints. Though the top four ranked tasks of both methods are exactly the four 3D tasks. It could indicate that these low level vision tasks are closely related to each other so that the transferability among them are inherently ambiguous. 
We also computed the ranking correlations between transferability and affinity using Spearman's R and Discounted Cumulative Gain (DCG). Both criterion show positive correlations for all target tasks. The correlation is especially strong with DCG as higher ranking entities are given larger weights. The above observations inspire us to define a notion of task relatedness, as some tasks are frequently ranked high for each other. Specifically, we represent each task with a vector consisting of H-scores of all the source tasks, then apply agglomerative clustering over the task vectors. As shown in the dendrogram in Figure 7, 2D tasks and most 3D tasks are grouped into different clusters, but on a higher level, all pixel-to-pixel tasks are considered one category compared to the classifications tasks. Higher Order Transfer. Sometimes we need to combine features from two or more source tasks for better transfer performance. A common way to combine features from multiple models in deep neural networks is feature concatenation. For such problems, our transferability definition can be easily adapted to high order feature transfer, by computing the H-score of the concatenated features. Figure 8 shows the ranking of all combinations of source task pairs for each target task. For all tasks except for 3D Occlusion Edges and Depth, the best seond-order source feature is the combination of the top two tasks of the first-order ranking. We examine the exception in Figure 9, by visualizing the pixel-by-pixel H-scores of first and second order transfers to Depth using a heatmap (lighter color implies a higher H-score). Note that different source tasks can be good at predicting different parts of the image. The top row shows the of combining tasks with two different "transferability patterns" while the bottom row shows those with similar patterns. Combining tasks with different transferability patterns has a more significant improvement to the overall performance of the target task. Transfer learning. Transfer learning can be devided into two categories: domain adaptation, where knowledge transfer is achieved by making representations learned from one input domain work on a different input domain, e.g. adapt models for RGB images to infrared images BID27 ); and task transfer learning, where knowledge is transferred between different tasks on the same input domain BID25 ). Our paper focus on the latter prolem. Empirical studies on transferability. compared the transfer accuracy of features from different layers in a neural network between image classification tasks. A similar study was performed for NLP tasks by BID5. determined the optimal transfer hierarchy over a collection of perceptual indoor scene understanidng tasks, while transferability was measured by a non-parameteric score called "task affinity" derived from neural network transfer losses coupled with an ordinal normalization scheme. Task relatedness. One approach to define task relatedness is based on task generation. Generalization bounds have been derived for multi-task learning BID1 ), learning-to-learn BID19 ) and life-long learning BID21 ). Although these studies show theoretical on transferability, it is hard to infer from data whether the assumptions are satisfied. Another approach is estimating task relatedness from data, either explicitly BID3; Zhang Representation learning and evaluation. Selecting optimal features for a given task is traditionally performed via feature subset selection or feature weight learning. 
Subset selection chooses features with maximal relevance and minimal redundancy according to information theoretic or statistical criteria BID20; ). The feature weight approach learns the task while regularizing feature weights with sparsity constraints, which is common in multi-task learningLiao & Carin FORMULA9; BID0. In a different perspective, BID13 consider the universal feature selection problem, which finds the most informative features from data when the exact inference problem is unknown. When the target task is given, the universal feature is equivalent to the minimum error probability feature used in this work. In this paper, we presented H-score, an information theoretic approach to estimating the performance of features when transferred across classification tasks. Then we used it to define a notion of task transferability in multi-task transfer learning problems, that is both time and sample complexity efficient. The ing transferability metric also has a strong operational meaning as the ratio between the best achievable error exponent of the transferred representation and the minium error exponent of the target task. Our transferability score successfully predicted the performance for transfering features from ImageNet-1000 classification task to Cifar-100 task. Moreover, we showed how the transferability metric can be applied to a set of diverse computer vision tasks using the Taskonomy dataset. In future works, we plan to extend our theoretical to non-classification tasks, as well as relaxing the local assumptions on the conditional distributions of the tasks. We will also investigate properties of higher order transferability, developing more scalable algorithms that avoid computing the H-score of all task pairs. On the application side, as transferability tells us how different tasks are related, we hope to use this information to design better task hierarchies for transfer learning. DISPLAYFORM0 x m with the following hypotheses: DISPLAYFORM1 Let P x m be the empirical distribution of the samples. The optimal test, i.e., the log likelihood ratio test can be stated in terms of information-theoretic quantities as follows: DISPLAYFORM2 Figure 10: The binary hypothesis testing problem. The blue curves shows the probility density functions for P 1 and P 2. The rejection region and the acceptance region are highlighted in red and blue, respectively. The vertical line indicates the decision threshold. Further, using Sannov's theorem, we have that asymptotically the probability of type I error DISPLAYFORM3 where P * DISPLAYFORM4 m log T } denotes the rejection region. Similarly, for type II error DISPLAYFORM5 where P * 2 = argmin P ∈A D(P ||P 2) and A = {x m : FIG1 The overall probability of error is P (m) e = αP r(H 0) + βP r(H 1) and the best achievable exponent in the Bayesian probability of error (a.k.a. Chernoff exponent) is defined as: DISPLAYFORM6 DISPLAYFORM7 See Cover & BID6 for more information on error exponents and its related theorems. Under review as a conference paper at ICLR 2019 Now consider the same binary hypothesis testing problem, but with the local constraint DISPLAYFORM0 denote the information vectors corresponding to P i for i = 1, 2.Makur et al. FORMULA3 uses local information geometry to connect the error exponent in hypothesis testing to the length of certain information vectors, summarized in the following two lemmas. Lemma 1. Given zero-mean, unit variance feature function f (x): X → R, the optimal error exponent (a.k.a. 
Chernoff exponent) of this hypothesis testing problem is DISPLAYFORM1 Lemma 2. Given zero-mean, unit variance feature function f (x): X → R, the error exponent of a mismatched decision function of the form l = DISPLAYFORM2 where ξ(x) = P 0 (x)f (x) is the feature vector associated with f (x).As our discussion of transferability mostly concerns with multi-dimensional features, we present the k-dimensional generalization of Lemma 2 below: (Equation 1 in the main paper.) DISPLAYFORM3 for all i, and cov(f (X)) = I, we define a k-d statistics of the form l k = (l 1, ..., l k) where DISPLAYFORM4 be the corresponding feature vectors with DISPLAYFORM5 Proof. According to Cramér's theorem, the error exponent under P i is DISPLAYFORM6. With the techniques developed in local information geometry, the above above problem is equivalent to the following problem: DISPLAYFORM7, it is easy to show that E 1 (λ) = E 1 (λ) when λ = 1 2. Then the overall error probability has the exponent as shown in Equation. Given random variables X and Y, the HGR maximal correlation ρ(X; Y) defined in Equation 2 is a generalization of the Pearson's correlation coefficient to capture non-linear dependence between random variables. According to BID23, it satisfies all seven natural postulates of a suitable dependence measure. Some notable properties are listed below: When the feature dimension is 1, the solution of the maximal HGR correlation is ρ(X; Y) = σ 1, the largest singular value of the DTM matrixB. For k-dimensional features, ρ(X; Y) = k i (σ i). However, computingB requires estimating the joint probability P Y X from data, which is inpractical in real applications. BID4 proposed an efficient algorithm, alternating condition expectation (ACE), inspired by the power method for computing matrix singular values. Require: training samples {(( DISPLAYFORM0 In Algorithm 1, we first initialize g as a random k-dimensional zero-mean function. Then iteratively update f (x) and g(y) for all x ∈ X and y ∈ Y. The conditional distributions on Line 4 and 6 are computed as the empirical average over m samples. The normalization steps on Lines 5 and 7 can also be implemented using the Gram-Schmidt process. Note that the ACE algorithm has several variations in previous works, including a kernel-smoothed version BID4 ) and a parallel version with improved efficiency BID13 ). An alternative formulation that supports continuous X and large feature dimension k has also been proposed recently BID26 ).Next we look at the convergence property of the ACE algorithm. Let f (X), g(Y) be the true maximal correlation functions, and letf (X),g(Y) be estimations computed with Algorithm 1 from m i.i.d. sampled training data. Similarly, denote by DISPLAYFORM1 ] the true and estimated maximal correlations, respectively. Using Sanov's Theorem, we can show that for a small ∆ > 0, the probability that the ratio between the true and estimated maximal correlation is within 1 ± ∆ drops exponentially as the number of samples increases. Hence the ACE algorithm converges in exponential time. The following theorem gives the precise sampling complexity for k = 1. Theorem 2. For any random variables X and Y with joint probability distribution P Y X, if X and Y are not independent, then for any f: X → R and g: DISPLAYFORM2 for any given ∆ > 0. To simplify the proof, we first consider the case when the feature function is 1-dimensional.i.e. f: X → R. We have the following lemma: DISPLAYFORM0 where ξ is the feature vector corresponding to f.Proof. 
Since ξ(x) = P X (x)f (x), we have DISPLAYFORM1 The last equality uses the assumption that E[f (x)] = 0.Theorem 3 (1D version of Theorem 1). Given P X|Y =0, P X|Y =1 ∈ N X (P 0,X) and features f: DISPLAYFORM2 we first derive the following properties of the conditional expectations of f (x): DISPLAYFORM3 On the R.H.S. of Equation 5, we apply Lemma 4 to write B ξ DISPLAYFORM4 Next consider the L.H.S. of the equation, by Lemma 2, we have DISPLAYFORM5 2 for some constant c 0. DISPLAYFORM6 For k ≥ 2, Lemma 4 can be restated as follows: DISPLAYFORM7 where columns of Ξ are the information vectors corresponding to f 1 (x),..., f k (x).Proof. First, note that DISPLAYFORM8 T where 1 is a column vector with all entries 1 and length |Y|. Since E[f (X)] = 0, we havẽ DISPLAYFORM9 It follows that DISPLAYFORM10 Finally, we derive the multi-dimensional case for Theorem 1.Proof of Theorem 1. Using Lemma 5 and a similar argument as in the simplified proof, the R.H.S of the equation becomes DISPLAYFORM11 By Lemma 3, the L.H.S. of the equation can be written as DISPLAYFORM12 Equation FORMULA51 gives a more understandable expression of the normalization term. We can also writẽ BΞ as follows:B DISPLAYFORM13 T where 1 is a column vector with all entries 1 and length |Y|, we have DISPLAYFORM14 On the other hand, DISPLAYFORM15 The last equality is derived by substituting and into. In softmax regression, given m training examples {( DISPLAYFORM0, the cross-entropy loss of the model is DISPLAYFORM1 Minimizing is equivalent to minimizing D P Y X ||P X Q Y |X where P Y X is the joint empirical distribution of (X, Y).Using information geometry, it can be shown that under a local assumption DISPLAYFORM2 which reveals a close connection between log loss and the modal decomposition ofB. In consequence, it is reasonable to measure the classification performance with B − ΨΦ T 2 F given a pair of (f, θ) associated with (Ψ, Φ).In the context of estimating transferability, we are interested in a one-sided problem, where Φ S is given by the source feature and Ψ becomes the only variable. Training the network is equivalent to finding the optimal weight Ψ * that minimizes the log-loss. By taking the derivative of the Objective function with respect to Ψ, we get DISPLAYFORM3 Substituting FORMULA3 in the Objective of FORMULA3, we can derive the following close-form solution for the log loss. DISPLAYFORM4 The first term in is fixed given T T while the second term has exactly the form of H-score. This implies that log loss is negatively linearly related to H-score. We demonstrates this relationship experimentally, using a collection of synthesized tasks FIG1 ). In particular, the target task is generated based on a random stochastic matrix P Y0|X, and 20 source tasks are generated with the conditional probability matrix P Yi|X = P Y0|X + iλI for some positive constant λ. The universal minimum error probability features for each source task are used as the source features f Si (x), while the respective log-loss are obtained through training a simple neural network in Figure 2 with cross-entropy loss. The relationship is clearly linear with a constant offset. Proposition 1. Under the local assumption that DISPLAYFORM0, whereB of the DTM matrix of X and Y.Proof. First, we define φ DISPLAYFORM1 and let Φ X|Y ∈ R |X |×|Y| denote its matrix version. 
Then we have DISPLAYFORM2 Next, we express the mutual information in terms of information vector φ X|Y, Here we present some detailed on the comparison between H-score and the affinity score in for pairwise transfer. DISPLAYFORM3 The of the classification tasks are shown in FIG1 and the of Depth is shown in 13.We can see in general, although affinity and transferability have totally different value ranges, they tend to agree on the top few ranked tasks. During the quantization process of the pixel-to-pixel task labels (ground truth images), we are primarily concerned with two factors: computational complexity and information loss. Too much information loss will lead to bad approximation of the original problems. On the other hand, having little information loss requires larger cluster size and computation cost. Figure FORMULA3 shows that even after quantization, much of the information in the images are retained. To test the sensitivity of the cluster size, we used cluster centroids to recover the ground truth image pixel-by-pixel. The 3D occlusion Edge detection on a sample image is shown in FIG1. When the cluster number is set to N = 5 (right), most detected Edges in the ground truth image (left) are lost. We found that N = 16 strikes a good balance between recoverability and computation cost. | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BkxAUjRqY7 | We present a provable and easily-computable evaluation function that estimates the performance of transferred representations from one learning task to another in task transfer learning. |
This paper presents a generic framework to tackle the crucial class mismatch problem in unsupervised domain adaptation (UDA) for multi-class distributions. Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, hence bringing insufficient alleviation to the latent mismatch problem. Compared with pseudo labels, class prototypes are more accurate and reliable since they summarize over all the instances and are able to represent the inherent semantic distribution shared across domains. Therefore, we propose a novel Prototype-Assisted Adversarial Learning (PAAL) scheme, which incorporates instance probabilistic predictions and class prototypes together to provide reliable indicators for adversarial domain alignment. With the PAAL scheme, we align both the instance feature representations and class prototype representations to alleviate the mismatch among semantically different classes. Also, we exploit the class prototypes as a proxy to minimize the within-class variance in the target domain to mitigate the mismatch among semantically similar classes. With these novelties, we constitute a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework which tackles the class mismatch problem well. We demonstrate the good performance and generalization ability of the PAAL scheme and the PACDA framework on two UDA tasks, i.e., object recognition (Office-Home, ImageCLEF-DA, and Office) and synthetic-to-real semantic segmentation (GTA5→Cityscapes and Synthia→Cityscapes). Unsupervised domain adaptation (UDA) aims to leverage the knowledge of a labeled data set (source domain) to help train a predictive model for an unlabeled data set (target domain). Deep UDA methods bring noticeable performance gains to many tasks (; ; ; ; ; a) by exploiting supervision from heterogeneous sources. Some methods exploit maximum mean discrepancy (MMD) or other distribution statistics like central moments (; ;) for domain adaptation. Recently, generative adversarial learning provides a promising alternative solution to the UDA problem. Since the labels of the target instances are not given in UDA, adversarial learning schemes for adaptation suffer from cross-domain misalignment, where the target instances from a class A are potentially misaligned with source instances from another class B. Inspired by the pseudo-labeling strategy from semi-supervised learning, previous methods either used the pseudo labels in the target domain to perform joint distribution discrepancy minimization (; or developed conditional adversarial learning methods that involve one high-dimensional domain discriminator or multiple discriminators (b;). Though effective, these conditional domain adversarial learning methods align different instances from different domains relying only on their own predictions. Simple probabilistic predictions or pseudo labels may not accurately represent the semantic information of input instances, misleading the alignment. A toy example is given in Fig. 1(a). The pseudo label of the chosen instance x is inclined to be class 'square' while the ground truth label is class 'circle'. Only guided by the instance prediction, the 'circle' class in the target domain and the 'square' class in the source domain are easily confused, causing the misalignment in the adversarial domain adaptation.
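The abstract above describes prototypes that summarize over all instances according to their probabilistic predictions and that, as elaborated later in the excerpt, are refreshed with a moving-average update. The sketch below shows that bookkeeping in its simplest form; the momentum value, batch shapes, and normalization are assumptions made for illustration, not the paper's exact specification.

```python
import numpy as np

class PrototypeBank:
    """One prototype per class, formed as a prediction-weighted feature
    average and refreshed across batches with an exponential moving average."""

    def __init__(self, num_classes, feat_dim, momentum=0.9):
        self.protos = np.zeros((num_classes, feat_dim))
        self.momentum = momentum          # EMA coefficient (assumed value)

    def update(self, feats, probs):
        # feats: (batch, feat_dim); probs: (batch, num_classes) softmax outputs.
        # Instances with more confident predictions for a class contribute more
        # to that class's prototype.
        weights = probs / (probs.sum(axis=0, keepdims=True) + 1e-8)
        batch_protos = weights.T @ feats              # (num_classes, feat_dim)
        self.protos = self.momentum * self.protos + (1 - self.momentum) * batch_protos
        return self.protos

# Toy usage with random features and predictions.
rng = np.random.default_rng(0)
bank = PrototypeBank(num_classes=5, feat_dim=16)
for _ in range(3):
    feats = rng.normal(size=(32, 16))
    probs = rng.dirichlet(np.ones(5), size=32)
    bank.update(feats, probs)
print(bank.protos.shape)
```

Because the prototypes pool evidence from every instance rather than a single pseudo label, they provide the more stable conditioning signal that the PAAL scheme relies on.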
To remedy the misalignment, we propose to exploit the class prototypes for adversarial domain alignment, instead of using only the possibly inaccurate predictions. Prototypes are global feature representations of different classes and are relevant to the inherent semantic structures shared across (a) conditional adversarial learning (b) prototype-assisted adversarial learning Figure 1: Illustration of two adversarial learning schemes. Different from class-agnostic adversarial learning that pursues the marginal distribution alignment but ignores the semantic consistency, (a) conditional adversarial learning relies heavily on the instance-level pseudo labels to perform conditional distribution alignment, while (b) our prototype-assisted adversarial learning integrates the instance-level pseudo labels and global class prototypes to make the conditional indicators more reliable. Class information is denoted in different shapes with source in solid and target in hollow. domains. As shown in Fig. 1(b), class prototypes are expected to remedy the negative effects of inaccurate probabilistic predictions. Motivated by this, we propose a Prototype-Assisted Adversarial Learning (PAAL) scheme which complements instance predictions with class prototypes to obtain more reliable conditional information for guiding the source-target feature representation alignment. Specifically, we summarize the class prototypes from all instances according to their predictions. In this way, on one hand, we lower the dependence of class prototypes on instance predictions which may be inaccurate, and on the other hand, we encourage the instances with greater certainty to contribute more to their corresponding class prototypes. The prototypes are updated dynamically through a moving average strategy to make them more accurate and reliable. Then by broadcasting class prototypes to each instance according to its probability prediction, the inaccurate semantic distribution depicted by instance predictions can be alleviated. Based on reliable prototype-based conditional information, we align both the instance feature representations and the class prototypes through the proposed PAAL scheme to relieve the alignment among semantically dissimilar instances. However, such a conditional domain alignment may promote the confusion among semantically similar instances across domains to some degree. To further alleviate it, we introduce an intra-class objective in the target domain to pursue the class compactness. Built on the proposed PAAL scheme and this intra-class compactness objective, we develop a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework for solving UDA problems. Extensive experimental evaluations on both object recognition and semantic segmentation tasks clearly demonstrate the advantages of our approaches over previous state-of-the-arts; ). The contributions of this work can be summarized into three folds: 1) To the best of our knowledge, we are the first to leverage the class prototypes in conditional adversarial learning to prevent the misalignment in UDA; 2) We propose a simple yet effective domain adversarial learning framework PACDA to remedy the misalignment among semantically similar instances as well as semantically dissimilar instances; 3) The proposed PAAL scheme and PACDA framework are generic, and our framework achieves the state-of-the-art on several unsupervised domain adaptation tasks including object recognition and semantic segmentation. Unsupervised Domain Adaptation. 
UDA is first modeled as the covariate shift problem where marginal distributions of different domains are different but their conditional distributions are the same. To address it, (Dudík et al., 2006;) exploit a nonparametric instance re-weighting scheme. Another prevailing paradigm (; ;) aims to learn feature transformation with some popular cross-domain metrics, e.g., the empirical maximum mean discrepancy (MMD) statistics. Recently, a large number of deep UDA works (; ; ;) have been developed and boosted the performance of various vision tasks. Generally, they can be divided into discrepancy-based and adversarial-based methods. Discrepancy-based methods address the dataset shift by mitigating specific discrepancies defined on different layers of a shared model between domains, e.g. resembling shallow feature transforma-tion by matching higher moment statistics of features from different domains . Recently, adversarial learning has become a dominantly popular solution to domain adaptation problems. It leverages an extra domain discriminator to promote domain confusion. designs a gradient reversal layer inside the classification network and utilizes an inverted label GAN loss to fool the discriminator. Pseudo-labeling. UDA can be regarded as a semi-supervised learning (SSL) task where unlabeled data are replaced by the target instances. Therefore, some popular SSL strategies, e.g., entropy minimization (; b), mean-teacher , and virtual adversarial training , have been successfully applied to UDA. Pseudo-labeling is favored by most UDA methods due to its convenience. For example, (; exploit the intermediate pseudo-labels with tri-training and self-training, respectively. obtains target-specific prototypes with the help of pseudo labels and aligns prototypes across domains at different levels. Recently, curriculum learning , self-paced learning and re-weighting schemes ) are further leveraged to tackle possible false pseudo-labels. Conditional Domain Adaptation. Apart from the explicit integration with the last classifier layer, pseudo-labels can also be incorporated into adversarial learning to enhance the feature-level domain alignment. Concerning shallow methods , pseudo-labels can help mitigate the joint distribution discrepancy via minimizing multiple class-wise MMD measures. proposes to align the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy criterion. Recently, (b;) leverages the probabilities with multiple domain discriminators to enable fine-grained alignment of different data distributions in an end-to-end manner. In contrast, conditions the adversarial domain adaptation on discriminative information via the outer product of feature representation and classifier prediction. Motivated by the semantically-consistent GAN, imposes a multi-way adversarial loss instead of a binary one on the domain alignment. However, these methods all highly rely on the localized pseudo-labels to align labelconditional feature distributions and ignore the global class-level semantics. As far as we know, we are the first to exploit class prototypes to guide the domain adversarial learning. Compared with (;, our PACDA framework complements the original feature representations with reliable semantic features and merely involves two low-dimensional domain discriminators, making the domain alignment process simple, conditional, and reliable. 
In this section, we first describe the basic setting of UDA and then give detailed descriptions of the proposed PAAL scheme and the PACDA framework. Though proposed for image classification, they can also be easily applied to semantic segmentation. In a vanilla UDA task, we are given label-rich source domain data {(x_s^i, y_s^i)} sampled from the joint distribution P_s(x_s, y_s) and unlabeled target domain data {x_t^i} drawn from the target distribution Q_t, where x_s^i ∈ X_S and y_s^i ∈ Y_S denote an image and its corresponding label from the source domain dataset, x_t^i ∈ X_T denotes an image from the target domain dataset, and P_s ≠ Q_t. The goal of UDA is to learn a discriminative model from X_S, Y_S, and X_T to predict labels for the unlabeled target samples X_T. A vanilla domain adversarial learning framework consists of a feature extractor network G, a classifier network F, and a discriminator network D. Given an image x, we denote the feature representation vector extracted by G as f = G(x) ∈ R^d and the probability prediction obtained by F as p = F(f) ∈ R^c, where d is the feature dimension and c is the number of classes. The vanilla domain adversarial learning method can be formulated as a minimax optimization problem over G, F, and D, where the binary domain classifier D predicts the domain assignment probability of the input features, L_y(G, F) is the cross-entropy loss on source domain data for the classification task, and λ_adv is the trade-off parameter. [Figure 2 caption: M_ema represents the global class prototype matrix, while M_{s,t} is computed from the source or target instances within the current batch.] The misalignment of multi-class distributions in UDA challenges this popular vanilla adversarial learning. In previous works, target domain data are conditioned only on the corresponding pseudo labels predicted by the model for adversarial domain alignment. The general optimization process of these methods is the same as the aforementioned vanilla domain adversarial learning, except that feature representations jointly with predictions are considered by the discriminator D, yielding a conditional adversarial loss that leverages the classification predictions p_s and p_t. A classic previous work implicitly conditions the feature representation on the prediction through the outer product f ⊗ p, and uses one shared discriminator to align the conditioned feature representations; it further proves that using the outer product can perform much better than the simple concatenation f ⊕ p. Other works explicitly utilize multiple class-wise domain discriminators to align the feature representations relying on the corresponding predictions. However, the pseudo labels may be inaccurate due to the domain shift. Therefore, only conditioning the alignment on pseudo labels cannot safely remedy the misalignment. Compared with the pseudo labels, the class prototypes are more robust and reliable in terms of representing the shared semantic structures. To acquire more reliable and accurate conditional information for domain adversarial learning, we propose to complement instance predictions with class prototypes and reformulate the adversarial loss into the prototype-assisted adversarial learning loss L_adv^paal(G, D), in which M ∈ R^{c×d} denotes the global class prototype matrix. In reality, the reliable conditional information is obtained by broadcasting the global class prototypes to each independent instance according to its prediction p.
We propose to summarize feature representations of the instances within the same class as the corresponding prototype. Then the probability prediction is leveraged to obtain accurate class prototypes. Using predictions as weights can adaptively control the contributions of typical and non-typical instances to the class prototype, making class prototypes more reliable. Specifically, we first gather the feature representation of each instance relying on its prediction to generate the batch-level class prototypes. Then the global class prototypes can be obtained by virtue of an averaging strategy such as exponential moving average (ema) on the batch ones. This process can be formulated as Here n means the batch size, p k,i represents the probability of the i-th instance belonging to the k-th semantic class, λ ema is an empirical weight, M ∈ R c×d is the batch-level class prototype matrix and M ema is the global one computed by certain source domain data and contributes to more reliable conditional information exploited by discriminators. Similarly, batch-level class prototypes are broadcast to each instance in this batch through M T a p a which can be denoted as f a, a ∈ {s, t}. 3.3 PROTOTYPE-ASSISTED CONDITIONAL DOMAIN ADAPTATION (PACDA) FRAMEWORK With our prototype-based conditional information, we further propose a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework. This framework aligns both instance-level and prototype-level feature representations through PAAL and promotes the intra-class compactness in target domain such that the misalignment can be substantially alleviated even though no supervision is available in the target domain. Its overall architecture is shown in Fig. 2. Besides the backbone feature extractor G and the task classifier F, there are two discriminators in our framework PACDA, i.e., the instance-level feature discriminator D f and the prototype-level feature discriminator D p. We can formulate our general objective function as (where λ denotes balance factors among different loss functions, L y is the supervised classification loss on source domain data described by Eq., L f adv is the adversarial loss to align instance feature representations across domains, L p adv is the adversarial loss to align class prototype representations across domains, and L t is the loss to promote the intra-class compactness in target domain. Instance-Level Alignment Conditioning the instance feature representation on our prototype-based conditional information, we seek to align feature representations across domains at the instance-level through discriminator D f. With the assistance of the accurate semantic structures embedded in class prototypes, misalignment among semantically dissimilar instances can be effectively alleviated. We can define the instance-level adversarial loss L adv f as Prototype-Level Alignment Instance-level alignment only implicitly aligns the multi-class distribution across domains, which may not ensure the semantic consistency between two domains. Besides, since in practice global class prototypes are collected from only source domain data, which possibly cannot accurately represent inherent semantic structures in the target domain due to the domain shift. Taking into account these two causes, we perform the prototype-level alignment with discriminator D p to explicitly align the class prototype representations across domains. 
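Because the prototype and objective equations above are elided in this extraction, the following PyTorch-style sketch illustrates one way to realize the described computation: batch-level prototypes as prediction-weighted averages of features, an exponential-moving-average update of the global prototype matrix M_ema (λ_ema = 0.7 as reported later), and the broadcast M^T p that conditions each instance for the discriminators. The function names, the normalization of the weighted average, and the concatenation used to form the discriminator input are assumptions rather than the authors' code; the concatenation is chosen to match the 2 × d discriminator input mentioned in the appendix.

```python
import torch

def batch_prototypes(feats, probs, eps=1e-6):
    # feats: (n, d) features f = G(x); probs: (n, c) predictions p = F(f).
    # Each class prototype is a prediction-weighted average of batch features,
    # so more confident (typical) instances contribute more (assumed normalization).
    weighted_sum = probs.t() @ feats                   # (c, d)
    weights = probs.sum(dim=0).unsqueeze(1)            # (c, 1)
    return weighted_sum / (weights + eps)              # (c, d) batch-level M

def update_global_prototypes(M_ema, M_batch, lam_ema=0.7):
    # Exponential moving average of prototypes; M_ema is the global matrix.
    if M_ema is None:
        return M_batch.detach()
    return lam_ema * M_ema + (1.0 - lam_ema) * M_batch.detach()

def paal_condition(feats, probs, M_ema):
    # Broadcast the global prototypes to each instance via M^T p (an (n, d) tensor)
    # and pair it with the instance feature, giving a 2*d-dimensional input for D_f.
    proto_feats = probs @ M_ema                        # (n, d)
    return torch.cat([feats, proto_feats], dim=1)      # (n, 2*d)
```

A training step would feed paal_condition(f_s, p_s, M_ema) and paal_condition(f_t, p_t, M_ema) to the instance-level discriminator D_f with binary domain labels, while the prototype-level discriminator D_p would instead receive the batch prototype matrices of the two domains.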
The specific loss function Intra-Class Compactness Although adversarial alignment based on PAAL can relieve the misalignment among obviously semantically different instances, it cannot well handle the misalignment among semantically similar instances. Specifically, incorporating class prototypes into instance predictions would confuse semantically similar instances during domain alignment and in the misalignment among them. To solve this problem, our framework further promotes the intra-class compactness in the target domain to enlarge the margin between instances of semantically similar classes. Taking the prototypes as proxy, we minimize the following loss for target domain samples to encourage the intra-class compactness: Thus, the complete minimax optimization problem of our PACDA framework can be formulated as With only two low-dimensional (2 × d) discriminators added, we effectively remedy the misalignment in domain adversarial learning. Some theoretical insights with the help of domain adaptation theory is discussed in the Appendix. We conduct experiments to verify the effectiveness and generalization ability of our methods, i.e., PACDA (full) in Eq. and PAAL (λ p adv = λ t = 0) on two different UDA tasks, including cross-domain object recognition on ImageCLEF-DA 1, Office31 and OfficeHome , and synthetic-to-real semantic segmentation for GTA5 →Cityscapes and Synthia → Cityscapes. Datasets. Office-Home is a new challenging dataset that consists of 65 different object categories found typically in 4 different Office and Home settings, i.e., Artistic (Ar) images, Clip Art (Ca), Product images (Pr), and Real-World (Re) images. ImageCLEF-DA is a standard dataset built for the'ImageCLEF2014:domain-adaptation' competition. We follow to select 3 subsets, i.e., C, I, and P, which share 12 common classes. Office31 is a popular dataset that includes 31 object categories taken from 3 domains, i.e., Amazon (A), DSLR (D), and Webcam (W). Cityscapes is a realistic dataset of pixel-level annotated urban street scenes. We use its original training split and validation split as the training target data and testing target data respectively. GTA5 consists of 24,966 densely labeled synthetic road scenes annotated with the same 19 classes as Cityscapes. For Synthia, we take the SYNTHIA-RAND-CITYSCAPES set as the source domain, which is composed of 9,400 synthetic images compatible with annotated classes of Cityscapes. Implementation Details. For object recognition, we follow the standard protocol , i.e. using all the labeled source instances and all the unlabeled target instances for UDA, and report the average accuracy based on three random trials for fair comparisons. ), we experiment with ResNet-50 model pretrained on ImageNet. Specifically, we follow to choose the network parameters, and all convolutional layers and the classifier layer are trained through backpropagation, where λ t =5e-3, λ ema =5e-1, λ f adv and λ p adv increase from 0 to 1 with the same strategy as . Regarding the domain discriminator, we design a simple two-layer classifier (256→1024→1) for both D f and D p. Empirically, we fix the batch size to 36 with the initial learning rate being 1e-4. For semantic segmentation, we adopt DeepLab-V2 (a) based on ResNet-101 as done in (; b;). Following DCGAN , the discriminator network consists of three 4 × 4 convolutional layers with stride 2 and channel numbers {256, 512, 1}. 
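Since the individual loss equations are also elided above, the sketch below assembles the overall PACDA objective under explicitly stated assumptions: the two adversarial terms are written as standard binary cross-entropy domain losses on the prototype-conditioned inputs, and the intra-class compactness term L_t is instantiated as a squared distance between each target feature and its softly-assigned prototype. These specific forms and the helper names are plausible stand-ins rather than the paper's exact equations; the default weights follow the GTA5→Cityscapes settings reported below.

```python
import torch
import torch.nn.functional as F

def domain_bce(disc, inp_src, inp_tgt):
    # Assumed binary domain-adversarial loss: the discriminator should output
    # high scores on source inputs and low scores on target inputs.
    logit_s, logit_t = disc(inp_src), disc(inp_tgt)
    return (F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s)) +
            F.binary_cross_entropy_with_logits(logit_t, torch.zeros_like(logit_t)))

def intra_class_compactness(feats_t, probs_t, M_ema):
    # One plausible instantiation of L_t: pull each target feature toward the
    # prototype it is softly assigned to, using prototypes as proxy labels.
    proxy = probs_t @ M_ema                            # (n, d)
    return ((feats_t - proxy) ** 2).sum(dim=1).mean()

def pacda_objective(loss_y, D_f, D_p, cond_s, cond_t, M_s, M_t,
                    feats_t, probs_t, M_ema,
                    lam_f=1e-3, lam_p=1e-3, lam_t=1e-5):
    # L = L_y + lam_f * L_f_adv + lam_p * L_p_adv + lam_t * L_t.
    # Assuming gradient-reversal layers in front of D_f and D_p, a single
    # minimization of this sum trains G and F against the two discriminators.
    loss_f_adv = domain_bce(D_f, cond_s, cond_t)       # instance-level alignment
    loss_p_adv = domain_bce(D_p, M_s, M_t)             # prototype-level alignment
    loss_t = intra_class_compactness(feats_t, probs_t, M_ema)
    return loss_y + lam_f * loss_f_adv + lam_p * loss_p_adv + lam_t * loss_t
```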
In training, we use SGD to optimize the network with momentum (0.9), weight decay (5e-4), and initial learning rate (2.5e-4), with the same learning rate policy as in prior work. Discriminators are optimized by Adam with momentum (β_1 = 0.9, β_2 = 0.99) and initial learning rate (1e-4), along with the same decreasing strategy as above. For both tasks, λ^f_adv is set to 1e-3 and λ_ema is set to 0.7. For GTA5→Cityscapes, λ^p_adv = 1e-3 and λ_t = 1e-5. For Synthia→Cityscapes, λ^p_adv = 1e-4 and λ_t = 1e-4. All experiments are implemented via PyTorch on a single Titan X GPU. The total iteration number is set to 10k for object recognition and 100k for semantic segmentation. For object recognition tasks, we choose the hyper-parameters that give the minimal mean entropy of target data on Ar→Cl for convenience. For semantic segmentation tasks, the training split of Cityscapes is used for hyper-parameter selection. Data augmentation tricks such as random scaling or random flipping and ten-crop ensemble evaluation are not adopted. Cross-Domain Object Recognition. The comparison between our methods (i.e., PAAL and PACDA) and state-of-the-art (SOTA) approaches on Office-Home, Office31, and ImageCLEF-DA is shown in Tables 1 and 2, respectively. As indicated in these tables, PACDA improves on previous approaches in average accuracy for all three benchmarks (e.g., 67.3%→68.7% for Office-Home, 88.1%→88.8% for ImageCLEF-DA, and 87.7%→89.3% for Office31). Generally, PACDA performs the best for most transfer tasks. Taking a careful look at PAAL, we find that it always beats CDAN and achieves competitive performance with SOTA methods like CAT. [Table 3 caption: Comparison of synthetic-to-real semantic segmentation using the same architecture with NonAdapt, AdaptSeg, AdvEnt, CLAN, and AdaptPatch. Top: GTA5 → Cityscapes. Bottom: Synthia → Cityscapes.] Synthetic-to-real Semantic Segmentation. We compare PAAL and PACDA with SOTA methods on synthetic-to-real semantic segmentation. Following prior protocol, we evaluate models on all 19 classes for GTA5→Cityscapes and on only 13 classes for Synthia→Cityscapes. As shown in Table 3, without bells and whistles, our PAAL method outperforms all of those methods and our PACDA framework further achieves a new SOTA on both tasks, i.e., 43.8%→46.6% for GTA5→Cityscapes and 47.8%→49.2% for Synthia→Cityscapes in terms of the mean IoU (mIoU) value. Quantitative Analysis. To verify the effectiveness of each component in the overall objective, we introduce a variant named PAAL_{f,p} that merely ignores the intra-class objective (λ_t = 0). The empirical convergence curves for Ar→Cl in Fig. 3(a) imply that all of our variants tend to converge after 10k iterations, and the second term can help accelerate convergence. Fig. 3(b) shows that all terms in the PACDA framework, i.e., PAAL alignment at different levels and the intra-class objective, bring evident improvement on both tasks. In Fig. 3(c), we provide the proxy A-distances of different methods for Ar→Cl and C→I. The A-distance Dist_A = 2(1 − 2ε) is a popular measure of domain discrepancy, where ε is the test error of a binary classifier trained on the learned features. All the UDA methods have smaller distances than 'source only' by aligning the different domains. Besides, our PACDA has the minimum distance for both tasks, implying that it can learn better features to bridge the domain gap. We also examine the sensitivity of our PACDA in Fig.
Qualitative Analysis. For object recognition, we study the t-SNE visualizations of aligned features generated by different UDA methods in Fig. 4. As expected, conditional methods including CDAN and PAAL can semantically align multi-class distributions much better than DANN. Besides, PAAL learns slightly better features than CDAN due to less misalignment. Once the intra-class objective is considered, PACDA further enhances PAAL by pushing away semantically confusing classes, which achieves the best adaptation performance. For semantic segmentation, we present some qualitative results in Fig. 5. Similarly, PAAL effectively improves the adaptation performance, and PAAL_{f,p} as well as PACDA can further improve the segmentation results. In this work, we developed the prototype-assisted adversarial learning scheme to remedy the misalignment in UDA tasks. Unlike previous conditional schemes whose performance is vulnerable to inaccurate instance predictions, our proposed scheme leverages the reliable and accurate class prototypes for aligning multi-class distributions across domains and is demonstrated to be more effective in preventing the misalignment. We further augment this scheme by imposing intra-class compactness with the prototypes as proxy. Extensive evaluations on both object recognition and semantic segmentation tasks clearly justify the effectiveness and superiority of our UDA methods over well-established baselines. We now try to explain why PAAL works well for UDA according to classical domain adaptation theory. Denote by ε_P(F) = E_{(f,y)∼P}[F(f) ≠ y] the risk of a classifier F ∈ H w.r.t. the distribution P, and by ε_P(F_1, F_2) = E_{(f,y)∼P}[F_1(f) ≠ F_2(f)] the disagreement between hypotheses F_1, F_2 ∈ H. In particular, the theory gives the well-known upper bound on the target risk ε_Q(F) of a classifier F: ε_Q(F) ≤ ε_P(F) + [ε_P(F*) + ε_Q(F*)] + (1/2) d_{H∆H}(P, Q), where F* is the ideal joint classifier F* = arg min_{F∈H} [ε_P(F) + ε_Q(F)], and the last term involves the classical H∆H-divergence d_{H∆H}(P, Q) = 2 sup_{F,F*∈H} |ε_P(F, F*) − ε_Q(F, F*)|. Moreover, the empirical H∆H-divergence calculated from m samples of each of the distributions P and Q converges uniformly to the true divergence for classifier classes H of finite VC dimension d: d_{H∆H}(P, Q) ≤ d̂_{H∆H}(P, Q) + 4 √((d log(2m) + log(2/δ)) / m). Vanilla adversarial adaptation introduces a binary domain discriminator to minimize the empirical divergence d̂_{H∆H}(P, Q), which aligns the marginal distributions well. However, if the two multi-class distributions P and Q are not semantically aligned, there may not be any classifier with low risk in both domains, which means the second term of the upper bound is very large. The proposed PAAL scheme leverages reliable conditional information in the adversarial learning module so that semantically similar samples from different domains are implicitly aligned, and thus it has a high possibility of decreasing this second term. Compared with outer-product conditioning (a c × d input), the input to our domain adversarial learning module is much more compact (2 × d), which helps decrease the second term of the convergence bound above.
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Byg79h4tvB | We propose a reliable conditional adversarial learning scheme along with a simple, generic yet effective framework for UDA tasks. |
We present a new methodology that constructs a family of \emph{positive definite kernels} from any given dissimilarity measure on structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs. Our approach, which we call D2KE (from Distance to Kernel and Embedding), draws from the literature of Random Features. However, instead of deriving random feature maps from a user-defined kernel to approximate kernel machines, we build a kernel from a random feature map, that we specify given the distance measure. We further propose use of a finite number of random objects to produce a random feature embedding of each instance. We provide a theoretical analysis showing that D2KE enjoys better generalizability than universal Nearest-Neighbor estimates. On one hand, D2KE subsumes the widely-used \emph{representative-set method} as a special case, and relates to the well-known \emph{distance substitution kernel} in a limiting case. On the other hand, D2KE generalizes existing \emph{Random Features methods} applicable only to vector input representations to complex structured inputs of variable sizes. We conduct classification experiments over such disparate domains as time series, strings, and histograms (for texts and images), for which our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time. In many problem domains, it is easier to specify a reasonable dissimilarity (or similarity) function between instances than to construct a feature representation. This is particularly the case with structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs, where it is typically less than clear how to construct the representation of entire structured inputs with potentially widely varying sizes, even when given a good feature representation of each individual component. Moreover, even for complex structured inputs, there are many well-developed dissimilarity measures, such as the Dynamic Time Warping measure between time series, Edit Distance between strings, Hausdorff distance between sets, and Wasserstein distance between distributions. However, standard machine learning methods are designed for vector representations, and classically there has been far less work on distance-based methods for either classification or regression on structured inputs. The most common distance-based method is Nearest-Neighbor Estimation (NNE), which predicts the outcome for an instance using an average of its nearest neighbors in the input space, with nearness measured by the given dissimilarity measure. Estimation from nearest neighbors, however, is unreliable, specifically having high variance when the neighbors are far apart, which is typically the case when the intrinsic dimension implied by the distance is large. To address this issue, a line of research has focused on developing global distance-based (or similaritybased) machine learning methods BID38 BID16 BID1 BID12, in large part by drawing upon connections to kernel methods BID43 or directly learning with similarity functions BID1 BID12 BID2 BID29; we refer the reader in particular to the survey in BID7. 
Among these, the most direct approach treats the data similarity matrix (or transformed dissimilarity matrix) as a kernel Gram matrix, and then uses standard kernel-based methods such as Support Vector Machines (SVM) or kernel ridge regression with this Gram matrix. A key caveat with this approach however is that most similarity (or dissimilarity) measures do not provide a positive-definite (PD) kernel, so that the empirical risk minimization problem is not well-defined, and moreover becomes non-convex BID33 BID28.A line of work has therefore focused on estimating a positive-definite (PD) Gram matrix that merely approximates the similarity matrix. This could be achieved for instance by clipping, or flipping, or shifting eigenvalues of the similarity matrix BID36, or explicitly learning a PD approximation of the similarity matrix BID6 BID8. Such modifications of the similarity matrix however often leads to a loss of information; moreover, the enforced PD property is typically guaranteed to hold only on the training data, ing in an inconsistency between the set of testing and training samples BID7 1.Another common approach is to select a subset of training samples as a held-out representative set, and use distances or similarities to structured inputs in the set as the feature function BID20 BID36 ). As we will show, with proper scaling, this approach can be interpreted as a special instance of our framework. Furthermore, our framework provides a more general and richer family of kernels, many of which significantly outperform the representative-set method in a variety of application domains. To address the aforementioned issues, in this paper, we propose a novel general framework that constructs a family of PD kernels from a dissimilarity measure on structured inputs. Our approach, which we call D2KE (from Distance to Kernel and Embedding), draws from the literature of Random Features BID39, but instead of deriving feature maps from an existing kernel for approximating kernel machines, we build novel kernels from a random feature map specifically designed for a given distance measure. The kernel satisfies the property that functions in the corresponding Reproducing Kernel Hilbert Space (RKHS) are Lipschitz-continuous w.r.t. the given distance measure. We also provide a tractable estimator for a function from this RKHS which enjoys much better generalization properties than nearest-neighbor estimation. Our framework produces a feature embedding and consequently a vector representation of each instance that can be employed by any classification and regression models. In classification experiments in such disparate domains as strings, time series, and histograms (for texts and images), our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time, especially when the number of data samples is large and/or the size of structured inputs is large. We highlight our main contributions as follows:• From the perspective of distance kernel learning, we propose for the first time a methodology that constructs a family of PD kernels via Random Features from a given distance measure for structured inputs, and provide theoretical and empirical justifications for this framework.• From the perspective of Random Features (RF) methods, we generalize existing Random Features methods applied only to vector input representations to complex structured inputs of variable sizes. 
To the best of our knowledge, this is the first time that a generic RF method has been used to accelerate kernel machines on structured inputs across a broad range of domains such as time-series, strings, and the histograms. Distance-Based Kernel Learning. Existing approaches either require strict conditions on the distance function (e.g. that the distance be isometric to the square of the Euclidean distance) BID22 BID42, or construct empirical PD Gram matrices that do not necessarily generalize to the test samples BID36 BID38 BID34 BID16. BID22 and BID42 provide conditions under which one can obtain a PD kernel through simple transformations of the distance measure, but which are not satisfied for many commonly used dissimilarity measures such as Dynamic Time Warping, Hausdorff distance, and Earth Mover's distance (Haasdonk & Bahlmann, 1A generalization error bound was provided for the similarity-as-kernel approach in BID7, but only for a positive-definite similarity function.). Equivalently, one could also find a Euclidean embedding (also known as dissimilarity representation) approximating the dissimilarity matrix as in Multidimensional Scaling BID36 BID38 BID34 BID16 2. Differently, BID29 presented a theoretical foundation for an SVM solver in Krein spaces and directly evaluated a solution that uses the original (indefinite) similarity measure. There are also some specific approaches dedicated to building a PD kernel on some structured inputs such as text and time-series BID11 BID13, that modify a distance function over sequences to a kernel by replacing the minimization over possible alignments into a summation over all possible alignments. This type of kernel, however, in a diagonal-dominance problem, where the diagonal entries of the kernel Gram matrix are orders of magnitude larger than the off-diagonal entries, due to the summation over a huge number of alignments with a sample itself. Interest in approximating non-linear kernel machines using randomized feature maps has surged in recent years due to a significant reduction in training and testing times for kernel based learning algorithms BID14. There are numerous explicit nonlinear random feature maps that have been constructed for various types of kernels, including Gaussian and Laplacian Kernels BID39 BID48, intersection kernels BID30 ), additive kernels BID47, dot product kernels BID25 BID37, and semigroup kernels BID31. Among them, the Random Fourier Features (RFF) method, which approximates a Gaussian Kernel function by means of multiplying the input with a Gaussian random matrix, and its fruitful variants have been extensively studied both theoretically and empirically BID46 BID17 BID41 BID0 BID10. To accelerate the RFF on input data matrix with high dimensions, a number of methods have been proposed to leverage structured matrices to allow faster matrix computation and less memory consumption BID27 BID23 BID9.However, all the aforementioned RF methods merely consider inputs with vector representations, and compute the RF by a linear transformation that is either a matrix multiplication or an inner product under Euclidean distance metric. In contrast, D2KE takes structured inputs of potentially different sizes and computes the RF with a structured distance metric (typically with dynamic programming or optimal transportation). 
Another important difference between D2KE and existing RF methods lies in the fact that existing RF work assumes a user-defined kernel and then derives a randomfeature map, while D2KE constructs a new PD kernel through a random feature map and makes it computationally feasible via RF. The table 1 lists the differences between D2KE and existing RF methods. A very recent piece of work BID49 has developed a kernel and a specific algorithm for computing embeddings of single-variable real-valued time-series. However, despite promising , this method cannot be applied on discrete structured inputs such as strings, histograms, and graphs. In contrast, we have an unified framework for various structured inputs beyond the limits of BID49 and provide a general theoretical analysis w.r.t KNN and other generic distance-based kernel methods. We consider the estimation of a target function f: X → R from a collection of samples {( DISPLAYFORM0, where x i ∈ X is the structured input object, and y i ∈ Y is the output observation associated with the target function f (x i). For instance, in a regression problem, y i ∼ f (x i) + ω i ∈ R for some random noise ω i, and in binary classification, we have y i ∈ {0, 1} with P(y i = 1|x i) = f (x i). We are given a dissimilarity measure d: X × X → R between input objects instead of a feature representation of x.2A proof of the equivalence between PD of similarity matrix and Euclidean of dissimilarity matrix can be found in BID4.Note that the size structured inputs x i may vary widely, e.g. strings with variable lengths or graphs with different sizes. For some of the analyses, we require the dissimilarity measure to be a metric as follows. DISPLAYFORM1 An ideal feature representation for the learning task is (i) compact and (ii) such that the target function f (x) is a simple (e.g. linear) function of the ing representation. Similarly, an ideal dissimilarity measure d(x 1, x 2) for learning a target function f (x) should satisfy certain properties. On one hand, a small dissimilarity d(x 1, x 2) between two objects should imply small difference in the function DISPLAYFORM2 On the other hand, we want a small expected distance among samples, so that the data lies in a compact space of small intrinsic dimension. We next build up some definitions to formalize these properties. Assumption 2 (Lipschitz Continuity). For any DISPLAYFORM3 We would prefer the target function to have a small Lipschitz-continuity constant L with respect to the dissimilarity measure d(., .). Such Lipschitz-continuity alone however might not suffice. For example, one can simply set d(x 1, x 2) = ∞ for any x 1 x 2 to satisfy Eq. equation 1. We thus need the following quantity that measures the size of the space implied by a given dissimilarity measure. DISPLAYFORM4 Assuming the input domain X is compact, the covering number N(δ; X, d) measures its size w.r.t. the distance measure d. We show how the two quantities defined above affect the estimation error of a Nearest-Neighbor Estimator. DISPLAYFORM0 We extend the standard analysis of the estimation error of k-nearest-neighbor from finite-dimensional vector spaces to any structured input space X, with an associated distance measure d, and a finite covering number N(δ; X, d), by defining the effective dimension as follows. Assumption 3 (Effective Dimension). Let the effective dimension p X,d > 0 be the minimum p satisfying DISPLAYFORM1 Here we provide an example of effective dimension in case of measuring the space of Multiset. 
A multiset is a set that allows duplicate elements. Consider two multisets DISPLAYFORM0 be a ground distance that measures the distance between two elements u i, v j ∈ V in a set. The (modified) Hausdorff Distance BID15 DISPLAYFORM1 Let N(δ; V, ∆) be the covering number of V under the ground distance ∆. Let X denote the set of all sets of size bounded by L. By constructing a covering of X containing any set of size less or equal than L with its elements taken from the covering of V, we have N(δ; DISPLAYFORM2 Equipped with the concept of effective dimension, we can obtain the following bound on the estimation error of the k-Nearest-Neighbor estimate of f (x).Theorem 1. Let V ar(y| f (x)) ≤ σ 2, andf n be the k-Nearest Neighbor estimate of the target function f constructed from a training set of size n. Denote p:= p X,d. We have DISPLAYFORM3 for some constant c > 0. For σ > 0, minimizing RHS w.r.t. the parameter k, we have DISPLAYFORM4 Proof. The proof is almost the same to a standard analysis of k-NN's estimation error in, for example, BID21, with the space partition number replaced by the covering number, and dimension replaced by the effective dimension in Assumption 3.When p X,d is reasonably large, the estimation error of k-NN decreases quite slowly with n. Thus, for the estimation error to be bounded by, requires the number of samples to scale exponentially in p X,d. In the following sections, we develop an estimatorf based on a RKHS derived from the distance measure, with a considerably better sample complexity for problems with higher effective dimension. We aim to address the long-standing problem of how to convert a distance measure into a positivedefinite kernel. Here we introduce a simple but effective approach D2KE that constructs a family of positive-definite kernels from a given distance measure. Given an structured input domain X and a distance measure d(., .), we construct a family of kernels as DISPLAYFORM0 where ω ∈ Ω is a random structured object whose elements could be real-valued time-series, strings, and histograms, p(ω) is a distribution over Ω, and φ ω (x) is a feature map derived from the distance of x to all random objects ω ∈ Ω. The kernel is parameterized by both p(ω) and γ. Relationship to Distance Substitution Kernel. An insightful interpretation of the kernel in Equation can be obtained by expressing the kernel in Equation FORMULA12 as DISPLAYFORM1 where the soft minimum function, parameterized by p(ω) and γ, is defined as DISPLAYFORM2 Therefore, the kernel k(x, y) can be interpreted as a soft version of the distance substitution kernel BID22, where instead of substituting d(x, y) into the exponent, it substitutes a soft version of the form DISPLAYFORM3 Note when γ → ∞, the value of Equation FORMULA15 is determined by min ω ∈Ω d(x, ω) + d(ω, y), which equals d(x, y) if X ⊆ Ω, since it cannot be smaller than d(x, y) by the triangle inequality. In other words, when X ⊆ Ω, DISPLAYFORM4 On the other hand, unlike the distance-substituion kernel, our kernel in Equation FORMULA13 is always PD by construction. 1: Draw R samples from p(ω) to get {ω j} R j=1. 2: Set the R-dimensional feature embedding aŝ DISPLAYFORM0 3: Solve the following problem for some µ > 0: DISPLAYFORM1 Random Feature Approximation. The reader might have noticed that the kernel in Equation FORMULA12 cannot be evaluated analytically in general. 
However, this does not prohibit its use in practice, so long as we can approximate it via Random Features (RF) BID39, which in our case is particularly natural as the kernel itself is defined via a random feature map. Thus, our kernel with the RF approximation can not only be used in small problems but also in large-scale settings with a large number of samples, where standard kernel methods with O(n 2) complexity are no longer efficient enough and approximation methods, such as Random Features, must be employed BID39. Given the RF approximation, one can then directly learn a target function as a linear function of the RF feature map, by minimizing a domain-specific empirical risk. It is worth noting that a recent work BID45 ) that learns to select a set of random features by solving an optimization problem in an supervised setting is orthogonal to our D2KE approach and could be extended to develop a supervised D2KE method. We outline this overall RF based empirical risk minimization for our class of D2KE kernels in Algorithm 1. It is worth pointing out that in line 2 of Algorithm 1 the random feature embeddings are computed by a structured distance measure between the original structured inputs and the generated random structured inputs, followed by the application of the exponent function parameterized by γ. This is in contrast with traditional RF methods that translate the input data matrix into the embedding matrix via a matrix multiplication with random Gaussian matrix followed by a non-linearity. We will provide a detailed analysis of our estimator in Algorithm 1 in Section 5, and contrast its statistical performance to that of K-nearest-neighbor. Relationship to Representative-Set Method. A naive choice of p(ω) relates our approach to the representative-set method (RSM): setting Ω = X, with p(ω) = p(x). This gives us a kernel Equation that depends on the data distribution. One can then obtain a Random-Feature approximation to the kernel in Equation by holding out a part of the training data {x j} R j=1 as samples from p(ω), and creating an R-dimensional feature embedding of the form: DISPLAYFORM2 as in Algorithm 1. This is equivalent to a 1/ √ R-scaled version of the embedding function in the representative-set method (or similarity-as-features method) BID20 BID36 BID38 BID34 BID7 BID16, where one computes each sample's similarity to a set of representatives as its feature representation. However, here by interpreting Equation as a random-feature approximation to the kernel in Equation, we obtain a much nicer generalization error bound even in the case R → ∞. This is in contrast to the analysis of RSM in BID7, where one has to keep the size of the representative set small (of the order O(n)) in order to have reasonable generalization performance. The choice of p(ω) plays an important role in our kernel. Surprisingly, we found that many "close to uniform" choices of p(ω) in a variety of domains give better performance than for instance the choice of the data distribution p(ω) = p(x) (as in the representative-set method). 
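A minimal sketch of Algorithm 1 follows. The feature map exp(−γ d(x, ω_j)) and the 1/√R scaling come directly from the embedding equation above; the distance function, the sampler for p(ω), and the choice of a linear SVM as the ERM solver are left to the user (the paper itself uses a LIBLINEAR linear model), so treat the function names and default parameter values here as illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def d2ke_embedding(X, random_objects, dist, gamma):
    # Line 2 of Algorithm 1: z_j(x) = exp(-gamma * d(x, omega_j)) / sqrt(R).
    R = len(random_objects)
    Z = np.empty((len(X), R))
    for i, x in enumerate(X):
        for j, omega in enumerate(random_objects):
            Z[i, j] = np.exp(-gamma * dist(x, omega))
    return Z / np.sqrt(R)

def d2ke_fit(X_train, y_train, sampler, dist, gamma=1.0, R=256, C=1.0, seed=0):
    # Line 1: draw R random structured objects omega_j ~ p(omega).
    rng = np.random.RandomState(seed)
    omegas = [sampler(rng) for _ in range(R)]
    # Line 3: solve a regularized linear ERM problem on the embedded data.
    Z = d2ke_embedding(X_train, omegas, dist, gamma)
    clf = LinearSVC(C=C).fit(Z, y_train)
    return omegas, clf

def d2ke_predict(X_test, omegas, clf, dist, gamma=1.0):
    return clf.predict(d2ke_embedding(X_test, omegas, dist, gamma))
```

Note that letting `sampler` return held-out training examples recovers the representative-set method as the special case discussed above, while synthetic samplers (random time series, random strings, random vector sets) give the full D2KE family.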
Here are some examples from our experiments: i) In the time-series domain with dissimilarity computed via Dynamic Time Warping (DTW), a distribution p(ω) corresponding to random time series of length uniform in ∈, and with Gaussian-distributed elements, yields much better performance than the Representative-Set Method (RSM); ii) In string classification, with edit distance, a distribution p(ω) corresponding to random strings with elements uniformly drawn from the alphabet Σ yields much better performance than RSM; iii) When classifying sets of vectors with the Hausdorff distance in Equation, a distribution p(ω) corresponding to random sets of size uniform in ∈ with elements drawn uniformly from a unit sphere yields significantly better performance than RSM.We conjecture two potential reasons for the better performance of the chosen distributions p(ω) in these cases, though a formal theoretical treatment is an interesting subject we defer to future work. Firstly, as p(ω) is synthetic, one can generate unlimited number of random features, which in a much better approximation to the exact kernel in Equation. In contrast, RSM requires held-out samples from the data, which could be quite limited for a small data set. Second, in some cases, even with a small or similar number of random features to RSM, the performance of the selected distribution still leads to significantly better . For those cases we conjecture that the selected p(ω) generates objects that capture semantic information more relevant to the estimation of f (x), when coupled with our feature map under the dissimilarity measure d(x, ω). In this section, we analyze the proposed framework from the perspectives of error decomposition. Let H be the RKHS corresponding to the kernel in Equation. Let DISPLAYFORM0 be the population risk minimizer subject to the RKHS norm constraint f H ≤ C. And let DISPLAYFORM1 be the corresponding empirical risk minimizer. In addition, letf R be the estimated function from our random feature approximation (Algorithm 1). Then denote the population and empirical risks as L(f) andL(f) respectively. We have the following risk decomposition DISPLAYFORM2 In the following, we will discuss the three terms from the rightmost to the leftmost. Function Approximation Error. The RKHS implied by the kernel in Equation FORMULA12 is DISPLAYFORM3 which is a smaller function space than the space of Lipschitz-continuous function w.r.t. the distance d(x 1, x 2). As we show, any function f ∈ H is Lipschitz-continous w.r.t. the distance d(., .). where L f = γC.We refer readers to the detailed proof in Appendix A.1. While any f in the RKHS is Lipschitzcontinuous w.r.t. the given distance d(., .), we are interested in imposing additional smoothness via the RKHS norm constraint f H ≤ C, and by the kernel parameter γ. The hope is that the best function f C within this class approximates the true function f well in terms of the approximation error L(f C) − L(f). The stronger assumption made by the RKHS gives us a qualitatively better estimation error, as discussed below. Estimation Error. Define D λ as DISPLAYFORM0 is the eigenvalues of the kernel in Equation and λ is a tuning parameter. It holds that for any λ ≥ D λ /n, with probability at least 1 − δ, L(f n) − L(f C) ≤ c(log 1 δ) 2 C 2 λ for some universal constant c BID50. Here we would like to set λ as small as possible (as a function of n). 
By using the following kernel-independent bound: D λ ≤ 1/λ, we have λ = 1/ √ n and thus a bound on the estimation error DISPLAYFORM1 The estimation error is quite standard for a RKHS estimator. It has a much better dependency w.r.t. n (i.e. n −1/2) compared to that of k-nearest-neighbor method (i.e. n −2/(2+p X, d) ) especially for higher effective dimension. A more careful analysis might lead to tighter bound on D λ and also a better rate w.r.t. n. However, the analysis of D λ for our kernel in Equation FORMULA12 is much more difficult than that of typical cases as we do not have an analytic form of the kernel. Random Feature Approximation. DenoteL as the empirical risk function. The error from RF approximation DISPLAYFORM2 where the first and third terms can be bounded via the same estimation error bound in Equation FORMULA3, as bothf R andf n have RKHS norm bounded by C. Therefore, in the following, we focus only on the second term of empirical risk. We start by analyzing the approximation error of the kernel DISPLAYFORM3, we have uniform convergence of the form P max DISPLAYFORM4, where p X,d is the effective dimension of X under metric d(., .). In other words, to guarantee |∆ R (x 1, x 2)| ≤ with probability at least 1 − δ, it suffices to have DISPLAYFORM5 We refer readers to the detailed proof in Appendix A.2. Proposition 2 gives an approximation error in terms of kernel evaluation. To get a bound on the empirical riskL(f R) −L(f n), consider the optimal solution of the empirical risk minimization. By the Representer theorem we havê DISPLAYFORM6. Therefore, we have the following corollary. Corollary 1. To guaranteeL(f R) −L(f n) ≤, with probability 1 − δ, it suffices to have DISPLAYFORM7 where M is the Lipschitz-continuous constant of the loss function (., y), and A is a bound on α 1 /n. We refer readers to the detailed proof in Appendix A.3. For most of loss functions, A and M are typically small constants. Therefore, Corollary 1 states that it suffices to have number of Random Features proportional to the effective dimension O(p X,d / 2) to achieve an approximation error. Combining the three error terms, we can show that the proposed framework can achieve -suboptimal performance. Claim 1. Letf R be the estimated function from our random feature approximation based ERM estimator in Algorithm 1, and let f * denote the desired target function. Suppose further that for some absolute constants c 1, c 2 > 0 (up to some logarithmic factor of 1/ and 1/δ):1. The target function f * lies close to the population risk minimizer f C lying in the RKHS spanned by the D2KE kernel: DISPLAYFORM8 We then have that: L(f R) − L(f *) ≤ with probability 1 − δ. We evaluate the proposed method in four different domains involving time-series, strings, texts, and images. First, we discuss the dissimilarity measures and data characteristics for each set of experiments. Then we introduce comparison among different distance-based methods and report corresponding . Distance Measures. We have chosen three well-known dissimilarity measures: 1) Dynamic Time Warping (DTW), for time-series BID3; 2) Edit Distance (Levenshtein distance), for strings BID32 ); 3) Earth Mover's distance BID40 for measuring the semantic distance between two Bags of Words (using pretrained word vectors), for representing documents. 4) (Modified) Hausdorff distance BID24 BID15 for measuring the semantic closeness of two Bags of Visual Words (using SIFT vectors), for representing images. 
Note that Bag of (Visual) Words in 3) and 4) can also be regarded as a histogram. Since most distance measures are computationally demanding, having quadratic complexity, we adapted or implemented C-MEX programs for them; other codes were written in Matlab. Datasets. For each domain, we selected 4 datasets for our experiments. For time-series data, all are multivariate time-series and the length of each time-series varies from 2 to 205 observations; three are from the UCI Machine Learning repository BID18, the other is generated from the IQ (In-phase and Quadrature components) samples from a wireless line-of-sight communication system from GMU. For string data, the size of alphabet is between 4 and 8 and the length of each string ranges from 34 to 198; two of them are from the UCI Machine Learning repository and the other two from the LibSVM Data Collection BID5. For text data, all are chosen partially overlapped with these in BID26. The length of each document varies from 9.9 to 117. For image data, all of datasets were derived from Kaggle; we computed a set of SIFTdescriptors to represent each image and the size of SIFT feature vectors of each image varies from 1 to 914. We divided each dataset into 70/30 train and test subsets (if there was no predefined train/test split). Properties of these datasets are summarized in TAB5 in Appendix B. Baselines. We compare D2KE against 5 state-of-the-art baselines, including 1) KNN: a simple yet universal method to apply any distance measure to classification tasks; 2) DSK_RBF BID22: distance substitution kernels, a general framework for kernel construction by substituting a problem specific distance measure in ordinary kernel functions. We use a Gaussian RBF kernel; 3) DSK_ND BID22: another class of distance substitution kernels with negative distance; 4) KSVM BID29: learning directly from the similarity (indefinite) matrix followed in the original Krein Space; 5) RSM BID36: building an embedding by computing distances from randomly selected representative samples. Among these baselines, KNN, DSK_RBF, DSK_ND, and KSVM have quadratic complexity O(N 2 L 2) in both the number of data samples and the length of the sequences, while RSM has computational complexity O(N RL 2), linear in the number of data samples but still quadratic in the length of the sequence. These compare to our method, D2KE, which has complexity O(N RL), linear in both the number of data samples and the length of the sequence. For each method, we search for the best parameters on the training set by performing 10-fold cross validation. For our new method D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to an exact kernel. We report the best number in the range R = (typically the larger R is, the better the accuracy). We employ a linear SVM implemented using LIBLINEAR for all embedding-based methods (RSM and D2KE) and use LIBSVM BID5 for precomputed dissimilairty kernels (DSK_RBF, DSK_ND, and KSVM). More details of experimental setup are provided in Appendix B. TAB1, 4, and 5, D2KE can consistently outperform or match the baseline methods in terms of classification accuracy while requiring far less computation time. There are several observations worth making here. First, D2KE performs much better than KNN, supporting our claim that D2KE can be a strong alternative to KNN across applications. 
Second, compared to the two distance substitution kernels DSK_RBF and DSK_ND and the KSVM method operating directly on indefinite similarity matrix, our method can achieve much better performance, suggesting that a representation induced from a truly PD kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to our method in terms of practical construction of the feature matrix. However, the random objects (time-series, strings, or sets) sampled by D2KE perform significantly better, as we discussed in section 4. More detailed discussions of the experimental for each domain are given in Appendix C. In this work, we have proposed a general framework for deriving a positive-definite kernel and a feature embedding function from a given dissimilarity measure between input objects. The framework is especially useful for structured input domains such as sequences, time-series, and sets, where many well-established dissimilarity measures have been developed. Our framework subsumes at least two existing approaches as special or limiting cases, and opens up what we believe will be a useful new direction for creating embeddings of structured objects based on distance to random objects. A promising direction for extension is to develop such distance-based embeddings within a deep architecture to support use of structured inputs in an end-to-end learning system. DISPLAYFORM0 Proof. Note the function g(t) = exp(−γt) is Lipschitz-continuous with Lipschitz constant γ. Therefore, DISPLAYFORM1 Proof. Our goal is to bound the magnitude of DISPLAYFORM2 Hoefding's inequality, we have DISPLAYFORM3 a given input pair (x 1, x 2). To get a unim bound that holds ∀(x 1, x 2) ∈ X × X, we find an -covering E of X w.r.t. d(., .) of size N(, X, d). Applying union bound over the -covering E for x 1 and x 2, we have P max DISPLAYFORM4 Then by the definition of E we have |d(DISPLAYFORM5 Together with the fact that exp(−γt) is Lipschitz-continuous with parameter γ for t ≥ 0, we have DISPLAYFORM6 for γ chosen to be ≤ 1. This gives us DISPLAYFORM7 Combining equation 13 and equation 14, we have P max DISPLAYFORM8 Choosing = t/6γ yields the . A.3 P C 1Proof. First of all, we have DISPLAYFORM9 by the optimality of {α j} n j=1 w.r.t. the objective using the approximate kernel. Then we havê DISPLAYFORM10 where A is a bound on α 1 /n. Therefore to guaranteê DISPLAYFORM11 Then applying Theorem 2 leads to the . B G E S General Setup. For each method, we search for the best parameters on the training set by performing 10-fold cross validation. Following BID22, we use an exact RBF kernel for DSK_RBF while choosing squared distance for DSK_ND. We use the Matlab implementation provided by BID29 to run experiments for KSVM. Similarly, we adopted a simple method -random selection -to obtain R = data samples as the representative set for RSM BID36 ). For our new method D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to an exact kernel. We report the best number in the range R = (typically the larger R is, the better the accuracy). 
We employ a linear SVM implemented using LIBLINEAR for all embedding-based methods (RSM, and D2KE) and use LIBSVM BID5 for precomputed dissimilairty kernels (DSK_RBF, DSK_ND, and KSVM).All datasets are collected from popular public websites for Machine Learning and Data Science research, including the UCI Machine Learning repository BID18, the LibSVM Data Collection BID5, and the Kaggle Datasets, except one time-series dataset IQ that is shared from researchers from George Mason University. TAB5 lists the detailed properties of the datasets from four different domains. All computations were carried out on a DELL dual-socket system with Intel Xeon processors at 2.93GHz for a total of 16 cores and 250 GB of memory, running the SUSE Linux operating system. To accelerate the computation of all methods, we used multithreading with 12 threads total for various distance computations in all experiments. C D E R T -S, S, IC.1 R - For time-series data, we employed the most successful distance measure -DTW -for all methods. For all datasets, a Gaussian distribution was found to be applicable, parameterized by its bandwidth σ. The best values for σ and for the length of random time series were searched in the ranges [1e-3 1e3] and , respectively. TAB1, D2KE can consistently outperform or match all other baselines in terms of classification accuracy while requiring far less computation time for multivariate time-series. The first interesting observation is that our method performs substantially better than KNN, often by a large margin, i.e., D2KE achieves 26.62% higher performance than KNN on IQ_radio. This is because KNN is sensitive to the data noise common in real-world applications like IQ_radio, and has notoriously poor performance for high-dimensional data sets like Auslan. Moreover, compared to the two distance substitution kernels DSK_RBF and DSK_ND, and KSVM operating directly on indefinite similarity matrix, our method can achieve much better performance, suggesting that a representation induced from a truly p.d. kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to our method in terms of practical construction of the feature matrix. However, the random time series sampled by D2KE performs significantly better, as we discussed in section 4. First, RSM simply chooses a subset of the original data points and computes the distances between the whole dataset and this representative set; this may suffer significantly from noise or redundant information in the time-series. In contrast, our method samples a short random sequence that could both denoise and find the patterns in the data. Second, the number of data points that can be sampled is limited by the total size of the data while the number of possible random sequences drawn from the distribution is unlimited, making the feature space much more abundant. Third, RSM may incur significant computational cost for long time-series, due to its quadratic complexity in length. Setup. For string data, there are various well-known edit distances. Here, we choose Levenshtein distance as our distance measure since it can capture global alignments of the underlying strings. We first compute the alphabet from the original data and then uniformly sample characters from this alphabet to generate random strings. We search for the best parameters for γ in the range [1e-5 1], and for the length of random strings in the range , respectively. Results. 
As shown in TAB2, D2KE consistently performs better than or similarly to other distance-based baselines. Unlike the previous experiments where DTW is not a distance metric, Levenshtein distance is indeed a distance metric; this helps improve the performance of our baselines. However, D2KE still offers a clear advantage over the baselines. It is interesting to note that the performance of DSK_RBF is quite close to our method's, which may be due to DSK_RBF with Levenshtein distance producing a c.p.d. kernel which can essentially be converted into a p.d. kernel. Notice that on relatively large datasets, our method, D2KE, can achieve better performance, and often with far less computation than other baselines with quadratic complexity in both number and length of data samples. For instance, on mnist-str8 D2KE obtains higher accuracy with an order of magnitude less runtime compared to DSK_RBF and DSK_ND, and two orders of magnitude less than KSVM, due to the latter's higher computational costs both for kernel matrix construction and for eigendecomposition. Setup. For text data, following BID26 we use the earth mover's distance as our distance measure between two documents, since this distance has recently demonstrated strong performance when combined with KNN for document classification. We first compute the Bag of Words for each document and represent each document as a histogram of word vectors, where Google pretrained word vectors with dimension 300 are used. We generate random documents consisting of random word vectors, each uniformly sampled from the unit sphere of the embedding vector space R 300. We search for the best parameters for γ in the range [1e-2 1e1], and for the length of random documents in the range . Results. As shown in TAB3, D2KE outperforms other baselines on all four datasets. First of all, all distance-based kernel methods perform better than KNN, illustrating the effectiveness of SVM over KNN on text data. Interestingly, D2KE also performs significantly better than the other baselines by a notable margin, in large part because document classification is mainly associated with "topic" learning, where our random documents of short length may fit this task particularly well. For the datasets with a large number of documents and longer documents, D2KE achieves about one order of magnitude speedup compared with other exact kernel/similarity methods, thanks to the use of random features in D2KE. Setup. For image data, following BID36 BID22 we use the modified Hausdorff distance (MHD) BID15 as our distance measure between images, since this distance has shown excellent performance in the literature BID44 BID19. We first applied the open-source OpenCV library to generate a sequence of SIFT-descriptors with dimension 128, then MHD to compute the distance between sets of SIFT-descriptors. We generate random images, with each SIFT-descriptor uniformly sampled from the unit sphere of the embedding vector space R 128. We search for the best parameters for γ in the range [1e-3 1e1], and for the length of random SIFT-descriptor sequences in the range . Results. As shown in TAB4, D2KE outperforms or matches other baselines in all cases. First, D2KE performs best in three cases while DSK_RBF is the best on the dataset decor. This may be because the underlying SIFT features are not good enough, and thus random features are not effective at quickly finding good patterns in images.
Nevertheless, the quadratic complexity of DSK_RBF, DSK_ND, and KSVM in terms of both the number of images and the length of SIFT descriptor sequences makes it hard to scale to large data. Interestingly, D2KE still performs much better than KNN and RSM, which again supports our claim that D2KE can be a strong alternative to KNN and RSM across applications. | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HyldojC9t7 | From Distance to Kernel and Embedding via Random Features For Structured Inputs |
Deep Convolution Neural Networks (CNNs), rooted in the pioneering work of \cite{Hinton1986,LeCun1985,Alex2012}, and summarized in \cite{LeCunBengioHinton2015}, have been shown to be very useful in a variety of fields. The state-of-the-art CNN machines, such as the image ResNet \cite{He_2016_CVPR}, are described by real value inputs and kernel convolutions followed by local and non-linear rectified linear outputs. Understanding the role of these layers, their accuracy and limitations, as well as making them more efficient (fewer parameters) are all ongoing research questions. Inspired by quantum theory, we propose the use of complex value kernel functions, followed by the local non-linear absolute (modulus) operator square. We argue that an advantage of quantum inspired complex kernels is robustness to realistic unpredictable scenarios (such as clutter noise and data deformations). We study a concrete problem of shape detection and show that when multiple overlapping shapes are deformed and/or clutter noise is added, a convolution layer with quantum inspired complex kernels outperforms the statistical/classical kernel counterpart and a "Bayesian shape estimator". The superior performance is due to the quantum phenomenon of interference, not present in classical CNNs. The convolution process in machine learning may be summarized as follows. Given an input f L−1 (x) ≥ 0 to a convolution layer L, it produces an output DISPLAYFORM0 From g L (y) a local and non-linear function is applied, f L (y) = f (g L (y)), e.g., f = ReLU (rectified linear units) or f = |.|, the magnitude operator. This output is then the input to the next convolution layer (L+1) or simply the output of the whole process. We can also write a discrete form of these convolutions, as implemented in computers. We write g DISPLAYFORM1, where the continuous variables y, x become the integers i, j respectively, the kernel function K(y − x) → w ij becomes the weights of the CNN, and the integral over dx becomes the sum over j. These kernels are learned from data so that an error (or optimization criterion) is minimized. The kernels used today are real value functions. We show how our understanding of the optimization criteria "dictates" the construction of the quantum inspired complex value kernel. In order to concentrate on and study our proposal of quantum inspired kernels, we simplify the problem as much as possible, hoping to identify the crux of the limitation of the current use of real value kernels. We place known shapes in an image, at any location, and in the presence of deformation and clutter noise. These shapes may have been learned by a CNN. Our main focus is on the feedforward performance, when new inputs are presented. Due to this focus, we are able to construct a Bayesian a posteriori probability model for the problem, which is based on real value prior and likelihood models, and compare it to the quantum inspired kernel method. The main advantage of the quantum inspired method over existing methods is its high resistance to deviations from the model, such as data deformation, multiple objects (shapes) overlapping, and clutter noise. The main new factor is the quantum interference phenomenon BID1 BID0, and we argue it is a desirable phenomenon for building convolution networks. It can be carried out by developing complex value kernels driven by classic data-driven optimization criteria. Here we demonstrate its strength on a shape detection problem where we can compare it to state-of-the-art classical convolution techniques.
We also can compare to the MAP estimator of the Bayesian model for the shape detection problem. To be clear, we do not provide (yet) a recipe on how to build kernels for the full CNN framework for machine learning, and so the title of this paper reflects that. Here, we plant a seed on the topic of building complex value kernels inspired in quantum theory, by demonstrating that for a given one layer problem of shape detection (where the classic data optimization criteria is well defined), we can build such complex value kernel and demonstrate the relevance of the interference phenomena. To our knowledge such a demonstration is a new contribution to the field. We also speculate on how this process can be generalized. We are given an image I with some known objects to be detected. The data is a set of N feature points, DISPLAYFORM0 Here we focus on 2-dimensional data, so D = 2. An image I may be described by the set of feature points, as shown for example in figure 1. Or, the feature points can be extracted from I, using for example, SIFT features, HOG features, or maximum of wavelet responses (which are convolutions with complex value kernels). It maybe be the first two or so layers of a CNN trained in image recognition tasks. The problem of object detection has been well addressed for example by the SSD machine BID7, using convolution networks. Here as we will demonstrate, given the points, we can construct a ONE layer CNN that solves the problem of shape detection and so we focus on this formulation. It allows us to study in depth its performance (including an analytical study not just empirical). 8, 8), radius 3, and with 100 points, is deformed as follows: for each point x i a random value drawn from a uniform distribution with range (−η i, η i), η i = 0.05, is added along the radius. (b) 1000 points of clutter are added by sampling from a uniform distribution inside a box of size 9 × 9, with corners at points:,,,. We organize the paper as follows. Section 2 presents a general description of shapes, which is easily adapted to any optimization method. Section 3 presents the Bayesian method and the Hough transform method (and a convolution implementation) to the shape detection problem. Section 4 lays out our main proposal of using quantum theory to address the shape detection problem. The theory also leads naturally to a classical statistical method behaving like a voting scheme, and we establish a connection to Hough transforms. Section 5 presents a theoretical and empirical analysis of the quantum method for shape detection and a comparison with the classical statistical method. We demonstrate that for large deformations or clutter noise scenarios the quantum method outperforms the classical statistical method. Section 6 concludes the paper. A shape S may be defined by the set of points x satisfying S Θ (x) = 0, where Θ is a set of parameters describing S. Let µ be a shape's center (in our setting µ = (µ x, µ y)). The choice of µ is in general arbitrary, though frequently there is a natural choice for µ for a given shape, such as its "center of mass": the average position of its coordinates. We consider all the translations of S Θ (x) to represent the same shape, so that a shape is translation invariant. It is then convenient to describe the shapes as S Θ (x − µ), with the parameters Θ not including the parameters of µ. Thus we describe a shape by the set of points X such that DISPLAYFORM0 The more complex a shape is, the larger is the set of parameters required to describe it. 
For example, to describe a circle, we use three parameters {µ x, µ y, r} representing the center and the radius of the circle, i.e., DISPLAYFORM1 (see figure 1 a.) An ellipse can be described by DISPLAYFORM2, where Θ = {Σ} is the covariance matrix, specified by three independent parameters. We also require that a shape representation be such that if all the values of the parameters in Θ are 0, then the the set of points that belong to the shape "collapses" to just X = {µ}. This is the case for the parameterizations of the two examples above: the circle and the ellipse. Energy Model: Given a shape model we can create an energy model per data point x as DISPLAYFORM3 where the parameter p ≥ 0 defines the L p norm (after the sum over the points is taken and the 1/p root is applied). The smaller E S Θ (x − µ), the more it is likely that the data point x belongs to the shape S Θ with center µ. In this paper, we set p = 1 because of its simplicity and robust properties. To address realistic scenarios we must study the detection of shapes under deformations. When deformations are present, the energy is no longer zero for deformed points associated to the shape S Θ (x − µ). Let each ideal shape data point x S i be deformed by adding η i to its coordinates, so DISPLAYFORM0 Deformations of a shape are only observed in the directions perpendicular to the shape tangents, i.e., along the direction of ∇ x S Θ (x − µ) xi, where ∇ x is the gradient operator. For example, for a (deformed) circle shape, Θ = {r} and S r (x−µ) = 1− DISPLAYFORM1, and so DISPLAYFORM2 r 2 ∝r i, wherer i is a unit vector pointing outwards in the radius direction at point DISPLAYFORM3 Given a set of data points X = {x 1, x 2, ..., x N} in R D originated from a shape S Θ (x i − µ) = 0. We assume that each data point is independently deformed by η i (a random variable, since the direction is along the shape gradient), conditional on being a shape point. Based on the energy model, for p = 1 (for simplicity and robust properties), we can write the likelihood model as DISPLAYFORM0 where C is a normalization constant and λ a constant that scale the errors/energies. The product over all points is a consequence of the conditional independence of the deformations given the shape parameter (Θ, µ). Assuming a prior distributions on the parameters to be uniform, we conclude that the a posteriori distribution is simply the likelihood model up to a normalization, i.e., DISPLAYFORM1 where Z is a normalization constant (does not depend on the parameters). The parameters that maximize the likelihood L(Θ, µ) = log P(Θ, DISPLAYFORM2 A Hough transform cast binary votes from each data point. The votes are for the shape parameter values that are consistent with the data point. More precisely, each vote is given by DISPLAYFORM0 where u(x) is the Heaviside step function, u(x) = 1 if x ≥ 0 and zero otherwise, i.e., u = 1 if DISPLAYFORM1 α and u = 0 otherwise. The parameter α clearly defines the error tolerance for a data point x i to belong to the shape S Θ (x − µ), the larger is α the smaller is the tolerance. One can carry out this Hough transform for center detection as a convolution process. More precisely, create a kernel, DISPLAYFORM2 | for x in a rectangular (or square) shape that includes all x for which u 1 α − |S Θ (x)| = 1. The Hough transform for center detection is then the convolution of the kernel with the input image. The of the convolution at each location is the Hough vote for that location to be the center. 
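Since the display equations above are not reproduced, the following is a small sketch, under one consistent reading of the text, of the circle shape function S_r(x − µ) = 1 − ||x − µ||²/r², the p = 1 energy over a point set, and the Hough center-voting kernel built from the binary vote u(1/α − |S_r(x − µ)|) and applied by convolution. The grid resolution and the kernel patch size are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def circle_shape(x, mu, r):
    """S_r(x - mu) = 1 - ||x - mu||^2 / r^2; zero exactly on the circle."""
    return 1.0 - np.sum((np.asarray(x) - mu) ** 2, axis=-1) / r ** 2

def energy(points, mu, r, p=1):
    """L_p energy of a point set under the circle model (p = 1 in the paper)."""
    return np.sum(np.abs(circle_shape(points, mu, r)) ** p) ** (1.0 / p)

def hough_center_votes(point_mask, r, alpha):
    """Binary Hough votes for circle centers, computed as a convolution.

    point_mask : 2D array with 1 on cells containing a data point, 0 elsewhere.
    A point votes for a candidate center mu iff |S_r(x - mu)| <= 1/alpha.
    """
    # Patch large enough to cover the tolerant ring of radius r * sqrt(1 + 1/alpha).
    half = int(np.ceil(r * np.sqrt(1.0 + 1.0 / alpha))) + 1
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    s = 1.0 - (xs ** 2 + ys ** 2) / r ** 2
    kernel = (np.abs(s) <= 1.0 / alpha).astype(float)
    return convolve2d(point_mask, kernel, mode="same")
```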
(b) The Bayesian method (showing the probability value). The radius is fed to the method. The method mixes all data yielding the highest probability in the wrong place. increasing the parameter p can only improve a little as all data participate in the final estimation. (c) The Hough method with α = 2.769 estimated to include all circle points that have been deformed. The method is resistant to clutter noise. When we have one circle with deformations (e.g., see FIG0), the Bayesian approach is just the "perfect" model. Even if noise distributed uniformly across the image is added (e.g., see figure 1b), the Bayesian method will work very well. However, as one adds clutter noise to the data (noise that is not uniform and "may correspond to clutter" in an image) as shown in figure 2, the Bayesian method mix all the data, has no mechanism to discard any data, and the Hough method outperforms the Bayesian one. Even applying robust measures, decreasing p in the energy model, will have limited effect compared to the Hough method that can discard completely the data. Consider another scenario of two overlapping and deformed circles, shown in FIG2. Again, the Bayesian approach does not capture the complexity of the data, two circles and not just one, and end up yielding the best one circle fit in the "middle", while the Hough method cope with this data by widening the center detection probabilities (bluring the center probabilities) and thus, including both true centers. Still, the Hough method is not able to suggest that there are two circles/two peaks. In summary, the Bayesian model is always the best one, as long as the data follows the exact model generation. However, it is weak at dealing with *real world uncertainty* on the data (clutter data, multiple figures), other scenarios that occur often. The Hough method, modeled after the same true positive event (shape detection) is more robust to these data variations and for the center detection problem can be carried out as a convolution. The radius is fed to the method. The method mixes all data yielding the highest probability approximately in the "middle" of both centers and no suggestion of two peaks/circles/centers exists. (c) The Hough method with α = 2.769 estimated to include all circle points that have been deformed. The method yields a probability that is more diluted and includes the correct centers, but does not suggest two peaks. Quantum theory was developed for system of particles that evolve over time. For us to utilize here the benefits of such a theory for the shape detection problem we invoke a hidden time parameter. We refer to this time parameter as hidden since the input is only one static picture of a shape. A hidden shape dynamics is not a new idea in computer vision, for example, scale space was proposed to describe shape evolution and allows for better shapes comparisons BID11 BID6. Hidden shape dynamics was also employed to describe a time evolution equation to produce shape-skeletons BID9. Since our optimization criteria per point is given by "the energy" of, we refer to classic concept of action, the one that is optimized to produce the optimal path, as DISPLAYFORM0 | where we are adopting for simplicity p = 1. The idea is that a shapes evolve from the center µ = x(t = 0) to the shape point x = x(t = T) in a time interval T. During this evolutions all other parameters also evolve from Θ(t = 0) = 0 to Θ(t = T) = Θ. 
The evolution is reversible, so we may say equivalently, the shape point x contracts to the center µ in the interval of time T.Following the path integral point of view of quantum theory BID1, we consider the wave propagation to evolve by the integral over all path DISPLAYFORM1 where ψ Θ(t) (x(t)) is the probability amplitude that characterize the state of the shape, P T 0 is a path of shape contraction, from an initial state (x, Θ) = (x, Θ) to a final state (x(T), Θ(T)) = (µ, 0). The integral is over all possible paths that initialize in (x, Θ) = (x, Θ) and end in (x(T), Θ(T)) = (µ, 0). The Kernel K is of the form DISPLAYFORM2 where a new parameter,, is introduced. It has the notation used in quantum mechanics for the reduced Planck's constant, but here it will have its own interpretation and value (see section 5.1.3).We now address the given image described by X = {x 1, x 2, ..., x N} ⊂ R 2 (e.g., see FIG0). We consider an empirical estimation of ψ Θ (x) to be given by a set of impulses at the empirical data set DISPLAYFORM3, where δ(x) is the Dirac delta function. The normalization ensure the probability 1 when integrated everywhere. Note that ψ Θ (x) is a pure state, a superposition of impulses. Then, substituting this state into equation FORMULA18, with the kernel provided by, yields the evolution of the probability amplitude DISPLAYFORM4 where C = e i T |S Θ (x−µ)| dµ. Thus shape points with deformations, x i, are interpreted as evidence of different quantum paths, not just the optimal classical path (which has no deformation). Equation 5 is a convolution of the kernel K(x) = e i T |S Θ (x)| throughout the center candidates, except it is discretized at the locations where data is available. According to quantum theory, the probability associated with this probability amplitude (a pure state) is given by P(Θ) = |ψ Θ (µ)| 2, i.e., DISPLAYFORM5 which can also be expanded as DISPLAYFORM6 It is convenient to define the phase DISPLAYFORM7 Note the interference phenomenon arising from the cosine terms in the probability. More precisely, a pair of data points that belongs to the shape will have a small magnitude difference, |φ ij | 1, and will produce a large cosine term, cos φ ij ≈ 1. Two different data points that belong to the clutter will likely produce different phases, scaled inversely according to, so that small values of will create larger phase difference. Pairs of clutter data points, not belonging to the shape, with large and varying phase differences, will produce quite different cosine terms, positive and/or negative ones. If an image contains a large amount of clutter, the clutter points will end up canceling each other. If an image contains little clutter, the clutter points will not contribute much. This effect can be described by the following property for large numbers: if N 1 then DISPLAYFORM0 N, when each k is a random variable. Figure 4 shows the performance of the quantum method on the same data as shown in FIG1 and FIG2. The accuracy of the detection of the centers and the identification of two centers shows how the quantum inspired method outperforms the classical counterparts. In figure 4a, due to interference, clutter noise cancels out (negative terms on the probability equation 6 balance positive ones), and the center is peaked. We do see effects of the noise inducing some fluctuation. In figure 4b the two circle center peaks outperform both classical methods as depicted in FIG2. 
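A compact numerical sketch of the quantum estimator described by equations 5-6 above: the probability amplitude is a sum of unit-modulus phases over the data points, and the probability is its squared magnitude. Following the later analysis sections, T is set to 1 and the phase contributed by point x_i at candidate center µ is taken to be |S_r(x_i − µ)|/ħ; normalization constants that do not affect the location of the maximum are dropped. This is an illustrative reading of the (garbled) display equations, not the authors' code.

```python
import numpy as np

def quantum_center_probability(points, mu_grid, r, hbar=0.12):
    """P(mu) = |psi(mu)|^2 with psi(mu) = (1/sqrt(N)) sum_i exp(i |S_r(x_i - mu)| / hbar).

    points  : (N, 2) array of observed feature points
    mu_grid : (M, 2) array of candidate centers
    r       : circle radius (assumed known, as in the experiments)
    """
    probs = np.empty(len(mu_grid))
    N = len(points)
    for k, mu in enumerate(mu_grid):
        s = 1.0 - np.sum((points - mu) ** 2, axis=1) / r ** 2
        psi = np.exp(1j / hbar * np.abs(s)).sum() / np.sqrt(N)
        # Interference: clutter points produce scattered phases that tend to cancel.
        probs[k] = np.abs(psi) ** 2
    return probs
```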
A more thorough analysis is carried out in the next section to better understand and compare the performance of these different methods. Note that even though the probability reflects a pair-wise computation as seen in FORMULA23, we evaluate it by taking the magnitude square of the probability amplitude (given by equation FORMULA21), which is computed as a sum of N complex numbers. Thus, the complexity of the computations is linear in the data set size. After all, it is a convolution process.(a) Quantum Probability on figure 2a (b) Quantum Probability on figure 3aFigure 4: Quantum Probability depicted for input shown in FIG1, respectively. The parameters used where T = 1, = 0.12. The quantum method outperform the classical methods, as the center detection shown in (a) is more peaked than in the Hough method and in (b) the two peaks emerge. These are of the interference phenomena, as cancellation of probabilities (negative terms on the probability equation 6) contribute to better resolve the center detection problem. we derive a classical probability from the quantum probability amplitude via the Wick rotation. It is a mathematical technique frequently employed in physics, which transforms quantum physical systems into statistical physical systems and vice-versa. It consists in replacing the term i T by a real parameter α in the probability amplitude. Considering the probability amplitude equation FORMULA21, the Wick rotation yields DISPLAYFORM0 We can interpret this as follows. Each data point x i produces a vote v(Θ, µ|x i) = e − α |S Θ (xi−µ)|, with values between 0 and 1. The parameter α controls the weight decay of the vote. Interestingly, this probability model resembles a Hough transform with each vote FORMULA26 being approximated by the binary vote described by We analyze the quantum method described by the probability, derived from the amplitude, and compare it with the classical statistical method described by and its approximation. This analysis and experiments is carried for a simple example, the circle. We consider the circle shape, S r * (x − µ) = 1 − (x−µ) 2 (r *) 2 of radius r * and its evaluation not only at the true center µ * but also at small displacements from it µ = µ * + δµ where δµ r * < 1 with δµ = |δµ|. The points of an original circle are deformed to create the final "deformed" circle shape. Each point is moved by a random vector η i pointing along the radius, i.e., η i = η ir * i withr * i being the unit vector along the radius. Thus, we may write each point as DISPLAYFORM0 The deformation is assumed to vary independently and uniformly point wise. Thus, η i ∈ (−η, η) and P(η i) = 1 2η. Plugging in the deformations and center displacement into the shape representation, S i = S r * (x i − µ), we get DISPLAYFORM1 DISPLAYFORM2 For the special case of the evaluation of the shape at the true center, δµ = 0 we obtain DISPLAYFORM3 The action for each path is given by |S Θ (x i − µ)| and we have multiple paths samples from the data. Note that when we apply the quantum method, we interpret data derived from shape deformation as evidence of quantum trajectories (paths) that are not optimal while the classical interpretation for such data is a statistical error/deformation. Both are different probabilistic interpretations that lead to different methods of evaluations of the optimal parameters as we analyze next. We interpret the probability amplitude in equation, the sum over i = 1,..., N, as a sum over many independent samples. 
In general given a function f (|S|), then the sum over all points, DISPLAYFORM0, can be interpreted as N C times the statistical average of the function f over the random variable S i. In this case, the random variable S i (η i, δµ i) represent two independent and uniform random variables, (η i, δµ i), or (a i, b i).Inserting shape equation FORMULA28 into the quantum probability amplitude of equation FORMULA21 DISPLAYFORM1 where I ab (e) = 1 4ab DISPLAYFORM2 2 −2aibi)| and at the true center we get DISPLAYFORM3 The ratio of the probabilities (magnitude square of the probability amplitudes) for the circle is then given by DISPLAYFORM4 These integrals can be evaluated numerically (or via integration of a Taylor series expansions, and then a numerical evaluation of the expansion). Inserting shape equation FORMULA28 into the vote for the Hough transform giving by equation FORMULA14 in DISPLAYFORM0 and interpreting the Hough total vote, V DISPLAYFORM1 as an average over a function of the random variable |S i | multiplied by the number of votes, we get DISPLAYFORM2 where DISPLAYFORM3 and at the true cen- DISPLAYFORM4 The ratio of the votes at the true center and the displaced center is then given by DISPLAYFORM5 We now address the choice of hyper-parameters of the models, namely for the quantum model and α for the classical counterpart so that the detection of the true center is as accurate as possible. In this section, without loss of generality, we set T = 1. In this way we can concentrate on understanding role (and not T). The amplitude probability given by equation FORMULA21 has a parameter where the inverse of scales up the magnitude of the shape values. The smaller is, the more the phase ϕ i = 1 |S Θ (x i − µ)| reaches any point in the unit circle. A large can make the phase ϕ i very small and a small can send each shape point to any point in the unit circle.The parameter large can help in aligning shape points to similar phases. That suggests as large as possible. At the same time, should help in misaligning pair of points where at least one of them does not belong to the shape. That suggests small values of. Similarly, if one is evaluating a shape with the "wrong" set of parameters, we woulud want the shape points to cancel each other. In our example of the circle, we would like that shape points evaluated at a center displacement from the true center to yield some cancellation. That suggests small. One can explore the parameter that maximize the ratio Q C (a, b,) given by equation. We can also attempt to analytically balance both requests (high amplitude at the true center and low amplitude at the displaced center) by choosing values of such that ϕ i = 1 |S r * (x i − µ *)| ≤ π ∀i = 1,..., N C. More precisely, by choosing DISPLAYFORM0 Figure 5 suggests this choice of gives high ratios for Q(a, b,).Now we discuss the estimation of α. we note that for DISPLAYFORM1 2 all shape points will vote for the true center. Thus, choosing α so that largest value of its inverse is DISPLAYFORM2 will guarantee all votes and give a lower vote for shape points evaluated from the displaced center. One could search for higher values of α, smaller inverse values, so that reducing votes at the true center and expecting to reduce them further at the displaced center, i.e., to maximize H(a, b, α) in equation. FIG4 suggests such changes do not improve the Hough transform performance. FIG4 demonstrates that the quantum method outperforms the classical Hough transform on accuracy detection. 
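The display equation giving the proposed value of ħ is missing, but it can be reconstructed from the stated condition φ_i = |S_{r*}(x_i − µ*)|/ħ ≤ π for all shape points: at the true center a point deformed by η_i has |S| = 2a_i + a_i² with a_i = η_i/r*, so the tightest admissible choice is (our reading, not a verbatim reproduction)

$$\hbar \;=\; \frac{1}{\pi}\,\max_i \bigl|S_{r^*}(x_i-\mu^*)\bigr| \;=\; \frac{2a+a^{2}}{\pi},\qquad a=\frac{\eta}{r^{*}} .$$

For a = 0.2 this evaluates to 0.44/π ≈ 0.1401, which matches the value of ħ quoted for the proposed parameter in the Figure 5 discussion, lending support to this reconstruction.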
We can also perform a similar analysis adding noise, to obtain similar . This will require another two pages (a page of analysis and a page of graphs), and if the conference permits, we will be happy to add. Deep Convolution Neural Networks (CNNs), rooted on the pioneer work of BID8; BID4; BID3, and summarized in BID5, have been shown to be very useful in a variety of fields. Inspired in quantum theory, we investigated the use of complex value kernel functions, followed by the local non-linear absolute (modulus) operator square. We studied a concrete problem of. For each of the figures 5a, 5b,5c we vary we vary b = 1 2 a, a, 2a (or center displacements δµ = 0.25, 0.5, 1), respectively. These figures depict ratios Q(a, b,) × (blue) for ∈ (0.047, 0.2802) and H(a, b, α) × ← − α (red) for ← − α ∈ (22.727, 2.769) (The reverse arrow implies the x-axis start at the maximum value and decreases thereafter). All plots have 200 points, with uniform steps in their respective range. Note that our proposed parameter value is = 0.1401, the solution to equation FORMULA42, and indeed gives a high ratio. Also, α = 2.769 is the smallest value to yield all Hough votes in the center. Clearly the quantum ratio outperforms the best classical Hough method, which does not vary much across α values. As the center displacement increases, the quantum method probability, for = 0.1401, decreases much faster than the Hough method probability. Final figure 5d display values of |ψ| 2 (µ *) × (at the true center) in blue, for ∈ (0.047, 0.2802), with 200 uniform steps. In red, V (µ *) × ← − α for ← − α ∈ (22.727, 2.769), with 200 uniform steps. DISPLAYFORM0 shape detection and showed that when multiple overlapping shapes are deformed and/or clutter noise is added, a convolution layer with quantum inspired complex kernels outperforms the statistical/classical kernel counterpart and a "Bayesian shape estimator". It is worth to mention that the Bayesian shape estimator is the best method as long as the data satisfy the model assumptions. Once we add multiple shapes, or add clutter noise (not uniform noise), the Bayesian method breaks down rather easily, but not the quantum method nor the statistical version of it (the Hough method being an approximation to it). An analysis comparing the Quantum method to the Hough method was carried out to demonstrate the superior accuracy performance of the quantum method, due to the quantum phenomena of interference, not present in the classical CNN.We have not focused on the problem of learning the shapes here. Given the proposed quantum kernel method, the standard techniques of gradient descent method should also work to learn the kernels, since complex value kernels are also continuous and differentiable. Each layer of the networks carries twice as many parameters, since complex numbers are a compact notation for two numbers, but the trust of the work is to suggest that they may perform better and reduce the size of the entire network. These are just speculations and more investigation of the details that entice such a construction are needed. Note that many articles in the past have mentioned "quantum" and "neural networks" together. Several of them use Schrödinger equation, a quantum physics modeling of the world. Here in no point we visited a concept in physics (forces, energies), as Schrödinger equation would imply, the only model is the one of shapes (computer vision model). 
Quantum theory is here used as an alternative statistical method, a purely mathematical construction that can be applied to different models and fields, as long as it brings benefits. Also, in our search, we did not find an article that explores the phenomenon of interference and demonstrates its advantage in neural networks. The task of bringing quantum ideas to this field requires demonstrations of its utility, and we think we did that here. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HyyHX4gZM | A quantum inspired kernel for convolution network, exhibiting interference phenomena, can be very useful (and compared it with real value counterpart). |
We present an artificial intelligence research platform inspired by the human game genre of MMORPGs (Massively Multiplayer Online Role-Playing Games, a.k.a. MMOs). We demonstrate how this platform can be used to study behavior and learning in large populations of neural agents. Unlike currently popular game environments, our platform supports persistent environments, with a variable number of agents, and open-ended task descriptions. The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources. Our platform aims to simulate this setting in microcosm: we conduct a series of experiments to test how large-scale multiagent competition can incentivize the development of skillful behavior. We find that population size magnifies the complexity of the behaviors that emerge and results in agents that out-compete agents trained in smaller populations. Life on Earth can be viewed as a massive multiagent competition. The cheetah evolves an aerodynamic profile in order to catch the gazelle, the gazelle develops springy legs to run even faster: species have evolved ever new capabilities in order to outcompete their adversaries. The success of biological evolution has inspired many attempts to emulate it in silico, ranging from genetic algorithms that bear only loose resemblance to natural processes, to full-blown simulations of "artificial life". A recurring question has been: at what level of abstraction should we simulate the competitive game of life? In recent years, the field of deep reinforcement learning (RL) has embraced a related approach: train algorithms by having them compete in simulated games BID16 BID14 BID8. Such games are immediately interpretable and provide easy metrics derived from the game's "score" and win conditions. However, popular game benchmarks are currently still limited: they typically define a narrow, episodic task, with a small fixed number of players. In contrast, life on Earth involves a persistent environment, an unbounded number of players, and a seeming "open-endedness", where ever new and more complex species emerge over time, with no end in sight BID18. Our aim is to develop a simulation platform (see FIG3) that captures important properties of life on Earth, while also borrowing from the interpretability and abstractions of human-designed games. To this end, we turn to the game genre of Massively Multiplayer Online Role-Playing Games (MMORPGs, or MMOs for short). These games involve a large, variable number of players competing to survive and prosper in persistent and far-flung environments. Our platform simulates a "Neural MMO" -an MMO in which each agent is a neural net that learns to survive using RL. We demonstrate the capabilities of this platform through a series of experiments that investigate emergent complexity as a function of the number of agents and species that compete in the simulation. We find that large populations act as competitive pressure that encourages exploration of the environment and the development of skillful behavior. In addition, we find that when agents are organized into species (share policy parameters), each species naturally diverges from the others to occupy its own behavioral niche. Upon publication, we will open-source the platform in full. We alternate between collecting experience across 100 procedurally generated worlds and updating agents' parameters via policy gradients.
Test time visualization provides insight into the learned policies through value function estimates, map tile visitation distribution, and agent-agent dependencies. Multiagent Reinforcement Learning Multiagent reinforcement learning has received increased attention in recent years BID11 BID12 BID1 BID9 BID22 BID24 BID14 BID8. Unlike single agent environments often well modeled by classic Markov Decision Processes (MDPs), multiagent interaction often introduces nonstationarity in the transition dynamics of the environment due to the continual learning and co-adaptation of other agents. While previous work has attempted to analyze emergent complexity in interactions of groups of 2-10 agents BID1 BID14 BID8, we focus on large populations of agents in a complex world. BID24 also analyze behavior of many agents, but the task setting is considerably simpler, and agents are directly rewarded for foraging or fighting. In contrast, our work focuses on complexity in a diverse environment where agents are only rewarded for survival; doing so requires maintaining their health through food and water while navigating partially obstructed terrain and with various forms of combat. BID23 also simulate populations scaling to millions of learning agents, but they focus on predator-prey population dynamics in this setting. Artificial Life "Artificial life" projects aim to simulate the complexity of biological life , often framed as a multiagent competition. This setting can lead to the development of skilled behaviors BID21 and capable morphologies BID17. We consider the setting of coevolution BID5 in which agents coadapt alongside others. Our platform can simulate a kind of artificial life, but at a relatively high level of abstraction. While some basic features of our environment (movement, food, water) are similar to those in BID6 BID19, our environment is grounded in the established game genre of MMOs. Unlike most past work in this area, our platform is built around deep reinforcement learning, where each agent is a neural net trained with distributed policy gradients, and includes tools for visualizing properties specific to this setting. Foraging agents learn to efficiently balance their food and water levels while competing with other agents for resources. Center: Combat agents learn the latter while also balancing melee, range, and mage attack styles to engage with and outmaneuver other agents. Right: Graphics key for tiles and agents. Game Platforms for Intelligent Agents The Arcade Learning Environment (ALE) BID2 and Gym Retro BID13 provide 1000+ limited scope arcade games most often used to test individual research ideas or generality across many games. Strictly better performance at a large random subset of games is a reasonable metric of quality, and strikingly divergent performance can motivate further research into a particular mode of learning. For example, Montezumas Revenge in ALE is difficult even with reasonable exploration because of the associative memory required to collect and use keys. However, recent have brought into question the overall complexity each individual environment BID4 and strong performance in such tasks (at least in those not requiring precise reflexes) is not particularly difficult for humans. More recent work has demonstrated success on multiplayer games including Go BID16, the Multiplayer Online Battle Arena (MOBA) DOTA2 (OpenAI), and Quake 3 Capture the Flag BID8. Each of these projects has advanced our understanding of a class of algorithms. 
However, these games were limited to 2-12 players, are round based on the order of an hour, lack persistence, and lack the game mechanics supporting large persistent populations -there is still a large gap in environment complexity compared to the real world. MMORPGs The game genre we focus on in this paper is MMORPGs, which are role-playing games (RPGs) in which many human players take part. RPGs, such as Pokemon and Final Fantasy, involve reasoning over up to hundreds of hours of persistent gameplay -a much longer time horizon than in MOBAs. Like the real world, RPGs confront the player with problems that have many valid solutions, and choices that have long term consequences. MMOs are the (massively) multiplayer analogs to RPGs. They are typically run across several servers, each of which contains a copy of the environment and supports hundreds to millions of concurrent players. Good MMOs involve a curriculum of challenges that require increasingly clever usage of the game systems. Early game content is accessible to new players, but skills required for late game content are inaccessible (and often incomprehensible) to those not intimately familiar with the game. Players have to acquire resources and "level up" in order to reach more advanced stages of the game. Such a curriculum is present in many game genres, but only MMOs contextualize it within social and economic structures approaching the scale of the real world. We present a persistent and massively multiagent environment that defines foraging and combat systems over procedurally generated maps (see Appendix). The purpose of the platform is to discover game mechanics that support complex behavior and agent populations that can learn to make use of them. We follow the iterative development cycle of human MMOs: developers create balanced mechanics while players maximize their skill in utilizing them. The initial configurations of our systems are the of several iterations of balancing, but are by no means fixed: every numeric parameter presented is editable within a simple configuration file. A map consists of a set of discrete tiles, which are positions an agent can occupy. On each step of the simulations, agents may move one tile North/South/East/West. Agents competes for food tiles while periodically refilling their water supply from infinite water tiles. They may attack each other using any of three attack options, each with different damage values and trade offs. The environment assumes only that agents receive local game state and output a decision. The environment is agnostic to the source of that decision, be it a neural network or a hardcoded algorithm. We have tested up to 100M agent trajectories (lifetimes) on 100 cores in 1 week. The code base already contains additional module code for trade, gathering, crafting, item systems, equipment, communication, and trade to name a few. We are actively balancing and integrating these into the neural API. Input Agents observe local game state-all tiles within a fixed L 1 distance of their current position, including tile terrain types and the visible properties of occupying agents. This is an efficient equivalent representation of what a human sees on the screen without requiring rendering. Output Agents output action choices for the next time step ("game tick"). For the experiments below, actions consist of one movement and one attack. As described in Framework, movement options are: North, South, East, West, Pass (no movement). Attack options are: Melee, Range, Mage. 
This is purely flavor; for those not familiar with MMOs, each attack option simply applies a preset amount of damage at a preset effective distance. The environment will attempt to execute both actions. Invalid actions, such as moving into a stone wall, are ignored. Our policy architecture is detailed in the Appendix. We provide a simple preprocessor that embeds and flattens this stimulus into a single fixed length environment vector and a list of entity embeddings. We apply a linear layer to the preprocessed embeddings followed by three output heads for movement, attacks. There is also a standard value head which is trained to predict the discounted expected lifetime of the agent. Each head is also a linear layer. New types of action choices can be included by adding additional heads. We train with simple policy gradients plus a value baseline. Agents receive only a stream of reward 1. Rewards are postprocessed by a discounting factor, producing returns equal to a discounted estimate of the agent's time until death. We found it possible to obtain good performance without discounting, but training was less stable. We present an initial series of experiments using our platform to explore multiagent interactions in large populations. We find that agent competence scales with population size. In particular, increasing the maximum number of concurrent players (N ent) magnifies exploration and increasing the maximum number of populations with unshared weights (N pop) magnifies niche formation. Agents are sampled uniformly from a number of "populations"-identical architectures with unshared weights. This is for efficiency-see technical details. Technical details We run each experiment using 100 worlds. We define a constant C over the set of worlds W. For each world w ∈ W, we uniformly sample a c ∈ (1, 2, ...C). We define "spawn cap" such that if world w has a spawn cap c, the number of agents in w cannot exceed c. In each world w, one agent is spawned per game tick provided that doing so would exceed the spawn cap c of w. Ideally, we would fix N ent = N pop, as is the case in standard MMOs (humans are independent networks with unshared weights). However, this incurs sample complexity proportional to number of populations. We therefore share parameters across groups of up to 16 agents for efficiency. We perform four experiments to evaluate the effects on foraging performance of training with larger and more populations. For each experiment, we fix N pop and a spawn cap 1. These are paired from and. We train for a fixed number of trajectories per population 9.Evaluating the influence of these variables is nontrivial. The task difficulty is highly dependent on the size and competence of populations in the environment: mean agent lifetime is not comparable among experiments. Furthermore, there is no analog procedure for evaluating relative player competence among MMO servers. However, MMO servers sometimes undergo merges whereby the player bases from multiple servers are placed within a single server. As such, we propose tournament style evaluation in order to directly compare policies learned in different experiment settings. Tournaments are formed by simply concatenating the player bases of each experiment. Results are shown in Figure 3: we vary the maximum number of agents at test time and find that agents trained in larger settings consistently outperform agents trained in smaller settings. 
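The return computation described earlier in this section (a constant survival reward of 1 per tick, discounted so that the value head regresses onto an estimate of remaining lifetime) can be written in a few lines. The discount factor of 0.99 here is an illustrative choice; the text only says that a discounting factor is applied and that training also works, less stably, without one.

```python
def discounted_returns(lifetime, gamma=0.99):
    """Return-to-go for an agent that receives reward 1 on every tick it is alive.

    With reward r_t = 1, the return at step t is sum_{k=0}^{T-t-1} gamma^k,
    i.e. a discounted estimate of the agent's remaining time until death.
    """
    returns, g = [], 0.0
    for _ in range(lifetime):          # accumulate backwards from the final tick
        g = 1.0 + gamma * g
        returns.append(g)
    return list(reversed(returns))     # returns[t] for t = 0 .. lifetime - 1
```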
When we introduce the combat module as an additional learnable mode of variation on top of foraging, we observe more interesting policies as a result. Agent actions become strongly coupled with the states of other agents. As a sanity check, we also confirm that all of the populations trained with combat handily outperform all of the populations trained with only foraging. To better understand these results, we decouple our analysis into two modes of variability: maximum number of concurrent players (N ent) and maximum number of populations with unshared weights (N pop). This allows us to examine the effects of each factor independently. In order to isolate the effects of environment randomization, which also encourages exploration, we perform these experiments on a fixed map. Isolating the effects of these variables produces more immediately obvious results, discussed below. We briefly examine the randomized setting in Discussion. In the natural world, competition between animals can incentivize them to spread out in order to avoid conflict. We observe that overall exploration scales with the number of concurrent agents, with no other variable factors (see Figure 4; the map used is shown in FIG3). Agents learn to explore only because the presence of other agents provides a natural and beneficial curriculum for doing so. We find that, given a sufficiently large and resource-rich environment, different populations of agents tend to separate to avoid competing with other populations. Both MMOs and the real world often reward masters of a single craft more than jacks of all trades. From FIG3, specialization to particular regions of the map scales with the number of populations. This suggests that the presence of other populations forces agents to discover a single advantageous skill or trick. That is, increasing the number of populations results in diversification to separable regions of the map. As entities cannot out-compete other agents of their population with shared weights, they tend to seek areas of the map that contain enough resources to sustain their population. Regions that are difficult to get to or otherwise unoccupied are especially desirable; this is revealed by observing value maps over time. Jungle climates produce more biodiversity than deserts. Deserts produce more biodiversity than the tallest mountain peaks. To current knowledge, Earth is the only planet to produce life at all: the initial conditions for formation of intelligent life are of paramount importance. The same holds true in simulation: human MMOs mirror this phenomenon. Some games produce more complex and engaging play than others: those most successful garner large and dedicated player bases that come to understand the game systems better than the developers. Feedback helps drive development and expansion over a period of years, but the amount of effort required to create that initial seed is large. Figure caption: Agents learn to depend on other agents. Each square map shows the response of an agent of a particular species, located at the square's center, to the presence of agents at any tile around it. Random: dependence map of random policies. Early: "bull's-eye" avoidance maps learned after only a few minutes of training. Additional maps correspond to foraging and combat policies learned with automatic targeting (as in the tournament results) and learned targeting (experimental, discussed in Additional Insights). In the learned targeting setting, agents begin to fixate on the presence of other agents within combat range, as denoted by the central square patterns.
It is unreasonable to expect pure multiagent competition to produce diverse and interesting behavior if the environment does not support it. This is because multiagent competition is a curriculum magnifier, not a curriculum. The multiagent setting is interesting because learning is responsive to competitive and collaborative pressures of other learning agents-but the environment must support and facillitate such pressures in order for multiagent interaction to drive complexity. There is room for debate among reasonable individuals as to theoretical minimum complexity seed environment required to produce complexity on par with that of the real world. However, this is not our objective, and we do not have a reasonable estimate of this quantity. We have chosen to model our environment after MMOs, even though they may be more complicated than the minimum required environment class, because they are known to support the types of interactions we are interested in while maintaining engineering and implementation feasibility. This is not true of any other class environments we are aware of: exact physical simulations are computationally infeasible, and previously studied genres of human games lack crucial elements of complexity (see Background). While some may see our efforts as cherrypicking environment design, we believe this is precisely the objective: developers cherrypick game design decisions to support complexity commensurate with engaging play at the level of general human intelligence. The player base uses these design decisions to create strategies far beyond the imagination of the developers. The trend of increasing exploration with increasing entity number is clear when training on a single map as seen in Figure 4, 5, but it is more subtle with environment randomization. From FIG4, all population sizes explore adequately. We believe that this is because "exploration" as defined by map coverage is not as difficult a problem as developing robust policies. As demonstrated by the Tournament experiments, smaller populations learn brittle policies that do not generalize to scenarios with more competitive pressure-even against a similar number of agents. We visualize agent-agent dependencies in FIG5. We fix an agent at the center of a hypothetical map crop. For each position visible to that agent, we fake another agent and compute the value function estimate of the ant pair. We find that agents learn policies dependent on those of other agents in both the foraging and combat environments. We briefly detail several miscellaneous investigations and subtle points of interest in FIG6. First, we visualize learned attack patterns of agents. Each time an agent attacks, we splat the attack type to the screen. There are a few valid strategies as per the environment. Melee is intentionally overpowered: learning to utilize it at close range serves as a sanity check. This also cautions agents to keep their distance, as the first to strike wins. From FIG6 and observation of the policies, we find that this behavior is learned when targeting is automated. Our final set of experiments prescribes targeting to the agent with lowest health. Jointly learning attack style selection and targeting requires an attentional mechanism to handle variable number of visible targets. We have experimented with this, but are not yet numerically stable. 
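The agent-agent dependence maps described above (fix an agent at the center of a map crop, place a hypothetical second agent at each visible tile, and read off the value estimate of the resulting pair) amount to a small probing loop around the trained value head. The callables below are placeholders for environment- and model-specific code; this is a sketch of the visualization procedure, not the authors' tooling.

```python
import numpy as np

def dependence_map(value_fn, add_agent_at, view_radius):
    """Value-function response of a centered agent to a hypothetical neighbor.

    value_fn     : obs -> scalar value estimate for the centered agent
    add_agent_at : (dy, dx) -> observation with a fake agent placed at that offset
    view_radius  : L1 visibility radius of the environment
    """
    size = 2 * view_radius + 1
    heat = np.zeros((size, size))
    for dy in range(-view_radius, view_radius + 1):
        for dx in range(-view_radius, view_radius + 1):
            heat[dy + view_radius, dx + view_radius] = value_fn(add_agent_at((dy, dx)))
    return heat  # rendered as the square dependence maps in the figure
```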
Solving this is likely to increase policy complexity, but policies learned using automatic targeting are still compelling: agents learn to strafe at the edge of their attack radius, attacking opportunistically. Second, a note on tournaments. We equate number of trajectories trained upon as a fairest possible metric of training progress. We experimented with normalizing batch size but found that larger batch size always leads to more stable performance. Batch size is split among species: larger populations outperform smaller populations even though the latter are easier to train. Finally, a quick note on niche formation. Obtaining clean visuals is dependent on having an environment where interaction with other agents is unfavorable. When this is not the case, niche formation may still occur in another space (e.g., diverse attack policies). This is the importance of niche formation-we expect it to stack well with population-based training BID7 and other such methods that require sample diversity. Figure 9: We procedurally generate maps by thresholding an 8 octave Perlin BID15 ridge fractal. We map tile type to a fixed range of values APPENDIX This section contains full environment details that are useful but not essential to understanding the base experiments. All parameters in the following subsystems are configurable; we provide only sane defaults obtained via balancing. A grid of tiles 2 represents game state. Agents may move one tile North/South/East/West each game tick. They can also choose to Pass (i.e. not make a movement action). Each tile object contains:• Terrain type ∈ {grass, f orest, stone, water, lava}. Lava tiles kill agents upon contact. Stone and water tiles are impassible.• Occupying agents: a reference mapping of agents standing on the tile. The foraging system implements gathering based survival by introducing:• Food: initialized to 32, decremented 1 per tick, incremented 1 by occupying forest tiles, which contain 1 food each. Once the resources of a particular tile are consumed, they regenerate probablistically over time.• Water: decremented once per tick, incremented by standing adjacent to a water tile.• Health: decremented once per tick if either food or water is zero. Incremented if both food and water are above a threshold. A snapshot is shown in FIG1. These definitions of resources impose a carrying capacity. This incurs an arms race of exploration strategies in populations of agents above the carrying capacity: survival is trivial with a single agent, but requires intelligent exploration in the presence of competition attempting to do the same. The combat system implements three different attack "styles":• Melee: Inflicts 10 damage at 1 range FIG0: Agents observe their local environment. The model embeds this observations and computes actions via corresponding value, movement, and attack heads. These are all small fully connected networks with 50-100k parameters.• Ranged: Inflicts 2 damage at 1-2 range • Mage: Inflicts 1 damage at 3 range and freezes the target in place, preventing movement (but not attacks) for two ticks A snapshot is shown in FIG1. These definitions of combat impose trade offs in each style. Melee combat is high risk high return. Ranged combat produces less risky but more prolonged conflicts. Mage combat does little damage, but allows agents to retreat or cut off opponents' escape by freezing them in place. 
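The tick rules and attack styles above map onto only a few lines of code. The following Python sketch uses the defaults stated in the text (food initialized to 32; melee 10 damage at range 1; ranged 2 damage at range 1-2; mage 1 damage at range 3 with a two-tick freeze); the initial water and health values and the recovery threshold are assumptions, since the text only gives the food default.

from dataclasses import dataclass

@dataclass
class AgentState:
    food: int = 32      # stated default
    water: int = 32     # assumption
    health: int = 10    # assumption

def forage_tick(agent, on_forest, adjacent_to_water, threshold=16):
    # Food and water decay by 1 each tick; forest/water tiles replenish them.
    agent.food = max(0, agent.food - 1) + (1 if on_forest else 0)
    agent.water = max(0, agent.water - 1) + (1 if adjacent_to_water else 0)
    # Health drops if either resource is empty, recovers if both are high.
    if agent.food == 0 or agent.water == 0:
        agent.health -= 1
    elif agent.food > threshold and agent.water > threshold:
        agent.health += 1

# Attack styles: (damage, max l1 range, freeze duration in ticks)
ATTACK_STYLES = {
    "melee":  (10, 1, 0),
    "ranged": (2,  2, 0),
    "mage":   (1,  3, 2),
}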
The best strategy is not obvious, again imposing an arms race to discover the best strategy in the presence of many agents attempting to do the same. • Attack range is defined by l1 distance: "1 range" is a 3X3 grid centered on the attacker. Range 2 is a 5X5 grid, etc.• Incentive: Animals hunt each other out of hunger or to protect/seize resources. Agents receive food/water resources equal to the damage they inflict on other agents.• Spawn Killing Agents are immune during their first 15 game ticks alive. This prevents an exploit known as "spawn killing" whereby players are repeatedly attacked immediately upon entering the game. Human games often contain similar mechanism to prevent this strategy, as it in uninteresting play. We provide two APIs for this:Gym Wrapper We provide a minimal extension of the Gym VecEnv API BID3 that adds support for variable numbers of agents per world and at any given time. This API distributes environment computation of observations and centralizes training and inference. While this standardization is convenient, MMOs differ significantly from arcade games, which are easier to standardize under a single wrapper. The Neural MMO setting requires support for a large, variable number of agents that run concurrently, with aggregation across many randomly generated environments. As such, the Gym framework incurs additional communications overhead in our setting that we bypass with a fast native API.Native This interface is simpler to use and pins the environment and agents on it to the same CPU core. Full trajectories run locally on the same core as the environment. Interprocess communication is only required infrequently to synchronize gradients across all environments on a master core. We currently do the backwards pass on the same CPU cores for convenience, but plan on separating this into an asynchronous experience queue for GPU workers. We also provide a front end visualizer packed with research tools for analyzing observe agent policies and relevant statistics. We leave detailed documentation of this to the source release, but it includes:• 2D game renderer: shown in FIG1 • 3D game renderer in beta • Value map "ghosting" (See FIG1 • Exploration maps • Interagent dependence • Attack maps (See Experiments)Preprocessor:• Embed indicies corresponding to each tile into a 7D vector. Also concatenates with the number of occupying entities.• Project visible attributes of nearby entities to 32D• Flatten the tile embeddings • Max pool over entity embeddings to handle variable number of observations • Concatenate the two • This produces an embedding, which is computed separately for each head. • For foraging experiments, the attack network is still present for convenience, but the chosen actions are ignored.• It then concatenates the ant vector with the (unlearned) features of the current entity. These include visible properties such as health, food, water, and position.• In order to handle the variable number of visible entities, a linear layer is applied to each followed by 1D max pooling. Attention BID0 may appear the more conventional approach, but recently OpenAI demonstrated that simpler and more efficient max pooling suffices.• The pooled entity vector is then concatenated with the tile and self embeddings to produce a single vector and followed by a final linear layer as shown in FIG0. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1gWz2CcKX | An MMO-inspired research game platform for studying emergent behaviors of large populations in a complex environment |
In this paper, we propose an improved quantitative evaluation framework for Generative Adversarial Networks (GANs) on generating domain-specific images, where we improve conventional evaluation methods on two levels: the feature representation and the evaluation metric. Unlike most existing evaluation frameworks which transfer the representation of ImageNet inception model to map images onto the feature space, our framework uses a specialized encoder to acquire fine-grained domain-specific representation. Moreover, for datasets with multiple classes, we propose Class-Aware Frechet Distance (CAFD), which employs a Gaussian mixture model on the feature space to better fit the multi-manifold feature distribution. Experiments and analysis on both the feature level and the image level were conducted to demonstrate improvements of our proposed framework over the recently proposed state-of-the-art FID method. To our best knowledge, we are the first to provide counter examples where FID gives inconsistent with human judgments. It is shown in the experiments that our framework is able to overcome the shortness of FID and improves robustness. Code will be made available. Generative Adversarial Networks (GANs) have shown outstanding abilities on many computer vision tasks including generating domain-specific images BID7, style transfer, super resolution BID20, etc. The basic idea of GANs is to hold a two-player game between generator and discriminator, where the discriminator aims to distinguish between real and fake samples while the generator tries to generate samples as real as possible to fool the discriminator. Researchers have been continuously exploring better GAN architectures. However, developing a widely-accepted GAN evaluation framework remains to be a challenging topic BID35. Due to a lack of GAN benchmark , newly proposed GAN variants are validated on different evaluation frameworks and therefore incomparable. Because human judgements are inherently limited by manpower resource, good quantitative evaluation frameworks are of very high importance to guide future research on designing, selecting, and interpreting GAN models. There have been varieties of efforts on designing sample-based evaluation for GANs on its ability of generating domain-specific images. The goal is to measure the distance between the generated samples and the real in the dataset. Most existing methods utilized the ImageNet BID29 inception model to map images onto the feature space. The most widely used criteria is probably the Inception Score BID31, which measures the distance via Kullback-Leiber Divergence (KLD). However, it is probability based and is unable to report overfitting. Recently, Frechet Inception Distance (FID) was proposed BID11 on improving Inception Score. It directly measures Frechet Distance on the feature space with the Gaussian assumption. It has been proved that FID is far better than Inception Score BID13 BID15 BID24. However, we argue that assuming normality on the whole feature distribution may lose class information on labeled datasets. In this work, we propose an improved quantitative sample-based evaluating criteria. We improve conventional evaluation methods on two levels: the feature representation and the evaluation metric. Unlike most existing methods including the Inception Score BID31 and FID BID11, our framework uses a specialized encoder trained on the dataset to get domain-specific representation. 
We argue that applying the ImageNet model to either labeled or unlabeled datasets is ineffective. Moreover, we propose Class-Aware Frechet Distance (CAFD) in our framework to measure the distribution distance of each class (mode) respectively on the feature space to include class information. Instead of the single Gaussian assumption, we employ a Gaussian mixture model (GMM) to better fit the feature distribution. We also include KL divergence (KLD) between mode distribution of real data and generated samples into the framework to help detect mode dropping. Experiments and analysis on both the feature level and the image level were conducted to demonstrate the improved effectiveness of our proposed framework. To our best knowledge, we are the first BID4 to provide counter examples where FID is inconsistent with human judgements (See FIG0). It is shown in the experiments that our framework is able to overcome the shortness of existing methods. Evaluation Methods. Several GAN evaluation methods have been proposed by researchers. While model-based methods including Parzen window estimation and the annealed importance sampling (AIS) BID36 ) require either density estimation or observation on the inner structure of the decoder, model-agnostic methods BID11 BID31 BID15 BID23 are more popular in the GAN community. These methods are sample based. Most of them map images onto the feature space via an ImageNet pretrained model and measure the similarity of the distribution between the dataset and the generated data. Maximum mean discrepancy (MMD) was proposed by and it has been further used in classifier two-sample tests BID23, where statistical hypothesis testing is used to assess whether two sample sets are from the same distribution. Inception Score BID31, along with its improved version Mode Score, was the most widely used metric in the last two years. FID BID11 was proposed on improving the Inception Score. Recently, several interesting methods were also proposed including classification accuracy BID33, precisionrecall measuring BID30 and skill rating BID26. These metrics give complementary perspectives towards sample-based methods. Studies on Existing Frameworks. It is common BID2 in the literature to see algorithms which use existing metrics to optimize early stopping, hyperparameter tuning, and even model architecture. Thus, comparison and analysis on previous evaluation methods have been attracting more and more attention recently BID35 BID13 BID15 BID24. While Inception Score was the most popular metric in the last two years, it was believed to be misleading in recent literature BID11 BID13 BID24 BID4 BID2. Applying the ImageNet model to encode features in Inception Score is ineffective BID35 BID2 BID28. The recently proposed FID has been proved to be far better than Inception Score BID11 BID13 BID15. And its robustness was experimentally demonstrated recently in a technical report BID24. However, we argue that FID still has problems and provide counter examples where FID gives inconsistent with human judgements. Moreover, we propose an improved version of evaluation which overcomes its shortness. The evaluation problem can be formulated as modeling the distance between two distributions P r and P g, where P r denotes the distribution of real samples in the dataset and P g denotes the distributions of new samples generated by GAN models. The main difficulties for GANs on generating domain-specific images can be summarized into three types below.• Lack of generating ability. 
Either the generator cannot generate useful samples or the GAN training cannot diverge.• Mode collapse. Different modes collapse to a new mixed mode in the generated samples. (e.g. An animal resembling both a horse and a deer.)• Mode dropping. Only part of the modes in the dataset are generated while some modes are implicitly ignored. (e.g. The handwritten 5 can hardly be generated by GAN trained on MNIST.) Therefore, a good evaluation framework should be consistent to human judgements, penalize on mode collapse and mode dropping. Most of the conventional methods utilized an ImageNet pretrained inception model to map images onto the feature space. Inception Score, which was originally formulated as Eq. FORMULA0, ignored information in the dataset completely. Thus, its original formulation was considered to be relatively misleading. DISPLAYFORM0 The Mode Score was proposed to overcome this shortness. Its formulation is shown in Eq.. By including the prior distribution of the ground truth labels, Mode Score improved Inception Score on reporting mode dropping. DISPLAYFORM1 FID BID11, which was formulated in Eq., was proposed on improving Inception Score BID31. DISPLAYFORM2 (µ g, C g), (µ r, C r) are the first-order and second-order statistics for generated samples and real data respectively. Unlike the previous two metrics which are probability-based, FID directly measures Frechet distance on the feature space. It uses an ImageNet model for encoding features and assumes normality on the whole feature distribution. FID was believed to be better than Inception Score BID13 BID15 BID24. However, we argue that FID still has two major problems (See Section 3.1 and 3.2). As both Inception Score BID31 and Mode Score ) is probabilitybased, applying the ImageNet pretrained model on non-ImageNet dataset is relatively meaningless. This misuse of representation on Inception Score was mentioned previously BID28. However, we argue that applying the ImageNet model to map the generated images to the feature space in FID can also be misleading. While both of the BID2 BID28 mentioned that applying the ImageNet pretrained model to the probability-based metric Inception Score BID31 ) is inadequate, the trend for applying it to feature-based metric such as FID BID11 ) is widely followed. BID2 pointed out that because classes are unmatched, the p(y|x) and p(y *) in the formulation of Inception Score are meaningless. However, we argue that applying the ImageNet model for mapping the generated images to the feature space in FID can also be misleading for the two reasons below. First, On labeled datasets with multiple classes, the class labels unmatch those in ImageNet. For example, the class'Bird' in CIFAR-10 (BID19) is divided into several sophisticated category labels in ImageNet. This will make the CNN representations trained on ImageNet is either meaningless or over-complicated. Specifically, some features distinguishing the "acoustic guitar" from "electric guitar" are hardly useful on CIFAR-10 while fine-grained features distinguishing "African hunting dog" from "Cape hunting dog" (which all belong to the category "dog" in CIFAR-10) are not needed as well. On unlabeled datasets with images from a single class such as CelebA, applying the ImageNet inception model is also inappropriate. The categories of ImageNet labels are so sophisticated that the trained model needs to encode diverse features on various objects. 
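The three display equations referenced above (Eqs. 1-3) did not survive extraction. The standard forms from the literature that the surrounding text appears to describe are:

\mathrm{IS} = \exp\Big(\mathbb{E}_{x\sim P_g}\,\mathrm{KL}\big(p(y\,|\,x)\,\|\,p(y)\big)\Big) \quad \text{(Inception Score)}

\mathrm{MS} = \exp\Big(\mathbb{E}_{x\sim P_g}\,\mathrm{KL}\big(p(y\,|\,x)\,\|\,p(y^{*})\big) - \mathrm{KL}\big(p(y)\,\|\,p(y^{*})\big)\Big) \quad \text{(Mode Score)}

\mathrm{FID} = \|\mu_r-\mu_g\|_2^{2} + \mathrm{Tr}\big(C_r + C_g - 2\,(C_r C_g)^{1/2}\big) \quad \text{(Frechet Inception Distance)}

where p(y|x) is the label posterior of the encoder, p(y) and p(y*) are the marginal label distributions of generated and real data, and (mu_r, C_r), (mu_g, C_g) are the feature means and covariances of real and generated samples.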
However, this will get encoded features limited to a relatively low-dimensional subspace lack of fine-grained information. For example, the ImageNet models can hardly distinguish different faces. In Section 5.1, we designed experiments on both the feature level and the image level to demonstrate the effects of using different representations. We argue that the single Gaussian assumption in FID is over-simplified. As the training decreases intra-class distance and increases inter-class distance, the features are distributed in groups by their class labels. Thus, we propose that on datasets with multiple classes, the feature distribution is better fitted by a Gaussian mixture model. Considering the specific Gaussian mixture model where x ∼ N (µ i, C i) with probability p i, we can derive the first and second moment of the feature distribution in Eq. and Eq.. DISPLAYFORM0 It should be noted that when the feature is n-dimensional and there are K classes in total, there are a total of K(n 2 +n 2 + n + 1) variables in the model. However, directly modeling the whole distribution Gaussian as in FID will in n 2 +n 2 + n degrees of freedom, which is a relatively small number. Thus, FID detects mode-related problems in an implicit way. Either simply dropping a mode or linearly combining images increases FID by unintentionally changing the mean µ. However, FID gets to be misleading when the deficiency type becomes more complicated (See FIG1 . As discussed in Section 3.1, applying the ImageNet inception model to either labeled or unlabeled datasets is ineffective. We argue that a specialized domain-specific encoder should be used for sample-based evaluation. While the features encoded by the ImageNet model are limited within a low-dimensional subspace, the domain-specific model could encode more fine-grained information, making the encoded features much more effective. Specifically, we propose to use the widely used variational autoencoder (VAE) BID17 to acquire the specialized embedding for a specific dataset. In labeled datasets, we can add a cross-entropy loss for training the VAE model. In Section 5.1, we show that simply training an autoencoder can already get better domain-specific representations on CelebA. Before introducing our improved evaluation metric, we would firstly take a step back towards existing popular metrics. Both Inception Score BID31 and Mode Score measure distance between probability distribution while FID BID11 directly measures distance on the feature space. Probability-based metrics better handle mode-related problems (with the correct use of a domain-specific encoder), while directly measuring distance between features better models the generating ability. In fact, we believe these two perspectives are complementary. Thus, we propose a class-aware metric on the feature space to combine the two perspectives together. For datasets with multiple classes, the feature distribution is better fit with mixture Gaussian (See Section 3.2). Thus, we propose Class-Aware Frechet Distance (CAFD) to include class information. Specifically, we compute probability-based Frechet Distance between real data and generated samples in each class respectively. class 0 1 2 3 4 5 dist 64.8 ± 0.5 18.9 ± 0.2 80.5 ± 1.1 81.3 ± 0.3 64.5 ± 0.6 79.0 ± 0.4 class 6 7 8 9 average dist 65.2 ± 0.3 46.8 ± 0.3 90.4 ± 0.3 59.8 ± 0.2 65.1 ± 0.4As previously discussed in Section 4.1, we train a domain-specific VAE along with the cross entropy on datasets with multiple classes and use its learned representations. 
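For reference, the mixture moments referred to in Eqs. 4-5 take the standard form, for x ~ N(mu_i, C_i) with probability p_i:

\mu = \sum_{i=1}^{K} p_i\,\mu_i, \qquad C = \sum_{i=1}^{K} p_i\,C_i + \sum_{i=1}^{K} p_i\,\mu_i\mu_i^{\top} - \mu\mu^{\top}

so a K-class mixture over n-dimensional features has on the order of K((n^2+n)/2 + n + 1) free parameters, whereas a single Gaussian has only (n^2+n)/2 + n; this appears to be the count intended by the garbled expressions above.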
In our evaluation framework, we also made use of the predicted probability p(y|x). To calculate the expected mean of each class in a specific set S of generated samples, we can derive the formulation below in Eq.. DISPLAYFORM0 where DISPLAYFORM1 Similarly, The covariance matrix in each class is shown in Eq.. DISPLAYFORM2 We compute Frechet distance in each of the K classes and average the to get Class-Aware Frechet Distance (CAFD) in Eq.. DISPLAYFORM3 This improved form based on mixture Gaussian assumption can better evaluate the actual distance compared to the original FID. Moreover, when CAFD is applied to evaluating a specific GAN model, we could get better class-aware understanding towards the generating ability. For example, as shown in TAB0, the selected model generates digit 1 well but struggles on other classes. This information will provide guidance for researchers on how well their generative models perform on each mode and may explain what specific problems exist. As both FID and CAFD aim to model how well domain-specific images are generated, they are not designed to deal with mode dropping, where some of the modes are missed in the generated samples. Thus, motivated by Mode Score , we propose that KL divergence KL(p(y *)||p(y)) should be included as auxiliary scores into the evaluation framework. To sum up, the correct use of encoder, the CAFD and the KL divergence term combine for a complete sample-based evaluation framework. Our proposed method combines the advantages of Inception Score BID31, Mode Score and FID BID11 and overcomes their shortness. Our method is sensitive to different representations. Different selection of encoders can in changes on the evaluation . Experiments in Section 5.1 demonstrate that the ImageNet inception model will give misleading (See FIG0 . Thus, a domain-specific encoder should be used in each evaluation pipeline. Because the representation is not fixed, the correct use (with In this section, we study the representation for mapping the generated images onto the feature space. As discussed in Section 4.1, applying the pretrained ImageNet inception model to sample-based evaluation methods is inappropriate. We firstly investigated the features generated by different encoders on CelebA, which is a widely used dataset containing more than 200k face images. Then, we gave an intuitive demonstration where FID BID11 using ImageNet pretrained representations gives inconsistent with human judgements. We give two proposals of domain-specific encoders in the experiment: an autoencoder and a VAE BID17 . Both proposed encoders share a similar network architecture which is the inverse structure of the 4-conv DCGAN . The embedding is dimensioned 2048, which is the same as the dimension of ImageNet features. We train both models for 25 epochs. The loss weight of the KLD term in VAE is 1e-5. We conducted principle component analysis (PCA) on three feature sets encoded on CelebA: 1) ImageNet inception model. 2) proposed autoencoder 3) proposed VAE. TAB1 shows the percent of explained variance on the first 5 components. Although the ImageNet model should have much greater representation capability than the 4-conv encoder, its first two components has much higher explained variance (9.35% and 7.04%). This supports our claim that the features encoded by ImageNet are limited in a low-dimensional subspace. It can be also noted that VAE better makes use of the feature space compared to the naive autoencoder. 
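A minimal Python sketch of the CAFD computation in Eqs. 6-8, together with the auxiliary KL term, is given below. It assumes encoder features and class posteriors p(y|x) are available for both the real and generated sets; the per-class statistics are posterior-weighted means and covariances in the spirit of Eqs. 6-7, and the KL term approximates p(y*) and p(y) by the mean posteriors.

import numpy as np
from scipy.linalg import sqrtm

def frechet(mu1, c1, mu2, c2):
    covmean = sqrtm(c1 @ c2).real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))

def class_stats(feats, probs, k):
    # Posterior-weighted mean and covariance for class k.
    w = probs[:, k] / probs[:, k].sum()
    mu = (w[:, None] * feats).sum(0)
    diff = feats - mu
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0)
    return mu, cov

def cafd(real_feats, real_probs, gen_feats, gen_probs):
    K = real_probs.shape[1]
    dists = [frechet(*class_stats(real_feats, real_probs, k),
                     *class_stats(gen_feats, gen_probs, k)) for k in range(K)]
    score = np.mean(dists)
    # Auxiliary mode-dropping term: KL(p(y*) || p(y)), marginals approximated
    # by the mean posteriors over real and generated samples.
    p_star, p_gen = real_probs.mean(0), gen_probs.mean(0)
    kl = float(np.sum(p_star * np.log(p_star / np.clip(p_gen, 1e-12, None))))
    return score, kl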
To better demonstrate the deficiency of the ImageNet model, we performed three different types of adjustments on the first 10,000 images on CelebA: a) Random noise uniformly distributed in [-33,33] was applied on each pixel. b) Each image was divided into 8x8=64 regions and seven of them were sheltered by a pixel sampled from the face. c) Each image was first divided into 4x4=16 regions and random exchanges were performed twice.. The ImageNet inception model fails to encode fine-grained features on faces. a) Random noise uniformly distributed in [-33,33] was applied on each pixel. b) Each image was divided into 8x8=64 regions and seven of them were sheltered by a pixel sampled from the face. c) Each image was first divided into 4x4=16 regions and random exchanges were performed twice. Results are shown in FIG0. With the ImageNet inception model, it is obvious that FID gave inconsistent with human judgements (See TAB4). In fact, when similar adjustments were conducted with the overall color maintained, FID fluctuated within only a small range. The ImageNet model mainly extracts general features on color, shape to better classify objects in the world while domain-specific facial textures cannot be well represented. For comparison, we applied the trained autoencoder and VAE onto the case. Also, we tried to apply the representation of the discriminator after GAN training, which was previously proposed in. Specifically, we use the features right before the final fc layer for the discriminator. Results are shown in TAB2. It is shown that only representations derived from the domain-specific encoder including the autoencoder and VAE are effective and give consistent with human judgements. The discriminator which learns to discriminate fake samples from the real cannot learn good representation for distance measurement. Thus, for datasets where images are from a single class such as CelebA and LSUN Bedrooms BID38, the representation should be acquired via training a domain-specific encoder such as a VAE. In this way our sample-based evaluation employs specialized representations, which can provide more fine-grained information related to the specific domain. In this section, we used the domain-specific representations and studied the improvements of the evaluation metric CAFD proposed in our framework against the state-of-the-art metric FID BID11. In datasets with multiple classes, the Gaussian mixture model in CAFD will better fit the feature distribution. First, we performed user study to demonstrate the improved consistency of our method. Then, An intuitive case for further demonstration is given where CAFD shows great robustness while FID fails to give consistent with human judgements. For implementation details, on the MNIST dataset, we trained a variational autoencoder (VAE) BID17 with the kl loss weight 1e-5 for the specialized encoder and added the cross-entropy term with a loss weight of 1.0. BID8. We use the domain-specific representation of VAE for embedding images. (See TAB6 5.2.1 USER STUDY Evaluating the evaluation metrics is a non-trivial task, as the best criterion is the consistency with human judgements. Therefore, we performed user study to compare our proposed method with the existing ones including Inception Score BID31, Mode Score and FID BID11 . Our setting is consistent with BID15 . 15 volunteers were first trained to tell generated samples from the groundtruth in the dataset. Then, paired image sets were randomly sampled and volunteers were asked to tell the better sets. 
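The three CelebA perturbations (a)-(c) above can be reproduced roughly as follows; the choice of the face pixel used for sheltering (here the image center) is an assumption, since the text only states that it is sampled from the face.

import numpy as np

def add_noise(img, low=-33, high=33):
    return np.clip(img + np.random.uniform(low, high, img.shape), 0, 255)

def shelter_regions(img, grid=8, n_shelter=7):
    h, w = img.shape[:2]
    rh, rw = h // grid, w // grid
    out, fill = img.copy(), img[h // 2, w // 2]
    for k in np.random.choice(grid * grid, n_shelter, replace=False):
        r, c = divmod(k, grid)
        out[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw] = fill
    return out

def swap_regions(img, grid=4, n_swaps=2):
    h, w = img.shape[:2]
    rh, rw = h // grid, w // grid
    out = img.copy()
    for _ in range(n_swaps):
        (r1, c1), (r2, c2) = [divmod(k, grid) for k in
                              np.random.choice(grid * grid, 2, replace=False)]
        a = out[r1 * rh:(r1 + 1) * rh, c1 * rw:(c1 + 1) * rw].copy()
        out[r1 * rh:(r1 + 1) * rh, c1 * rw:(c1 + 1) * rw] = \
            out[r2 * rh:(r2 + 1) * rh, c2 * rw:(c2 + 1) * rw]
        out[r2 * rh:(r2 + 1) * rh, c2 * rw:(c2 + 1) * rw] = a
    return out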
Finally, we counted pairs where the metric agreed the voted by the volunteers. We conducted experiments on MNIST with two settings for the experiments: 'easy' and 'hard'. The 'easy' setting is where random pairs are sampled from the intermediate of GAN training, while the 'hard' setting is where only random pairs with the difference of FID of two sampled sets within a threshold are included. TAB3 shows the . It is worth noting that in hard cases, the of Inception Score BID31 are relatively meaningless (50%), which makes it hard to be applied as guidance for improving the quality of generated images by GANs. In both'easy' and'hard' settings, our method gets consistent gain compared to baseline approaches. In this experiment, we gave an intuitive case where FID fails to give consistent with human judgements. We used two different settings of representations and focused on the evaluation metric within each setting. Specifically, Besides the VAE, we also train a classifier on MNIST and use its representation as a supporting experimental setting. BID7. We use two setting of different representations in this experiment: a domain-specific classifier and a VAE. For VAE,'generated' and'hack' are the sampled images in FIG1. Compared to FID, CAFD are more robust to feature-level adjustments. FID, as an overall statistical measure, is able to detect either a single mode dropping or a trivial linear combination of two images. However, as its formulation has relatively limited constraints, it can be hacked in complicated scenarios. Considering the features extracted from MNIST test data, which has a zero FID with itself. We performed operations below on the features. Step 1 Performed principle component analysis (PCA) on the original features. Step 2 Normalized each axis to zero mean and unit variance. Step 3 Switched the normalized projection of the first two component. Step 4 Unnormalized the data and reconstructed features. The adjusted features are completely different with the original one with zero FID maintained. The over-simplified Gaussian assumption on overall distribution cannot tell the differences while our proposed method is able to report the changes with CAFD raising from 0 to 246.2 (539.8) for VAE (classifier). (See TAB6 Furthermore, We used FGSM BID8 to reconstruct the images from the adjusted features in both settings. Specifically, we first trained an decoder for initialization via an AutoEncoder with the encoder fixed. Then, we performed pixelwise adjustment via FGSM BID8 to lower the reconstruction error. Because the used encoder has a relatively simple structure, the final reconstruction error is still relatively high after optimized. For comparison, We trained a simple WGAN-GP model and took samples (generated by intermediate models during training) with comparable FID with our constructed images. Visualization for the VAE setting are shown in FIG1.It is obvious that the quality of constructed images are much worse than the generated samples. After axis permutation, the constructed images suffers from mode collapse. There are many pictures in the right which resemble more than one digits and are hard to recognize. However, for the VAE (classifier) setting, it still received a FID of 25.4 (72.8) lower than 49.9 (73.1) received by generated samples. For comparison, The of CAFD on these cases are shown in TAB6. While FID gives misleading , CAFD are much more robust on the adjusted features. 
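Steps 1-4 above amount to the following sketch: swapping two whitened principal components leaves the overall mean and covariance, and hence FID, unchanged while altering every feature vector.

import numpy as np
from sklearn.decomposition import PCA

def hack_features(features):
    """Step 1: PCA; Step 2: normalize each component to zero mean, unit
    variance; Step 3: swap the first two normalized components; Step 4:
    un-normalize and reconstruct. The first two moments of the feature set
    are preserved, so FID against the original features stays zero."""
    pca = PCA(n_components=features.shape[1])
    z = pca.fit_transform(features)                 # step 1
    mu, sigma = z.mean(0), z.std(0) + 1e-12
    z_norm = (z - mu) / sigma                       # step 2
    z_norm[:, [0, 1]] = z_norm[:, [1, 0]]           # step 3
    z_hacked = z_norm * sigma + mu                  # step 4
    return pca.inverse_transform(z_hacked)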
Compared to the constructed images (211.6 (468.6)), the generated images received a much lower CAFD (80.7 (201.4)), which is consistent with human judgements. (See Table 6) Thus, for both settings demonstrates the improved effectiveness of the evaluation metric in our proposed evaluation framework. In this paper, we aimed to tackle the very important problem of evaluating the Generative Adversarial Networks. We presented an improved sample-based evaluation, which improves conventional methods on both representation and evaluation metric. We argue that a domain-specific encoder is needed and propose Class-Aware Frechet Distance to better fit the feature distribution. To our best knowledge, we are the first to provide counter examples where the state-of-the-art FID method is inconsistent with human judgements. Experiments and analysis on both the feature level and the image level have shown that our framework is more effective. Therefore, the encoder should be specifically trained for datasets of which the labels are different from ImageNet. To attain effective representations on non-ImageNet datasets, we need to ensure that the class labels of data used for training GAN models are consistent with those of data used for training the encoder. The Gaussian assumption on the features were commonly used in the literature. Although there are non-linear operations such as relu and max-pooling in the neural network, assuming the normality simplifies the model and enables numerical expression. However, in labeled dataset with multiple classes, the Gaussian assumption is relatively over-simplified. In this experiment, we performed Anderson-Darling test (AD-test) BID32 to quantatively study the normality of the data. Specifically, to test the multivariate normality on a set of features, we first performed principle component analysis (PCA) on the data, and then applied AD-test to the first 10 components and averaged the . We compared the test on each class and the whole training set on MNIST. We used a simple 2-conv structure trained on the MNIST classification task as our feature encoder with a output dimension 1024. To reduce the influence of sample number on the , we divided the whole features randomly into 10 sets to study the normality of the mixed features. Results are shown in Table 9. Although the p-value of both features are small, features within a single class get much greater than the mixed features. It can be inferred that compared to the whole training set, features within each class are much more Gaussian. Thus, the basic assumption of CAFD in our proposed framework is more reasonable compared to the FID BID11 method. The idea of Generative Adversarial Network was originally proposed in BID7. It has been applied to various computer vision tasks BID20 BID40. Researchers have been continuously developing better GAN architectures BID10 BID14 and training strategies BID1 BID12 on generating domain-specific images. Deep convolutional networks were firstly in- Table 9: P-value of AD-test BID32 on features of each class and the whole training images. The whole features were randomly divided into 10 sets. Compared to the mixed features, features encoding images from a single class are more Gaussian. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJlY0jA5F7 | This paper improves existing sample-based evaluation for GANs and contains some insightful experiments. |
Recent efforts on training light-weight binary neural networks offer promising execution/memory efficiency. This paper introduces ResBinNet, which is a composition of two interlinked methodologies aiming to address the slow convergence speed and limited accuracy of binary convolutional neural networks. The first method, called residual binarization, learns a multi-level binary representation for the features within a certain neural network layer. The second method, called temperature adjustment, gradually binarizes the weights of a particular layer. The two methods jointly learn a set of soft-binarized parameters that improve the convergence rate and accuracy of binary neural networks. We corroborate the applicability and scalability of ResBinNet by implementing a prototype hardware accelerator. The accelerator is reconfigurable in terms of the numerical precision of the binarized features, offering a trade-off between runtime and inference accuracy. Convolutional Neural Networks (CNNs) have shown promising inference accuracy for learning applications in various domains. These models are generally over-parameterized to facilitate the convergence during the training phase BID7; BID4 ). A line of optimization methodologies such as tensor decomposition BID9; BID16 ), parameter quantization; BID5 ), sparse convolutions BID10; ), and binary networks; BID11 ) have been proposed to reduce the complexity of neural networks for efficient execution. Among these works, binary neural networks in two particular benefits: (i) They reduce the memory footprint by a factor of 32 compared to the full-precision model; this is specifically important since memory access plays an essential role in the execution of CNNs on resource-constrained devices. (ii) Binary networks replace the costly multiplications with simple XNOR operations BID11; BID12 ), reducing the execution time and energy consumption significantly. Considering the prior art, there exist two major challenges associated with binary neural networks. First, the convergence rate of the existing solutions for training binary CNNs is considerably slower than their full-precision counterparts. Second, in order to achieve comparable classification accuracy, binarized neural networks often compensate for the numerical precision loss by employing high dimensional feature maps in a wide CNN topology, which in turn reduces the effective compression rate. As a , full-precision networks often surpass binary networks in terms of convergence rate and final achievable accuracy. In this paper, we propose ResBinNet, a novel solution for increasing the convergence rate and the final accuracy of binary networks. The global flow of ResBinNet is depicted in FIG0. The first phase, which we call Soft Binarization, includes two methodologies that we propose to address the aforementioned challenges for training binary CNNs. First, we introduce a Residual Binarization scheme which allows the number of possible values for activation units to be reconfigurable at runtime. To this purpose, we learn a multi-level residual representation for the features within the CNN to adaptively increase the numerical precision of the activation units. Second, we introduce a novel weight binarization approach, called Tempreture Adjustment, which aims to gradually enforce binarization constraints over the weight parameters throughout the training phase. The two interlinked methods significantly improve both the convergence rate and the final accuracy of ResBinNet compared to prior art. 
Once the soft training phase is finished, we convert the weights to actual binary values. Fine-tuning of the model is then performed in Hard Binarization phase using existing training algorithms (e.g. BinaryNets)) in few epochs (e.g. one epoch). ResBinNet is designed to fulfill certain goals: (i) It should enable reconfigurability for binary neural networks; in other words, the number of residual binary representatives should be adjustable to offer a trade-off between inference accuracy and computation time.(ii) The multi-level binarized features should be compatible with the XNOR multiplication approach proposed in the existing literature.(iii) ResBinNet should speed up the convergence rate of binarized CNNs. (iv) Current hardware accelerators for binary CNNs should be able to benefit from ResBinNet with minimum modification in their design. In summary, the contributions of this paper are as follows:• Proposing residual binarization, a methodology for learning multi-level residual representations for each feature map in binary CNNs.• Introducing temperature adjustment as a practical approach for gradual (soft) binarization of CNN weights.• Analyzing the trade-off between accuracy and execution time of ResBinNet on a real hardware design.• Evaluating ResBinNet convergence rate and accuracy on three datasets: MNIST, SVHN, and CIFAR-10.• Development of an open-source Application Program Interface (API) for ResBinNet 1.The remainder of the paper is organized as follows: In Section 2, we describe the residual binarization method for binarizing activations. Section 3 explains the temperature adjustment technique for binarizing weights. In Section 4, we discuss how particular ResBinNet operations (e.g. multi-level XNOR-popcount) can be efficiently implemented on existing hardware accelerators. Experiments are discussed in Section 5. Finally, we discuss the related work and in Sections 6 and 7. A binarization scheme converts value x to the binarized estimation e x, which can take one of the possible values γ or −γ. This representation allows us to represent e x with a single bit b x. In particular, for a given layer within the CNN, we can store the single full-precision value of γ as a representative for all features, and reduce the memory footprint by storing bits b x instead of x for each feature. Assuming that both the weights and input features of a CNN layer are binarized, each dot product between a feature vector x and weight vector w can be efficiently computed using XNOR-popcount operations as previously suggested in; BID11 ). Let x = γ x s x and w = γ w s w where {γ x, γ w} are scalar values and {s x, s w} are the corresponding sign vectors. The binary representations of {x, y}, which we denote by {b x, b w}, are simply computed by encoding the sign vectors s to binary vectors. The dot product between x and w can be computed as: DISPLAYFORM0 where xnorpopcount(., .) returns the number of set bits in the element-wise XNOR of the input binary vectors. Figure 2: Schematic flow for computing 3 levels of residual binary estimates e. As we go deeper in levels, the estimation becomes more accurate. Multi-level Residual Binarization: Imposing binary constraints on weights and activations of a neural network inherently limits the model's ability to provide the inference accuracy that a floatingpoint counterpart can achieve. To address this issue, we propose a multi-level binarization scheme where the residual errors are sequentially binarized to increase the numerical precision of the estimation. 
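The display equation referenced above was lost in extraction; for N-dimensional vectors, the binary dot product the text describes has the standard XNOR-popcount form

x \cdot w = \gamma_x \gamma_w \big( 2\,\mathrm{xnorpopcount}(b_x, b_w) - N \big)

where xnorpopcount counts the set bits of the element-wise XNOR of the two bit vectors.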
Figure 2 presents the procedure to compute an estimate e from input x. Each level of the graph (say the i th level) computes the corresponding estimate e i by taking the sign of its input (the residual error from the (i − 1) th level), multiplying it by a parameter γ i, and adding ±γ i to the estimate of the previous level. In addition, it computes the residual error r i and feeds it to the input of the next level. The estimates of deeper levels are therefore more accurate representations for the input x. Note that the estimate e i in level i can be represented using a stream of i bits corresponding to the signs of inputs to the first i levels. Residual Binary Activation Function: Similar to previous works which use the Sign function as the activation function, in this paper we use the residual binarization to account for the activation function. The difference between our approach and the single-bit approach is shown in FIG2. Each level has a separate full-precision representative γ i, which should be learned in the training phase. In this setting, the gradients in the backward propagation are computed the same way as in the conventional single-bit binarization; BID11 ), regardless of the number of residual levels. In the forward propagation, however, the computed of our approach provide a more accurate approximation. For instance, if we employ 2 levels of residual binarization, the activation function can take 2 2 = 4 different values. In general, the total number of possible values for the activation functions for an l-level residual binarization scheme is 2 l.Multi-level XNOR-popcount: In ResBinNet, the dot product of an l-level residual-binarized feature vector e and a vector of binary weights w can be rendered using l subsequent XNOR-popcount operations. Let e = l i=1 γ ei s ei and w = γ w s w, where s ei and s w correspond to the sign of i th residual in e and sign of w, respectively. The dot product between e and w is computed as: DISPLAYFORM1 where {b ei, b w} are the binary representations corresponding to {s ei, s w}, respectively. Note that the subsequent XNOR-popcount operations can be performed sequentially, thus, the same memory used for operating on b ei can be reused for operating on b ei+1. As a , the actual memory footprint for a multi-level residual binarization is the same as that of a single-level binarization, provided that the bit streams are processed sequentially. Residual Encoding: In order to convert matrix-vector multiplications into XNOR-popcount operations, we need to encode a feature x into a stream of binary values {b ei |i ∈ 1, 2, . . ., l}. The pseudo code for this operation, which we call Residual Encoding, is presented in Algorithm 1.Algorithm 1 l-level residual encoding algorithm inputs: γ1, γ2,..., γ l, x outputs: be1, be2,..., be l 1: r ← x 2: e ← 0 3: DISPLAYFORM2 e ← e + Sign(r) × γi 6: r ← r − Sign(r) × γi 7: end for 3 TEMPERATURE ADJUSTMENT Approximating the weights of a neural network with binary values often in loss of accuracy in the pertinent model. In this section, we explain our methodology to minimize the approximation error during the training, such that the trained weights exhibit lower binarization errors. Let W denote the parameter set within a certain layer of the neural network. Instead of directly using W to compute the layer's output, we perform a change of variable θ = γ H(αW) and compute the output using θ. 
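A runnable Python rendering of Algorithm 1 and of the multi-level dot product in Eq. 2 is sketched below. The loop header missing from lines 3-4 of the pseudocode is reconstructed from context, and the bit-plane bookkeeping is an implementation choice rather than part of the paper.

import numpy as np

def residual_encode(x, gammas):
    """l-level residual encoding (Algorithm 1): returns one sign bit-plane
    b_{e_i} per level; gammas are the learned per-level scalars."""
    r = np.asarray(x, dtype=np.float64).copy()
    bitplanes = []
    for g in gammas:                        # for i = 1 .. l
        s = np.where(r >= 0, 1.0, -1.0)     # Sign(r)
        bitplanes.append(s > 0)             # b_{e_i}
        r -= s * g                          # r <- r - Sign(r) * gamma_i
    return bitplanes

def residual_dot(bitplanes, gammas, b_w, gamma_w):
    """Multi-level XNOR-popcount dot product:
    sum_i gamma_i * gamma_w * (2 * popcount(XNOR(b_{e_i}, b_w)) - N)."""
    n = b_w.size
    total = 0.0
    for b_e, g in zip(bitplanes, gammas):
        popcnt = np.sum(~(b_e ^ b_w))       # XNOR, then count set bits
        total += g * gamma_w * (2 * popcnt - n)
    return total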
Here, H is a bounded, monotonically-increasing function such as the Hyperbolic Tangent function that is applied on W element-wise. Parameter γ is a trainable parameter that adjusts the maximum and minimum values that θ can take. Parameter α, which we call the Temperature henceforth, controls the slope of function H. Figure 4a and 4b illustrate the effect of parameters α and γ on the nonlinear change of variable θ = γ T anh(αW). Note that θ acts as a semi-binarized parameter set in the soft training phase. As we increase the temperature parameter, H becomes closer to the binary sign function, meaning that the pertinent θ will exhibit less error when approximated with ±γ. Note that W and γ are the trainable parameters in this setting. Parameter θ is used in the forward propagation phase of the soft training, while in the backward propagation step W is updated. Effect on Training: Let g θ and g W be the gradients of the training loss function with respect to θ and W, respectively, then we have g W = g θ × ∂θ ∂W. In other words, the magnitude of the gradient that actually flows through W is controlled by ∂θ ∂W. If θ is close to ±γ, the gradient will be filtered out; otherwise, the gradients will flow through W.Effect of the Temperature on Gradients: Figure 4c illustrates how the temperature parameter can affect the gradient filtering term ∂θ ∂W during the training. As we increase the temperature, elements of W that are closer to 0 receive amplified gradients, while the elements that are closer to the binary regime (i.e. θ ≈ ±γ) encounter damped gradients. This means that increasing the temperature parameter α will push most of the weights to the binary regime with a bigger force; therefore, a neural network trained with high temperature values exhibits a smaller binarization error. Temperature Adjustment: Setting a high temperature at the beginning of the training will eliminate most of the gradients, preventing the training loss from being optimized. To address this problem, we start the soft binarization phase with a low temperature (e.g. α = 1) and slightly increase it at the end of each mini-batch. This approach gradually adapts the weights to binary values during the training. Figure 5 presents an example of the histogram of the semi-binarized weights θ in different training epochs. As can be seen, the distribution is gradually shifted towards binary values as the training proceeds. After soft binarization, the parameter set θ can be used as an initial point for existing hard binarization schemes such as the method proposed by ). As We illustrate in Section 5, the soft binarization methodology significantly increases the convergence rate of binary CNNs. In this section, we show that the modifications required to incorporate residual binarization into existing hardware accelerators for binary CNNs are minimal. ResBinNet provides a trade-off between inference accuracy and the execution time, while keeping the implementation cost (e.g area cost) almost intact; as a , ResBinNet readily offers users a decision on the latency of their learning application by compromising the inference accuracy. As an example, we consider the FPGA accelerator for binary CNNs proposed by BID12 ). We refer the reader to the mentioned paper for details about the original design. Here, we describe the modifications that we integrated into the specific components of their design to accommodate residual binarization. The modified accelerator will be publicly available on Github 2. 
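A minimal sketch of the temperature-adjusted change of variable, using the hard-tanh nonlinearity mentioned in the implementation details of Section 5; the per-epoch increment of alpha is an assumed schedule, since the text only states that the temperature is increased gradually.

import numpy as np

def soft_binarize(W, gamma, alpha):
    """theta = gamma * H(alpha * W), with H taken as hard tanh."""
    return gamma * np.clip(alpha * W, -1.0, 1.0)

def temperature_schedule(epoch, alpha0=1.0, step=1.0):
    # Start at a low temperature and raise it after each epoch (step is an assumption).
    return alpha0 + step * epoch

# The forward pass uses theta = soft_binarize(W, gamma, alpha); the backward
# pass updates W and gamma, with gradients scaled by d(theta)/d(W), i.e.
# gamma * alpha inside the linear region and 0 once |alpha * W| > 1.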
FIG4 depicts a schematic view of the original hardware and our modified accelerator. Note that in the original implementation, each layer takes a single binary vector b in and computes a single output vector b out while the modified version processes l streams of binary vectors where l is the desired number of residual levels. Matrix-Vector Multiplication: Both in the original and the modified accelerators, the matrix-vector multiplication unit is the most computationally intensive among all other operations. In the original design, this unit takes a binary vector b in and outputs a full-precision vector y. To accommodate residual binarization, we modify this module as follows: the XNOR-popcount operation is sequentially performed on the stream of binary vectors b in,i. Each XNOR-popcount in a different vector y i. Then, the output is computed as y = i γ i y i. Note that the computation overhead of the summation is negligible compared to the XNOR-popcount operation, thus, the runtime of multilevel XNOR-popcount with l-level residual representations is approximately l times the runtime of the conventional XNOR-popcount in the original design. Batch-Normalization and Activation Function: Batch-Normalization in the inference phase can be viewed as multiplying a vector y by constant vector g and subtracting vector t to obtain the normalized vector y norm. The original design in BID12 ) does not require the multiplication step since only the sign of y norm matters to compute the output of the activation function (the Sign function). In our design, the multiplication step is necessary since the value of y norm affects the output of our activation function, which is encoded using Algorithm 1 and sent to the next layer to be used as an input. The original implementation simply computes the Boolean OR of the binary values to perform max pooling over the features within a window. In ResBinNet, however, features are represented with l-bit residual representations. As a , performing Boolean OR over the binary encodings is no longer equivalent to performing max-pooling over the features. Nevertheless, the pooling operation can be performed over the encoded values directly. Assume full-precision values e x and e y, with l-level binary encodings b x and b y, respectively. Considering ordered positive γ i values (i.e. γ 1 > γ 2 > . . . > γ l > 0), we can easily conclude that if e x < e y then b x < b y. We implement our API using Keras BID1 ) library with a Tensorflow backend. The synthesis reports (resource utilization and latency) for the FPGA accelerator are gathered using Vivado Design Suite BID15 ). For temperature adjustment (Section 3), we use a "hard tanh" nonlinearity and gradually increase the temperature by incrementing α at the end of each epoch. We evaluate ResBinNet by comparing the accuracy, number of training epochs, size of the network, and execution time. Proof-of-concept evaluations are performed for three datasets: MNIST, CIFAR-10, and SVHN. Table 1 presents the architecture of the trained neural networks. The architectures are picked from BID12 ). Table 1: Network architectures for evaluation benchmarks. C64 denotes a 3 × 3 convolution with 64 output channels, M P stands for 2 × 2 max pooling, BN represents batch normalization, and D512 means a dense layer with 512 outputs. The residual binarizations are shown using RB. 
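The two modified accelerator blocks can be mimicked in software as below; W_bits is a hypothetical bit-packed binary weight matrix, and the pooling helper relies on the ordering property stated above (with ordered positive gammas, comparing the packed encodings preserves the ordering of the decoded values).

import numpy as np

def multilevel_matvec(bitplanes, gammas, W_bits, gamma_w):
    """Modified matrix-vector unit: run the binary XNOR-popcount kernel once
    per residual level, reusing the same buffer, and accumulate
    y = sum_i gamma_i * y_i (scaled by the weight scalar)."""
    n = W_bits.shape[1]
    y = np.zeros(W_bits.shape[0])
    for b_e, g in zip(bitplanes, gammas):
        popcnt = np.sum(~(W_bits ^ b_e[None, :]), axis=1)
        y += g * gamma_w * (2 * popcnt - n)
    return y

def maxpool_encodings(codes):
    """Max pooling directly on l-bit residual encodings packed most-significant
    level first; codes holds the packed integers within each pooling window."""
    return np.max(codes, axis=-1)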
DISPLAYFORM0 Effect of the model size on accuracy: As discussed in BID12 ), the final accuracy of the binary CNN for a particular application is correlated with the shape of the network. For instance, authors of the paper report that the accuracy of MNIST for the architecture in TAB0 varies in the range (95.83%-98.4%) when the number of neurons in hidden layers is varied from 256 to 1024. Similarly, the architecture in BID12 ) for CIFAR-10 is a smaller version of the architecture originally trained by ). Using this smaller architecture drops the accuracy from 88.6% to 80.1%. In our evaluations, we show that ResBinNet can reduce the accuracy drop using more residual binary levels for the activations of the smaller model. Effect of the number of epochs on accuracy: Compared to full-precision neural networks, binarized CNNs usually need more training epochs to achieve the same accuracy. For example, the CIFAR-10 architecture in ) is trained for 500 epochs, while the fullprecision version of the same network can achieve comparable accuracy in roughly 50 iterations 3. Here, we argue that soft binarization in ResBinNet can significantly reduce the number of training epochs for binary CNNs. TAB0 compares ResBinNet with two prior arts, namely Binarynet and FINN. Both baselines use the same training methodology, but the network architectures in FINN are considerably smaller, which leads to lower accuracy rates for FINN. We evaluate ResBinNet using the small architectures of FINN. The training of ResBinNet consists of a soft binarization phase and a single fine-tuning epoch. Note that the fine-tuning phase uses the same algorithm as the two baselines. The higher accuracy of Binarynet compared to our approach is a direct of employing a large architecture and training for many epochs. For each benchmark, the comparison between our approach and the same network architecture of FINN is followed:• CIFAR-10: Compared to FINN, ResBinNet achieves higher accuracy for more than 1 level of residual binarization. We argue that, even for 1-level binarization, the same accuracy is viable if we fine-tune the soft-binarized model (after 50 epochs) for more than 1 epochs (FINN and ResBinNet use the same algorithms in this phase). In addition, the convergence rate of ResBinNet is improved as the number of residual levels is increased.• SVHN and MNIST: For these datasets, ResBinNet achieves a higher accuracy with even fewer epochs compared to CIFAR-10. The final accuracy and the convergence speed also exhibit improvement as the number of residual levels is increased from 1 to 3.We now evaluate the area overhead and execution time of ResBinNet for the modified hardware architecture, which we discussed previously in Section 4. We compare the implementation of the CNN architecture used for the CIFAR-10 and SVHN tasks (See Table 1). FIG5 compares the hardware resource utilization, and execution time per input. The resource utilization of ResBinNet is evaluated in FIG5, which compares the utilization (in %) for different resources of the FPGA (i.e. BRAM, DSP, LUT, and Registers). For each resource, we compare the baseline with different number of residual binarization levels in ResBinNet. Asides from the DSP utilization, which is required for full-precision multiplications in batch normalization, the other three resources show a modest increase in utilization, meaning that the residual binarization method offers a scalable design for real-world systems. 
FIG5 compares the latency (runtime) of ResBinNet with the baseline accelerator. In particular, we consider multi-level residual binarization with 1, 2, and 3 residual levels which are denoted by RBN1, RBN2, and RBN3, respectively. The numbers on top of the bars show the accuracy of the corresponding binarized CNN for CIFAR-10 task. As can be seen, ResBinNet enables users to achieve higher accuracy by tolerating a higher latency, which is almost linear with respect to the number of residual levels. Training CNNs with binary weights and/or activations has been the subject of very recent works BID2; BID11;; BID12 ). The authors of Binaryconnect BID2 ) suggest a probabilistic methodology that leverages the full-precision weights to generate binary representatives during forward pass while in the back-propagation the full-precision weights are updated. ) is the first work attempting to binarize both weight and activations of CNN. In this work, authors also suggest replacing the costly dot products by XNOR-popcount operations. XNOR-net BID11 ) proposes to use scale factors during training, which in an improved accuracy. The aforementioned works propose optimization solutions that enable the use of binarized values in CNNs which, in turn, enable the design of simple and efficient hardware accelerators. The downside of these works is that, aside from changing the architecture of the CNN BID12 ), they do not offer any other reconfigurability in their designs. On another track of research, the reconfigurability of CNN accelerators has been investigated. This line of research focuses on using adaptive low bit-width representations for compressing the parameters and/or simplifying the pertinent arithmetic operations BID17; BID6; BID0 ). The proposed solutions, however, do not enjoy the same simplified XNOR-popcount operations as in binarized CNNs. Among the aforementioned works, a unified solution which is both reconfigurable and binarized is missing. To the best of our knowledge, ResBinNet is the first to offer a solution which is reconfigurable and, at the same time, enjoys the benefits of binarized CNNs. Our goal in the design of ResBinNet was to remain consistent with the existing CNN optimization solutions. As shown in the paper, ResBinNet is compatible with the accelerators designed for binarized CNNs. This paper introduces ResBinNet, a novel reconfigurable binarization scheme which aims to improve the convergence rate and the final accuracy of binary CNNs. The proposed training is twofold: (i) In the first phase, called soft binarization, we introduce two distinct methodologies designed for binarizing weights and feature within CNNs, namely residual binarization, and temperature adjustment. Residual binarization learns a multi-level representation for features of CNN to provide an arbitrary numerical precision during inference. Temperature adjustment gradually imposes binarization constraints on the weights. (ii) In the second phase, which we call hard binarization, the model is fine-tuned in few training epochs. Our experiments demonstrate that the joint use of residual binarization and temperature adjustment improves the convergence rate and the accuracy of the binarized CNN. We argue that ResBinNet methodology can be adopted by current CNN hardware accelerators as it requires minimal modification to existing binarized CNN solutions. 
Developers can integrate the approaches proposed in this paper into their deep learning systems to provide users with a trade-off between application latency and inference accuracy. | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | SJtfOEn6- | Residual Binary Neural Networks significantly improve the convergence rate and inference accuracy of the binary neural networks. |
In real-world machine learning applications, large outliers and pervasive noise are commonplace, and access to clean training data as required by standard deep autoencoders is unlikely. Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis. Autoencoder neural networks learn to reconstruct normal images, and hence can classify those images as anomalous if the reconstruction error exceeds some threshold. In this paper, we proposed an unsupervised method based on subset scanning over autoencoder activations. The contributions of our work are threefold. First, we propose a novel method combining detection with reconstruction error and subset scanning scores to improve the anomaly score of current autoencoders without requiring any retraining. Second, we provide the ability to inspect and visualize the set of anomalous nodes in the reconstruction error space that make a sample noised. Third, we show that subset scanning can be used for anomaly detection in the inner layers of the autoencoder. We provide detection power for several untargeted adversarial noise models under standard datasets. Neural networks generate a large amount of activation data when processing an input. This work applies anomalous pattern detection techniques on this activation data in order to determine if the input is anomalous. Examples of an anomalous input can be noised samples by an adversary (; ; a; a), human annotation errors , etc. The goal of anomalous pattern detection is to quantify, detect, and characterize the data that are generated by an alternative process. Since anomalies are rare and come from diverse sources, it is not feasible to obtain labeled datasets of all possible anomalies/attacks. If an observation deviates from the learned model, it is classified as an anomaly . In real-world problems, large outliers and pervasive perturbations are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders due to reasons such as human annotation errors and poisoning techniques (b). Autoencoders differ from classical classifier networks such as Convolutional Neural Networks (CNNs). Autoencoders do not require labels because the expected output is the input data. The autoencoder is trained to minimize the reconstruction error L(x, x). During the prediction step, anomaly detection can be performed by looking at the distribution of mean reconstruction error L(w, d(e(w))) when w ∈ X clean and L(w, d(e(w))) when w ∈ X adv . An example of both, clean and noise reconstruction error distribution can be seen in Figure 4 (b). Using this type of anomaly detection with autoencoders assumes that the autoencoder is properly trained with clean data. Otherwise, this manifold can be used advantageously by training the autoencoder with corrupted samples that are mapped to clean samples. As a , the autoencoder will learn an underlying vector field that points in the direction of the manifold in which the clean samples lie. Thus, upon the introduction of a perturbation, the magnitude of each arrow in the vector field will indicate the direction in which the data must be moved to map the sample to its clean representation . Further detail on the autoencoder architecture and training setup for the experiments can be found in the Section A.4. 
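As a small illustration of this reconstruction-error criterion, the snippet below scores images with a trained autoencoder and flags those whose error exceeds a threshold fit on clean data. It assumes a Keras-style model exposing predict; the 95th-percentile threshold and the function names are illustrative choices, not the paper's settings.

```python
import numpy as np

def reconstruction_error(autoencoder, images):
    """Per-image mean squared reconstruction error L(x, x_hat)."""
    recon = autoencoder.predict(images)
    return np.mean((images - recon) ** 2, axis=tuple(range(1, images.ndim)))

def flag_anomalies(autoencoder, clean_images, test_images, quantile=0.95):
    """Flag test images whose reconstruction error exceeds a threshold
    estimated from the distribution of errors on clean images."""
    threshold = np.quantile(reconstruction_error(autoencoder, clean_images), quantile)
    return reconstruction_error(autoencoder, test_images) > threshold
```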
Subset scanning frames the detection problem as a search over subsets of data in order to find a subset that maximizes a scoring function F (S), typically a likelihood ratio. Subset scanning exploits a property of these scoring functions that allow for efficient maximization over the exponentially large search space . In this paper, we show how subset scanning methods can enhance the anomaly detection power of autoencoders in an unsupervised manner and without a retraining step. We treat this anomaly detection approach as a search for a subset of node activations that are higher than expected. This is formally quantified as the subset with the highest score according to a non-parametric scan statistic. The contributions of our work are threefold. First, we propose a novel approach combining detection with reconstruction error and subset scanning scores to improve the anomaly score of current autoencoders without requiring any retraining. Second, we provide the ability to identify and visualize the set of anomalous nodes in the reconstruction error space that make noised samples. Third, we show that subset scanning can be used for anomaly detection in the inner layers of the autoencoder. Figure 1: Example of subset scanning score distributions across layers of an autoencoder for adversarial BIM noise = 0.01. In the top of the graph we can see subset score distributions per nodes in a layer. The distributions of subset scanning scores are shown in blue for clean images (C) (expected distribution), and in orange for noised samples A t. Higher AUCs are expected when distributions are separated from each other and lower AUCs when they overlap. The purple structure corresponds to convolutional layers at the Encoder, while the red structure corresponds to the convolution layers for the Decoder. The computed AUC for the subset score distributions can be found in Table 1. The highest mutual information exchange with the adversarial input happens on the first layers (convolutional and maxpooling). This is why the greatest divergence in both C and A t subset scores distributions is seen. In the latent space, due to properties described in Section 4, the autoencoder abstracts basic representations of the images, losing subset scanning power due to the autoencoder mapping the new sample to the expected distribution. This can be seen as an almost perfect overlap of distribution in conv 2d 7. Machine learning models are susceptible to adversarial perturbations of their input data that can cause the input to be misclassified (; ; a; a). There are a variety of methods to make neural networks more robust to adversarial noise. Some require retraining with altered loss functions so that adversarial images must have a higher perturbation in order to be successful . Our work treats the problem as anomalous pattern detection and operates in an unsupervised manner without a priori knowledge of the attack or labeled examples. We also do not rely on training data augmentation or specialized training techniques. These constraints make it a more difficult problem, but more realistic in the adversarial noise domain as new attacks are constantly being created. Before introducing our approach in the next section, we explain related work in two parts. First, we provide a quick overview of Autoencoders as anomaly detectors and second, we discuss different adversarial attacks models used in this paper. Several approaches have been used for anomaly detection with autoencoders. 
Since autoencoders can model the training data distribution, these neural networks are an interesting option for anomaly detection. Most of the methods found in the literature require that the training data only consist of normal examples, such as denoising autoencoders , but this alone is no guarantee for anomalies to have a large reconstruction error. One line of work presents robust Anomaly Detection with ITSR (Iterative Training Set Refinement) and Adversarial Autoencoders. That work uses the capabilities of adversarial autoencoders to address the shortcoming of conventional autoencoders in the presence of anomalous samples during training. It also proposes a combined criterion of reconstruction error and likelihood in the latent space, as well as a retraining method to increase the separation in both latent and image space. Another approach uses deep structured energy-based models, showing that a criterion based on an energy score leads to better results than the reconstruction error criterion. A further line of work presents an extension of denoising autoencoders that can work with corrupted data. During training, the network uses an anomaly regularizing penalty based on L_p-norms. Most of the approaches for anomaly detection with autoencoders require the training data to consist of clean examples or use complex autoencoder architectures and special training. In this work, we propose subset scanning applied to autoencoders. This is an unsupervised anomaly detector that can be applied to any pre-trained, off-the-shelf autoencoder network. We use, as baselines, the detection capabilities based on mean autoencoder reconstruction error distributions and One-SVM (Schölkopf et al., 2001) for the autoencoder reconstruction error space analysis. Several attack models have been used to target classifiers in this study; we focus on untargeted attacks with the Basic Iterative Method (BIM) (b), the Fast Gradient Sign Method (FGSM) , and DeepFool (DF) . The idea behind these attacks is to find a perturbation to be added to the original sample X, generating an adversarial sample X_adv. Fast Gradient Sign Method (FGSM) FGSM was designed to be extremely fast rather than optimal. It simply uses the sign of the gradient at every pixel to determine the direction with which to change the corresponding pixel value. Given an image x and its corresponding true label y, the FGSM attack sets the perturbation δ to: δ = ε · sign(∇_x J(x, y)). Basic Iterative Method (BIM) BIM (b) is a straightforward extension of FGSM where adversarial noise is applied multiple times iteratively with a small step size: X_0^adv = X, X_{N+1}^adv = Clip_{X,ε}{ X_N^adv + α · sign(∇_X J(X_N^adv, y)) }. DeepFool (DF) The DF algorithm computes the optimal adversarial perturbation needed to produce a misclassification. In a binary classifier, the robustness of the model f for an input X_0 is equal to the distance from the input to the hyper-plane that separates both classes. So the minimal perturbation that changes the classifier decision is the orthogonal projection onto that hyper-plane, defined as: δ_*(X_0) = − (f(X_0) / ∥w∥_2^2) · w, where w is the normal of the separating hyper-plane. 3 SUBSET SCANNING FOR ANOMALOUS PATTERN DETECTION Subset scanning treats the pattern detection problem as a search for the "most anomalous" subset of observations in the data. Herein, anomalousness is quantified by a scoring function F(S), which is typically a log-likelihood ratio statistic. Therefore, the goal is to efficiently identify S* = arg max_S F(S) over all relevant subsets of node activations within an autoencoder that is processing an image at runtime. The particular scoring functions F(S) used in this work are covered in the next sub-section. Heuristic alternatives to subset scanning include "top-down" and "bottom-up" methods.
Topdown approaches detect globally interesting patterns and then identify sub-partitions to find smaller anomalous groups of records. These may fail to detect small-scale patterns that are not evident from global aggregate statistics. Similarly, bottom-up approaches that identify individually anomalous data points and aggregates them into clusters may fail when the pattern is only evident by evaluating a group of data points collectively . Treating the detection problem as a subset scan has desirable statistical properties. However, the exhaustive search over groups quickly becomes computationally infeasible due to the exponential number of subsets of records. Fortunately, a large class of scoring functions used in subset scanning satisfy the "Linear Time Subset Scanning" (LTSS) property that allows for exact, efficient maximization over all subsets of data without requiring an exhaustive search . The LTSS property essentially reduces the search space from 2 N to N for a dataset with N records, while guaranteeing that the highest-scoring subset of records is identified. This work uses non-parametric scan statistics (NPSS) that have been used in other pattern detection methods (; ;). Although subset scanning can use parametric scoring functions (i.e. Gaussian, Poisson), the distribution of activations within particular layers are highly skewed and in some cases bi-modal. See Figure 9. Therefore, this work uses non-parametric scan statistics that makes minimal assumptions on the underlying distribution of node activations. The intuition behind the role of non-parametric scan statistics is best explained in a simple example. Consider 100 p-values that are supposed to be uniformly distributed between 0 and 1 under the null hypothesis of no anomaly present in the data. A larger-than-expected activation at a node in a lower p-value for that node. What if we observe 30 (out of 100) p-values all under a threshold value of 0.10? Is that more or less anomalous than finding 20 (out of 100) p-values all under a threshold of 0.075? Non-parametric scan statistics quantify these situations. This same example can be used to highlight why subset scanning is appropriately paired with non-parametric scan statistics. A single p−value of 0.1 is not interesting when viewed by itself. However, if there are 29 other p−values in the same data set that are also 0.1 (or lower), then the observations are now more interesting when considered together, as a group. Subset scanning efficiently identifies the combination of p-values and thresholds in order to maximize the non-parametric scan statistic. There are three steps to appropriately use non-parametric scan statistics on neural network activation data. The first is to form a distribution of "expected" activations at each node. This is done by letting the autoencoder process images that are known to be clean from anomalies (sometimes referred to as "" images) and recording the activations at each node. The second step involves a test image that may be clean or noised and needs to be scored. We record the activations induced by the test image and compare it to the baseline activations created in the first step. This comparison in a p-value at each node. The third step is to quantify the anomalousness of the ing p-values by finding the subset of nodes that maximize the non-parametric scan statistic. We now formalize these three steps. Let there be M images X z included in D H0. 
A test image X_i is now converted to a vector of p-values p_ij of length J = |O|, the number of nodes in the network under consideration. Intuitively, if a test image is "natural" (its activations are drawn from the same distribution as the baseline images) then few of the p-values will be extreme. The key assumption is that under the alternative hypothesis of an anomaly present in the activation data, at least some subset of the activations S_O ⊆ O will systematically appear extreme. We now turn to non-parametric scan statistics to identify and quantify this set of p-values. The general form of the NPSS score function is F(S) = max_α F_α(S) = max_α φ(α, N_α(S), N(S)), where N(S) represents the number of empirical p-values contained in subset S and N_α(S) is the number of p-values less than (significance level) α contained in subset S. Moreover, it has been shown that for a subset S consisting of N(S) empirical p-values, E[N_α(S)] = N(S)α under the null hypothesis. We assume an anomalous process will create some S where the observed significance is higher than expected, N_α(S) > N(S)α, for some α. There are well-known goodness-of-fit statistics that can be utilized in NPSS; the most popular is the Kolmogorov-Smirnov test . Another option is Higher-Criticism . In this work we use the Berk-Jones test statistic : φ_BJ(α, N_α(S), N(S)) = N(S) · KL(N_α(S)/N(S), α), where KL(x, y) = x log(x/y) + (1 − x) log((1 − x)/(1 − y)) is the Kullback-Leibler divergence between the observed and expected proportions of significant p-values. Berk-Jones can be interpreted as the log-likelihood ratio for testing whether the p-values are uniformly distributed on [0, 1] as compared to following a piece-wise constant alternative distribution, and has been shown to fulfill several optimality properties and has greater power than any weighted Kolmogorov statistic. Although NPSS provides a means to evaluate the anomalousness of a subset of node activations S_O, discovering which of the 2^J possible subsets provides the most evidence of an anomalous pattern is computationally infeasible for moderately sized data sets. However, NPSS has been shown to satisfy the linear-time subset scanning (LTSS) property , which allows for an efficient and exact maximization over subsets of data. The LTSS property uses a priority function G(O_j) to rank nodes and then proves that the highest-scoring subset consists of the "top-k" priority nodes for some k in 1 ... J. The priority of a node for NPSS is the proportion of p-values that are less than α. However, because we are scoring a single image and there is only one p-value at each node, the priority of a node is either 1 (when the p-value is less than α) or 0 (otherwise). Therefore, for a fixed, given α threshold, the most anomalous subset is all and only the nodes with p-values less than α. In order to maximize the scoring function over α, we first sort the O_j nodes by their p-values. Let S^(k) be the subset containing the k nodes with the smallest p-values, and let α_k be the largest p-value among these k nodes. The LTSS property guarantees that the highest-scoring subset (over all α thresholds) will be one of these J subsets S^(1), S^(2), ..., S^(J) with their corresponding α_k thresholds. Any subset of nodes that does not take this form (or uses an alternate α_k) is provably sub-optimal and not considered. Critically, this drastically reduced search space still guarantees identifying the highest-scoring subset of nodes for a test image under evaluation. Figure 2 shows how the optimal α threshold (and subset size) can vary for different test images under consideration. The leftmost panel shows the distributions of the size of the most anomalous subset of nodes in both clean and noised images.
We note that noised images tend to return a larger subset of nodes than clean images. The middle panel shows the optimal α threshold value that maximized the non-parametric scan statistic for clean and noised images. We note that noised images tend to have lower thresholds than clean images. When an image induces a larger number of smaller p-values, the ing score of the image is higher. This is demonstrated in the right-most panel where noised test images have higher scores than clean test images. A conventional autoencoder learns the underlying manifold of the training data, which is used to reconstruct the input (x) as the output (x). The general architecture of any autoencoder involves an encoder and a decoder. The encoder (e : X → Z) is composed of one or more layers that perform nonlinear dimensionality reduction from the high dimensional input space into a low-dimensional latent representation (z = e(x)), while the decoder (d : Z → X) reconstructs the original sample from the latent representation. Both functions compute x = d(e(x)). The autoencoder is optimized by minimizing the reconstruction error L(x, x). Anomalous pattern detection can be performed on a trained autoencoder, by looking at the distributions of mean reconstruction error L(w, d(e(w))) when w ∈ X clean and L(w, d(e(w))) when w ∈ X adv. Due to the inherent properties of the autoencoder for anomaly detection, we propose two experiments or applications of subset scanning. First, we are interested in subset scanning scores distributions along the layers of the encoder. During the untangling phase (z = e(x)) of information reduction from the input space to the latent representation (z), we want to observe until which layer we're able to discriminate the input (clean and noised) to the distribution learnt by the autoencoder. Second, we apply subset scanning methods on the reconstruction error space, to understand if reconstruction error criterion suffices for detection in training autoencoder based anomaly detectors. The connection between the number of nodes in a subset, α value that maximizes the non-parametric scan statistic, and the ing subset score. These are for Fashion-MNIST examples with activations coming from the first layer of the autoencoder. Under the presence of BIM adversarial noise, we observe a larger number of nodes that have smaller p-values. This combination in a higher subset score than the clean images. Critically, the LTSS property allows α to be efficiently chosen to maximize the score for each individual image. The subset size is all nodes with p-values less than the α threshold. We enforce a α max = 0.5 constraint on the search. In this section, we describe the baselines methods used as comparison, as well as the datasets, evaluation metric, adversarial noise generation and autoencoder architecture we used. For generating the attacks a standard CNN model was trained for both datasets. The test accuracies for these models are 0.992 for MNIST and 0.921 for Fashion-MNIST. We trained an autoencoder network on MNIST and Fashion-MNIST (detailed in Section 5.1). The architecture of the autoencoder is depicted in Figure 8, and further details on the training setup can be found in Appendix A.4. The test reconstruction error of the model was 0.284 for Fashion-MNIST and 0.095 for MNIST. In real-world applications, clean training datasets cannot always be guaranteed due to factors such as human annotation errors , and poisoning techniques (b). 
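The following NumPy sketch spells out this efficient maximization for a single test image whose node p-values are already computed. The names (berk_jones_score, ltss_scan) are illustrative, the score is the simplified single-image form in which the prefix subset S^(k) has N(S) = N_α(S) = k, and the α_max = 0.5 constraint is handled as in the caption above; this is a reading of the described procedure, not the authors' released implementation.

```python
import numpy as np

def kl_bernoulli(x, y):
    """KL divergence between Bernoulli(x) and Bernoulli(y), with 0*log(0) = 0."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * np.log(a / b)
    return term(x, y) + term(1.0 - x, 1.0 - y)

def berk_jones_score(n_alpha, n, alpha):
    """phi_BJ(alpha, N_alpha, N) = N * KL(N_alpha/N, alpha), zero when the
    observed proportion does not exceed the expected proportion alpha."""
    if n_alpha <= n * alpha:
        return 0.0
    return n * kl_bernoulli(n_alpha / n, alpha)

def ltss_scan(pvalues, alpha_max=0.5):
    """Return the highest Berk-Jones score and the maximizing subset of nodes.

    By the LTSS property it suffices to sort nodes by p-value and evaluate only
    the prefix subsets S^(k), using alpha_k = largest p-value in the prefix."""
    order = np.argsort(pvalues)
    sorted_p = pvalues[order]
    best_score, best_k = 0.0, 0
    for k in range(1, len(sorted_p) + 1):
        alpha_k = sorted_p[k - 1]
        if alpha_k > alpha_max:        # enforce the alpha_max = 0.5 constraint
            break
        # Within the prefix, all k p-values fall at or below alpha_k.
        score = berk_jones_score(n_alpha=k, n=k, alpha=alpha_k)
        if score > best_score:
            best_score, best_k = score, k
    return best_score, order[:best_k]

# Toy usage: 6 node p-values; the small ones should drive the score.
print(ltss_scan(np.array([0.01, 0.02, 0.40, 0.55, 0.70, 0.90])))
```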
Consequently, we trained the autoencoder with different levels of data poisoning. We trained autoencoders with 100% clean samples, 1% adversarial samples, and 9% adversarial samples. For this experiment, we used BIM as the attack and Fashion-MNIST as the dataset. We evaluated subset scanning over two experiments. First, we applied our subset scanning method on the reconstruction error calculated over the input data and the last layer of the autoencoder (conv2d 7). This layer has 1 filter containing 784 nodes. For more information, refer to Section 4. Second, we studied subset scanning patterns across adversarial attacks and datasets, to see if there are common subset scanning behaviors. For this, we applied subset scanning across all layers of the autoencoder (convolutional, max pooling and up-sampling) and analyzed the detection power in each case. For our adversarial experiments, we took M = |D_H0| = 7000 of the 10000 validation images and used them to generate the activation distribution (D_H0) at each of the 784 nodes (28×28) for the reconstruction error space and at the activation nodes of each inner layer. These 7000 images were not used again. These images form our expectation of "normal" activation behavior for the network. The remaining 3000 images were used to form a "Clean" (C = 1500) sample and an "Adversarial" (A_t = 1500) noised sample. For the experiments we only kept the successful attacks for DF, FGSM and BIM, so we only preserve noised samples that were incorrectly classified by the model. We evaluated anomaly detection with subset scanning on the classical MNIST dataset and the more complex Fashion-MNIST dataset . We present a quick overview of both datasets: • MNIST : The training set has 60000 images and the test set has 10000 images of handwritten digits. Each digit has been normalized and centered to 28 × 28. • Fashion-MNIST : a relatively new dataset comprising 28 × 28 grayscale images of 70,000 fashion products from 10 categories, with 7000 images per category. The training set has 60000 images and the test set has 10000 images. As an alternative to MNIST, it has the same image size, data format and validation splits, with the digits from MNIST replaced with 10 products of clothes and accessories. Several adversarial attacks for the subset scanning experiments were implemented, briefly introduced in Section 2.2. Specifically, we describe in this section the hyperparameter selection for the Basic Iterative Method (BIM) adversarial attack (b), the Fast Gradient Sign Method (FGSM) and DeepFool (DF) . BIM and FGSM have an ε parameter which controls how far a pixel is allowed to change from its original value when noise is added to the image. We used a value of ε = 0.01 in the scaled pixel space. We also allowed the method to reach its final noised state over 100 steps, each of size 0.002. Smaller values of ε make the pattern subtler and harder to detect, but also less likely for the attacks to succeed in changing the class label to the target. For DeepFool, we used a standard ε = 1e−06 and 100 iterations. Examples of generated adversarial samples for both datasets are depicted in Figure 7. All untargeted attacks were generated with the Adversarial Robustness Toolbox 1. The set A_t only contains images that were successfully noised by each type of adversarial attack. This means that those samples were misclassified from an original predicted label. The 1500 images in group C are natural and have all class labels represented equally.
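For reference, here is a compact PyTorch sketch of the FGSM and BIM updates with these hyperparameters; it assumes a differentiable classifier and a cross-entropy-style loss, and the function names and the [0, 1] pixel clipping are illustrative rather than the exact Adversarial Robustness Toolbox calls used in the experiments.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step attack: x_adv = x + eps * sign(grad_x J(x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def bim(model, loss_fn, x, y, eps, alpha, steps):
    """Iterative FGSM: repeated small steps, projected back into the
    eps-ball around the original image and the valid pixel range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# The settings above would correspond to eps=0.01, alpha=0.002, steps=100.
```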
We adopted the following metric to measure the effectiveness of subset scanning over an autoencoder to distinguishing different types of adversarial attacks images under the activation and reconstruction error space. The Detection Power is measured by AUROC, the Area Under the Receiver Operating Characteristic curve, which is also a threshold independent metric . The ROC curve depicts the relationship between true positive rate (TPR) and false positive rate (FPR). Results shown in Figure 6 for reconstruction error and activations space in the first convolutional layer for Figure 3. Table 1: Detection power for individual subset scanning over all layers (convolutional, max pooling and up-sampling) for both datasets under three different adversarial attacks. The noised columns refer to the autoencoder being trained with 1% and 9% BIM noised samples. Under different datasets and attacks, the same initial layers hold the highest detection power. In Table 1, we can observe that across different datasets, noise attacks models, and two proportion of noised samples during training, the first layers (conv 2d 1 and max pooling 2d 1) maintain a high performance regarding detection power (between 0.96 to 1.0 depending on dataset and noise attack). The ROC curves and subset scores distribution for the BIM and FGSM attacks under Fashion-MNIST for the layer conv 2d 1 are shown in Figure 3. Furthermore, Table 1 shows that in the cases where 1% and 9% of the samples are noised during training stage of the autoencoder, the detection power of subset scanning still performs correctly, above 0.82. Table 2 shows the behavior of subset scanning over the reconstruction error space and the detection power in detail for both datasets and different adversarial attacks. We can observe a difference of performance of our method over the Fashion-MNIST dataset. One hypothesis would be that this is due to the autoencoder performance (Loss for Fashion-MNIST 0.284 and MNIST 0.095). To test this idea, we performed preliminary experiments that show a relationship between the decrease in the loss of the trained autoencoder and the increase in the detection power of subset scanning methods under the reconstruction error space. A poorly-trained autoencoder will have a higher loss, while a well-trained autoencoder will have a lower loss. If an autoencoder's loss is high, it is more difficult to separate between clean and noised samples in the reconstruction space. Nonetheless, subset scanning has higher detection power than Mean Reconstruction Error distributions under clean and noise samples (see Figure 4) and Unsupervised outlier detection methods such as One-SVM (Schölkopf et al., 2001). Furthermore, subset scanning under the reconstruction error space is an interesting technique to explore and introspect what nodes or portions of the input image look anomalous. With this information we can not only point out which image looks anomalous, but also indicate which nodes make the input a noised sample, an example of this is depicted in Figure 5. Table 2: Detection power for individual subset scanning over reconstruction error space for both dataset under three different adversarial attacks, two baselines for reconstruction error over AE and One-SVM over reconstruction error of the AE (Schölkopf et al., 2001). In this work, we proposed a novel unsupervised method for adversarial noise detection with off-theshelf autoencoders and subset scanning. 
We have successfully demonstrated how subset scanning can be used to gain detection strength against multiple adversarial attacks on images across several datasets, without requiring any retraining or complex deep autoencoder network structures. Furthermore, we tested subset scanning over the reconstruction error space and observed significant variations depending on the dataset, autoencoder architecture, and training setup. We performed Figure 5: Anomalous nodes visualization. Overlap of anomalous nodes (white) and reconstruction error (darker blue) per sample. (a) Noised samples with BIM. We can observe that nodes outside the contour will make the sample be classified as noised. (b) Whereas clean we expect the anomalous nodes will be along the contour of the figure. preliminary experiments that yielded a relation between a decrease in the loss of the trained autoencoder and an increase in the detection power of subset scanning under the reconstruction error space. Nonetheless, applying our method under this space provides introspection capabilities that allow us to identify the nodes or portions of the input image look anomalous. Consequently, we are able to not only point out which image looks anomalous but also characterize the nodes that make the input a noised sample. We also evaluated the performance of applying subset scanning over the autoencoder's activations. We observed a consistent and high detection power across noise attacks, datasets, autoencoders architectures and different noised training levels in the initial layers (Convolutional and MaxPooling layers). Due to versatile properties of subset scanning under neural network activation analysis it may be used for several other studies, including unsupervised classification in the latent space of an autoencoder. We would expect that same class images will identify as a subset of inputs (images) that have higher-than-expected activations (i.e. large number of low empirical p−values) at a subset of nodes. Subset scanning applied to autoencoders activations is a novel, unsupervised anomaly detector that can be applied to any pre-trained, off-the-shelf neural network, previously only used in classifier neural networks such as CNNs and ResNet . A.1 ALGORITHM FOR SUBSET SCANNING OVER AUTOENCODER ACTIVATIONS input: Background set of images: X z ∈ D H0, evaluation image: X i, α max. output: S * E Score for the evaluation image AE ← TrainNetwork (training dataset); AE y ← Some flattened layer of AE; *, α *, and F (S *) Algorithm 1: Pseudo-code for subset scanning over autoencoder activations. In Figure 6, we can observe the distribution of subset scores for test sets of images over reconstruction error. Test sets containing all natural images had lower scores than test sets containing noised images (FGSM and BIM generated samples). Higher proportion of noised images ed in higher scores. Figure 6 also shows the ROC curves for each of the noised cases as compared to the scores from test sets containing all natural images., each with relu activations, and a maxpooling layer with a pool size of two after every convolutional layer. The decoder comprises four convolutional layers with 8, 8, 16, 1 filters respectively, a kernel size of three, each with relu activations except the final layer which uses a sigmoid. Each consecutive pair of convolutional layer is interspersed with an upsampling layer with a size of two. 
We train the autoencoder by minimizing the binary cross-entropy of the decoder output and the original input image using an adadelta optimizer (citep) for 100 epochs taking 128 records per batch. Although subset scanning can use parametric scoring functions (i.e. Gaussian, Poisson), the distribution of activations within particular layers are highly skewed and in some cases bi-modal. See Figure 9. Therefore, this work uses non-parametric scan statistics that makes minimal assumptions on the underlying distribution of node activations. Furthermore we only consider 1-tailed p-values (in the greater direction). This is due to nuances of the ReLu activation function. Alternative activation functions such as tanh and signmoid would allow an "extreme" activation to be considered as either larger or smaller than expected with a pvalue coming from a 2-tailed calculation. A.6 NON-PARAMETRIC SCAN STATISTICS NPSS can be viewed as a second-order test statistic that operate on (by aggregating information across) p-values (i.e., the first order test statistics) to evaluate the the evidence for violations of H 0 in a given subset S. NPSS is operationalized with a given score (test) function; each test is powered for different alternatives, and therefore, NPSS's detection power is linked to preferences of the selected score function. This work used the Berk-Jones scoring function . Where KL is the Kullback-Liebler divergence KL(x, y) = x log to following a piece-wise constant alternative distribution, and has been shown to fulfill several optimality properties. A more commonly known scoring function that also satisfies the LTSS property is the KolmogoorvSmirnov test statistic which is known to be more sensitive to deviations in the center of a distribution. https://www. jstor.org/stable/2958837 and http://www.jstor.org/stable/2958836. Another test is: which can be interpreted as the test statistic of a Wald test for the amount of significant p-values given that N α is binomially distributed with parameters N α and α. Because Higher-Criticism normalizes by the standard-deviation of N α, it tends to be more sensitive to small subsets with very extreme p-values. Most values are accumulated around 0 due to ReLu activations. The large skew and sometimes bi-modal distribution of activations motivated the use of non-parametric scan statistics to quantify what it means for an activation to be larger-than-expected. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1lLvyBtPB | Unsupervised method to detect adversarial samples in autoencoder's activations and reconstruction error space |
Learning knowledge graph embeddings (KGEs) is an efficient approach to knowledge graph completion. Conventional KGEs often suffer from limited knowledge representation, which causes less accuracy especially when training on sparse knowledge graphs. To remedy this, we present Pretrain-KGEs, a training framework for learning better knowledgeable entity and relation embeddings, leveraging the abundant linguistic knowledge from pretrained language models. Specifically, we propose a unified approach in which we first learn entity and relation representations via pretrained language models and use the representations to initialize entity and relation embeddings for training KGE models. Our proposed method is model agnostic in the sense that it can be applied to any variant of KGE models. Experimental show that our method can consistently improve and achieve state-of-the-art performance using different KGE models such as TransE and QuatE, across four benchmark KG datasets in link prediction and triplet classification tasks. Knowledge graphs (KGs) constitute an effective access to world knowledge for a wide variety of NLP tasks, such as question-answering, entity linking and information retrieval. A typical KG such as Freebase and WordNet consists of a set of triplets in the form of (h, r, t) with the head entity h and the tail entity t as nodes and relations r as edges in the graph. A triplet represents the relation between two entities, e.g., (Steve Jobs, founded, Apple Inc.). Despite their effectiveness, KGs in real applications suffer from incompleteness and there have been several attempts for knowledge graph completion among which knowledge graph embedding is one of prominent approaches. Knowledge graph embedding (KGE) models have been designed extensively in recent years (; ; ; ; ; ; ; ;). The general methodology of these models is to model entities and relations in vector spaces based on a score function for triplets (h, r, t). The score function measures the plausibility of each candidate triplet (h, r, t) compared to corrupted false triplets (h, r, t) or (h, r, t). However, traditional KGE models often suffer from limited knowledge representation due to the simply symbolic representation of entities and relations. Some recent works take advantages of both fact triplets and textual description to enrich knowledge representation (a; ; ; ;, but without exploitation of contextual information of the textual descriptions. Moreover, much of this research effort has been dedicated to developing novel architectures for knowledge representation without applications to KGE models. Unlike many existing works which try to propose new architectures for KGEs or knowledge representation, we focus on model-agnostic pretraining technique for KGE models. We present a unified training framework named as PretrainKGEs which consists of three phases: fine-tuning phase, initializing phase and training phase (see Fig. 1). During the fine-tuning phase, we learn better knowledgeable entity and relation representations via pretrained language models using textual descriptions as input sequence. Different from previous works incorporating textual information into knowledge representation, we use pretrained langauge models such as BERT to better understand textual description by making full use of syntactic and semantic information in large- scale corpora on which BERT is pretrained. Thus, we enable to incorporate rich linguistic knowledge learned by BERT into entity and relation representations. 
Then during the initializing phase, we use knowledgeable entity and relation representations to initialize entity and relation embeddings so that the initialized KGEs inherit the rich knowledge. Finally, during the training phase, we train a KGE model the same way as a traditional KGE model to learn entity and relation embeddings. Extensive experiments using six public KGE models across four benchmark KG datasets show that our proposed training framework can consistently improve and achieve state-of-the-art performance in link prediction and triplet classification tasks. Our contributions are as follows: • We propose a model-agnostic training framework for learning knowledge graph embeddings by first learning knowledge representation via pretrained language models. • Results on several benchmark datasets show that our method can improve and achieve state-of-the-art performance over variants of knowledge graph embedding models in link prediction and triplet classification tasks. • Further analysis demonstrates the effects of knowledge incorporation in our method and shows that our Pretrain-KGEs outperforms baselines especially in the case of fewer training triplets, low-frequency and the out-ofknowledge-base (OOKB) entities. 2 Background and Related Work For each head entity h and tail entity t with their corresponding entity embeddings E h, E t, and each relation r with its relation embeddings R r, we formulate KGE models as follows: where v h, v r, v t ∈ F d are the learnt vectors for each head entity, relation, and tail entity respectively, The model is then optimized to calculate a higher score for true triplets than corrupted false ones. According to the score function, KGE models can be roughly divided into translational models and semantic matching models . Translational models popularized by TransE learn vector embeddings of the entities and the relations, and consider the relation between the head and tail entity as a translation between the two entity embeddings, i.e., in the form of v h + v r ≈ v t when the candidate triplet (h, r, t) holds. Since TransE has problems when dealing with 1-to-N, N-to-1 and N-to-N relations, different translational models are proposed subsequently to define various relational patterns, such as TransH , TransR , TransD , RotatE , and TorusE . On the other hand, semantic matching models define a score function to match latent semantics of the head, tail entity and the relation. For instance, RESCAL , DistMult, SimplE , and ComplEx adopt a bilinear approach to model entities and relations for KGEs. Specifically, ComplEx learns complexvalued representations of entities and relations in complex space, while DistMult, SimplE, and RESCAL embed entities and relations in the traditional real number field. The recent state-of-the-art, QuatE represents entities as hypercomplex-valued embeddings and models relations as rotations in the quaternion space. Both translational models and semantic matching models learn entity and relation embeddings in spite of different embedding spaces. However, these KGE models only use structural information observed in triplets without incorporating external knowledge resources into KGEs, such as textual description of entities and relations. Thus, the embeddings of entities and relations suffer from limited knowledge representation. We instead propose a unified approach to introduce rich linguistic knowledge into KGEs via pretrained language models. 
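As a concrete reference point for the two families reviewed above, the snippet below sketches TransE's translational score and DistMult's bilinear score in PyTorch; the embedding dimension, batch size, and norm choice are arbitrary placeholders rather than settings from any of the cited papers.

```python
import torch

def transe_score(v_h, v_r, v_t, p=1):
    """Translational score: a small ||v_h + v_r - v_t|| indicates a plausible triplet."""
    return torch.norm(v_h + v_r - v_t, p=p, dim=-1)

def distmult_score(v_h, v_r, v_t):
    """Semantic matching score: the trilinear product <v_h, v_r, v_t>."""
    return torch.sum(v_h * v_r * v_t, dim=-1)

# Toy usage with a batch of two triplets in a 50-dimensional real embedding space.
d = 50
v_h, v_r, v_t = (torch.randn(2, d) for _ in range(3))
print(transe_score(v_h, v_r, v_t))
print(distmult_score(v_h, v_r, v_t))
```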
In a knowledge graph dataset, names of each entity and relation are provided as textual description of entities and relations. Socher et al. (2013a) first utilize textual information to represent entities by averaging word embeddings of entity names. Following the word averaging method, improve the coverage of commonsense resources in ConceptNet by mining candidate triplets from Wikipedia. They leverage a word averaging model to convert entity and relation names into name vectors. Other recent works also leverage textual description to enrich knowledge representation but ignore contextual information of the textual descriptions (a; ; ; ; . Instead, our method exploits rich contextual information via pretrained models. Recent approaches to modeling language representations offer significant improvements over embeddings, especially pretrained deep contextualized lanaguge representation models such as ELMo , BERT , GPT-2 , and T5 . These deep language models learn better contextualized word presentations, since they are pretrained on large-scale free text data, which make full use of syntactic and semantic information in the large corpora. In this work, we use BERT, a bidirectional Transformer encoder to learn entity and relation representation given textual description. Therefore, by incorporating the plentiful linguistic knowledge learned by pretrained language models, our proposed method can learn better knowledgeable entity and relation representations for subsequent KGE learning. In this section, we will introduce our unified training framework Pretrain-KGEs and provide details of learning knowledgeable entity and relation representations via BERT. An overview of Pretrain-KGEs is shown in Fig. 1. The framework consists of three phases: finetuning phase, initializing phase, and training phase. Our major contribution is the fine-tuning phase with the initializing phase, which incorporates rich knowledge into KGEs via pretained language models, i.e., BERT that enables to exploit contextual information of textual description for entities and relations. By initializing embeddings with knowledgeable entity and relation representations, our training framework improves KGE models to learn better entity and relation embeddings. Fine-tuning Phase Given textual description of entities and relations such as entity names and relation names, we first encode the textual descriptions into vectors via pretrained language models to represent entities and relations respectively. We then project the entity and relation representations into two separate vector spaces to get the entity encoder Enc e (·) for each entity e and the relation encoder Enc r (·) for each relation r. Formally, Enc e (·) and Enc r (·) output entity and relation representations as: where v h, v r, and v t represents encoding vectors of the head entity, the relation, and the tail entity in a triplet (h, r, t) respectively. For details of Enc e (·) and Enc r (·), see section 3.2. Given the entity and relation representations, we then calculate the score of a triplet to measure its plausibility in Eq. 2. For instance, if TransE is adopted, the score function is v h + v r − v t. After fine-tuning, the knowledge representation is used in the following initializing phase. Initializing Phase Given the knowledgeable entity and relation representation, we initialize entity embeddings E and relation embeddings R for a KGE model instead of random initialization. 
Specifically, E = [E_1; E_2; ···; E_k] ∈ F^{k×d} and R = [R_1; R_2; ···; R_l] ∈ F^{l×d}, in which ";" denotes concatenating column vectors into a matrix, k and l denote the total number of entities and relations respectively, F satisfies R ⊆ F, and d denotes the embedding dimension. Then E_i ∈ F^d represents the embedding of the entity with index i and R_j ∈ F^d represents the embedding of the relation with index j. During the initializing phase, we use the representation vector of the entity with index i encoded by the entity encoder Enc_e(·) as the initialized embedding E_i for training KGE models to learn entity embeddings. Likewise, the representation vector of the relation with index j encoded by the relation encoder Enc_r(·) is used as the initialized embedding R_j for training KGE models to learn relation embeddings. Training Phase After initializing entity and relation embeddings with knowledgeable entity and relation representations, we train a KGE model in the same way as a traditional KGE model. We calculate the score of each training triplet in Eq. 1 and Eq. 2 with the same score function as in the fine-tuning phase. Finally, we optimize the entity embeddings E and the relation embeddings R using the same loss function as the corresponding KGE model. For example, if TransE and the max-margin loss function with negative sampling are adopted, the loss in the training phase is calculated as L = Σ_{(h,r,t)} Σ_{(h′,r,t′)} [γ + f(h, r, t) − f(h′, r, t′)]_+, where (h, r, t) and (h′, r, t′) represent a candidate and a corrupted false triplet respectively, γ denotes the margin, [·]_+ = max(·, 0), and f(·) denotes the score function of TransE . To learn better knowledge representations of entities and relations given textual descriptions, we first encode the textual description through Bert , a bidirectional Transformer encoder which is pretrained on large-scale corpora and thus learns rich contextual information of texts by making full use of syntactic and semantic information in the large corpora. We define T(e) and T(r) as the textual description of entities and relations respectively. The textual description can be words, phrases, or sentences providing information about entities and relations, such as names of entities and relations or definitions of word senses. For example, the definition of entity e = Nyala.n.1 in WordNet is city in Sudan. Then T(Nyala.n.1) = Nyala: city in Sudan. Given the textual descriptions of entities and relations T(e) and T(r), Bert(·) converts T(e) and T(r) into an entity representation and a relation representation respectively in a vector space R^n (n denotes the vector size). We then project the entity and relation representations into two separate vector spaces F^d through linear transformations. Formally, we get the entity encoder Enc_e(·) for each entity e and the relation encoder Enc_r(·) for each relation r as: Enc_e(e) = σ(W_e Bert(T(e)) + b_e) and Enc_r(r) = σ(W_r Bert(T(r)) + b_r), where W_e, W_r ∈ F^{d×n}, b_e, b_r ∈ F^d, and σ(·) is a non-linear pointwise function (detailed in Appendix A). The entity and relation representations encoded by Enc_e(·) and Enc_r(·) are then used to initialize entity and relation embeddings for a KGE model. We evaluate our proposed training framework on four benchmark KG datasets, including WN18. In our experiments, we mainly perform the link prediction task (filtered setting), along with the triplet classification task. The link prediction task aims to predict either the head entity h given the relation r and the tail entity t, or the tail entity given the head entity and the relation, while triplet classification aims to judge whether a candidate triplet is correct or not.
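A minimal sketch of the fine-tuning and initializing phases is given below, built on the HuggingFace transformers library. The [CLS] pooling, the tanh non-linearity standing in for σ(·), the embedding dimension, and the example descriptions are all assumptions made for illustration; they are not claimed to match the authors' exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DescriptionEncoder(nn.Module):
    """Sketch of Enc_e / Enc_r: BERT over a textual description, followed by a
    linear projection and a pointwise non-linearity into the d-dimensional space."""

    def __init__(self, dim, bert_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(bert_name)
        self.bert = AutoModel.from_pretrained(bert_name)
        self.proj = nn.Linear(self.bert.config.hidden_size, dim)

    def forward(self, descriptions):
        batch = self.tokenizer(descriptions, padding=True, truncation=True,
                               return_tensors="pt")
        cls = self.bert(**batch).last_hidden_state[:, 0]  # Bert(T(.)) taken as the [CLS] vector
        return torch.tanh(self.proj(cls))                 # sigma(W Bert(T(.)) + b)

# Initializing phase: copy the fine-tuned representations into the KGE embedding table.
entity_encoder = DescriptionEncoder(dim=200)
entity_texts = ["Nyala: city in Sudan", "Khartoum: the capital of Sudan"]
with torch.no_grad():
    init_vectors = entity_encoder(entity_texts)
entity_embeddings = nn.Embedding.from_pretrained(init_vectors, freeze=False)
```

After this initialization, training proceeds exactly as in a conventional KGE model, so no change to the downstream loss or negative sampling code is required.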
For the link prediction task, we generate corrupted false triplets (h, r, t) and (h, r, t) using negative sampling. For n test triplets, we get their ranks r = (r 1, r 2, · · ·, r n) and calculate standard evaluation metrics: Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hits at N (H@N). For triplet classification, we follow the evaluation protocol in Socher et al. (2013b) and adopt the accuracy metric (Acc) to evaluate our training method. To evaluate the universality of our training framework Pretrain-KGEs, we select multiple public KGE models as baselines including translational models: • TransE , the translationalbased model which models the relation as translations between entities; 2 Detailed statistics of datasets are in Appendix. A. Table 3: Link prediction and Triplet classification ("Class") using QuatE. "Name" means using names of entities and relations as textual description. "Definition" means using names of entities and relations as well as definitions of word senses as textual description. • RotatE , the extension of translational-based models which introduces complex-valued embeddings to model the relations as rotations in complex vector space; • pRotatE , a variant of RotatE where the modulus of complex entity embeddings are constrained and only phase information is involved; and semantic matching models: • DistMult, a semantic matching model where each relation is represented with a diagonal matrix; • ComplEx , the extension of semantic matching model which embedds entities and relations in complex space. • QuatE , the recent stateof-the-art KGE model which learns entity and relation embeddings in the quaternion space. We present for the Pretrain-KGEs algorithm in Table 1, Table 2 and Table 3. Table 1 shows the link prediction on four benchmark KG datasets using six public KGE models. Table 2 compares the on WordNet of using entity names and relation names to the of adding definitions of word senses as additional textual description for entities. Table 3 demonstrates the state-of-the-art performance of our proposed method in both link prediction and triplet classification tasks 3. From the , we can observe that: Our unified training framework can be applied to multiple variants of KGE models in spite of different embedding spaces, and achieves improvements over TransE, DistMult, ComplEx, RotatE, pRotatE and QuatE on most evaluation metrics, especially on MR but still being competitive on MRR (see detailed analysis of MR and MRR in section 5.2.1). Yet, it verifies the universality of our training framework. The reason is that our method incorporates rich linguistic knowledge into entity and relation representation via pretrained language models to learn better knowledgeable representation for the embedding initialization in KGE models. For the effects of knowledge incorporation, see detailed analysis in section 5.2. Our training framework can also facilitate in improving the recent state-of-the-art even further over QuatE on most evaluation metrics in link prediction and triplet classification tasks. It verifies the effectiveness of our proposed training framework. In this section, we provide further analysis of Pretrain-KGEs' performance in the case of fewer training triplets, low-frequency entities and the out-of-knowledge-base (OOKB) entities which are particularly hard to handle due to lack of knowledge representation. 
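For clarity, the evaluation metrics referenced in the tables can be computed from the filtered ranks with a few lines of NumPy; the function name and the toy ranks below are illustrative only.

```python
import numpy as np

def link_prediction_metrics(ranks, hits_at=(1, 3, 10)):
    """Compute MR, MRR and Hits@N from the ranks of the true entities."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    for n in hits_at:
        metrics[f"H@{n}"] = float((ranks <= n).mean())
    return metrics

# Toy usage: ranks of the correct entity for five test triplets.
print(link_prediction_metrics([1, 3, 2, 120, 7]))
```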
We also evaluate the effects of knowledge incorporation into entity and relation embeddings by demonstrating the sensitivity of MR and MRR metrics and visualizing the process of knowledge incorporation. We also evaluate our training framework in the case of fewer training triplets on WordNet and test its performance on entities of varying frequency in test triplets on FB15K as well as the performance on the OOKB entities in test triplets on WordNet as shown in Fig. 2a-2e. To test the performance of our training framework given fewer training triplets, we conduct experiments on WN18 and WN18RR by feeding varying number of training triplets to a KGE model. We use traditional TransE as one of the baselines. Baseline-TransE does not utilize any textual description and randomly initializes entity and relation embeddings before the training phase. Thus, it suffers from learn knowledgeable KGEs when training triplets become fewer. In contrast, our Pretrain-TransE first learns knowledgeable entity and relation representations by encoding textual description through BERT, and uses the learned representations to initialize KGEs for TransE. In this way, we enable to incorporate rich linguistic knowledge from BERT into initizalized entity and relation embeddings so that TransE can perform better given fewer training triplets. On the other hand, to verify the effectiveness of BERT during the fine-tuning phase, we also set the word averaging model following to be the entity encoder Enc e (·) in Eq. 3 for comparison 4. From the , we can observe that although the word averaging model contributes to better performance of TransE on fewer training triplets compared to Baseline-TransE, it does not learn knowledgeable entity and relation representations as well as BERT because BERT can better understand textual descriptions of entities and relations by exploiting rich contextual information of the textual descriptions. Moreover, by utilizing definitions of word senses as additional textual description of entities, the show that our training method achieves the best performance in the case of fewer training triplets. Besides, we also evaluate our training framework for its performance on entities of varying frequency in training triplets on FB15K. From the in Fig. 2c, we can observe that our training framework outperforms Baseline-TransE especially on infrequent entities. The reason is that traditional TransE method cannot learn good representation of infrequent entities due to inadquate dataset information and lack of textual description of entities. When training triplets becomes fewer, there can be increasing OOKB entities in test triplets not observed at training time. Traditional training method of KGE models cannot address the OOKB entity problem since it randomly gives scores of test triplets containing OOKB entities due to random initialization of entity embeddings before training. In contrast, our training method initializes entity embeddings with knowledgeable entity representation. Thus, we also evaluate our training method in the case of OOKB entities. From the in Fig. 
2d-2e, we can observe that our training framework can solve the OOKB entity problem on the WordNet dataset and performs best when using BERT to encode the textual description of entities and relations, including their names and the definitions of word senses. (Figure 2 caption: In (a)-(e), "TransE" means the TransE baseline with random initialization; "Avg" means a word-averaging model using entity names and definitions provided in WordNet as textual description; "Name" refers to our proposed Pretrain-TransE method using entity names and relation names as textual description; "Definition" refers to our proposed Pretrain-TransE method using names of entities and relations as well as definitions in WordNet as textual description. In (d)-(e), "Random" means randomly scoring triplets. In (f), "1"-"5" denote 10000-50000 updates during the training phase.) Our training framework has a natural advantage over the traditional training method of KGE models since we learn more knowledgeable entity and relation representations via BERT before training a KGE model. This section verifies the effectiveness of knowledge incorporation during the fine-tuning phase. We show the performance of Baseline-TransE and Pretrain-TransE on WN18RR as the number of iterations increases during the training phase in Fig. 2f, and we analyze the changing trend of MR and MRR in Theorem 1. Formally, for n test triplets we get the corresponding ranks in the link prediction task r = (r_1, r_2, ..., r_n), with MR(r) = (1/n) ∑_{i=1}^{n} r_i and MRR(r) = (1/n) ∑_{i=1}^{n} 1/r_i. Theorem 1 (Sensitivity of the MR and MRR metrics). MR is more sensitive to tricky triplets than MRR. Formally, for r = (r_1, r_2, ..., r_n) and r_i > r_j (triplet i is worse-learnt than triplet j), |MR_i(r)| / |MR_j(r)| > |MRR_i(r)| / |MRR_j(r)|, where f_k(r) denotes ∂f/∂r_k (f ∈ {MR, MRR}) and measures the sensitivity of metric f to triplet k. In Figure 2c, we can observe that there is better performance on high-frequency triplets than on low-frequency ones, which are more tricky to handle since there is less information in the datasets for low-frequency triplets. According to Theorem 1, we can thus suggest that MR is more sensitive to low-frequency triplets while MRR is more sensitive to high-frequency triplets. Reasons for the increasing MR of Pretrain-TransE in Fig. 2f are illustrated in the following. We visualize the knowledge learning process of Baseline-TransE and our Pretrain-TransE in Fig. 3a-3c. We select the five most common supersenses in WN18: plant, animal, act, person and artifact, among which the last three supersenses are all relevant to the concept of human beings and thus can be considered to constitute one common supersense. In Fig. 3a, we can observe that Baseline-TransE learns entity and relation embeddings for triplets containing the five supersenses but does not distinguish the embeddings of plant, animal and the other three supersenses. In contrast, Fig. 3b shows that our Pretrain-TransE can further distinguish embeddings between different supersenses, especially separating the supersenses related to human beings from the others. The main reason is that we can learn more knowledgeable entity and relation representations via BERT by incorporating rich linguistic knowledge into the entity and relation embeddings during the initializing phase. However, during the training phase, our Pretrain-TransE gradually learns KGEs that differ from those of the initializing phase. Fig. 3c shows that this is due to the gradual forgetting of part of the linguistic knowledge incorporated into the entity and relation embeddings as the KGEs learn more of the information contained in the datasets at training time.
This process can account for the increasing MR of Pretrain-TransE during the training phase in Fig. 2f. But the absolute values of MR and MRR for our Pretrain-TransE are clearly lower than those for the TransE baseline, which demonstrates that our training framework enables us to learn more knowledgeable entity and relation representations and that incorporated knowledge still remains in the entity and relation embeddings during the training phase. To conclude, during the training phase the TransE baseline learns only the original knowledge contained in the datasets. Instead, our proposed method first learns rich linguistic knowledge from BERT, and then continues to learn knowledge from the datasets while losing part of the knowledge learned from BERT. Yet finally, knowledge from BERT still remains incorporated in the entity and relation embeddings during the training phase. We present Pretrain-KGEs, a simple and efficient pretraining technique for knowledge graph embedding models. Pretrain-KGEs is a general technique that can be applied to any KGE model. It helps learn more knowledgeable entity and relation representations from pretrained language models, which are leveraged during the initializing and training phases for a KGE model to learn entity and relation embeddings. Through extensive experiments, we demonstrate state-of-the-art performance using this effective pretraining technique on various benchmark datasets. Further, we verify the effectiveness of our method by demonstrating promising results in the case of fewer training triplets, infrequent entities, and OOKB entities, which are particularly hard to handle due to lack of knowledge representation. We finally analyze the effects of knowledge incorporation by demonstrating the sensitivity of the MR and MRR metrics and by visualizing the process of knowledge incorporation. A Detailed Implementation A.1 Implementation Our implementations of TransE, DistMult, ComplEx, RotatE, and pRotatE are based on the framework provided by; our implementation of QuatE is based on the framework provided by. In the fine-tuning phase, we adopt a non-linear pointwise function σ(·), applied component-wise to hypercomplex values x = ∑_i x_i e_i ∈ F (where F can be the real number field R, the complex number field C, or the quaternion number ring H), where x_i ∈ R and e_i is the K-dimensional hypercomplex-valued unit. For instance, when K = 1, F = R; when K = 2, F = C and e_1 = i (the imaginary unit); when K = 4, F = H and e_{1,2,3} = i, j, k (the quaternion units). The score functions of the baselines are listed in Table 4 (an illustrative sketch of some of them is given below).

Table 4: Score functions and the corresponding F of previous work.
  TransE    ‖v_h + v_r − v_t‖                F = R
  DistMult  ⟨v_h, v_r, v_t⟩                  F = R
  ComplEx   Re(⟨v_h, v_r, v̄_t⟩)              F = C
  RotatE    ‖v_h ∘ v_r − v_t‖                F = C
  pRotatE   2C ‖sin((θ_h + θ_r − θ_t)/2)‖    F = C
  QuatE     v_h ⊗ ṽ_r · v_t                  F = H
Here v_h, v_r, v_t denote the head, relation, and tail embeddings respectively; R, C, H denote the real number field, the complex number field, and the quaternion division ring respectively. ‖·‖ denotes the L1 norm, ⟨·, ·, ·⟩ the generalized dot product, Re(·) the real part of a complex number, v̄ the conjugate of a complex vector, ⊗ circular correlation, and ∘ the Hadamard product. C denotes a constraint on the pRotatE model: ‖v_h‖_2 = ‖v_t‖_2 = C. ṽ denotes the normalization operator, and θ_h, θ_r, θ_t denote the phases of the complex vectors v_h, v_r, v_t respectively.

We also implement a word-averaging baseline that utilizes the entity names and entity definitions in WordNet to better represent the entity embeddings.
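To make Table 4 concrete, here is a small illustrative sketch (ours, not the authors' code) of three of the simpler score functions, with random placeholder embeddings; sign and margin conventions vary across implementations.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: L1 distance between the translated head and the tail.
    return np.abs(h + r - t).sum()

def distmult_score(h, r, t):
    # DistMult: generalized (trilinear) dot product with real embeddings.
    return (h * r * t).sum()

def complex_score(h, r, t):
    # ComplEx: real part of the trilinear product with the conjugated tail.
    return np.real(h * r * np.conj(t)).sum()

d = 8
h, r, t = (np.random.randn(d) for _ in range(3))
print(transe_score(h, r, t), distmult_score(h, r, t))
hc, rc, tc = (np.random.randn(d) + 1j * np.random.randn(d) for _ in range(3))
print(complex_score(hc, rc, tc))
```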
Formally, for the word-averaging baseline, consider an entity e and its textual description T(e) = w_1 w_2 ··· w_L, where w_i denotes the i-th token in the sentence T(e); here T(e) combines the entity name and the entity definition from WordNet. The encoder averages the word embeddings, Avg(e) = (1/L) ∑_{i=1}^{L} u_i, where u_i denotes the word embedding of token w_i, which is a trainable, randomly initialized parameter trained in the pretraining phase. We also adopt our three-phase training method to train the word-averaging baseline. Similarly, E = [E_1; E_2; ···; E_k] ∈ F^{k×d} and R = [R_1; R_2; ···; R_l] ∈ F^{l×d} denote the entity and relation embeddings. In the pretraining phase, for head entity h, tail entity t and relation r, the score function is computed with v_h, v_r, v_t = Avg(h), R_r, Avg(t), where R_r denotes the relation embedding of relation r. In the initializing phase, similar to our proposed model, we initialize E_i with Avg(e_i). In the training phase, we optimize E and R with the same training method as the TransE baseline. We evaluate our proposed training framework on four benchmark KG datasets: WN18, WN18RR, FB15K and FB15K-237. Detailed statistics of the datasets are listed in Table 5. The hyper-parameters are listed in Table 6. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJlv-Fz-pS | We propose to learn knowledgeable entity and relation representations from BERT for knowledge graph embeddings.
We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations. There has been much prior work on using neural networks to generalize the contents of a KB (e.g., (; ;) ), typically by constructing low-dimensional embeddings of the entities and relations in the KB, which are then used to score potential triples as plausible or implausible elements of the KB. We consider here the related but different problem of incorporating a symbolic KB into a neural system, so as to inject knowledge from an existing KB directly into a neural model. More precisely, we consider the problem of designing neural KB inference modules that are fully differentiable, so that any loss based on their outputs can be backpropagated to their inputs; accurate, in that they are faithful to the original semantics of the KB; expressive, so they can perform non-trivial inferences; and scalable, so that realistically large KBs can be incorporated into a neural model. To motivate the goal of incorporating a symbolic KB into a neural network, consider the task of learning neural semantic parsers from denotations. Many questions-e.g., what's the most recent movie that Quentin Tarantino directed? or which nearby restaurants have vegetarian entrees and take reservations?-are best answered by knowledge-based question-answering (KBQA) methods, where an answer is found by accessing a KB. Within KBQA, a common approach is neural semantic parsing-i.e., using neural methods to translate a natural-language question into a structured query against the KB (e.g., (; ;) ), which is subsequently executed with a symbolic KB query engine. While this approach can be effective, it requires training data pairing natural-language questions with structured queries, which is difficult to obtain. Hence researchers have also considered learning semantic parsers from denotations (e.g., ), where training data consists of pairs (q, A), where q is a natural-language question and A is the desired answer. Typically A is a set of KB entities-e.g., if q is the first sample question above, A would be 1 the singleton set containing Once Upon a Time in Hollywood. Learning semantic parsers from denotations is difficult because the end-to-end process to be learned includes a non-differentiable operation-i.e., reasoning with the symbolic KB that contains the answers. To circumvent this difficulty, prior systems have used three different approaches. Some have used heuristic search to infer structured queries from denotations (e.g., ): this works in some cases but often an answer could be associated with many possible structured queries, introducing noise. Others have supplemented gradient approaches 1 At the time of this writing. 
x: an entity X: weighted set of entities x: vector encoding X NE: # entities in KB r: an relation R: weighted set of relations r: vector encoding R NR: # relations in KB Mr: matrix for r MR: weighted sum of Mr's, see Eq 1 follow(x, r): see Eq 2 NT: # triples in KB M subj, M obj, M rel: the reified KB, encoded as matrices mapping triple id to subject, object, and relation ids Table 1: Summary of notation used in the paper. (This excludes notation used in defining models for the KB completion and QA tasks of Section 3.) with reinforcement learning (e.g., ). Some systems have also "neuralized" KB reasoning, but to date only over small KBs: this approach is natural when answers are naturally constrained to depend on a small set of facts (e.g., a single table ), but more generally requires coupling a learner with some (non-differentiable) mechanism to retrieve an appropriate small question-dependent subset of the KB (e.g., . In this paper, we introduce a novel scheme for incorporating reasoning on a large question-independent KB into a neural network, by representing a symbolic KB with an encoding called a sparse-matrix reified KB. A sparse-matrix reified KB is very compact, can be distributed across multiple GPUs if necessary, and is well-suited to modern GPU architecture. For KBs with many relations, a reified KB can be up to four orders of magnitude faster than alternative implementations (even alternatives based on sparse-matrix representations), and in our experiments we demonstrate scalability to a KB with over 13 million entities and nearly 44 million facts. This new architectural component leads to radically simpler architectures for neural semantic parsing from denotations-architectures based on a single end-to-end differentiable process, rather than cascades of retrieval and neural processes. We show that very simple instantiations of these architectures are still highly competitive with the state of the art for several benchmark tasks. To our knowledge these models are the first fully end-to-end neural parsers from denotations that have been applied to these benchmark tasks. We also demonstrate that these architectures scale to long chains of reasoning on synthetic tasks, and demonstrate similarly simple architectures for a second task, KB completion. 2 NEURAL REASONING WITH A SYMBOLIC KB 2.1 KBs, entities, and relations. A KB consists of entities and relations. We use x to denote an entity and r to denote a relation. Each entity has an integer index between 1 and N E, where N E is the number of entities in the KB, and we write x i for the entity that has index i. A relation is a set of entity pairs, and represents a relationship between entities: for instance, if x i represents "Quentin Tarantino" and x j represents "Pulp Fiction" then (x i, x j) would be an member of the relation director_of. A relation r is a subset of {1, . . ., N E} × {1, . . ., N E}. Finally a KB consists a set of relations and a set of entities. Weighted sets as "k-hot" vectors. Our differentiable operations are based on weighted sets, where each element x of weighted set X is associated with a non-negative real number. It is convenient to define this weight to be zero for all x ∈ X while for x ∈ X, a weight less than 1 is a confidence that the set contains x, and weights more than 1 make X a multiset. If all elements of X have weight 1, we say X is a hard set. A weighted set X can be encoded as an entity-set vector x ∈ R N E, where the i-th component of x is the weight of x i in X. 
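As a small illustration of this encoding (a sketch of ours, with made-up entity names and weights), a weighted set is simply a vector indexed by entity id:

```python
import numpy as np

# Hypothetical entity index for a 5-entity KB (indices are illustrative only).
entity_index = {"tarantino": 0, "pulp_fiction": 1, "kill_bill": 2, "thurman": 3, "miramax": 4}
N_E = len(entity_index)

def encode_set(weighted_set, index, n):
    """Encode a weighted set of entities as a length-n vector (k-hot if all weights are 1)."""
    x = np.zeros(n)
    for name, w in weighted_set.items():
        x[index[name]] = w
    return x

x_hard = encode_set({"pulp_fiction": 1.0, "kill_bill": 1.0}, entity_index, N_E)  # a hard set
x_soft = encode_set({"pulp_fiction": 0.9, "miramax": 0.2}, entity_index, N_E)    # confidences < 1
print(x_hard, x_soft)
```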
If X is a hard entity set, then this will be a "k-hot" vector, for k = |X|. The set of indices of x with non-zero values is called the support of x. Sets of relations, and relations as matrices Often we would like to reason about sets of relations 2, so we also assume every relation r in a KB is associated with an entity and hence an integer index. We write r k for the relation with index k, and we assume that relation entities are listed first in the index of entities, so the index k for r k is between 1 and N R, where N R is the number of relations in the KB. We use R for a set of relations, e.g., R = {writer_of, director_of} might be such a set, and use r for a vector encoding of a set. A relation r can be encoded as a relation matrix M r ∈ R N E ×N E, where the value for M r [i, j] is (in general) the weight of the assertion r(x i, x j) in the KB. In the experiments of this paper, all KB relations are hard sets, so M r [i, j] ∈ {0, 1}. Sparse vs. dense matrices for relations. Scalably representing a large KB requires careful consideration of the implementation. One important issue is that for all but the smallest KBs, a relation matrix must be implemented using a sparse matrix data structure, as explicitly storing all N 2 E values is impractical. For instance, consider a KB containing 10,000 movie entities and 100,000 person entities. A relationship like writer_of would have only a few tens of thousands of facts, since most movies have only one or two writers, but a dense matrix would have more than 1 billion values. We thus model relations as sparse matrices. Let N r be the number of entity pairs in the relation r. A common sparse matrix data structure is a sparse coordinate pair (COO) encoding: with a COO encoding, each KB fact requires storing only two integers and one float. Our implementations are based on Tensorflow , which offers limited support for sparse matrices. In particular, driven by the limitations of GPU architecture, Tensorflow only supports matrix multiplication between a sparse matrix COO and a dense matrix, but not between two sparse matrices, or between sparse higher-rank tensors and dense tensors. Entity types. It is often possible to easily group entities into disjoint sets by some notion of "type": for example, in a movie domain, all entities might be either of the type "movie", "person", or "movie studio". It is straightforward to extend the formalism above to typed sets of entities, and doing this can lead to some useful optimizations. We use these optimizations below where appropriate: in particular, one can assume that relation-set vectors r are of dimension N R, not N E, in the sections below. The full formal extension of the definitions above to typed entities and relations is given in Appendix A. The relation-set following operation. Note that relations can also be viewed as labeled edges in a knowledge graph, the vertices of which are entities. Following this view, we define the rneighbors of an entity x i to be the set of entities x j that are connected to x i by an edge labeled r, i.e., r-neighbors(x) ≡ {x j : (x i, x j) ∈ r}. Extending this to relation sets, we define Computing the R-neighbors of an entity is a single-step reasoning operation: e.g., the answer to the question q ="what movies were produced or directed by Quentin Tarantino" is precisely the set R-neighbors(X) for R = {producer_of, writer_of} and X = {Quentin_Tarantino}. "Multi-hop" reasoning operations require nested R-neighborhoods, e.g. 
if R = {actor_of} then R-neighbors(R-neighbors(X)) is the set of actors in movies produced or directed by Quentin Tarantino. We would like to approximate the R-neighbors computation with differentiable operations that can be performed on the vectors encoding the sets X and R. Let x encode a weighted set of entities X, and let r encode a weighted set of relations. We first define M_R to be a weighted mixture of the relation matrices for all relations in R, i.e., M_R = ∑_k r[k] · M_{r_k} (Eq 1). We then define the relation-set following operation for x and r as follow(x, r) = x M_R (Eq 2). As we will show below, this differentiable numerical relation-set following operation can be used as a neural component to perform certain types of logical reasoning. In particular, Eq 2 corresponds closely to the logical R-neighborhood operation, as shown by the claim below. Claim: the support of follow(x, r) is exactly the set R-neighbors(X). A proof and the implications of this are discussed in Appendix B. Naive mixing. Each relation matrix M_r can be stored as a sparse COO matrix, so collectively these matrices require space O(N_T). Each triple appears in only one relation, so M_R in Eq 2 is also size O(N_T). Since sparse-sparse matrix multiplication is not supported in Tensorflow, we implement xM_R using dense-sparse multiplication, so x must be a dense vector of size O(N_E), as is the output of relation-set following. Thus the space complexity of follow(x, r) is O(N_T + N_E + N_R), if implemented as suggested by Eq 2. We call this the naive mixing implementation, and its complexity is summarized in Table 2. Because Tensorflow does not support general sparse tensor contractions, it is not always possible to extend sparse-matrix computations to minibatches. Thus we also consider a variant of naive mixing called late mixing, which mixes the output of many single-relation following steps, rather than mixing the KB itself: follow(x, r) = ∑_k r[k] · (x M_k) (Eq 3). Unlike naive mixing, late mixing can be extended easily to minibatches (see Appendix D). Let b be the batch size and X be a minibatch of b examples [x_1; ...; x_b]: then this approach leads to N_R matrices XM_k, each of size O(bN_E). However, they need not all be stored at once, so the space complexity becomes O(bN_E + bN_R + N_T). An additional cost of late mixing is that we must now sum up N_R dense matrices. A reified knowledge base. While semantic parses for natural questions often use small sets of relations (often singleton ones), in learning there is substantial uncertainty about what the members of these small sets should be. Furthermore, realistic wide-coverage KBs have many relations, typically hundreds or thousands. This leads to a situation where, at least during early phases of learning, it is necessary to evaluate the result of mixing very large sets of relations. When many relations are mixed, late mixing becomes quite expensive (as experiments below show). An alternative is to represent each KB assertion r_k(x_i, x_j) as a tuple (i, j, k) where i, j, k are the indices of x_i, x_j, and r_k. There are N_T such triples, so for ℓ = 1, ..., N_T, let (i_ℓ, j_ℓ, k_ℓ) denote the ℓ-th triple. We define these sparse matrices: M_subj[ℓ, i_ℓ] = 1, M_obj[ℓ, j_ℓ] = 1, and M_rel[ℓ, k_ℓ] = 1, with all other entries zero. Conceptually, M_subj maps the index of the ℓ-th triple to its subject entity; M_obj maps it to the object entity; and M_rel maps it to the relation. We can now implement the relation-set following as below, where ⊙ is the Hadamard product: follow(x, r) = (x M_subj^T ⊙ r M_rel^T) M_obj (Eq 4). Notice that x M_subj^T gives the triples with an entity in x as their subject, r M_rel^T gives the triples with a relation in r, and the Hadamard product is the intersection of these. The final multiplication by M_obj finds the object entities of the triples in the intersection.
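The following is a minimal NumPy/SciPy sketch (ours, not the Tensorflow implementation described in the paper) of the naive-mixing and reified-KB versions of relation-set following on a toy KB; the entity and relation names are illustrative.

```python
import numpy as np
import scipy.sparse as sp

# Toy KB with illustrative names; triples are (subject, relation, object) index tuples.
entities = ["tarantino", "pulp_fiction", "kill_bill", "thurman"]
relations = ["director_of", "stars"]
triples = [(0, 0, 1), (0, 0, 2), (1, 1, 3), (2, 1, 3)]
n_e, n_r, n_t = len(entities), len(relations), len(triples)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# --- Naive mixing (Eq 1-2): one sparse matrix per relation, mixed by the weights in r.
M_rel_mats = []
for k in range(n_r):
    rows = [s for (s, rel, o) in triples if rel == k]
    cols = [o for (s, rel, o) in triples if rel == k]
    M_rel_mats.append(
        sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_e, n_e)))

def follow_naive(x, r):
    M_R = sum(M_rel_mats[k] * r[k] for k in range(n_r))  # Eq 1: weighted mixture
    return M_R.T @ x                                      # Eq 2: x M_R, written column-wise

# --- Reified KB (Eq 4): three triple-indexed sparse matrices.
subj, rel, obj = (np.array(col) for col in zip(*triples))
ones, rows = np.ones(n_t), np.arange(n_t)
M_subj = sp.csr_matrix((ones, (rows, subj)), shape=(n_t, n_e))
M_rel = sp.csr_matrix((ones, (rows, rel)), shape=(n_t, n_r))
M_obj = sp.csr_matrix((ones, (rows, obj)), shape=(n_t, n_e))

def follow_reified(x, r):
    t = (M_subj @ x) * (M_rel @ r)  # select triples whose subject is in x and relation in r
    return M_obj.T @ t              # map the selected triples to their object entities

x = one_hot(0, n_e)   # {tarantino}
r = one_hot(0, n_r)   # {director_of}
print(follow_naive(x, r))    # movies directed by tarantino
print(follow_reified(x, r))  # same weighted set
# Two hops, e.g. actors in movies directed by tarantino:
print(follow_reified(follow_reified(x, r), one_hot(1, n_r)))
```

Both functions return the same weighted set; the reified version touches only the triple-indexed matrices of total size O(N_T), which is what makes it economical when there are many relations, and it extends to minibatches by replacing the vectors x and r with b-row matrices.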
These operations naturally extend to minibatches (see Appendix). The reified KB has size O(N T), the sets of triples that are intersected have size O(bN T), and the final is size O(bN E), giving a final size of O(bN T + bN E), with no dependence on N R. Table 2 summarizes the complexity of these three mathematically equivalent but computationally different implementions. The analysis suggests that the reified KB is preferable if there are many relations, which is the case for most realistic KBs 3. Figure 1: Left and middle: inference time in queries/sec on a synthetic KB as size and number of relations is varied. Queries/sec is given as zero when GPU memory of 12Gb is exceeded. Right: speedups of reified KBs over the baseline implementations. Distributing a large reified KB. The reified KB representation is quite compact, using only six integers and three floats for each KB triple. However, since GPU memory is often limited, it is important to be able to distribute a KB across multiple GPUs. Although to our knowledge prior implementations of distributed matrix operations (e.g., ) do not support sparse matrices, sparse-dense matrix multiplication can be distributed across multiple machines. We thus implemented a distributed sparse-matrix implementation of reified KBs. We distibuted the matrices that define a reified KB "horizontally", so that different triple ids are stored on different GPUs. Details are provided in Appendix C. Like prior work ), we used a synthetic KB based on an n-byn grid to study scalability of inference. Every grid cell is an entity, related to its immediate neighbors, via relations north, south, east, and west. The KB for an n-by-n grid thus has O(n 2) entities and around O(n) triples. We measured the time to compute the 2-hop inference follow(follow(x, r), r) for minibatches of b = 128 one-hot vectors, and report it as queries per second (qps) on a single GPU (e.g., qps=1280 would mean a single minibatch requires 100ms). We also compare to a key-value memory network , using an embedding size of 64 for entities and relations, where there is one memory entry for every triple in the KB. Further details are given in Appendix E. The are shown Figure 1 (left and middle), on a log-log scale because some differences are very large. With only four relations (the leftmost plot), late mixing is about 3x faster than the reified KB method, and about 250x faster than the naive approach. However, for more than around 20 relations, the reified KB is faster (middle plot). As shown in the rightmost plot, the reified KB is 50x faster than late mixing with 1000 relations, and nearly 12,000x faster than the naive approach. With this embedding size, the speed of the key-value network is similar to the reified KB for only four relations, however it is about 7x slower for 50 relations and 10k entities. Additionally, the space needed to store a triple is much larger in a key-value network than the reified KB, so memory is exhausted when the KB exceeds 200,000 entities (with four relations), or when the KB exceeds 100 relations (with 10,000 entities.) The reified KB scales much better, and can handle 10x as many entities and 20x as many relations. As discussed below in Section 4, the reified KB is closely related to key-value memory networks, so it can be viewed as a more efficient implementation of existing neural modules, optimized for reasoning with symbolic KBs. 
However, being able to include an entire KB into a model can lead to a qualitative difference in model complexity, since it is not necessary to build machinery to retrieve from the KB. To illustrate this, below we present simple models for several tasks, each using the reified KB in different ways, as appropriate to the task. We consider two families of tasks: learning semantic parsers from denotations over a large KB, and learning to complete a KB. KBQA for multi-hop questions. MetaQA consists of 1.2M questions, evenly distributed into one-hop, two-hop, and three-hop questions. (E.g, the question "who acted in a movie directed by Quentin Tarantino?" is a two-hop question.) The accompanying KB contains 43k entities and 186k facts. Past work treated one-hop, two-hop and three-hop questions separately, and the questions are labeled with the entity ids for the "seed entities" that begin the reasoning chains (e.g., the question above would be tagged with the id of the entity for Quentin Tarantino). Using a reified KB for reasoning means the neural model only needs to predict the relations used at each stage in the reasoning process. For each step of inference we thus compute relation sets r t using a differentiable function of the question, and then chain them together with relation-set following steps. Letting x 0 be the set of entities associated with q, the model we use is: where follow(x t−1, r t) is implemented with a reified KB as described in Eq. 4. To predict an answer on a T -hop subtask, we compute the softmax of the appropriate set x T. We used cross entropy loss of this set against the desired answer, represented as a uniform distribution over entities in the target set. Each f t (q)'s is a different linear projection of a common encoding for q, specifically a mean-pooling of the tokens in q encoded with a pre-trained 128-dimensional word2vec model . The full KB was loaded into a single GPU in our experiments. It is interesting to contrast this simple model with the one proposed by. The "module for logic reasoning" they propose in Section 3.4 is fairly complex, with a description that requires a figure, three equations, and a page of text; furthermore, training this model requires constructing an example-dependent subgraph for each training instance. In our model, the "logic reasoning" (and all interaction with the KB) has been encapsulated completely in the follow(x, r) operation-which, as we will demonstrate below, can be re-used for many other problems. Encapsulating all KB reasoning with a single scalable differentiable neural module greatly simplifies modeling: in particular, the problem of learning a structured KB query has been reduced to learning a few differentiable functions of the question, one for each reasoning "hop". The learned functions are also interpretable: they are mixtures of relation identifiers which correspond to soft weighted sets of relations, which in turn softly specify which KB relation should be used in each stage of the reasoning process. Finally, optimization is simple, as the loss on predicted denotations can be back-propagated to the relation-prediction functions. A similar modeling strategy is used in all the other models presented below. KBQA on FreeBase. WebQuestionsSP contains 4737 natural language questions, all of which are answerable using FreeBase , a large open-domain KB. Each question q is again labeled with the entities x that appear in it. 
FreeBase contains two kinds of nodes: real-world entities, and compound value types (CVTs), which represent non-binary relationships or events (e.g., a movie release event, which includes a movie id, a date, and a place.) Real-world entity nodes can be related to each other or to a CVT node, but CVT nodes are never directly related to each other. In this dataset, all questions can be answered with 1-or 2-hop chains, and all 2-hop reasoning chains pass through a CVT entity; however, unlike MetaQA, the number of hops is not known. Our model thus derives from q three relation sets and then uniformly mixes both potential types of inferences: We again apply a softmax toâ and use cross entropy loss, and f E→E, f E→CVT, and f CVT→E are again linear projections of a word2vec encoding of q. We used a subset of Freebase with 43.7 million facts and 12.9 million entities, containing all facts in Freebase within 2-hops of entities mentioned in any question, excluding paths through some very common entities. We split the KB across three 12-Gb GPUs, and used a fourth GPU for the rest of the model. This dataset is a good illustration of the scalability issues associated with prior approaches to including a KB in a model, such as key-value memory networks. A key-value network can be trained to implement something similar to relation-set following, if it stores all the KB triples in memory. If we assume 64-float embeddings for the 12.9M entities, the full KB of 43.7M facts would be 67Gb in size, which is impractical. Additionally performing a softmax over the 43.7M keys would be prohibitively expensive, as shown by the experiments of Figure 1. This is the reason why in standard practice with key-value memory networks for KBs, the memory is populated with a heuristically subset of the KB, rather than the full KB. We compare experimentally to this approach in Table 3. Knowledge base completion. we treat KB completion as an inference task, analogous to KBQA: a query q is a relation name and a head entity x, and from this we predict a set of tail entities. We assume the answers are computed with the disjunction of multiple inference chains of varying length. Each inference chain has a maximum length of T and we build N distinct inference chains in total, using this model (where x 0 i = x for every i): for i = 1,..., N and t = 1,..., T: r The final output is a softmax of the mix of all the x An encoder-decoder architecture for varying inferential structures. To explore performance on more complex reasoning tasks, we generated simple artificial natural-language sentences describing longer chains of relationships on a 10-by-10 grid. For this task we used an encoder-decoder model which emits chains of relation-set following operations. The question is encoded with the final hidden state of an LSTM, written here h 0. We then generate a reasoning chain of length up to T using a decoder LSTM. At iteration t, the decoder emits a scalar probability of "stopping", p t, and a distribution over relations to follow r t, and then, as we did for the KBQA tasks, sets x t = follow(x t−1, r t). Finally the decoder updates its hidden state to h t using an LSTM cell that "reads" the "input" r t−1. For each step t, the model thus contains the steps The final predicted location is a mixture of all the x t's weighted by the probability of stopping p t at iteration t, i.e.,â = softmax(The function f r is a softmax over a linear projection, and f p is a logistic function. 
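For concreteness, here is a rough PyTorch sketch (ours, not the paper's code) of the shared modeling pattern used by the KBQA and KB-completion models above: a small number of query-dependent soft relation sets are predicted and chained together with relation-set following. The `follow_fn` argument is assumed to be a batched, differentiable implementation of follow(x, r) such as the reified-KB operation; the class and argument names are illustrative.

```python
import torch
import torch.nn as nn

class MultiHopReasoner(nn.Module):
    """T-hop reasoning head: each hop predicts a soft relation set from the query
    encoding and applies one differentiable relation-set following step."""

    def __init__(self, query_dim, n_relations, n_hops, follow_fn):
        super().__init__()
        self.follow = follow_fn  # batched follow(X, R), e.g. the reified-KB operation
        self.hop_proj = nn.ModuleList(
            [nn.Linear(query_dim, n_relations) for _ in range(n_hops)])

    def forward(self, q_enc, x0):
        # q_enc: (batch, query_dim) pooled question/query embedding
        # x0:    (batch, n_entities) k-hot seed-entity vectors
        x = x0
        for proj in self.hop_proj:
            # softmax is one simple way to produce non-negative relation weights r_t
            r_t = torch.softmax(proj(q_enc), dim=-1)
            x = self.follow(x, r_t)
        return x  # weighted set of candidate answers; a softmax + cross-entropy loss goes on top
```

The MetaQA model corresponds to a fixed number of hops with a cross-entropy loss on the softmax of the final entity set; the WebQuestionsSP and KB-completion models mix the outputs of several such chains of different lengths.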
In the experiments, we trained on 360,000 sentences requiring between 1 and T hops, for T = 5 and T = 10, and tested on an additional 12,000 sentences. Experimental . We next consider the performance of these models relative to strong baselines for each task. We emphasize our goal here is not to challenge the current state of the art on any particular benchmark, and clearly there are many ways the models of this paper could be improved. (For instance, our question encodings are based on word2vec, rather than contextual encodings (e.g., ), and likewise relations are predicted with simple linear classifiers, rather than, say, attention queries over some semantically meaningful space, such as might be produced with language models or KB embedding approaches ). Rather, our contribution is to present a generally useful scheme for including symbolic KB reasoning into a model, and we have thus focused on describing simple, easily understood models that do this for several tasks. However, it is important to confirm that the reified KB models "work"-e.g., that they are amenable to use of standard optimizers, etc. Performance (using Hits@1) of our models on the KBQA tasks is shown in Table 3. For the nonsynthetic tasks we also compare to a Key-Value Memory Network (KV-Mem) baseline . For the smaller MetaQA dataset, KV-Mem is initialized with all facts within 3 hops of the query entities, and for WebQuestionsSP it is initialized by a random-walk process seeded by the query entities (see for details). ReifKB consistently outperforms the baseline, dramatically so for longer reasoning chains. The synthetic grid task shows that there is very little degradation as chain length increases, with Hits@1 for 10 hops still 89.7%. It also illustrates the ability to predict entities in a KB, as well as relations. We also compare these to two much more complex architectures that perform end-to-end question answering in the same setting used here: VRN , GRAFT-Net , and PullNet ( and . KB, and then use graph CNN-like methods to "reason" with these graphs. Although not superior, ReifKB model is competitive with these approaches, especially on the most difficult 3-hop setting. A small extension to this model is to mask the seed entities out of the answers (see Appendix E). This model (given as ReifKB + mask) has slightly better performance than GRAFT-Net on 2-hop and 3-hop questions. For KB completion, we evaluated the model on the NELL-995 dataset which is paired with a KB with 154k facts, 75k entities, and 200 relations. On the left of Table 4 we compare our model with three popular embedding approaches ( are from). The reified KB model outperforms DistMult , is slightly worse than ConvE , and is comparable to ComplEx . The competitive performance of the ReifKB model is perhaps surprising, since it has many fewer parameters than the baseline models-only one float and two integers per KB triple, plus a small number of parameters to define the f t i functions for each relation. The ability to use fewer parameters is directly related to the fact that our model directly uses inference on the existing symbolic KB in its model, rather than having to learn embeddings that approximate this inference. Or course, since the KB is incomplete, some learning is still required, but learning is quite different: the system learns logical inference chains in the incomplete KB that approximate a target relation. In this setting for KBC, the ability to perform logical inference "out of the box" appears to be very advantageous. 
Another relative disadvantage of KB embedding methods is that KB embeddings are generally transductive-they only make predictions for entities seen in training. As a non-transductive baseline, we also compared to the MINERVA model, which uses reinforcement learning (RL) methods to learn how to traverse a KB to find a desired answer. Although RL methods are less suitable as "neural modules", MINERVA is arguably a plausible competitor to end-to-end learning with a reified KB. MINERVA slightly outperforms our simple KB completion model on the NELL-995 task. However, unlike our model, MINERVA is trained to find a single answer, rather than trained to infer a set of answers. To explore this difference, we compared to MINERVA on the grid task under two conditions: the KB relations are the grid directions north, south, east and west, so only the of the target chain is a single grid location, and the KB relations also include a "vertical move" (north or south) and a "horizontal move" (east or west), so the of the target chain can be a set of locations. As expected MINERVA's performance drops dramatically in the second case, from 99.3% Hits@1 to 34.4 %, while our model's performance is more robust. MetaQA answers can also be sets, so we also modified MetaQA so that MINERVA could be used (by making the non-entity part of the sentence the "relation" input and the seed entity the "start node" input) and noted a similarly poor performance. These are shown on the right of Table 4. In Tables 5 we compare the training time of our model with minibatch size of 10 on NELL-995, MetaQA, and WebQuestionsSP. With over 40 million facts and nearly 13 million entities from Freebase, it takes less than 10 minutes to run one epoch over WebQuestionsSP (with 3097 training examples) on four P100 GPUs. In the accompanying plot, we also summarize the tradeoffs between accuracy and training time for our model and three baselines on the MetaQA 3-hop task. (Here ideal performance is toward the upper left of the plot). The state-of-the-art system, which uses a learned method to incrementally retrieve from the KB, is about 15 times slower than the reified KB system. GRAFT-Net is slightly less accurate, but also only slightly faster: recall that GRAFT-Net uses a heuristically selected subset (of up to 500 triples) from the KB for each query, while our system uses the full KB. Note that here the full KB is about 400 times as large as the question-specific subset used by GRAFT-Net. A key-value memory baseline including the full KB is nearly three times as slow as our system, while also performing quite poorly. We can't compare to VRN because the codes are not open-sourced. The relation-set following operation using reified KBs is implemented in an open-source package called NQL, for neural query language. NQL implements a broader range of operations for manipulating KBs, which are described in a short companion paper . This paper focuses on implementation and evaluation of the relation-set following operation with different KB representations, issues not covered in the companion paper. TensorLog, a probabilistic logic which also can be compiled to Tensorflow, and hence is another differentiable approach to neuralizing a KB. TensorLog is also based on sparse matrices, but does not support relation sets, making it unnatural to express the models shown in this paper, and does not use the more efficient reified KB representation. 
The differentiable theorem prover (DTP) is another differentiable logic (Rocktäschel &), but DPT appears to be much less scalable: it has not been applied to KBs larger than a few thousand triples. The Neural ILP system uses approaches related to late mixing together with an LSTM controller to perform KB completion and some simple QA tasks, but it is a monolithic architecture focused on rule-learning, while in contrast we propose a re-usable neural component, which can be used in as a component in many different architectures, and a scalable implementation of this. It is also reported that neural ILP does not scale to the size of the NELL995 task . The goals of this paper are related KB embedding methods, but distinct. In KB embedding, models are generally fully differentiable, but it is not considered necessary (or even desirable) to accurately match the behavior of inference in the original KB. Being able to construct a learned approximation of a symbolic KB is undeniably useful in some contexts, but embedded KBs also have many disadvantages. They are much larger than a reified KB, with many more learned parameters-typically a long dense vector for every KB entity. Embedded models are typically evaluated by their ability to score a single triple accurately, and many models are not capable of executing multi-step KB inferences efficiently; further, models that do allow multi-step inference are known to produce cascaded errors on long reasoning chains;. In contrast we focus on accurate models of reasoning in a symbolic KB, which requires consideration of novel scalability issues associated with sparse matrice representations. Mathematically, our definition of relation-set following is much like the bilinear model for path following from; however, we generalize this to path queries that include weighted sets of relations, allowing the relations in paths to be learned. Similar differences apply to the work of , which extends the work of to include intersection operations. The vector representation used for weighted sets in a reified KB makes intersection trivial to implement, as intersection corresponds to Hadamard product. Conveniently set union also corresponds to vector sum, and the complement of X is 1 − x, which is perhaps why only a single Neural architectures like memory networks , or other architectures that use attention over some data structure approximating assertions can be used to build soft versions of relation-set following: however, they also do not scale well to large KBs, so they are typically used either with a non-differentiable ad hoc retrieval mechanism, or else in cases where a small amount of information is relevant to a question (e.g., ). Similarly graph CNNs also can be used for reasoning, and often do use sparse matrix multiplication, but again existing implementations have not been scaled to tens of millions of triples/edges or millions of entities/graph nodes. Additionally while graph CNNs have been used for reasoning tasks, the formal connection between them and logical reasoning remains unclear, whereas there is a precise connection between relation-set following and inference. Reinforcement learning (RL) methods have been used to learn mappings from natural-language questions to non-differentiable logical representations and have also been applied to KB completion tasks . 
Above we compared experimentally to MINERVA, one such method; however, the gradient-based approaches enabled by our methods are generally preferred as being easier to implement and tune on new problems, and easier to combine in a modular way with other architectural elements. We introduced here a novel way of representing a symbolic knowledge base (KB) called a sparsematrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. In a reified KB, all KB relations are represented with three sparse matrices, which can be distributed across multiple GPUs, and symbolic reasoning on realistic KBs with many relations is much faster than with naive implementations-more than four orders of magnitude faster on synthetic-data experiments compared to naive sparse-matrix implementations. This new architectural component leads to radically simpler architectures for neural semantic parsing from denotations and KB completion-in particular, they make it possible to learn neural KBQA models in a completely end-to-end way, mapping from text to KB entity sets, for KBs with tens of millions of triples and entities and hundreds of relations. KBs, entities, and relations, and types. In the more general case, a KB consists of entities, relations, and types. Again use x to denote an entity and r to denote a relation. We also assume each entity x has a type, written type(x), and let N τ denote the number of entities of type τ. Each entity x in type τ has a unique index index τ (x), which is an integer between 1 and N τ. We write x τ,i for the entity that has index i in type τ, or x i if the type is clear from context. Every relation r has a subject type τ subj and an object type τ obj, which constrain the types of x and x for any pair (x, x) ∈ r. Hence r can be encoded as a subset of {1, . . ., N τ subj} × {1, . . ., N τ obj}. Relations with the same subject and object types are called type-compatible. Our differentiable operations are based on typed weighted sets, where again each element x of weighted set X is associated with a non-negative real number, written ω|[x ∈ X]|, and we define ω|[x ∈ X]| ≡ 0 for all x ∈ X. If type(X) = τ then X is constrained to contain only entities of type τ. We also assume every relation r in a KB is associated with an entity x r, and hence, an index and a type. Sets of relations R are allowed if all members are type-compatible. For example R = {writer_of, director_of} might be a set of type-compatible relations. A weighted set X of type τ can be encoded as an entity-set vector x ∈ R Nτ, where the i-th component of x is the weight of the i-th entity of that type in the set X: e.g., We also use type(x) to denote the type τ of the set encoded by x. A relation r with subject type τ 1 and object type τ 2 can be encoded as a relation matrix M r ∈ R Nτ 1 ×Nτ 2. Background on sparse matrices. A COO encoding consists of a N r × 2 matrix Ind r containing pairs of entity indices, and a parallel vector w r ∈ R Nr containing the weights of the corresponding entity pairs. In this encoding, if Extension to soft KBs. In the paper, we assume the non-zero weights in a relation matrix M r are all equal to 1.0. This can be related: if assertions in a KB are associated with confidences, then this confidence can be stored in M r. 
In this case, the reified KB must be extended to encode the weight for a triple: we find it convenient to redefine M rel to hold that weight. In particular if the weight for the the -th triple r k (x i, x j) is w, then we let The support of follow(x, r) is exactly the set of R-neighbors(X). To better understand this claim, let z = follow(x, r). The claim states z can approximate the R neighborhood of any hard sets R, X by setting to zero the appropriate components of x and r. It is also clear that z[j] decreases when one decreases the weights in r of the relations that link x j to entities in X, and likewise, z[j] decreases if one decreases the weights of the entities in X that are linked to x j via relations in R, so there is a smooth, differentiable path to reach this approximation. More formally, consider first a matrix M r encoding a single binary relation r, and consider the vector x = xM r. As weighted sets, X and r have non-negative entries, so clearly for all i, and so if r is a one-hot vector for the set {r}, then the support of follow(x, r) is exactly the set r-neighbors(X). Finally note that the mixture M R has the property that M R [i(e 1), i(e 2)] > 0 exactly when e 1 is related to e 2 by some relation r ∈ R. Matrix multiplication XM was distributed as follows: X can be split into a "horizontal stacking" of m submatrices, which we write as [X 1 ; . . . ; X m], and M can be similarly partitioned into m 2 submatrices. We then have the that This can be computed without storing either X or M on a single machine, and mathematically applies to both dense and sparse matrices. In our experiments we distibuted the matrices that define a reified KB "horizontally", so that different triple ids are stored on different GPUs. The major problem with naive mixing is that, in the absence of general sparse tensor contractions, it is difficult to adapt to mini-batches-i.e., a setting in which x and r are replaced with matrices X and R with minibatch size b. An alternative strategy is late mixing, which mixes the output of many single-relation following steps, rather than mixing the KB itself: Here R[:, k], the k-th column of R, is "broadcast" to element of the matrix XM k. As noted in the body of the text, while there are N R matrices XM k, each of size O(bN E), they need not all be stored at once, so the space complexity becomes O(bN E + bN R + N T); however we must now sum up N R dense matrices. The implementation of relation-set following for the reified KB can be straightforwardedly extended to a minibatch: Grid experiments. In the grid experiments, the entity vector x is a singleton set, and the relation vector r weights all relations uniformly. We vary the number of relations by inventing m new relation names and assigning existing grid edges to each new relation. For key-value networks, the key is the concatenation of a relation and a subject entity, and the value is the object entity. We considered only the run-time for queries on an untrained randomly-initialized network (since run-time performance on a trained network would be the same); however, it should be noted that considerable time that might be needed to train the key-value memory to approximate the KB, or whether this KB can be approximated well by the key-value memory. We do not show on the grid task for smaller minibatch sizes, but both reified and late mixing are about 40x slower with b = 1 than with b = 128. WebQuestionsSP experiments. 
For efficiency, on this problem we exploit the type structure of the problem (see Appendix A). Our model uses two types of nodes, CVT and entity nodes. The model also uses three types of relations: relations mapping entities to entities, relations mapping entities to CVT nodes; and relations mapping CVT nodes to entity nodes. MetaQA experiments. An example of a 2-hop question in MetaQA could be "Who co-starred with Robert Downey Jr. in their movies?", and the answer would be a set of actor entities, e.g., "Chris Hemsworth", "Thomas Stanley", etc. Triples in the knowledge base are represented as (subject, relation, object) triples, such as, ("Robert Downey Jr.", "act_in", "Avengers: Endgame"), ("Avengers: Endgame", "stars", "Thomas Stanley"), etc. The quoted strings here all indicate KB entities. We also observed that in the MetaQA 2-hop and 3-hop questions, the questions often exclude the seed entities (e.g., "other movies with the same director as Pulp Fiction"). This can be modeled by masking out seed entities from the predictions after the second hop (ReifKB + mask); this slightly extended model leads to better performance than GRAFT-Net on 2-hop and 3-hop questions. Timing on MetaQA and other natural problems. The raw data for the bubble plot of Table 5 is below. Time (, r t i) In the main text, we say that this "gives the model access to outputs of all chains of length less than t". This is probably easiest to understand by considering a concete example. Let us simplify notation slightly by dropping the subscripts and writing follow(x t−1 i, r t i) as f t (x t−1). Now expand the definition of x t for a few small values of t, using the linearity of the definition relation-set following where appropriate to simplify A pattern is now clear: with this recursive definition x t expands to a mixture of many paths, each of which applies a different subset of f 1,..., f t to the initial input x. Since the weights of the mixture can to a large extent be controlled by varying the norm of the relation vectors r 1,...,r t, this "kernel-like trick" increases the expressive power of the model without introducing new parameters. The final mixture of the x t's seems to provide a bias towards accepting the output of shorter paths, which appears to be useful in practice. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJlguT4YPr | A scalable differentiable neural module that implements reasoning on symbolic KBs. |
We study the Cross-Entropy Method (CEM) for the non-convex optimization of a continuous and parameterized objective function and introduce a differentiable variant (DCEM) that enables us to differentiate the output of CEM with respect to the objective function's parameters. In the machine learning setting this brings CEM inside of the end-to-end learning pipeline in cases this has otherwise been impossible. We show applications in a synthetic energy-based structured prediction task and in non-convex continuous control. In the control setting we show on the simulated cheetah and walker tasks that we can embed their optimal action sequences with DCEM and then use policy optimization to fine-tune components of the controller as a step towards combining model-based and model-free RL. Recent work in the machine learning community has shown how optimization procedures can create new building-blocks for the end-to-end machine learning pipeline (; ; ; ; ; ; ; ;). In this paper we focus on the setting of optimizing an unconstrained, non-convex, and continuous objective function f θ (x): R n × Θ → R asx = arg min x f θ (x), where f is parameterized by θ ∈ Θ and has inputs x ∈ R n. If it exists, some (sub-)derivative ∇ θx is useful in the machine learning setting to make the output of the optimization procedure end-to-end learnable. For example, θ could parameterize a predictive model that is generating potential outcomes conditional on x happening that you want to optimize over. End-to-end learning in these settings can be done by defining a loss function L on top ofx and taking gradient steps ∇ θ L. If f θ were convex this gradient is easy to analyze and compute when it exists and is unique (; ; . Unfortunately analyzing and computing a "derivative" through the non-convex arg min here is not as easy and is challenging in theory and practice. No such derivative may exist in theory, it might not be unique, and even if it uniquely exists, the numerical solver being used to compute the solution may not find a global or even local optimum of f . One promising direction to sidestep these issues is to approximate the arg min operation with an explicit optimization procedure that is interpreted as just another compute graph and unrolled through. This is most commonly done with gradient descent as in ; ; ; ; ; ; . This approximation adds significant definition and structure to an otherwise extremely ill-defined desiderata at the cost of biasing the gradients and enabling the learning procedure to over-fit to the hyper-parameters of the optimization algorithm, such as the number of gradient steps or the learning rate. In this paper we show that the Cross-Entropy Method (CEM) is a reasonable alternative to unrolling gradient descent for approximating the derivative through an unconstrained, non-convex, and continuous arg min. CEM for optimization is a zeroth-order optimizer and works by generating a sequence of samples from the objective function. We show a simple and computationally negligible way of making CEM differentiable that we call DCEM by using the smooth top-k operation from. This also brings CEM into the end-to-end learning process in cases where there is otherwise a disconnection between the objective that is being learned and the objective that is induced by deploying CEM on top of those models. We first quickly study DCEM in a simple non-convex energy-based learning setting for regression. We contrast using unrolled gradient descent and DCEM for optimizing over a SPEN . 
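As a point of contrast with DCEM, the following is a minimal PyTorch sketch (ours, not from the paper) of the unrolled-gradient-descent approximation described above: a few inner gradient steps on a toy f_θ are kept in the autograd graph so that an outer loss on the approximate minimizer can be backpropagated to θ. The objective, step count, and learning rate are placeholders.

```python
import torch

def f(x, theta):
    # Toy parameterized non-convex objective; stands in for a learned f_theta(x).
    return ((x - theta) ** 2).sum() + 0.1 * torch.sin(5 * x).sum()

def unrolled_argmin(theta, x0, steps=10, lr=0.1):
    x = x0
    for _ in range(steps):
        # create_graph=True keeps the inner gradient steps in the autograd graph,
        # so the outer loss can be differentiated through them w.r.t. theta.
        g, = torch.autograd.grad(f(x, theta), x, create_graph=True)
        x = x - lr * g
    return x

theta = torch.tensor([0.5, -1.0], requires_grad=True)
x0 = torch.zeros(2, requires_grad=True)
x_hat = unrolled_argmin(theta, x0)
loss = ((x_hat - torch.tensor([1.0, 0.0])) ** 2).sum()  # an outer, task-level loss on x_hat
loss.backward()
print(theta.grad)  # gradient of the outer loss w.r.t. the objective's parameters
```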
We show that unrolling through gradient descent in this setting over-fits to the number of gradient steps taken and that DCEM generates a more reasonable energy surface. Our main application focus is on using DCEM in the context of non-convex continuous control. This setting is especially interesting as vanilla CEM is the state-of-the-art method for solving the control optimization problem with neural network transition dynamics as in;. We show that DCEM is useful for embedding action sequences into a lower-dimensional space to make solving the control optimization process significantly less computationally and memory expensive. This gives us a controller that induces a differentiable policy class parameterized by the model-based components. We then use PPO to fine-tune the modelbased components, demonstrating that it is possible to use standard policy learning for model-based RL in addition to just doing maximum-likelihood fitting to observed trajectories. Optimization-based modeling is a way of integrating specialized operations and domain knowledge into end-to-end machine learning pipelines, typically in the form of a parameterized arg min operation. Convex, constrained, and continuous optimization problems, e.g. as in;;;, capture many standard layers as special cases and can be differentiated through by applying the implicit function theorem to a set of optimality conditions from convex optimization theory, such as the KKT conditions. Non-convex and continuous optimization problems, e.g. as in; , are more difficult to differentiate through. Differentiation is typically done by unrolling gradient descent or applying the implicit function theorem to some set of optimality conditions, sometimes forming a locally convex approximation to the larger non-convex problem. Unrolling gradient descent is the most common way and approximates the arg min operation with gradient descent for the forward pass and interprets the operations as just another compute graph for the backward pass that can all be differentiated through. In contrast to these works, we show how continuous and nonconvex arg min operations can also be approximated with the cross entropy method as an alternative to unrolling gradient descent. Oftentimes the solution space of high-dimensional optimization problems may have structural properties that an optimizer can exploit to find a better solution or to find the solution quicker than an otherwise naïve optimizer. This is done in the context of meta-learning in where gradient-descent is unrolled over a latent space. In the context of Bayesian optimization this has been explored with random feature embeddings, hand-coded embeddings, and auto-encoder-learned embeddings (; ; ; ; ; ;). We show that DCEM is another reasonable way of learning an embedded domain for exploiting the structure in and efficiently solving larger optimization problems, with the significant advantage of DCEM being that the latent space is directly learned to be optimized over as part of the end-to-end learning pipeline. High-dimensional non-convex optimization problems that have a lot of structure in the solution space naturally arise in the control setting where the controller seeks to optimize the same objective in the same controller dynamical system from different starting states. This has been investigated in, e.g., planning (; ; ; ; ; ;), and policy distillation . shows how to learn an action space for model-free learning and; embed action sequences with a VAE. 
There has also been a lot of work on learning reasonable latent state space representations (; ; ; Miladinović et al., 2019) that may have structure imposed to make it more controllable (; ; ; ;). In contrast to these works, we learn how to encode action sequences directly with DCEM instead of auto-encoding the sequences. This has the advantages of 1) never requiring the expensive expert's solution to the control optimization problem, 2) potentially being able to surpass the performance of an expert controller that uses the full action space, and 3) being end-to-end learnable through the controller for the purpose of finding a latent space of sequences that DCEM is good at searching over. Another direction the RL and control has been pursuing is on the combination of model-based and model-free methods (; ; ; ; ; ; ;). proposes differentiable MPC and only do imitation learning on the cartpole and pendulum tasks with known or lightly-parameterized dynamics -in contrast, we are able to 1) scale our differentiable controller up to the cheetah and walker tasks, 2) use neural network dynamics inside of our controller, and 3) backpropagate a policy loss through the output of our controller and into the internal components. We focus on uses of the Cross-Entropy Method (CEM) for optimization in this paper. In this setting, suppose we have a non-convex, deterministic, and continuous objective function f θ (x) parameterized by θ over a domain R n and we want to solve the optimization problem The original form of CEM is an iterative and zeroth-order algorithm to approximate the solution of eq. with a sequence of samples from a sequence of parametric sampling distributions g φ defined over the domain R n, such as Gaussians. We refer the reader to for more details and motivations for using CEM and briefly describe how it works here. Given a sampling distribution g φ, the hyper-parameters of CEM are the number of candidate points sampled in each iteration N, the number of elite candidates k to use to fit the new sampling distribution to, and the number of iterations T. The iterates of CEM are the parameters φ of the sampling distribution. CEM starts with an initial sampling distribution g φ1 (X) ∈ R n, and in each iteration t generates N samples from the domain, evaluates the function at those points v t,i = f θ (X t,i), and re-fits the sampling distribution to the top-k samples by solving the maximum-likelihood problem DCEM minimizes a parameterized objective function f θ and is differentiable w.r.t. θ. Each DCEM iteration samples from the distribution g φ, starting with φ 1. Evaluate the objective function at those points Compute the soft top-k projection of the values with eq. Update φ t+1 by solving the maximum weighted likelihood problem in eq. (5 Differentiating through CEM's output with respect to the objective function's parameters with ∇ θx is useful, e.g., to bring CEM into the end-to-end learning process in cases where there is otherwise a disconnection between the objective that is being learned and the objective that is induced by deploying CEM on top of those models. Unfortunately in the vanilla form presented above the top-k operation in eq. makesx non-differentiable with respect to θ. The function samples can usually be differentiated through with some estimator such as the reparameterization trick , which we use in all of our experiments. The top-k operation can be made differentiable by replacing it with a soft version as done in;; , or by using a stochastic oracle as in?. 
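For reference, the vanilla (hard top-k, non-differentiable) CEM loop described above can be sketched compactly; the isotropic Gaussian sampling distribution, the toy quadratic objective, and the hyper-parameter defaults below are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def cem(f, mu, sigma, n_samples=100, n_elite=10, iters=10):
    # Vanilla CEM with an isotropic Gaussian sampling distribution: sample N
    # candidates, keep the k lowest-cost ones, refit the Gaussian to the elites.
    for _ in range(iters):
        X = mu + sigma * torch.randn(n_samples, *mu.shape)
        values = f(X)
        elite = X[values.topk(n_elite, largest=False).indices]   # hard top-k
        mu, sigma = elite.mean(dim=0), elite.std(dim=0) + 1e-6
    return mu   # approximate arg min of f

# Toy 2-D objective with its minimum at (1, -2).
f = lambda X: ((X - torch.tensor([1.0, -2.0])) ** 2).sum(dim=-1)
print(cem(f, mu=torch.zeros(2), sigma=torch.ones(2)))
```

The hard top-k selection in the elite step is the only non-differentiable operation here, which is exactly what the soft projection below replaces.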
Here we use the Limited Multi-Label Projection (LML) layer, which projects points from R^n onto the LML polytope L_k^n = {y ∈ R^n : 0 < y < 1, 1ᵀy = k}, i.e., the interior of the set of points in the unit n-hypercube whose coordinates sum to k. Notationally, if n is implied by the context we will leave it out and write L_k. We propose a temperature-scaled LML variant that projects onto the interior of the LML polytope with Π_{L_k}(x/τ) = argmin_{y ∈ L_k} −xᵀy − τ H_b(y), where τ > 0 is the temperature parameter and H_b(y) = −Σ_i [y_i log y_i + (1 − y_i) log(1 − y_i)] is the binary entropy function. This projection is a convex optimization layer; it can be solved in a negligible amount of time with a GPU-amenable bracketing method on the univariate dual and quickly backpropagated through with implicit differentiation. We can use the LML layer to make a soft and differentiable version of the hard top-k maximum-likelihood refit: with soft top-k weights I_t = Π_{L_k}(−v_t/τ) over the sampled values, the sampling distribution is updated by φ_{t+1} = argmax_φ Σ_i I_{t,i} log g_φ(X_{t,i}). This is now a maximum weighted likelihood estimation problem, which still admits an analytic closed-form solution in many cases, e.g. for the natural exponential family. Using the soft top-k operation, together with the reparameterization trick on the samples from g, therefore results in a differentiable variant of CEM that we call DCEM and summarize in alg. 1. We note that we usually also normalize the values in each iteration to help separate the scaling of the values from the temperature parameter. Proposition 2. The temperature-scaled LML layer Π_{L_k}(x/τ) approaches the hard top-k operation as τ → 0+ when all components of x are unique. We prove this in app. A by using the KKT conditions of the projection. Corollary 1. DCEM captures CEM as a special case as τ → 0+. Proposition 3. With an isotropic Gaussian sampling distribution, the maximum weighted likelihood update becomes μ_{t+1} = Σ_i I_{t,i} X_{t,i} / Σ_i I_{t,i} and σ²_{t+1} = Σ_i I_{t,i} (X_{t,i} − μ_{t+1})² / Σ_i I_{t,i}, where the soft top-k indexing set is I_t = Π_{L_k}(−v_t/τ). This can be proved by differentiating the weighted likelihood objective, as discussed, e.g., in the maximum weighted likelihood literature. As a corollary, this captures prop. 1 as τ → 0+. Energy-based learning for regression and classification estimates the conditional probability P(y|x) of an output y ∈ Y given an input x ∈ X with a parameterized energy function E_θ(y|x). Predictions are made by solving the optimization problem ŷ = argmin_y E_θ(y|x). Historically, linear energy functions have been well studied, e.g. in max-margin structured prediction (Taskar et al.), partly because they are easier to solve and analyze. More recently, non-convex energy functions parameterized by neural networks are being explored -- a popular example being Structured Prediction Energy Networks (SPENs), which model E_θ with neural networks. One suggested way of doing supervised learning of SPENs is to approximate the inference problem with gradient descent that is then unrolled for T steps, i.e. by starting with some y_0, making gradient updates y_{t+1} = y_t + γ∇_y E_θ(y_t|x), resulting in an output ŷ = y_T, defining a loss function L on top of ŷ, and doing learning with gradient updates ∇_θ L that go through the inner gradient steps. In this context we can alternatively use DCEM to approximate the inference problem. One potential consideration when training deep energy-based models with such approximations is the impact and bias that the approximation has on the energy surface. For gradient descent, e.g., it may cause the energy surface to overfit to the number of gradient steps so that the output of the approximate inference procedure is not even a local minimum of the energy surface. One potential advantage of DCEM is that its output is more likely to be near a local minimum of the energy surface, so that, e.g., more test-time iterations can be used to refine the solution. We empirically illustrate the impact of the optimizer choice on a synthetic example in sect. 5.1.
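Concretely, one DCEM inner iteration — the temperature-scaled soft top-k followed by the weighted Gaussian refit of prop. 3 — can be sketched as follows. This is a minimal illustration under stated assumptions: the dual is solved by naive bisection (following the σ((x_i + ν)/τ) characterization used in app. A) rather than the GPU-amenable bracketing plus implicit differentiation used by the actual LML layer, and all sizes are placeholders.

```python
import torch

def soft_topk(x, k, tau=1.0, iters=50):
    # Temperature-scaled soft top-k: find the dual nu such that
    # sum_i sigmoid((x_i + nu) / tau) = k by bisection, then return the weights.
    lo = -x.max() - 20.0 * tau
    hi = -x.min() + 20.0 * tau
    for _ in range(iters):
        nu = (lo + hi) / 2
        if torch.sigmoid((x + nu) / tau).sum() < k:
            lo = nu
        else:
            hi = nu
    return torch.sigmoid((x + (lo + hi) / 2) / tau)

def dcem_gaussian_update(X, values, k, tau=1.0):
    # Weighted maximum-likelihood refit of an isotropic Gaussian; the costs are
    # negated so that low cost maps to a high soft top-k weight.
    w = soft_topk(-values, k, tau)
    w = (w / w.sum()).unsqueeze(-1)
    mu = (w * X).sum(dim=0)
    var = (w * (X - mu) ** 2).sum(dim=0)
    return mu, var.sqrt()

X = torch.randn(100, 2)                    # candidate samples from g_phi
values = (X ** 2).sum(dim=-1)              # their objective values
print(dcem_gaussian_update(X, values, k=10))
```

As τ shrinks, the returned weights approach a hard 0/1 elite indicator and the refit reduces to the vanilla CEM update, mirroring Corollary 1.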
Our main application focus is in the continuous control setting where we show how to use DCEM to learn a latent control space that is easier to solve than the original problem and induces a differentiable policy class that allows parts of the controller to be fine-tuned with auxiliary policy or imitation losses. We are interested in controlling discrete-time dynamical systems with continuous state-action spaces. Let H be the horizon length of the controller and U H be the space of control sequences over this horizon length, e.g. U could be a multi-dimensional real space or box therein and U H could be the Cartesian product of those spaces representing the sequence of controls over H timesteps. We are interested in repeatedly solving the control optimization problem 2 u 1:H = arg min Algorithm 2 Learning an embedded control space with DCEM Fixed Inputs: Dynamics f trans, per-step state-action cost Update the decoder to improve the controller's cost end while where we are in an initial system state x init governed by deterministic system transition dynamics f trans, and wish to find the optimal sequence of actionsû 1:H such that we find a valid trajectory {x 1:H, u 1:H} that optimizes the cost C t (x t, u t). Typically these controllers are used for receding horizon control where only the first action u 1 is deployed on the real system, a new state is obtained from the system, and the eq. is solved again from the new initial state. In this case we can say the controller induces a policy π(x init) ≡û 1 3 that solves eq. and depends on the cost and transition dynamics, and potential parameters therein. In all of the cases we consider f trans is deterministic, but may be approximated by a stochastic model for learning. Some model-based reinforcement learning settings consider cases where f trans and C are parameterized and potentially used in conjunction with another policy class. For sufficiently complex dynamical systems, eq. is computationally expensive and numerically instable to solve and rife with sub-optimal local minima. The Cross-Entropy Method is the state-ofthe-art method for solving eq. with neural network transitions f trans . CEM in this context samples full action sequences and refines the samples towards ones that solve the control problem. uses CEM with 1000 samples in each iteration for 10 iterations with a horizon length of 12. This requires 1000 × 10 × 12 = 120, 000 evaluations of the transition dynamics to predict the control to be taken given a system stateand the transition dynamics may use a deep recurrent architecture as in or an ensemble of models as in. One comparison point here is a model-free neural network policy takes a single evaluation for this prediction, albeit sometimes with a larger neural network. The first application we show of DCEM in the continuous control setting is to learn a latent action space Z with a parameterized decoder f dec θ: Z → U H that maps back up to the space of optimal action sequences, which we illustrate in fig. 3. For simplicity starting out, assume that the dynamics and cost functions are known (and perhaps even the ground-truth) and that the only problem is to estimate the decoder in isolation, although we will show later that these assumptions can be relaxed. The motivation for having such a latent space and decoder is that the millions of times eq. 
is being solved for the same dynamic system with the same cost, the solution space of optimal action sequencesû 1:H ∈ U H has an extremely large amount of spatial (over U) and temporal (over time in U H) structure that is being ignored by CEM on the full space. The space of optimal action sequences only contains the knowledge of the trajectories that matter for solving the task at hand, such as different parts of an optimal gait, and not irrelevant control sequences. We argue that CEM over the full action space wastes a lot of computation considering irrelevant action sequences and show that these can be ignored by learning a latent space of more reasonable candidate solutions here that we search over instead. Given a decoder, the control optimization problem in eq. can then be transformed into an optimization problem over Z aŝ which is still a challenging non-convex optimization problem that searches over a decoder's input space to find the optimal control sequence. We illustrate what this looks like in fig. 3, and note the impact of the decoder initialization in app. C. We propose in alg. 2 to use DCEM to approximately solve eq. and then learn the decoder directly to optimize the performance of eq.. Every time we solve eq. with DCEM and obtain an optimal latent representationẑ along with the induced trajectory {x t, u t}, we can take a gradient step to push down the ing cost of that trajectory with ∇ θ C(ẑ), which goes through the DCEM process that uses the decoder to generate samples to obtainẑ. We note that the DCEM machinery behind this is not necessary if a reasonable local minima is consistently found as this is an instance of min-differentiation (, Theorem 10.13) but in practice this breaks down in non-convex cases when the minimum cannot be consistently found.; solve related problems in this space and we discuss them in sect. 2.3. We also note that to learn an action embedding we still need to differentiate through the transition dynamics and cost functions to compute ∇ θ C(ẑ), even if the ground-truth ones are being used, since the latent space needs to have the knowledge of how the control cost will change as the decoder's parameters change. DCEM in this setting also induces a differentiable policy class π(x init) ≡ u 1 = f dec (ẑ) 1. This enables a policy or imitation loss J to be defined on the policy that can fine-tune the parts of the controller (decoder, cost, and transition dynamics) gradient information from ∇ θ J. In theory the same approach could be used with CEM on the full optimization problem in eq.. For realistic problems without modification this is intractable and memory-intensive as it would require storing and backpropagating through every sampled trajectory, although as a future direction we note that it may be possible to delete some of the low-influence trajectories to help overcome this. We use PyTorch and will openly release our DCEM library, model-based control code, and the source, plotting, and analysis code for all of our experiments. In this section we briefly explore the impact of the inner optimizer on the energy surface of a SPEN as discussed in sect. 4.1. For illustrative purposes we consider a simple unidimensional regression task where the ground-truth data is generated from f (x) = x sin(x) for x ∈ [0, 2π]. We model P(y|x) ∝ exp{−E θ (y|x)} with a single neural network E θ and make predictionsŷ by solving the optimization problem eq.. 
Given the ground-truth output y, we use the loss L(ŷ, y) = ||ŷ − y || 2 2 and take gradient steps of this loss to shape the energy landscape. We consider approximating eq. with unrolled gradient descent and DCEM with Gaussian sampling distributions. Both of these are trained to take 10 optimizer steps and we use an inner learning rate of 0.1 for gradient descent and with DCEM we use 10 iterations with 100 samples per iteration and 10 elite candidates, with a temperature of 1. For both algorithms we start the initial iterate at y 0 = 0. We show in app. B that both of these models attain the same loss on the training dataset but, since this is a unidimensional regression task, we can visualize the entire energy surfaces over the joint input-output space in fig. 1. This shows that gradient descent has learned to adapt from the initial y 0 = 0 position to the final position by descending along the function's surface as we would expect, but there is no reason why the energy surface should be a local minimum around the last iterateŷ = y 10. The energy surface learned by CEM captures local minima around the regression target as the sequence of Gaussian iterates are able to capture a more global view of the function landscape and need to focus in on a minimum of it for regression. We show ablations in app. B from training for 10 inner iterations and then evaluating with a different number of iterations and show that gradient descent quickly steps away from making reasonable predictions. Discussion and limitations. We note that other tricks could be used to force the output to be at a local minimum with gradient descent, such as using multiple starting points or randomizing the number of gradient descent steps taken -our intention here is to highlight this behavior in the vanilla case. We also note that DCEM is susceptible to overfitting to the hyper-parameters behind it in similar, albeit less obvious ways. Eval Reward walker.walk Figure 4: We evaluated our final models by running 100 episodes each on the cheetah and walker tasks. CEM over the full action space uses 10,000 trajectories for control at each time step while embedded DCEM samples only 1000 trajectories. DCEM almost recovers the performance of CEM over the full action space and PPO fine-tuning of the model-based components helps bridge the gap. We first show that it is possible to learn an embedded control space as discussed in sect. 4.2 in an isolated setting. We use the standard cartpole dynamical system from with a continuous state-action space. We assume that the ground-truth dynamics and cost are known and use the differentiable ground-truth dynamics and cost implemented in PyTorch from. This isolates the learning problem to only learning the embedding so that we can study what this is doing without the additional complications that arise from exploration, estimating the dynamics, learning a policy, and other non-stationarities. We show experiments with these assumptions relaxed in sect. 5.2.2. We use DCEM and alg. 2 to learn a 2-dimensional latent space Z = 2 that maps back up to the full control space U H = H where we focus on horizons of length H = 20. For DCEM over the embedded space we use 10 iterations with 100 samples in each iteration and 10 elite candidates, again with a temperature of 1. We show the details in app. D that we are able to recover the performance of an expert CEM controller that uses an order-of-magnitude more samples fig. 
2 shows a visualization of what the CEM and embedded DCEM iterates look like to solve the control optimization problem from the same initial system state. CEM spends a lot of evaluations on sequences in the control space that are unlikely to be optimal, such as the ones the bifurcate between the boundaries of the control space at every timestep, while our embedded space is able to learn more reasonable proposals. Next we show that we can relax the assumptions of having known transition dynamics and reward and show that we can learn a latent control space on top of a learned model on the cheetah.run and walker.walk tasks with frame skips of 4 and 2, respectively, from the DeepMind control suite using the MuJoCo physics engine . We then fine-tune the policy induced by the embedded controller with PPO , sending the policy loss directly back into the reward and latent embedding modules underlying the controller. We start with a state-of-the-art model-based RL approach by noting that the PlaNet restricted state space model (RSSM) is a reasonable architecture for proprioceptive-based control in addition to just pixel-based control. We show the graphical model we use in fig. 3, which maintains deterministic hidden states h t and stochastic (proprioceptive) system observations x t and rewards r t. We model transitions as h t+1 = f trans θ (h t, x t), observations with x t ∼ f odec θ (h t), rewards with r t = f rew θ (h t, x t), and map from the latent action space to action sequences with u 1:T = f dec (z). We follow the online training procedure of to initialize all of the models except for the action decoder f dec, using approximately 2M timesteps. We then use a variant of alg. 2 to learn f dec to embed the action space for control with DCEM, which we also do online while updating the models. We describe the full training process in app. E. Figure 3: Our RSSM with action sequence embeddings Our DCEM controller induces a differentiable policy class π θ (x init) where θ are the parameters of the models that impact the actions that the controller is selecting. We then use PPO to define a loss on top of this policy class and fine-tune the components (the decoder and reward module) so that they improve the episode reward rather than the maximumlikelihood solution of observed trajectories. We chose PPO because we thought it would be able to fine-tune the policy with just a few updates because the policy is starting at a reasonable point, but this did not turn out to be the case and in the future other policy optimizers can be explored. We implement this by making our DCEM controller the policy in the PyTorch PPO implementation by. We provide more details behind our training procedure in app. E. We evaluate our controllers on 100 test episodes and the rewards in fig. 4 show that DCEM is almost (but not exactly) able to recover the performance of doing CEM over the full action space while using an order-of-magnitude less trajectory samples (1,000 vs 10,0000). PPO fine-tuning helps bridge the gap between the performances. Videos of our trained models are available at: https://sites.google.com/view/diff-cross-entropy-method Discussion and limitations. DCEM in the control setting has many potential future directions to explore and help bring efficiency and policy-based fine-tuning to model-based reinforcement learning. 
Much more analysis and experimentation is necessary to achieve this as we faced many issues getting the model-based cheetah and walker tasks to work that did not arise in the ground-truth cartpole task. We discuss this more in app. E. We also did not focus on the sample complexity of our algorithms getting these proof-of-concept experiments working. We also note that other reasonable baselines on this task could involve distilling the controller into a model-free policy and then doing search on top of that policy, as done in POPLIN . We have laid the foundations for differentiating through the cross-entropy method and have brought CEM into the end-to-end learning pipeline. Beyond further explorations in the energy-based learning and control contexts we showed here, DCEM can be used anywhere gradient descent is unrolled. We find this especially promising for meta-learning, potentially building on LEO . Inspired by DCEM, other more powerful sampling-based optimizers could be made differentiable in the same way, potentially optimizers that leverage gradient-based information in the inner optimization steps (; ; ;) or by also learning the hyper-parameters of structured optimizers (; ;). We acknowledge the scientific Python community for developing the core set of tools that enabled this work, including PyTorch , Jupyter , Matplotlib (, seaborn , numpy , pandas (, and SciPy . A PROOF OF PROP. 2 Proof. We first note that a solution exists to the projection operation, and it is unique, which comes from the strict convexity of the objective . The Lagrangian of the temperature-scaled LML projection in eq. is Differentiating eq. gives and the first-order optimality condition ∇ y L(y, ν) = 0 gives y i = σ(τ −1 (x i + ν *)), where σ is the sigmoid function. Using lem. 1 as τ → 0 + gives Substituting this back into the constraint 1 y = k gives that π(x) k < −ν * < π(x) k+1, where π(x) sorts x ∈ R n in ascending order so that π(x) 1 ≤ π(x) 2 ≤... ≤ π(x) n. Thus we have that y i = 1{x i ≥ π(x) k }, which is 1 when x i is in the top-k components of x and 0 otherwise, and therefore the temperature-scaled LML layer approaches the hard top-k function as τ → 0 +. where σ(x /τ) = (1 + exp{ −x /τ}) −1 is the temperature-scaled sigmoid. We have found the decoder to be influenced by the activation function that's used with it and have found the ELU to perform the best. fig. 7 conveys some intuition behind this choice. We randomly initialize a neural network u = f θ (z) with no biases, where θ = {W i} i for every layer weight W i, and then scale the weights with αθ. We then sample z ∼ N (0, I), pass them through f αθ, and plot the outputs. The ReLU induces an extremely biased distribution which is seen more prevalently as α grows that is not as present when using the ELU or hyperbolic tangent since they are almost linear around zero. Despite the reasonable looking initializations for the hyperbolic tangent, we found that it does not perform as well in practice in our experiments. We found that the initial scale α of the decoder's parameters is also important for learning because of the network is not initially producing samples that cover the full output space as shown with α = 1, it seems hard for it to learn how to expand to cover the full output space. In this section we discuss some of the ablations we considered when learning the latent action space for the cartpole task. 
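As a reference point for these ablations, the embedded-control loop of alg. 2 can be sketched compactly. The decoder width, the toy linear dynamics standing in for the cartpole, and the learning rate below are illustrative placeholders, and for brevity the inner loop uses hard top-k elite selection (the τ = 0 variant discussed in the ablations below) rather than the full differentiable soft top-k.

```python
import torch
import torch.nn as nn

H, n_u, n_z, n_x = 20, 1, 2, 3     # horizon, control/latent/state sizes (illustrative)
decoder = nn.Sequential(nn.Linear(n_z, 64), nn.ELU(), nn.Linear(64, H * n_u))
opt = torch.optim.SGD(decoder.parameters(), lr=1e-3)

def rollout_cost(x_init, u_seq):
    # Stand-in differentiable dynamics and cost; replace with f_trans and C_t.
    x, cost = x_init, 0.0
    for t in range(H):
        x = 0.9 * x + 0.1 * u_seq[..., t, :]
        cost = cost + (x ** 2).sum(dim=-1) + 1e-2 * (u_seq[..., t, :] ** 2).sum(dim=-1)
    return cost

def latent_cem(x_init, n_samples=100, n_elite=10, iters=10):
    # CEM over the latent space Z; the decoder maps latent samples to u_{1:H}.
    mu, sigma = torch.zeros(n_z), torch.ones(n_z)
    for _ in range(iters):
        Z = mu + sigma * torch.randn(n_samples, n_z)
        costs = rollout_cost(x_init, decoder(Z).view(n_samples, H, n_u))
        elite = Z[costs.topk(n_elite, largest=False).indices]
        mu, sigma = elite.mean(0), elite.std(0) + 1e-6
    return mu                               # z_hat

x_init = torch.randn(n_x)
z_hat = latent_cem(x_init)
# Update the decoder to push down the cost induced by the current z_hat (alg. 2).
loss = rollout_cost(x_init, decoder(z_hat).view(H, n_u).unsqueeze(0)).mean()
opt.zero_grad(); loss.backward(); opt.step()
```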
In all settings we use DCEM to unroll 10 inner iterations that samples 100 candidate points in each iteration and has an elite set of 10 candidates. For training, we randomly sample initial starting points of the cartpole and for validation we use a fixed set of initial points. Figure 8 shows the convergence of models as we vary the latent space dimension and temperature parameter, and fig. 9 shows that DCEM is able to fully recover the expert performance on the cartpole. Because we are operating in the ground-truth dynamics setting we measure the performance by comparing the controller costs. We use τ = 0 to indicate the case where we optimize over the latent space with vanilla CEM and then update the decoder with, where the gradient doesn't go back into the optimization process that producedẑ. This is non-convex min differentiation and is reasonable whenẑ is near-optimal, but otherwise is susceptible to making the decoder difficult to search over. These show a few interesting points that come up in this setting, which of course may be different in other settings. Firstly that with a two-dimensional latent space, all of the temperature values are able to find a reasonable latent space at some point during training. However after more updates, the lower-temperature experiments start updating the decoder in ways that make it more difficult to search over and start achieving worse performance than the τ = 1 case. For higher-dimensional latent spaces, the DCEM machinery is necessary to keep the decoder searchable. Furthermore we notice that just a 16-dimensional latent space for this task can be difficult for learning, one reason this could be is from DCEM having too many degrees of freedom in ways it can update the decoder to improve the performance of the optimizer.: Improvement factor on the ground-truth cartpole task from embedding the action space with DCEM compared to running CEM on the full action space, showing that DCEM is able to recover the full performance. We use the DCEM model that achieves the best validation loss. The error lines show the 95% confidence interval around three trials. Figure 10: Learned DCEM reward surfaces for the cartpole task. Each row shows a different initial state of the system. We can see that as the temperature decreases, the latent representation can still capture near-optimal values, but they are in much narrower regions of the latent space than when τ = 1. Algorithm 3 PlaNet variant that we use for proprioceptive control with optional DCEM embedding Models: a deterministic state model, a stochastic state model, a reward model, and (if using DCEM) an action sequence decoder. Initialize dataset D with S random seed episodes. Initialize the transition model's deterministic hidden state h 0 and initialize the environment, obtaining the initial state estimate x 0. CEM-Solve can use DCEM or full CEM for t = 1,..., T do u t ← CEM-solve(h t−1, x t−1) Add exploration noise ∼ p to the action u t. Obtain the hidden states of the {h τ,x τ} from the model. Compute the multi-step likelihood bound L(τ, h τ,x τ) (, eq 6 .) Optimize the likelihood bound if using DCEM then Update the decoder end if end if end for For the cheetah.run and walker.walk DeepMind control suite experiments we start with a modified PlaNet architecture that does not have a pixel decoder. We started with this over PETS to show that this RSSM is reasonable for proprioceptivebased control and not just pixel-based control. This model is graphically shown in fig. 
3 and has 1) a deterministic state model h t = f (h t−1, x t−1, u t−1), 2) a stochastic state model x t ∼ p(x t, h t), and 3) a reward model: r t ∼ p(r t |h t, x t). In the proprioceptive setting, we posit that the deterministic state model is useful for multi-step training even in fully observable environments as it allows the model to "push forward" information about what is potentially going to happen in the future. For the modeling components, we follow the recommendations in and use a GRU with 200 units as the deterministic path in the dynamics model and implement all other functions as two fully-connected layers, also with 200 units with ReLU activations. Distributions over the state space are isotropic Gaussians with predicted mean and standard deviation. We train the model to optimize the variational bound on the multi-step likelihood as presented in on batches of size 50 with trajectory sequences of length 50. We start with 5 seed episodes with random actions and in contrast to , we have found that interleaving the model updates with the environment steps instead of separating the updates slightly improves the performance, even in the pixel-based case, which we do not report on here. For the optimizers we either use CEM over the full control space or DCEM over the latent control space and use a horizon length of 12 and 10 iterations here. For full CEM, we sample 1000 candidates in each iteration with 100 elite candidates. For DCEM we use 100 candidates in each iteration with 10 elite candidates. Our training procedure has the following three phases, which we set up to isolate the DCEM additions. We evaluate the models output from these training runs on 100 random episodes in fig. 4 in the main paper. Now that these ideas have been validated, promising directions of future work include trying to combine them all into a single training run and trying to reduce the sample complexity and number of timesteps needed to obtain the final model. Under review as a conference paper at ICLR 2020 Phase 1: Model initialization. We start in both environments by launching a single training run of fig. 11 to get initial system dynamics. fig. 11 shows that these starting points converge to near-stateof-the-art performance on these tasks. These models take slightly longer to converge than in , likely due to how often we update our models. We note that at this point, it would be ideal to use the policy loss to help fine-tune the components so that policy induced by CEM on top of the models can be guided, but this is not feasible to do by backpropagating through all of the CEM samples due to memory, so we instead next move on to initializing a differentiable controller that is feasible to backprop through. Phase 2: Embedded DCEM initialization. Our goal in this phase is to obtain a differentiable controller that is feasible to backprop through. Our first failed attempt to achieve this was to use offline training on the replay buffer, which would have been ideal as it would require no additional transitions to be collected from the environment. We tried using alg. 2, the same procedure we used in the ground-truth cartpole setting, to generate an embedded DCEM controller that achieves the same control cost on the replay buffer as the full CEM controller. However we found that when deploying this controller on the system, it quickly stepped off of the data manifold and failed to control it -this seemed to be from the controller finding holes in the model that causes the reward to be over-predicted. 
We then used an online data collection process identical to the one we used for phase 1 to jointly learn the embedded control space while updating the models so that the embedded controller doesn't find bad regions in them. We show where the DCEM updates fit into alg. 3. One alternative that we tried to updating the decoder to optimize the control cost on the samples from the replay buffer is that the decoder can also be immediately updated after planning at every step. This seemed nice since it didn't require any additional DCEM solves, but we found that the decoder became too biased during the episode as samples at consecutive timesteps have nearly identical information. For the hyper-parameters, we kept most of the DCEM hyper-parameters fixed throughout this phase to 100 samples, 10 elites, and a temperature τ = 1. We ablated across 1) the number of DCEM iterations taken to be {3, 5, 10}, 2) deleting the replay buffer from phase 1 or not, and 3) re-initializing the model or not from phase 1. We report the best runs that we use as the starting point for the next phase in fig. 12, which achieve reasonable performance but don't match the performance of doing CEM over the full action space. These runs all use 10 DCEM iterations and both keep the replay buffer from phase 1. The Cheetah run keeps the models from phase 1 and the Walker re-initializes the models. The cheetah curve around timestep 600k shows, the stability here can be improved as sometimes the decoder finds especially bad regions in the model that induce extremely high losses. Phase 3: Policy optimization into the controller. Finally now that we have a differentiable policy class induced by this differentiable controller we can do policy learning to fine-tune parts of it. We initially chose Proximal Policy Optimization (PPO) for this phase because we thought that it would be able to fine-tune the policy in a few iterations without requiring a good estimate of the value function, but this phase also ended up consuming many timesteps from the environment. Crucially in this phase, we do not do likelihood fitting at all, as our goal is to show that PPO can be used as another useful signal to update the parts of a controller -we did this to isolate the improvement from PPO but in practice we envision more unified algorithms that use both signals at the same time. Using the standard PPO hyper-parameters, we collect 10 episodes for each PPO training step and ablate across 1) the number of passes to make through these episodes {1, 2, 4}, 2) every combination of the reward, transition, and decoder being fine-tuned or frozen, 3) using a fixed variance of 0.1 around the output of the controller or learning this, 4) the learning rate of the fine-tuned model-based portions {10 −4, 10 −5}. Figure 13 shows the of the best runs from this search. We conclude by showing the PPO-fine-tuned DCEM iterates for solving a single control optimization problem from a random system state for the cheetah fig. 14. and walker fig. 15 tasks. PlaNet+DCEM+PPO walker.walk | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HJluEeHKwH | DCEM learns latent domains for optimization problems and helps bridge the gap between model-based and model-free RL --- we create a differentiable controller and fine-tune parts of it with PPO |
We propose a new framework for entity and event extraction based on generative adversarial imitation learning -- an inverse reinforcement learning method using generative adversarial network (GAN). We assume that instances and labels yield to various extents of difficulty and the gains and penalties (rewards) are expected to be diverse. We utilize discriminators to estimate proper rewards according to the difference between the labels committed by ground-truth (expert) and the extractor (agent). Experiments also demonstrate that the proposed framework outperforms state-of-the-art methods. Event extraction (EE) is a crucial information extraction (IE) task that focuses on extracting structured information (i.e., a structure of event trigger and arguments, "what is happening", and "who or what is involved ") from unstructured texts. In most recent five years, many event extraction approaches have brought forth encouraging by retrieving additional related text documents BID18, introducing rich features of multiple categories [BID26, incorporating relevant information within or beyond context BID23, BID24 BID7 and adopting neural network frameworks BID4, BID8, BID8, BID17, BID13 BID27 .There are still challenging cases: for example, in the following sentences: "Masih's alleged comments of blasphemy are punishable by death under Pakistan Penal Code" and "Scott is charged with first-degree homicide for the death of an infant. ", the word death can trigger an Execute event in the former sentence and a Die event in the latter one. With similar local information (word embeddings) or contextual features (both sentences include legal events), supervised models pursue the probability distribution which resembles that in the training set (in ACE2005 data, we have overwhelmingly more Die annotation on death than Execute), and will label both as Die event, causing error in the former instance. Such mistake is due to the lack of a mechanism that explicitly deals with wrong and confusing labels. Many multi-classification approaches utilize cross-entropy loss, which aims at boosting the probability of the correct labels. Many approaches -including AdaBoost which focuses weights on difficult cases -usually treat wrong labels equally and merely inhibits them indirectly. Models are trained to capture features and weights to pursue correct labels, but will become vulnerable and unable to avoid mistakes when facing ambiguous instances, where the probabilities of the confusing and wrong labels are not sufficiently "suppressed". Therefore, exploring information from wrong labels is a key to make the models robust. In this paper, we propose a dynamic mechanism -inverse reinforcement learning -to directly assess correct and wrong labels on instances in entity and event extraction. We assign explicit scores on cases -or rewards in terms of Reinforcement Learning (RL). We adopt discriminators from generative adversarial networks (GAN) to estimate the reward values. Discriminators ensures the highest reward for ground-truth (expert) and the extractor attempts to imitate the expert by pursuing highest rewards. For challenging cases, if the extractor continues selecting wrong labels, the GAN keeps expanding the margins between rewards for ground-truth labels and (wrong) extractor labels and eventually deviates the extractor from wrong labels. 
The main contributions of this paper can be summarized as follows: • We apply reinforcement learning framework to event extraction tasks, and the proposed framework is an end-to-end and pipelined approach that extracts entities and event triggers and determines the argument roles for detected entities.• With inverse reinforcement learning propelled by GAN, we demonstrate that a dynamic reward function ensures more optimal performance in a complicated RL task. We follow the schema of Automatic Content Extraction (ACE) 1 to detect the following elements from unstructured natural language text:• Entity: word or phrase that describes a real world object such as a person ("Masih" as PER in Figure 1). ACE schema defines 7 types of entities.• Event Trigger: the word that most clearly expresses an event (interaction or change of status). ACE schema defines 33 types of events such as Sentence ("punishable" in Figure 1) and Execute ("death").• Event argument: an entity that serves as a participant or attribute with a specific role in an event mention, e.g., a PER "Masih" serves as a Defendant in a Sentence event triggered by "punishable". For broader readers who might not be familiar with reinforcement learning, we briefly introduce by their counterparts or equivalent concepts in supervised models with the RL terms in the parentheses: our goal is to train an extractor (agent A) to label entities, event triggers and argument roles (actions a) in text (environment e); to commit correct labels, the extractor consumes features (state s) and follow the ground truth (expert E); a reward R will be issued to the extractor according to whether it is different from the ground truth P l a c e N / A P e r s o n A g e n t P l a c e N / A P e r s o n A g e n tFigure 1: Our framework includes a reward estimator based on GAN to issue dynamic rewards with regard to the labels (actions) committed by event extractor (agent). The reward estimator is trained upon the difference between the labels from ground truth (expert) and extractor (agent). If the extractor repeatedly misses Execute label for "death", the penalty (negative reward values) is strengthened; if the extractor make surprising mistakes: label "death" as Person or label Person "Masih" as Place role in Sentence event, the penalty is also strong. For cases where extractor is correct, simpler cases such as Sentence on "death" will take a smaller gain while difficult cases Execute on "death" will be awarded with larger reward values.and how serious the difference is -as shown in Figure 1, a repeated mistake is definitely more serious -and the extractor improves the extraction model (policy π) by pursuing maximized rewards. Our framework can be briefly described as follows: given a sentence, our extractor scans the sentence and determines the boundaries and types of entities and event triggers using Q-Learning (Section 3.1); meanwhile, the extractor determines the relations between triggers and entities -argument roles with policy gradient (Section 3.2). During the training epochs, GANs estimate rewards which stimulate the extractor to pursue the most optimal joint model (Section 4). The entity and trigger detection is often modeled as a sequence labeling problem, where longterm dependency is a core nature; and reinforcement learning is a well-suited method [BID15 are also good candidates for context embeddings. 
From RL perspective, our extractor (agent A) is exploring the environment, or unstructured natural language sentences when going through the sequences and committing labels (actions a) for the tokens. When the extractor arrives at tth token in the sentence, it observes information from the environment and its previous action a t−1 as its current state s t; the extractor commits a current action a t and moves to the next token, it has a new state s t+1. The information from the environment is token's context embedding v t, which is usually acquired from Bi-LSTM BID12 outputs; previous action a t−1 may impose some constraint for current action a t, e.g., I-ORG does not follow B-PER 2. With the aforementioned notations, we have DISPLAYFORM0 To determine the current action a t, we generate a series of Q-tables with DISPLAYFORM1 where f sl (·) denotes a function that determine the Q-values using the current state as well as previous states and actions. Then we achievê DISPLAYFORM2 Equation 2 and 3 suggest that an RNN-based framework which consumes current input and previous inputs and outputs can be adopted, and we use a unidirectional LSTM as BID1. We have a full pipeline as illustrated in Figure 2.2. In this work, we use BIO, e.g., "B-Meet" indicates the token is beginning of Meet trigger, "I-ORG" means that the token is inside an organization phrase, and "O" denotes null., with fixed rewards r = ±5 for correct/wrong labels and discount factor λ = 0.01. Score for wrong label is penalized while correct one is reinforced. For each label (action a t) with regard to s t, a reward r t = r(s t, a t) is assigned to the extractor (agent). We use Q-learning to pursue the most optimal sequence labeling model (policy π) by maximizing the expected value of the sum of future rewards E(R t), where R t represents the sum of discounted future rewards r t + γr t+1 + γ 2 r t+2 +... with a discount factor γ, which determines the influence between current and next states. We utilize Bellman Equation to update the Q-value with regard to the current assigned label to approximate an optimal model (policy π *). DISPLAYFORM3 As illustrated in FIG0, when the extractor assigns a wrong label on the "death" token because the Q-value of Die ranks first, Equation 4 will penalize the Q-value with regard to the wrong label; while in later epochs, if the extractor commits a correct label of Execute, the Q-value will be boosted and make the decision reinforced. We minimize the loss in terms of mean squared error between the original and updated Q-values notated as Q sl (s t, a t): DISPLAYFORM4 and apply back propagation to optimize the parameters in the neural network. After the extractor determines the entities and triggers, it takes pairs of one trigger and one entity (argument candidate) to determine whether the latter serves a role in the event triggered by the former. In this task, for each pair of trigger and argument candidate, our extractor observes the context embeddings of trigger and argument candidate -v ttr and v tar respectively, as well as the output of another Bi-LSTM consuming the sequence of context embeddings between trigger and argument candidates in the state; the state also includes a representation (onehot vector) of the entity type of the argument candidate a tar, and the event type of the trigger a tar also determine the available argument role labels, e.g., an Attack event never has Adjudicator arguments as Sentence events. 
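To make the two RL updates concrete, we start with the sequence-labeling side. The following is a schematic sketch of the Q-learning update described above (state built from the context embedding and the previous action, Q-values from a unidirectional LSTM, a Bellman target, and an MSE loss); the tag-set size, embedding dimension, and the placeholder rewards are illustrative — in the full framework the rewards come from the discriminators of Section 4.

```python
import torch
import torch.nn as nn

n_tags, emb_dim, hid, gamma = 5, 100, 128, 0.01    # illustrative sizes
lstm = nn.LSTMCell(emb_dim + n_tags, hid)
q_head = nn.Linear(hid, n_tags)
opt = torch.optim.SGD(list(lstm.parameters()) + list(q_head.parameters()),
                      lr=0.02, momentum=0.9)

def q_values(context_embs):
    # Unidirectional LSTM over states s_t = [v_t ; one-hot(a_{t-1})]; greedy labels.
    h, c = torch.zeros(1, hid), torch.zeros(1, hid)
    prev = torch.zeros(1, n_tags)
    qs, actions = [], []
    for v in context_embs:                         # v: (1, emb_dim) context embedding
        h, c = lstm(torch.cat([v, prev], dim=-1), (h, c))
        q = q_head(h)
        a = q.argmax(dim=-1)
        qs.append(q); actions.append(a)
        prev = torch.nn.functional.one_hot(a, n_tags).float()
    return torch.cat(qs), torch.cat(actions)

context_embs = [torch.randn(1, emb_dim) for _ in range(8)]
Q, actions = q_values(context_embs)                # Q: (T, n_tags)
rewards = torch.randn(len(context_embs))           # placeholder; GAIL supplies these

# Bellman targets r_t + gamma * max_a Q(s_{t+1}, a); the last token has no successor.
targets = rewards.clone()
targets[:-1] += gamma * Q[1:].max(dim=-1).values.detach()
loss = ((Q[torch.arange(len(actions)), actions] - targets) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```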
With these notations we have: DISPLAYFORM0 where the footnote tr denotes the trigger, ar denotes argument candidate, and f ss denotes the sub-sentence Bi-LSTM for the context embeddings between trigger and argument. We have another ranking table for argument roles: DISPLAYFORM1 where f tr,ar represents a mapping function whose output sizes is determined by the trigger event type a ttr. e.g., Attack event has 5 -Attacker, Target, Instrument, Place and Not-a-role labels and the mapping function for Attack event contains a fully-connected layer with output size of 5. And we determine the role witĥ a tr,ar = arg max atr,ar Q tr,ar (s tr,ar, a tr,ar).We assign a reward r(s tr,ar, a tr,ar) to the extractor, and since there is one step in determining the argument role label, the expected values of R = r(s tr,ar, a tr,ar).We utilize another RL algorithm -Policy Gradient BID19 to pursue the most optimal argument role labeling performance. We have probability distribution of argument role labels that are from the softmax output of Q-values: P (a tr,ar |s tr,ar) = softmax(Q tr,ar (s tr,ar, a tr,ar)).To update the parameters, we minimize loss function DISPLAYFORM2 From Equation 10 and FIG1 we acknowledge that, when the extractor commits a correct label (Agent for the GPE entity "Pakistan"), the reward encourages P (a tr,ar |s tr,ar) to increase; and when the extractor is wrong (e.g., Place for "Pakistan"), the reward will be negative, leading to a decreased P (a tr,ar |s tr,ar). Here we have a brief clarification on different choices of RL algorithms in the two tasks. In the sequence labeling task, we do not take policy gradient approach due to high variance of E(R t), i.e., the sum of future rewards R t should be negative when the extractor chooses a wrong label, but an ill-set reward and discount factor γ assignment or estimation may give a positive R t (often with a small value) and still push up the probability of the wrong action, which is not desired. There are some variance reduction approaches to constrain the R t but they still need additional estimation and bad estimation will introduce new risk. Q-learning only requires rewards on current actions r t, which are relatively easy to constrain. In the argument role labeling task, determination on each trigger-entity pair consists of only one single step and R t is exactly the current reward r, then policy gradient approach performs correctly if we ensure negative rewards for wrong actions and positive for correct ones. However, this one-step property impacts the Q-learning approach: without new positive values from further steps, a small positive reward on current correct label may make the updated Q-value smaller than those wrong ones. So far in our paper, the reward values demonstrated in the examples are fixed, we have DISPLAYFORM0 and typically we have c 1 > c 2. This strategy makes RL-based approach no difference from classification approaches with cross-entropy in terms of "treating wrong labels equally" as discussed in introductory section. Moreover, recent RL approaches on relation extraction BID25 BID9 ] adopt a fixed setting of reward values with regard to different phases of entity and relation detection based on empirical tuning, which requires additional tuning work when switching to another data set or schema. In event extraction task, entity, event and argument role labels yield to a complex structure with variant difficulties. Errors should be evaluated case by case, and from epoch to epoch. 
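The corresponding one-step policy-gradient update for argument role labeling can be sketched similarly; the feature dimension and the fixed reward of 5 below mirror the baseline setting discussed above, and in the full framework that constant is replaced by the discriminator's dynamic estimate.

```python
import torch
import torch.nn as nn

n_roles, feat_dim = 5, 128      # e.g. Attack: Attacker/Target/Instrument/Place/None
role_head = nn.Linear(feat_dim, n_roles)         # per-event-type mapping f_{tr,ar}
opt = torch.optim.SGD(role_head.parameters(), lr=0.005, momentum=0.9)

# s_{tr,ar}: concatenation of the trigger / argument-candidate embeddings, the
# sub-sentence Bi-LSTM output, and the entity-type one-hot (precomputed here).
state = torch.randn(1, feat_dim)
probs = torch.softmax(role_head(state), dim=-1)   # P(a_{tr,ar} | s_{tr,ar})
action = probs.argmax(dim=-1)                     # committed role label

reward = torch.tensor(5.0)    # fixed baseline reward; GAIL replaces this estimate
loss = -(reward * torch.log(probs[0, action])).sum()   # one-step policy gradient
opt.zero_grad(); loss.backward(); opt.step()
```

With a positive reward the gradient step raises P(a|s) for the committed label, and with a negative reward it lowers it, which is exactly the behavior the dynamic rewards below are designed to sharpen.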
In the earlier epochs, when the parameters in the neural networks are only slightly optimized, all errors are tolerable; e.g., in sequence labeling, the extractor within the first 2 or 3 iterations usually labels most tokens with O. As the epoch number increases, the extractor is expected to output more correct labels; however, if the extractor makes repeated mistakes -- e.g., it persistently labels "death" as O in the example sentence "... are punishable by death ..." across multiple epochs -- or is stuck in difficult cases -- e.g., whether the FAC (facility) token "bridges" serves as a Place or Target role in an Attack event triggered by "bombed" in the sentence "U.S. aircraft bombed Iraqi tanks holding bridges ..." -- a mechanism is required to assess these challenges and to correct them with salient and dynamic rewards. We describe training as a process of the extractor (agent A) imitating the ground truth (expert E), during which a mechanism ensures that the highest reward values are issued to correct labels (actions), whether they are committed by the expert E or by the agent A: R(s, a_E) ≥ R(s, a) for every possible label a (Equation 12). This mechanism is Inverse Reinforcement Learning BID0, which estimates the reward first within an RL framework. Equation 12 reveals an adversarial relationship between the ground truth and the extractor, and Generative Adversarial Imitation Learning (GAIL) BID11, which is based on GAN BID10, fits this adversarial nature. In the original GAN, a generator produces (fake) data and attempts to confuse a discriminator D, which is trained to distinguish fake data from real data. In our proposed GAIL framework, the extractor (agent A) takes the place of the generator and commits labels to the discriminator D; the discriminator D now serves as the reward estimator and aims to issue the largest rewards to labels (actions) from the ground truth (expert E), or to identical ones from the extractor, while providing lower rewards for other (wrong) labels. Rewards R(s, a) and the output of D are now equivalent, and we ensure D(s, a_E) > D(s, a_A) whenever a_A ≠ a_E (Equation 13), where s, a_E and a_A are the inputs of the discriminator. In the sequence labeling task, s consists of the context embedding of the current token v_t and a one-hot vector representing the previous action a_{t−1}, as in Equation 1; in the argument role labeling task, s comes from the representations of all elements mentioned in Equation 6. a_E is a one-hot vector of the ground-truth label (expert, or "real data"), while a_A denotes its counterpart from the extractor (agent, or "generator"). The concatenation of s and a_E forms the input of the "real data" channel, while s and a_A build the input of the "generator" channel of the discriminator. Because the two tasks and the event types have different dimensions, our framework uses 34 discriminators (1 for sequence labeling and 33 for argument role labeling, one per event type). Every discriminator consists of 2 fully-connected layers with a sigmoid output. The original output of D is a probability bounded in (0, 1), and we use a linear transformation to shift and expand it, as described below. (Figure 5 illustrates the GAN structure in the sequence labeling scenario; the argument role labeling scenario uses an identical structure except for vector dimensions. As introduced in Section 4, the "real data" of the original GAN is replaced by the feature/state representation (Equation 1, or Equation 6 for argument role labeling) paired with ground-truth labels (expert actions), while the "generator data" consists of the same features paired with the extractor's attempted labels (agent actions).)
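A minimal sketch of such a discriminator / reward estimator follows. The two fully-connected layers and sigmoid output follow the description above, while the specific layer width, the binary-cross-entropy form of the update, and the α/β reward transform (described in the next paragraph) are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

feat_dim, n_labels = 128, 5     # illustrative state and label-set sizes

class RewardEstimator(nn.Module):
    # Two fully-connected layers with a sigmoid output; the input is the state
    # concatenated with a one-hot action, as described above.
    def __init__(self, alpha=20.0, beta=0.5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + n_labels, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())
        self.alpha, self.beta = alpha, beta

    def d(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))

    def reward(self, state, action_onehot):
        # Linear transform widening D's (0, 1) output into a reward range.
        return self.alpha * (self.d(state, action_onehot) - self.beta)

est = RewardEstimator()
opt = torch.optim.SGD(est.parameters(), lr=0.001, momentum=0.9)

state = torch.randn(4, feat_dim)
a_expert = nn.functional.one_hot(torch.randint(n_labels, (4,)), n_labels).float()
a_agent = nn.functional.one_hot(torch.randint(n_labels, (4,)), n_labels).float()

# Push D(s, a_E) toward 1 and D(s, a_A) toward 0 (one plausible form of the update).
loss = -(torch.log(est.d(state, a_expert) + 1e-8).mean()
         + torch.log(1 - est.d(state, a_agent) + 1e-8).mean())
opt.zero_grad(); loss.backward(); opt.step()

print(est.reward(state, a_agent))    # reward values handed back to the extractor
```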
The discriminator thus serves as the reward estimator, and a linear transform extends D's output beyond its original probability range: R(s, a) = α (D(s, a) − β) (Equation 14); e.g., in our experiments we set α = 20 and β = 0.5, making R(s, a) ∈ [−10, 10]. To pursue Equation 13, we minimize the discriminator loss L_D = −E[log D(s, a_E)] − E[log(1 − D(s, a_A))] (Equation 15) and optimize the parameters in the neural network. During the training process, after we feed the neural networks described in Sections 3.1 and 3.2 with a mini-batch of data, we collect the features (states s), the corresponding extractor labels (agent actions a_A) and the ground truth (expert actions a_E) to update the discriminators according to Equation 15; we then feed the features and extractor labels into the discriminators to acquire reward values and train the extractor -- the generator from GAN's perspective. Since the discriminators are continuously optimized, if the extractor (generator) makes repeated mistakes or makes surprising ones (e.g., labeling a PER as a Place), the margin between the rewards for correct and wrong labels expands and the rewards take larger absolute values. Hence, in the sequence labeling task the Q-values are updated with a more discriminative difference, and similarly, in the argument role labeling task, P(a|s) increases or decreases more significantly with larger absolute reward values. Figure 5 illustrates how we utilize the GAN for reward estimation. In cases where the discriminators are not yet sufficiently optimized (e.g., in early epochs) and may output undesired values -- e.g., negative rewards for correct actions -- we impose a hard margin, R̃(s, a) = max(0.1, R(s, a)) when a is correct and R̃(s, a) = min(−0.1, R(s, a)) otherwise, to ensure that correct actions always receive positive reward values and wrong ones negative values. In the training phase, the extractor selects labels according to the ranking of Q-values in Equations 3 and 8, and the GANs issue rewards to update the Q-tables and policy probabilities. We also adopt an ε-greedy strategy: we set a probability threshold ε ∈ (0, 1) and uniformly sample a number ρ ∈ [0, 1] before the extractor commits a label for an instance, choosing a = argmax_a Q(s, a) if ρ ≥ ε and a uniformly random label otherwise. With this strategy, the extractor is able to explore all possible labels (including correct and wrong ones) and acquires rewards with regard to all of them, updating the neural networks with richer information. Moreover, after one step of ε-greedy exploration, we also force the extractor to commit the ground-truth labels, issue it the expert (highest) rewards, and update the parameters accordingly. This additional step is inspired by BID14, which combines the cross-entropy loss of supervised models with RL loss functions. Such a combination simultaneously and explicitly encourages correct labels and penalizes wrong labels, and greatly improves the efficiency of pursuing optimal models. To evaluate the performance of our proposed approach, we use the ACE2005 documents, excluding the informal documents from cts (Conversational Telephone Speech) and un (UseNet), which leaves 5,272 triggers and 9,612 arguments.
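The hard-margin shaping and the ε-greedy exploration rule described above are simple enough to state directly in code; the helpers below are an illustrative sketch (the margin of 0.1 and threshold of 0.1 mirror the text, everything else is a placeholder).

```python
import random
import torch

def shaped_reward(raw_reward, is_correct, margin=0.1):
    # Hard margin: correct labels always receive a positive reward and wrong
    # labels a negative one, even when the discriminator is poorly optimized.
    return max(margin, raw_reward) if is_correct else min(-margin, raw_reward)

def epsilon_greedy(q_values, epsilon=0.1):
    # Exploration rule applied before the extractor commits a label.
    if random.random() >= epsilon:
        return int(torch.argmax(q_values))
    return random.randrange(q_values.numel())

q = torch.tensor([0.2, 1.3, -0.5, 0.9])
print(epsilon_greedy(q), shaped_reward(raw_reward=-0.03, is_correct=True))
```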
We follow the training (529 documents with 14,180 sentences), validation (30 documents with 863 sentences) and test (40 documents with 672 sentences) splits and adopt the same evaluation criteria to align with BID17:
• An entity (named entities and nominals) is correct if its entity type and offsets find a match in the ground truth.
• A trigger is correct if its event type and offsets find a match in the ground truth.
• An argument is correctly labeled if its event type, offsets and role find a match in the ground truth.
We use ELMo embeddings BID15. Because ELMo is delivered with built-in Bi-LSTMs, we treat the ELMo embeddings as the context embeddings in Figures 2 and 4. We use GAIL-ELMo in the tables to denote this setting. Moreover, in order to disentangle the contribution from ELMo embeddings, we also present the performance in a non-ELMo setting (denoted as GAIL-W2V) which utilizes the following embedding techniques to represent tokens in the input sentence.
• Token surface embeddings: for each unique token in the training set, we have a lookup dictionary of embeddings which is randomly initialized and updated in the training phase.
• Character-based embeddings: each character also has a randomly initialized embedding and is fed into a token-level Bi-LSTM network; the final output of this network enriches the information of the token.
• POS embeddings: we apply Part-of-Speech (POS) tagging on the sentences using the Stanford CoreNLP tool BID21. The POS tags of the tokens also have a trainable look-up dictionary (embeddings).
• Pre-trained embeddings: we also acquire embeddings trained from a large and publicly available corpus. These embeddings preserve semantic information of the tokens and they are not updated in the training phase.
We concatenate these embeddings and feed them into the Bi-LSTM networks as demonstrated in Figures 2 and 4. To relieve over-fitting issues, we utilize a dropout strategy on the input data during the training phase. We intentionally set "UNK" (unknown) masks, which hold entries in the look-up dictionaries of tokens, POS tags and characters. We randomly mask known tokens, POS tags and characters in the training sentences with the "UNK" mask. We also set an all-0 vector on the Word2Vec embeddings of randomly selected tokens. We tune the parameters according to the F1 score of argument role labeling. For Q-learning, we set a discount factor γ = 0.01. For all RL tasks, we set the exploration threshold ε = 0.1. We set all hidden layer sizes (including the ones on the discriminators) and the LSTM (for the sub-sentence Bi-LSTM) cell memory sizes to 128. The dropout rate is 0.2. When optimizing the parameters in the neural networks, we use SGD with momentum; the learning rates start from 0.02 (sequence labeling), 0.005 (argument labeling) and 0.001 (discriminators), and the learning rate decays every 5 epochs with an exponential factor of 0.9; all momentum values are set to 0.9 (see the brief optimizer sketch below). For the non-ELMo setting, we set 100 dimensions for token embeddings, 20 for POS embeddings, and 20 for character embeddings. For pre-trained embeddings, we train a 100-dimension Word2Vec model from English Wikipedia articles (January 1st, 2017), with all tokens preserved and a context window of 5 from both left and right. We also implemented an RL framework with fixed rewards of ±5 as a baseline with identical parameters as above.
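The optimizer and decay schedule just described can be written, roughly, as below. This is a sketch under the stated hyperparameters; the helper name and the use of PyTorch's StepLR scheduler are our choices, not details from the paper.

```python
import torch

def build_optimizer(params, lr):
    """SGD with momentum 0.9, decayed by a factor of 0.9 every 5 epochs."""
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.9)
    return optimizer, scheduler

# e.g. for the three networks (learning rates as reported above):
# seq_opt, seq_sched = build_optimizer(seq_model.parameters(), lr=0.02)
# arg_opt, arg_sched = build_optimizer(arg_model.parameters(), lr=0.005)
# dis_opt, dis_sched = build_optimizer(discriminator.parameters(), lr=0.001)
# ... and call scheduler.step() once per epoch.
```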
For sequence labeling (the entity and event trigger detection task), we also set an additional reward value of −50 for B-I errors, namely when an I-label does not follow a B-label with the same tag name (e.g., I-GPE follows B-PER); a minimal sketch of this check is given at the end of this section. We use RL-W2V and RL-ELMo to denote these fixed-reward settings. We compare the performance of entity extraction (including named entities and nominal mentions) with the following state-of-the-art and high-performing approaches:
• JointIE []: a joint approach that extracts entities, relations, events and argument roles using structured prediction with rich local and global linguistic features.
• JointEntityEvent BID23: an approach that simultaneously extracts entities and arguments with document context.
• Tree-LSTM []: a Tree-LSTM based approach that extracts entities and relations.
• KBLSTM BID24: an LSTM-CRF hybrid model that applies knowledge base information on sequence labeling.
Table 1 reports entity extraction precision / recall / F1: JointIE [] 85.2 / 76.9 / 80.8; JointEntityEvent BID23 83.5 / 80.2 / 81.8; Tree-LSTM [] 82.9 / 83.9 / 83.4; KBLSTM BID24 85 / – / –.
From Table 1 we can conclude that our proposed method outperforms the other approaches, especially with an impressively high recall. CRF-based models are applied to sequence labeling tasks because a CRF can consider the label on the previous token to avoid mistakes such as appending an I-GPE to a B-PER, but it neglects the information from the later tokens. Our proposed approach avoids the aforementioned mistakes by issuing strong penalties (negative rewards with large absolute values), and the Q-values in our sequence labeling sub-framework also consider rewards for the later tokens, which significantly enhances our prediction performance. For event extraction performance with system-predicted entities as argument candidates, besides [] and BID23 we compare our performance with 5:
• dbRNN BID17: an LSTM framework incorporating the dependency graph (dependency-bridge) information to detect event triggers and argument roles.
TAB6 demonstrates that the performance of our proposed framework is better than the state-of-the-art approaches, except for a lower F1 score on argument identification against BID17. BID17 utilizes Stanford CoreNLP to detect noun phrases and takes the detected phrases as argument candidates, while our argument candidates come from system-predicted entities and some entities may be missed. However, BID17's approach misses entity type information, which causes many errors in the argument role labeling task, whereas our argument candidates hold entity types, and our final role labeling performance is better than BID17's. Our framework is also flexible enough to consume ground-truth (gold) annotation of entities as argument candidates, and we demonstrate the performance comparison with the following state-of-the-art approaches in the same setting, besides BID17:
• JointIE-GT []: similar to [], the only difference is that this approach detects arguments based on ground-truth entities.
• []: an RNN-based approach which integrates local lexical features.
5. Some high-performing event extraction approaches such as [,] have no argument role detection, thus they are not included for the sake of fair comparison.
For this setting, we keep the identical parameters (including both trained and preset ones) and network structures which we used to report our performance in Tables 1 and 2, and we substitute system-predicted entity types and offsets with ground-truth counterparts. TAB7 demonstrates that, without any further deliberate tuning, our proposed approach can still provide better performance.
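As mentioned at the beginning of this section, B-I errors receive an additional reward of −50. A minimal check for such errors could look like the following; whether an I- tag is also allowed to continue an I- tag of the same type is our assumption about the BIO scheme, not something stated in the paper.

```python
def bi_error(prev_label, curr_label):
    """True when an I- tag does not continue a B-/I- tag of the same type,
    e.g. I-GPE after B-PER, which is penalized with a reward of -50."""
    if not curr_label.startswith("I-"):
        return False
    tag = curr_label[2:]
    return prev_label not in ("B-" + tag, "I-" + tag)

# bi_error("B-PER", "I-GPE")  -> True   (invalid transition, penalized)
# bi_error("B-GPE", "I-GPE")  -> False  (valid continuation)
```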
We notice that some recent approaches [] consolidate argument role labels with the same names from different event types (e.g., Adjudicator in Trial-Hearing, Charge-Indict, Sue, Convict, etc.); for argument role labeling they only deal with 37 categories, while our setting consists of 143 categories (with a hierarchical routine of 33 event types and 3-7 roles for each type). The consolidation strategy can boost the scores, and our early exploration with a similar strategy reaches an argument role labeling F1 score of 61.6 with gold entity annotation; however, the appropriateness with regard to the ACE schema definition still concerns us. For example, the argument role Agent appears in Injure, Die, Transport, Start-Org, Nominate, Elect, Arrest-Jail and Release-Parole events, and the definition of Agent in these types covers criminals, business people, law enforcement officers and organizations, which have little overlap, so it is hardly meaningful to consider these roles as one single label. Moreover, when analyzing errors from this setting, we encounter errors such as Attacker in Meet events or Destination in Marry events, which completely violate the ACE schema. Hence, for the sake of solid comparison, we do not include this setting, though we still appreciate and honor any work and attempt to pursue higher performance. The statistical results in Tables 1, 2 and 3 demonstrate that dynamic rewards outperform the settings with fixed rewards. As presented in Section 4, the fixed reward setting resembles classification methods with a cross-entropy loss, which treat errors equally and do not incorporate much information from errors; hence the performance is similar to some earlier approaches but does not outperform the state of the art. For instances with ambiguity, our dynamic reward function can provide more salient margins between correct and wrong labels: e.g., for "... they sentenced him to death...", with the identical parameter set as aforementioned, the reward for the wrong Die label is −5.74 while the correct Execute label gains 6.53. For simpler cases, e.g., "... submitted his resignation...", we have flatter rewards such as 2.74 for End-Position, −1.33 for None or −1.67 for Meet, which are sufficient to commit correct labels. Scores in Tables 1, 2 and 3 prove that the non-ELMo settings already outperform the state of the art, which confirms the advantage and contribution of our GAIL framework. Moreover, in spite of an insignificant drop in the fixed reward setting, we agree that ELMo is a good replacement for a combination of word, character and POS embeddings. The only shortcoming according to our empirical practice is that ELMo takes a huge amount of GPU memory and the training procedure is slow (even though we do not update the pre-trained parameters during our training phase). Score losses mainly come from missed trigger words and arguments. For example, the Meet trigger "pow-wow" is missed because it is rarely used to describe a formal political meeting, and there is no token with a similar surface form - which could be recovered using character embeddings or character information in the ELMo setting - in the training data. We observe some special erroneous cases due to fully biased annotation. In the sentence "Bombers have also hit targets...", the entity "bombers" is mistakenly classified as the Attacker argument of the Attack event triggered by the word "hit".
Here the "bombers" refers to aircraft and is considered as a VEH (Vehicle) entity, and should be an Instrument in the Attack event, while "bombers" entities in the training data are annotated as Person (who detonates bombs), which are never Instrument. This is an ambiguous case, however, it does not compromise our claim on the merit of our proposed framework against ambiguous errors, because our proposed framework still requires a mixture of different labels to acknowledge ambiguity. One of the recent event extraction approaches mentioned in the introductory section BID13 utilizes GAN in event extraction. The GAN in the cited work outputs generated features to regulate the event model from features leading to errors, while our approach directly assess the mistakes to explore levels of difficulty in labels. Moreover, our approach also covers argument role labeling, while the cited paper does not. RL-based methods have been recently applied to a few information extraction tasks such as relation extraction; and both relation frameworks from BID9 BID25 apply RL on entity relation detection with a series of predefined rewards. We are aware that the term imitation learning is slightly different from inverse reinforcement learning. Techniques of imitation learning BID5 BID16 BID3 attempt to map the states to expert actions by following demonstration, which resembles supervised learning, while inverse reinforcement learning BID0 BID20 BID28 BID11 BID2 estimates the rewards first and apply the rewards to RL. BID22 is an imitation learning application on bio-medical event extraction, and there is no reward estimator used. We humbly recognize our work as inverse reinforcement learning approach although "GAIL" is named after imitation learning. In this paper, we propose an end-to-end entity and event extraction framework based on inverse reinforcement learning. Experiments have demonstrated that the performance benefits from dynamic reward values estimated from discriminators in GAN, and we also demonstrate the performance of recent embedding work in the experiments. In the future, besides releasing the source code, we also plan to further visualize the reward values and attempt to interpret these rewards so that researchers and event extraction system developers are able to better understand and explore the algorithm and remaining challenges. Our future work also includes using cutting edge approaches such as BERT BID6, and exploring joint model in order to alleviate impact from upstream errors in current pipelined framework. | [
0 (repeated 90 times), 1, 0 (repeated 63 times)] | BJlsVZ966m | We use dynamic rewards to train event extractors.
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick", allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Frechet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65. Figure 1: Class-conditional samples generated by our model. The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks at the forefront of efforts to generate highfidelity, diverse images with models learned directly from data. GAN training is dynamic, and sensitive to nearly every aspect of its setup (from optimization parameters to model architecture), but a torrent of research has yielded empirical and theoretical insights enabling stable training in a variety of settings. Despite this progress, the current state of the art in conditional ImageNet modeling achieves an Inception Score of 52.5, compared to 233 for real data. In this work, we set out to close the gap in fidelity and variety between images generated by GANs and real-world images from the ImageNet dataset. We make the following three contributions towards this goal:• We demonstrate that GANs benefit dramatically from scaling, and train models with two to four times as many parameters and eight times the batch size compared to prior art. We introduce two simple, general architectural changes that improve scalability, and modify a regularization scheme to improve conditioning, demonstrably boosting performance.• As a side effect of our modifications, our models become amenable to the "truncation trick," a simple sampling technique that allows explicit, fine-grained control of the tradeoff between sample variety and fidelity.• We discover instabilities specific to large scale GANs, and characterize them empirically. Leveraging insights from this analysis, we demonstrate that a combination of novel and existing techniques can reduce these instabilities, but complete training stability can only be achieved at a dramatic cost to performance. Our modifications substantially improve class-conditional GANs. When trained on ImageNet at 128×128 resolution, our models (BigGANs) improve the state-of-the-art Inception Score (IS) and Fréchet Inception Distance (FID) from 52.52 and 18.65 to 166.5 and 7.4 respectively. We also successfully train BigGANs on ImageNet at 256×256 and 512×512 resolution, and achieve IS and FID of 232.5 and 8.1 at 256×256 and IS and FID of 241.5 and 11.5 at 512×512. Finally, we train our models on an even larger dataset -JFT-300M -and demonstrate that our design choices transfer well from ImageNet. Code and weights for our pretrained generators are publicly available 1. A Generative Adversarial Network (GAN) involves Generator (G) and Discriminator (D) networks whose purpose, respectively, is to map random noise to samples and discriminate real and generated samples. 
Formally, the GAN objective in its original form involves finding a Nash equilibrium of the following two-player min-max problem:

min_G max_D  E_{x∼q_data(x)}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))],

where z ∈ R^{d_z} is a latent variable drawn from a distribution p(z) such as N(0, I) or U[−1, 1]. When applied to images, G and D are usually convolutional neural networks. Without auxiliary stabilization techniques, this training procedure is notoriously brittle, requiring finely-tuned hyperparameters and architectural choices to work at all. Much recent research has accordingly focused on modifications to the vanilla GAN procedure to impart stability, drawing on a growing body of empirical and theoretical insights (e.g., Sønderby et al., 2017). One line of work is focused on changing the objective function (BID1; BID3) to encourage convergence. Another line is focused on constraining D through gradient penalties or normalization, both to counteract the use of unbounded loss functions and to ensure D provides gradients everywhere to G. Of particular relevance to our work is Spectral Normalization, which enforces Lipschitz continuity on D by normalizing its parameters with running estimates of their first singular values, inducing backwards dynamics that adaptively regularize the top singular direction. Related work analyzes the condition number of the Jacobian of G and finds that performance is dependent on G's conditioning. Other work finds that employing Spectral Normalization in G improves stability, allowing for fewer D steps per iteration. We extend these analyses to gain further insight into the pathology of GAN training. Other works focus on the choice of architecture, such as SA-GAN, which adds a self-attention block to improve the ability of both G and D to model global structure, and ProGAN, which trains high-resolution GANs in the single-class setting by training a single model across a sequence of increasing resolutions. In conditional GANs, class information can be fed into the model in various ways: it can be provided to G by concatenating a 1-hot class vector to the noise vector, with the objective modified to encourage conditional samples to maximize the corresponding class probability predicted by an auxiliary classifier; de Vries et al. instead modify the way class conditioning is passed to G by supplying it with class-conditional gains and biases in BatchNorm layers; and in projection-based discriminators, D is conditioned by using the cosine similarity between its features and a set of learned class embeddings as additional evidence for distinguishing real and generated samples, effectively encouraging generation of samples whose features match a learned class prototype. Objectively evaluating implicit generative models is difficult. A variety of works have proposed heuristics for measuring the sample quality of models without tractable likelihoods (BID4). Of these, the Inception Score and Fréchet Inception Distance have become popular despite their notable flaws (BID2). We employ them as approximate measures of sample quality, and to enable comparison against previous work. TAB1 reports ablations of our proposed modifications; in that table, Batch is the batch size, Param is the total number of parameters, Ch. is the channel multiplier representing the number of units in each layer, Shared indicates the use of shared embeddings, Skip-z indicates the use of skip connections from the latent to multiple layers, Ortho. is Orthogonal Regularization, and Itr indicates whether the setting is stable to 10^6 iterations, or the iteration at which it collapses. Other than rows 1-4, results are computed across 8 random initializations.
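Since the Inception Score is one of the two main measures used throughout the paper, a simplified sketch of its standard definition, IS = exp(E_x[KL(p(y|x) || p(y))]), is given below; it omits the split-averaging typically used in practice, and the function name and array layout are our own rather than taken from the paper.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, num_classes) softmax outputs of an Inception-style
    classifier evaluated on N generated images."""
    p_y = probs.mean(axis=0, keepdims=True)          # marginal label distribution p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))                  # exp of the mean KL divergence
```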
In this section, we explore methods for scaling up GAN training to reap the performance benefits of larger models and larger batches. As a baseline, we employ the SA-GAN architecture of , which uses the hinge loss GAN objective. We provide class information to G with class-conditional BatchNorm (; de) and to D with projection . The optimization settings follow (notably employing Spectral Norm in G) with the modification that we halve the learning rates and take two D steps per G step. For evaluation, we employ moving averages of G's weights following;; , with a decay of 0.9999. We use Orthogonal Initialization , whereas previous works used N (0, 0.02I) or Xavier initialization . Each model is trained on 128 to 512 cores of a Google TPUv3 Pod , and computes BatchNorm statistics in G across all devices, rather than per-device as is typical. We find progressive growing unnecessary even for our 512×512 models. Additional details are in Appendix C.We begin by increasing the batch size for the baseline model, and immediately find tremendous benefits in doing so. Rows 1-4 of TAB1 show that simply increasing the batch size by a factor of 8 improves the state-of-the-art IS by 46%. We conjecture that this is a of each batch covering more modes, providing better gradients for both networks. One notable side effect of this scaling is that our models reach better final performance in fewer iterations, but become unstable and undergo complete training collapse. We discuss the causes and ramifications of this in Section 4. For these experiments, we report scores from checkpoints saved just before collapse. We then increase the width (number of channels) in each layer by 50%, approximately doubling the number of parameters in both models. This leads to a further IS improvement of 21%, which we posit is due to the increased capacity of the model relative to the complexity of the dataset. Doubling the depth did not initially lead to improvement -we addressed this later in the BigGAN-deep model, which uses a different residual block structure. We note that class embeddings c used for the conditional BatchNorm layers in G contain a large number of weights. Instead of having a separate layer for each embedding , we opt to use a shared embedding, which is linearly projected to each layer's gains and biases . This reduces computation and memory costs, and improves training speed (in number of iterations required to reach a given performance) by 37%. Next, we add direct skip connections (skip-z) from the noise vector z to multiple layers of G rather than just the initial layer. The intuition behind this design is to allow G to use the latent space to directly influence features at different resolutions and levels of hierarchy. In BigGAN, this is accomplished by splitting z into one chunk per resolution, and concatenating each chunk to the conditional vector c which gets projected to the BatchNorm gains and biases. In BigGAN-deep, we use an even simpler design, concatenating the entire z with the conditional vector without splitting it into chunks. Previous works have considered variants of this concept; our implementation is a minor modification of this design. Skip-z provides a modest performance improvement of around 4%, and improves training speed by a further 18%. Unlike models which need to backpropagate through their latents, GANs can employ an arbitrary prior p(z), yet the vast majority of previous works have chosen to draw z from either N (0, I) or U[−1, 1]. 
We question the optimality of this choice and explore alternatives in Appendix E. Remarkably, our best results come from using a different latent distribution for sampling than was used in training. Taking a model trained with z ∼ N(0, I) and sampling z from a truncated normal (where values which fall outside a range are resampled to fall inside that range) immediately provides a boost to IS and FID. We call this the Truncation Trick: truncating a z vector by resampling the values with magnitude above a chosen threshold leads to an improvement in individual sample quality at the cost of a reduction in overall sample variety. FIG0 (a) demonstrates this: as the threshold is reduced, and elements of z are truncated towards zero (the mode of the latent distribution), individual samples approach the mode of G's output distribution. Related observations about this trade-off have been made previously. This technique allows fine-grained, post-hoc selection of the trade-off between sample quality and variety for a given G. Notably, we can compute FID and IS for a range of thresholds, obtaining a variety-fidelity curve reminiscent of the precision-recall curve (FIG17). As IS does not penalize lack of variety in class-conditional models, reducing the truncation threshold leads to a direct increase in IS (analogous to precision). FID penalizes lack of variety (analogous to recall) but also rewards precision, so we initially see a moderate improvement in FID, but as truncation approaches zero and variety diminishes, the FID sharply drops. The distribution shift caused by sampling with different latents than those seen in training is problematic for many models. Some of our larger models are not amenable to truncation, producing saturation artifacts (FIG0) when fed truncated noise. To counteract this, we seek to enforce amenability to truncation by conditioning G to be smooth, so that the full space of z will map to good output samples. For this, we turn to Orthogonal Regularization BID5, which directly enforces the orthogonality condition:

R_β(W) = β ||W^T W − I||_F^2,

where W is a weight matrix and β a hyperparameter. This regularization is known to often be too limiting, so we explore several variants designed to relax the constraint while still imparting the desired smoothness to our models. The version we find to work best removes the diagonal terms from the regularization, and aims to minimize the pairwise cosine similarity between filters but does not constrain their norm:

R_β(W) = β ||W^T W ⊙ (1 − I)||_F^2,

where 1 denotes a matrix with all elements set to 1 and ⊙ denotes elementwise multiplication. We sweep β values and select 10^−4, finding this small added penalty sufficient to improve the likelihood that our models will be amenable to truncation. Across runs in TAB1, we observe that without Orthogonal Regularization, only 16% of models are amenable to truncation, compared to 60% when trained with Orthogonal Regularization. We find that current GAN techniques are sufficient to enable scaling to large models and distributed, large-batch training. We find that we can dramatically improve the state of the art and train models up to 512×512 resolution without need for explicit multiscale methods such as progressive growing. Despite these improvements, our models undergo training collapse, necessitating early stopping in practice. In the next two sections we investigate why settings which were stable in previous works become unstable when applied at scale.
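A minimal sketch of the two techniques above - truncated sampling by resampling out-of-range values, and the diagonal-free orthogonal penalty - is given below. The reshaping of a convolutional kernel into a 2-D matrix of filters and the function names are our own choices; β = 10^−4 follows the value selected above.

```python
import numpy as np
import torch

def truncated_noise(batch_size, dim, threshold):
    """Sample z ~ N(0, I) and resample any entry with |z_i| > threshold."""
    z = np.random.randn(batch_size, dim)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = np.random.randn(int(mask.sum()))
        mask = np.abs(z) > threshold
    return z

def ortho_penalty(weight, beta=1e-4):
    """beta * || Gram(W) masked by (1 - I) ||_F^2, where the rows of W are the
    flattened filters, i.e. the diagonal-free Gram penalty described above."""
    w = weight.reshape(weight.size(0), -1)
    gram = w @ w.t()
    mask = 1.0 - torch.eye(gram.size(0), device=gram.device)
    return beta * ((gram * mask) ** 2).sum()
```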
Much previous work has investigated GAN stability from a variety of analytical angles and on toy problems, but the instabilities we observe occur for settings which are stable at small scale, necessitating direct analysis at large scale. We monitor a range of weight, gradient, and loss statistics during training, in search of a metric which might presage the onset of training collapse. We found the top three singular values σ0, σ1, σ2 of each weight matrix to be the most informative. They can be efficiently computed using the Arnoldi iteration method (Golub & Van der Vorst), which extends the power iteration method used for Spectral Normalization to the estimation of additional singular vectors and values. A clear pattern emerges, as can be seen in FIG1 (a) and Appendix F: most G layers have well-behaved spectral norms, but some layers (typically the first layer in G, which is over-complete and not convolutional) are ill-behaved, with spectral norms that grow throughout training and explode at collapse. To ascertain whether this pathology is a cause of collapse or merely a symptom, we study the effects of imposing additional conditioning on G to explicitly counteract spectral explosion. First, we directly regularize the top singular value σ0 of each weight, either towards a fixed value σ_reg or towards some ratio r of the second singular value, r · sg(σ1) (with sg the stop-gradient operation, used to prevent the regularization from increasing σ1). Alternatively, we employ a partial singular value decomposition to instead clamp σ0. Given a weight W, its first singular vectors u0 and v0, and σ_clamp the value to which σ0 will be clamped, our weights become

W = W − max(0, σ0 − σ_clamp) v0 u0^T,

where σ_clamp is set to either σ_reg or r · sg(σ1). We observe that both with and without Spectral Normalization these techniques have the effect of preventing the gradual increase and explosion of either σ0 or the ratio σ0/σ1, but even though in some cases they mildly improve performance, no combination prevents training collapse. This evidence suggests that while conditioning G might improve stability, it is insufficient to ensure stability. We accordingly turn our attention to D. As with G, we analyze the spectra of D's weights to gain insight into its behavior, then seek to stabilize training by imposing additional constraints. FIG1 (b) displays a typical plot of σ0 for D (with further plots in Appendix F). Unlike G, we see that the spectra are noisy, σ0/σ1 is well-behaved, and the singular values grow throughout training but only jump at collapse, instead of exploding. The spikes in D's spectra might suggest that it periodically receives very large gradients, but we observe that the Frobenius norms are smooth (Appendix F), suggesting that this effect is primarily concentrated on the top few singular directions. We posit that this noise is a result of optimization through the adversarial training process, where G periodically produces batches which strongly perturb D. If this spectral noise is causally related to instability, a natural counter is to employ gradient penalties, which explicitly regularize changes in D's Jacobian. We explore the R1 zero-centered gradient penalty from Mescheder et al.:

R1 := (γ/2) E_{x∼p_D(x)} [ ||∇D(x)||_F^2 ].

With the default suggested γ strength of 10, training becomes stable and improves the smoothness and boundedness of spectra in both G and D, but performance severely degrades, resulting in a 45% reduction in IS.
Reducing the penalty partially alleviates this degradation, but results in increasingly ill-behaved spectra; even with the penalty strength reduced to 1 (the lowest strength for which sudden collapse does not occur) the IS is reduced by 20%. Repeating this experiment with various strengths of Orthogonal Regularization, DropOut, and L2 regularization (see Appendix I for details) reveals similar behaviors for these regularization strategies: with high enough penalties on D, training stability can be achieved, but at a substantial cost to performance. We also observe that D's loss approaches zero during training, but undergoes a sharp upward jump at collapse (Appendix F). One possible explanation for this behavior is that D is overfitting to the training set, memorizing training examples rather than learning some meaningful boundary between real and generated images. As a simple test for D's memorization (related to Gulrajani et al.), we evaluate uncollapsed discriminators on the ImageNet training and validation sets, and measure what percentage of samples are classified as real or generated. While the training accuracy is consistently above 98%, the validation accuracy falls in the range of 50-55%, no better than random guessing (regardless of regularization strategy). This confirms that D is indeed memorizing the training set; we deem this in line with D's role, which is not explicitly to generalize, but to distill the training data and provide a useful learning signal for G. Additional experiments and discussion are provided in Appendix G. We find that stability does not come solely from G or D, but from their interaction through the adversarial training process. While the symptoms of their poor conditioning can be used to track and identify instability, ensuring reasonable conditioning proves necessary for training but insufficient to prevent eventual training collapse. It is possible to enforce stability by strongly constraining D, but doing so incurs a dramatic cost in performance. With current techniques, better final performance can be achieved by relaxing this conditioning and allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results.
Table 2: Evaluation of models at different resolutions. We report scores without truncation (Column 3), scores at the best FID (Column 4), scores at the IS of validation data (Column 5), and scores at the max IS (Column 6); entries are FID / IS, and standard deviations are computed over at least three random initializations.
BigGAN-deep 128: 5.7 ± .3 / 124.5 ± 2 | 6.3 ± .3 / 148.1 ± 4 | 7.4 ± .6 / 166.5 ± 1 | 25 ± 2 / 253 ± 11
BigGAN-deep 256: 6.9 ± .2 / 171.4 ± 2 | 7.0 ± .1 / 202.6 ± 2 | 8.1 ± .1 / 232.5 ± 2 | 27 ± 8 / 317 ± 6
BigGAN-deep 512: 7.5 / 152.8 | 7.7 / 181.4 | 11.5 / 241.5 | 39.7 / 298
5 EXPERIMENTS We evaluate our models on ImageNet ILSVRC 2012 at 128×128, 256×256, and 512×512 resolutions, employing the settings from TAB1, row 8. The samples generated by our models are presented in FIG2, with additional samples in Appendix A and online. We report IS and FID in TAB3. As our models are able to trade sample variety for quality, it is unclear how best to compare against prior art; we accordingly report values at three settings, with complete curves in Appendix D. First, we report the FID/IS values at the truncation setting which attains the best FID.
Second, we report the FID at the truncation setting for which our model's IS is the same as that attained by the real validation data, reasoning that this is a passable measure of the maximum sample variety achieved while still achieving a good level of "objectness." Third, we report FID at the maximum IS achieved by each model, to demonstrate how much variety must be traded off to maximize quality. In all three cases, our models outperform the previous state-of-the-art IS and FID scores. In addition to the BigGAN model introduced in the first version of the paper and used in the majority of experiments (unless otherwise stated), we also present a 4x deeper model (BigGAN-deep) which uses a different configuration of residual blocks. As can be seen from these results, our findings extend to other architectures, and increased depth leads to an improvement in sample quality. Both the BigGAN and BigGAN-deep architectures are described in Appendix B. Our observation that D overfits to the training set, coupled with our model's sample quality, raises the obvious question of whether or not G simply memorizes training points. To test this, we perform class-wise nearest neighbors analysis in pixel space and the feature space of pre-trained classifier networks (Appendix A). In addition, we present both interpolations between samples and class-wise interpolations (where z is held constant) in Figures 8 and 9. Our model convincingly interpolates between disparate samples, and the nearest neighbors for its samples are visually distinct, suggesting that our model does not simply memorize training data. We note that some failure modes of our partially-trained models are distinct from those previously observed. Most previous failures involve local artifacts, images consisting of texture blobs instead of objects, or the canonical mode collapse. We observe class leakage, where images from one class contain properties of another, as exemplified by FIG2. We also find that many classes on ImageNet are more difficult than others for our model; our model is more successful at generating dogs (which make up a large portion of the dataset, and are mostly distinguished by their texture) than crowds (which comprise a small portion of the dataset and have more large-scale structure). Further discussion is available in Appendix A. To confirm that our design choices are effective for even larger, more complex and diverse datasets, we also present results of our system on a subset of JFT-300M. The full JFT-300M dataset contains 300M real-world images labeled with 18K categories. Since the category distribution is heavily long-tailed, we subsample the dataset to keep only images with the 8.5K most common labels. The resulting dataset contains 292M images - two orders of magnitude larger than ImageNet. For images with multiple labels, we sample a single label randomly and independently whenever an image is sampled. To compute IS and FID for the GANs trained on this dataset, we use an Inception v2 classifier trained on this dataset. Quantitative results are presented in Table 3 (BigGAN on JFT-300M at 256×256 resolution): the FID and IS columns report these scores given by the JFT-300M-trained Inception v2 classifier with noise distributed as z ∼ N(0, I) (non-truncated), while the (min FID) / IS and FID / (max IS) columns report scores at the best FID and IS from a sweep across truncated noise distributions ranging from σ = 0 to σ = 2; images from the JFT-300M validation set have an IS of 50.88 and FID of 1.94. All models are trained with batch size 2048.
We compare an ablated version of our model - comparable to SA-GAN but with the larger batch size - against a "full" BigGAN model that makes use of all of the techniques applied to obtain the best results on ImageNet (shared embedding, skip-z, and orthogonal regularization). Our results show that these techniques substantially improve performance even in the setting of this much larger dataset at the same model capacity (64 base channels). We further show that for a dataset of this scale, we see significant additional improvements from expanding the capacity of our models to 128 base channels, while for ImageNet GANs that additional capacity was not beneficial. In FIG6 (Appendix D), we present truncation plots for models trained on this dataset. Unlike for ImageNet, where truncation limits of σ ≈ 0 tend to produce the highest fidelity scores, IS is typically maximized for our JFT-300M models when the truncation value σ ranges from 0.5 to 1. We suspect that this is at least partially due to the intra-class variability of JFT-300M labels, as well as the relative complexity of the image distribution, which includes images with multiple objects at a variety of scales. Interestingly, unlike models trained on ImageNet, where training tends to collapse without heavy regularization (Section 4), the models trained on JFT-300M remain stable over many hundreds of thousands of iterations. This suggests that moving beyond ImageNet to larger datasets may partially alleviate GAN stability issues. The improvement over the baseline GAN model that we achieve on this dataset without changes to the underlying models or training and regularization techniques (beyond expanded capacity) demonstrates that our findings extend from ImageNet to datasets with scale and complexity thus far unprecedented for generative models of images. We have demonstrated that Generative Adversarial Networks trained to model natural images of multiple categories highly benefit from scaling up, both in terms of fidelity and variety of the generated samples. As a result, our models set a new level of performance among ImageNet GAN models, improving on the state of the art by a large margin. We have also presented an analysis of the training behavior of large scale GANs, characterized their stability in terms of the singular values of their weights, and discussed the interplay between stability and performance. In the BigGAN model (FIG3), we use the ResNet GAN architecture, which is identical to that used in prior work, but with the channel pattern in D modified so that the number of filters in the first convolutional layer of each block is equal to the number of output filters (rather than the number of input filters, as in Miyato et al. and Gulrajani et al.). We use a single shared class embedding in G, and skip connections for the latent vector z (skip-z). In particular, we employ hierarchical latent spaces, so that the latent vector z is split along its channel dimension into chunks of equal size (20-D in our case), and each chunk is concatenated to the shared class embedding and passed to a corresponding residual block as a conditioning vector. The conditioning of each block is linearly projected to produce per-sample gains and biases for the BatchNorm layers of the block. The bias projections are zero-centered, while the gain projections are centered at 1. Since the number of residual blocks depends on the image resolution, the full dimensionality of z is 120 for 128 × 128, 140 for 256 × 256, and 160 for 512 × 512 images.
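A rough sketch of the conditioning mechanism just described - a hierarchical latent split into 20-D chunks, each concatenated with the shared class embedding and linearly projected to per-sample BatchNorm gains and biases - is given below. The module and function names, as well as initialization details, are our own simplifications and not taken from the paper.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose gain and bias are linear projections of a conditioning
    vector; gains are centered at 1 and biases at 0, as described above."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gain = nn.Linear(cond_dim, num_features)
        self.bias = nn.Linear(cond_dim, num_features)

    def forward(self, x, cond):
        g = (1.0 + self.gain(cond)).unsqueeze(-1).unsqueeze(-1)
        b = self.bias(cond).unsqueeze(-1).unsqueeze(-1)
        return g * self.bn(x) + b

def hierarchical_conditioning(z, class_emb, chunk_size=20):
    """Split z into 20-D chunks and concatenate each with the shared class
    embedding, producing one conditioning vector per residual block."""
    return [torch.cat([class_emb, c], dim=1)
            for c in torch.split(z, chunk_size, dim=1)]
```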
The BigGAN-deep model (Figure 16) differs from BigGAN in several aspects. It uses a simpler variant of skip-z conditioning: instead of first splitting z into chunks, we concatenate the entire z with the class embedding, and pass the resulting vector to each residual block through skip connections. BigGAN-deep is based on residual blocks with bottlenecks, which incorporate two additional 1 × 1 convolutions: the first reduces the number of channels by a factor of 4 before the more expensive 3 × 3 convolutions; the second produces the required number of output channels. While BigGAN relies on 1 × 1 convolutions in the skip connections whenever the number of channels needs to change, in BigGAN-deep we use a different strategy aimed at preserving identity throughout the skip connections. In G, where the number of channels needs to be reduced, we simply retain the first group of channels and drop the rest to produce the required number of channels. In D, where the number of channels should be increased, we pass the input channels unperturbed and concatenate them with the remaining channels produced by a 1 × 1 convolution. As far as the network configuration is concerned, the discriminator is an exact reflection of the generator. There are two blocks at each resolution (BigGAN uses one), and as a result BigGAN-deep is four times deeper than BigGAN. Despite their increased depth, the BigGAN-deep models have significantly fewer parameters, mainly due to the bottleneck structure of their residual blocks. For example, the 128 × 128 BigGAN-deep G and D have 50.4M and 34.6M parameters respectively, while the corresponding original BigGAN models have 70.4M and 88.0M parameters. All BigGAN-deep models use attention at 64 × 64 resolution, channel width multiplier ch = 128, and z ∈ R^128. For BigGAN-deep, we use a learning rate of 2·10^−4 in D and 5·10^−5 in G for 128 × 128 models, and 2.5·10^−5 in both D and G for 256 × 256 and 512 × 512 models. We experimented with the number of D steps per G step (varying it from 1 to 6) and found that two D steps per G step gave the best results. We use an exponential moving average of the weights of G at sampling time, with a decay rate set to 0.9999. We employ cross-replica BatchNorm in G, where batch statistics are aggregated across all devices, rather than a single device as in standard implementations. Spectral Normalization is used in both G and D, following SA-GAN. We train on a Google TPU v3 Pod, with the number of cores proportional to the resolution: 128 for 128×128, 256 for 256×256, and 512 for 512×512. Training takes between 24 and 48 hours for most models. We increase ε from the default 10^−8 to 10^−4 in BatchNorm and Spectral Norm to mollify low-precision numerical issues. We preprocess data by cropping along the long edge and rescaling to a given resolution with area resampling. The default behavior with batch-normalized classifier networks is to use a running average of the activation moments at test time. Previous works have instead used batch statistics when sampling images. While this is not technically an invalid way to sample, it means that results are dependent on the test batch size (and how many devices it is split across), and further complicates reproducibility. We find that this detail is extremely important, with changes in test batch size producing drastic changes in performance.
This is further exacerbated when one uses exponential moving averages of G's weights for sampling, as the BatchNorm running averages are computed with non-averaged weights and are poor estimates of the activation statistics for the averaged weights. To counteract both these issues, we employ "standing statistics," where we compute activation statistics at sampling time by running the G through multiple forward passes (typically 100) each with different batches of random noise, and storing means and variances aggregated across all forward passes. Analogous to using running statistics, this in G's outputs becoming invariant to batch size and the number of devices, even when producing a single sample. We run our networks on CIFAR-10 using the settings from TAB1, row 8, and achieve an IS of 9.22 and an FID of 14.73 without truncation. We compute the IS for both the training and validation sets of ImageNet. At 128×128 the training data has an IS of 233, and the validation data has an IS of 166. At 256×256 the training data has an IS of 377, and the validation data has an IS of 234. At 512×512 the training data has an IS of 348, and the validation data has an IS of 241. The discrepancy between training and validation scores is due to the Inception classifier having been trained on the training data, ing in high-confidence outputs that are preferred by the Inception Score. FIG6: JFT-300M IS vs. FID at 256×256. We show truncation values from σ = 0 to σ = 2 (top) and from σ = 0.5 to σ = 1.5 (bottom). Each curve corresponds to a row in Table 3. The curve labeled with baseline corresponds to the first row (with orthogonal regularization and other techniques disabled), while the rest correspond to rows 2-4 -the same architecture at different capacities (Ch). While most previous work has employed N (0, I) or U[−1, 1] as the prior for z (the noise input to G), we are free to choose any latent distribution from which we can sample. We explore the choice of latents by considering an array of possible designs, described below. For each latent, we provide the intuition behind its design and briefly describe how it performs when used as a drop-in replacement for z ∼ N (0, I) in an SA-GAN baseline. As the Truncation Trick proved more beneficial than switching to any of these latents, we do not perform a full ablation study, and employ z ∼ N (0, I) for our main to take full advantage of truncation. The two latents which we find to work best without truncation are Bernoulli {0, 1} and Censored Normal max (N (0, I), 0), both of which improve speed of training and lightly improve final performance, but are less amenable to truncation. We also ablate the choice of latent space dimensonality (which by default is z ∈ R 128), finding that we are able to successfully train with latent dimensions as low as z ∈ R 8, and that with z ∈ R 32 we see a minimal drop in performance. While this is substantially smaller than many previous works, direct comparison to single-class networks (such as those in , which employ a z ∈ R 512 latent space on a highly constrained dataset with 30,000 images) is improper, as our networks have additional class information provided as input. • N (0, I). A standard choice of the latent space which we use in the main experiments.• U[−1, 1]. Another standard choice; we find that it performs similarly to N (0, I).• Bernoulli {0, 1}. 
A discrete latent might reflect our prior that underlying factors of variation in natural images are not continuous, but discrete (one feature is present, another is not). This latent outperforms N(0, I) (in terms of IS) by 8% and requires 60% fewer iterations.
• max(N(0, I), 0), also called Censored Normal. This latent is designed to introduce sparsity in the latent space (reflecting our prior that certain latent features are sometimes present and sometimes not), but also to allow those latents to vary continuously, expressing different degrees of intensity for latents which are active. This latent outperforms N(0, I) (in terms of IS) by 15-20% and tends to require fewer iterations.
• Bernoulli {−1, 1}. This latent is designed to be discrete, but not sparse (as the network can learn to activate in response to negative inputs). This latent performs near-identically to N(0, I).
• Independent Categorical in {−1, 0, 1}, with equal probability. This distribution is chosen to be discrete and have sparsity, but also to allow latents to take on both positive and negative values. This latent performs near-identically to N(0, I).
• N(0, I) multiplied by Bernoulli {0, 1}. This distribution is chosen to have continuous latent factors which are also sparse (with a peak at zero), similar to Censored Normal but not constrained to be positive. This latent performs near-identically to N(0, I).
• Concatenating N(0, I) and Bernoulli {0, 1}, each taking half of the latent dimensions. This is chosen to allow some factors of variation to be discrete, while others are continuous. This latent outperforms N(0, I) by around 5%.
• Variance annealing: we sample from N(0, σI), where σ is allowed to vary over training. We compared a variety of piecewise schedules and found that starting with σ = 2 and annealing towards σ = 1 over the course of training mildly improved performance. The space of possible variance schedules is large, and we did not explore it in depth - we suspect that a more principled or better-tuned schedule could more strongly impact performance.
• Per-sample variable variance: N(0, σ_i I), where σ_i ∼ U[σ_l, σ_h] independently for each sample i in a batch, and (σ_l, σ_h) are hyperparameters. This distribution was chosen to try to improve amenability to the Truncation Trick by feeding the network noise samples with non-constant variance. This did not appear to affect performance, but we did not explore it in depth. One might also consider scheduling (σ_l, σ_h), similar to variance annealing.
Figure 21: G training statistics with σ0 in G regularized towards 1. Collapse occurs after 125000 iterations.
Figure 22: D training statistics with σ0 in G regularized towards 1. Collapse occurs after 125000 iterations.
In this section, we present and discuss additional investigations into the stability of our models, expanding upon the discussion in Section 4. The symptoms of collapse are sharp and sudden, with sample quality dropping from its peak to its lowest value over the course of a few hundred iterations. We can detect this collapse when the singular values in G explode, but while the (unnormalized) singular values grow throughout training, there is no consistent threshold at which collapse occurs.
This raises the question of whether it is possible to prevent or delay collapse by taking a model checkpoint several thousand iterations before collapse, and continuing training with some hyperparameters modified (e.g., the learning rate). We conducted a range of intervention experiments wherein we took checkpoints of a collapsed model ten or twenty thousand iterations before collapse, changed some aspect of the training setup, then observed whether collapse occurred, when it occurred relative to the original collapse, and the final performance attained at collapse. We found that increasing the learning rates (relative to their initial values) in either G or D, or both G and D, led to immediate collapse. This occurred even when doubling the learning rates from 2·10^−4 in D and 5·10^−5 in G, to 4·10^−4 in D and 1·10^−4 in G, a setting which is not normally unstable when used as the initial learning rates. We also tried changing the momentum terms (Adam's β1 and β2), or resetting the momentum vectors to zero, but this tended to either make no difference or, when increasing the momentum, cause immediate collapse. In D's spectra (FIG0), the noise spikes resemble an impulse response: at each spike, the spectra jump upwards, then slowly decrease, with some oscillation. One possible explanation is that this behavior is a consequence of D memorizing the training data, as suggested by experiments in Section 4.2. As it approaches perfect memorization, it receives less and less signal from real data, as both the original GAN loss and the hinge loss provide zero gradients when D outputs a confident and correct prediction for a given example. If the gradient signal from real data attenuates to zero, this can result in D eventually becoming biased due to exclusively receiving gradients that encourage its outputs to be negative. If this bias passes a certain threshold, D will eventually misclassify a large number of real examples and receive a large gradient encouraging positive outputs, resulting in the observed impulse responses. This argument suggests several fixes. First, one might consider an unbounded loss (such as the Wasserstein loss BID1) which would not suffer this gradient attenuation. We found that even with gradient penalties and brief re-tuning of optimizer hyperparameters, our models did not stably train for more than a few thousand iterations with this loss. We instead explored changing the margin of the hinge loss as a partial compromise: for a given model and minibatch of data, increasing the margin will result in more examples falling within the margin, and thus contributing to the loss. We explored a range of novel and existing techniques which ended up degrading or otherwise not affecting performance in our setting. We report them here; our evaluations for this section are not as thorough as those for the main architectural choices. Our intention in reporting these is to save time for future work, and to give a more complete picture of our attempts to improve performance or stability. We note, however, that these must be understood to be specific to the particular setup we used. A pitfall of reporting negative results is that one might report that a particular technique doesn't work, when the reality is that this technique did not have the desired effect when applied in a particular way to a particular problem.
Drawing overly general conclusions might close off potentially fruitful avenues of research.
• We found that doubling the depth (by inserting an additional Residual block after every up- or down-sampling block) hampered performance.
• We experimented with sharing class embeddings between both G and D (as opposed to just within G). This is accomplished by replacing D's class embedding with a projection from G's embeddings, as is done in G's BatchNorm layers. In our initial experiments this seemed to help and accelerate training, but we found this trick scaled poorly and was sensitive to optimization hyperparameters, particularly the choice of number of D steps per G step.
• We tried replacing BatchNorm in G with WeightNorm, but this crippled training. We also tried removing BatchNorm and only having Spectral Normalization, but this also crippled training.
• We tried adding BatchNorm to D (both class-conditional and unconditional) in addition to Spectral Normalization, but this crippled training.
• We tried varying the choice of location of the attention block in G and D (and inserting multiple attention blocks at different resolutions) but found that at 128×128 there was no noticeable benefit to doing so, and compute and memory costs increased substantially. We found a benefit to moving the attention block up one stage when moving to 256×256, which is in line with our expectations given the increased resolution.
• We tried using filter sizes of 5 or 7 instead of 3 in either G or D or both. We found that having a filter size of 5 in G only provided a small improvement over the baseline but came at an unjustifiable compute cost. All other settings degraded performance.
• We tried varying the dilation for convolutional filters in both G and D at 128×128, but found that even a small amount of dilation in either network degraded performance.
• We tried bilinear upsampling in G in place of nearest-neighbors upsampling, but this degraded performance.
• In some of our models, we observed class-conditional mode collapse, where the model would only output one or two samples for a subset of classes but was still able to generate samples for all other classes. We noticed that the collapsed classes had embeddings which had become very large relative to the other embeddings, and attempted to ameliorate this issue by applying weight decay to the shared embedding only. We found that small amounts of weight decay (10^−6) instead degraded performance, and that only even smaller values (10^−8) did not degrade performance, but these values were also too small to prevent the class vectors from exploding. Higher-resolution models appear to be more resilient to this problem, and none of our final models appear to suffer from this type of collapse.
• We experimented with using MLPs instead of linear projections from G's class embeddings to its BatchNorm gains and biases, but did not find any benefit to doing so. We also experimented with Spectrally Normalizing these MLPs, and with providing these (and the linear projections) with a bias at their output, but did not notice any benefit.
• We tried gradient norm clipping (both the global variant typically used in recurrent networks, and a local version where the clipping value is determined on a per-parameter basis) but found this did not alleviate instability. | [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
] | B1xsqj09Fm | GANs benefit from scaling up. |
In this work, we present a novel upper bound on the target error to address the problem of unsupervised domain adaptation. Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks. Furthermore, prior theory provides an upper bound on the target error when transferring knowledge, which can be summarized as minimizing the source error and the distance between marginal distributions simultaneously. However, common methods based on this theory usually ignore the joint error, such that samples from different classes might be mixed together when matching marginal distributions. In such a case, no matter how we minimize the marginal discrepancy, the target error is not bounded, due to an increasing joint error. To address this problem, we propose a general upper bound taking the joint error into account, such that the undesirable case can be properly penalized. In addition, we utilize a constrained hypothesis space to further formalize a tighter bound, as well as a novel cross margin discrepancy to measure the dissimilarity between hypotheses, which alleviates instability during adversarial learning. Extensive empirical evidence shows that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks. The advent of deep convolutional neural networks brings visual learning into a new era. However, the performance heavily relies on the abundance of data annotated with ground-truth labels. Since traditional machine learning assumes a model is trained and verified on a fixed distribution (single domain), where generalization performance is guaranteed by VC theory, it cannot always be applied to real-world problems directly. Take the image classification task as an example: a number of factors, such as changes in lighting, noise, the angle at which the image is taken, and different types of sensors, can lead to a domain shift and thus harm the performance when predicting on test data. Therefore, in many practical cases, we wish that a model trained in one or more source domains is also applicable to another domain. As a solution, domain adaptation (DA) aims to transfer the knowledge learned from a source distribution, which is typically fully labeled, into a different (but related) target distribution. This work focuses on the most challenging case, i.e., unsupervised domain adaptation (UDA), where no target label is available. Existing theory suggests that the target error can be minimized by bounding the error of a model on the source data, the discrepancy between the distributions of the two domains, and a small optimal joint error. Owing to the strong representation power of deep neural nets, many researchers focus on learning domain-invariant features such that the discrepancy between the two feature spaces can be minimized. For aligning feature distributions across domains, mainly two strategies have been substantially explored. The first one is bridging the distributions by matching all their statistics. The second strategy is using adversarial learning to build a minimax game between a domain discriminator and a feature extractor, where the domain discriminator is trained to distinguish the source from the target while the feature extractor is learned to confuse it simultaneously. In spite of the remarkable empirical results accomplished by feature distribution matching schemes, they still suffer from a major limitation: the joint distributions of feature spaces and categories are not well aligned across data domains.
As is reported in previous work, such methods fail to generalize in certain closely related source/target pairs, e.g., digit classification adaptation from MNIST to SVHN. One potential reason is that when matching the marginal distributions of the source and target domains, samples from different classes can be mixed together, in which case the joint error becomes non-negligible since no hypothesis can classify source and target correctly at the same time. This work aims to address the above problem by incorporating the joint error to formalize an optimizable upper bound such that the undesired overlap due to a wrong match can be properly penalized. We evaluate our proposal on several different classification tasks. In some experimental settings, our method outperforms other methods by a large margin. The contributions of this work can be summarized as follows: · We propose a novel upper bound taking the joint error into account and theoretically prove that our proposal can reduce to several other methods under certain simplifications. · We construct a constrained hypothesis space such that a much tighter bound can be obtained during optimization. · We adopt a novel measurement, namely the cross margin discrepancy, for the dissimilarity of two hypotheses on a certain domain, to alleviate the instability during adversarial learning and provide reliable performance. The upper bound proposed in earlier theoretical work has motivated numerous approaches that focus on reducing the gap between source and target domains by learning domain-invariant features, which can be achieved through statistical moment matching. Long et al. (2015) use maximum mean discrepancy (MMD) to match the hidden representations of certain layers in a deep neural network. Transfer Component Analysis (TCA) tries to learn a subspace across domains in a Reproducing Kernel Hilbert Space (RKHS) using MMD that dramatically minimizes the distance between domain distributions. Adaptive batch normalization (AdaBN) modulates the statistics from source to target on batch normalization layers across the network in a parameter-free way. Another way to learn domain-invariant features is by leveraging generative adversarial networks to produce target features that exactly match the source. Adversarial approaches relax the divergence measurement in the upper bound by a worst case, which is equivalent to the maximum accuracy a discriminator can possibly achieve when distinguishing source from target. Follow-up work keeps this idea but separates the training procedure into a classification stage and an adversarial learning stage, where an independent feature extractor is used for the target. Saito et al. (2017b) explore a tighter bound by explicitly utilizing task-specific classifiers as discriminators, such that features nearby the support of source samples will be favored by the extractor. Recent work introduces the margin disparity discrepancy, a novel measurement with rigorous generalization bounds, tailored to distribution comparison with an asymmetric margin loss, to bridge the gap between theory and algorithm. Methods that perform distribution alignment at the pixel level on raw inputs, known as image-to-image translation, have also been proposed. Distribution matching may not only bring the source and target domains closer, but also mix samples with different class labels together. Therefore, Saito et al. (2017a) and others aim to use pseudo-labels to learn target discriminative representations, encouraging a low-density separation between classes in the target domain. However, this usually requires an auxiliary data-dependent hyper-parameter to set a threshold for a reliable prediction.
Conditional adversarial domain adaptation has also been presented as a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions, where the back-propagation of the training objective is highly dependent on pseudo-labels. We consider unsupervised domain adaptation as a binary classification task (our proposal holds for the multi-class case) where the learning algorithm has access to a set of n labeled points {(x_i^s, y_i^s)}_{i=1}^n sampled i.i.d. from the source domain S and a set of m unlabeled points {x_i^t}_{i=1}^m ⊂ X sampled i.i.d. from the target domain T. Let f_S: X → {0, 1} and f_T: X → {0, 1} be the optimal labeling functions on the source and target domains, respectively. Let ε (usually the 0-1 loss) denote a distance metric between two functions over a distribution that satisfies symmetry and the triangle inequality. As a commonly used notation, the source risk of a hypothesis h: X → {0, 1} is the error w.r.t. the true labeling function f_S under domain S, i.e., ε_S(h) := ε_S(h, f_S). Similarly, we use ε_T(h) to represent the risk on the target domain. With these notations, an upper bound on the target risk holds. The upper bound is minimized when h = f_S, in which case it is equivalent to ε_T(f_S, f_T) owing to the triangle inequality. Furthermore, we demonstrate that in such a case our proposal is equivalent to an upper bound on the optimal joint error λ. Fig. 1b illustrates a case where common methods fail to penalize the undesirable situation in which samples from different classes are mixed together during distribution matching, while our proposal is capable of doing so (for simplicity we assume f_S takes a specific form; then ε_T(f_S, f_T) measures the overlapping areas 2 and 5, which is equivalent to the optimal joint error λ). Since the optimal labeling functions f_S, f_T are not available during training, we further relax the upper bound by taking the supremum w.r.t. f_S, f_T within a hypothesis space H. Then minimizing the target risk ε_T(h) becomes optimizing a minimax game, and since the max-player taking two parameters f_1, f_2 is too strong, we introduce a feature extractor g to make the min-player stronger. Applying g to the source and target distributions, the overall optimization problem can be written as a minimax program in which (h, g) minimize and (f_1, f_2) maximize the relaxed bound. However, if we leave H unconstrained, the supremum term can be arbitrarily large. In order to obtain a tight bound, we need to restrict the size of the hypothesis space while maintaining the upper bound. For f_S ∈ H_1 ⊆ H and f_T ∈ H_2 ⊆ H, a corresponding bound holds. The constrained subspace for H_1 is trivial: according to its definition, f_S must belong to the space consisting of all classifiers for the source domain, namely H_sc. However, the constrained subspace for H_2 is a little problematic, since we have no access to the true labels of the target domain and thus it is hard to locate f_T. Therefore, the only thing we can do is to construct a hypothesis space for H_2 that most likely contains f_T. As is illustrated in Fig. 1c, when matching the distributions of the source and target domains, if the ideal case is achieved where the conditional distributions of source and target are perfectly aligned, then it is fair to assume f_T ∈ H_sc. However, if the worst case is reached where samples from different classes are mixed together, then we tend to believe f_T ∉ H_sc. Considering this, we present two proposals in the following sections based on different constraints.
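Before turning to the two proposals, a concrete illustration of the risk notation used above may help. The following sketch estimates the empirical 0-1 risk ε_D(h_1, h_2) between two labeling functions from samples of a domain and checks the triangle inequality on toy data; the labeling functions and data here are made up purely for illustration and are not part of the method.

```python
import numpy as np

def risk(h1, h2, X):
    """Empirical epsilon_D(h1, h2): 0-1 disagreement between two labeling
    functions evaluated on samples X drawn from a domain D."""
    return np.mean(h1(X) != h2(X))

# Toy illustration; all functions and data below are hypothetical.
rng = np.random.default_rng(0)
X_t = rng.normal(size=(1000, 2))                               # unlabeled target samples
f_T = lambda X: (X[:, 0] > 0).astype(int)                      # stand-in target labeling function
f_S = lambda X: (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)      # stand-in source labeling function
h = lambda X: (X[:, 0] > 0.1).astype(int)                      # a learned hypothesis

# Triangle inequality on the target domain:
# eps_T(h, f_T) <= eps_T(h, f_S) + eps_T(f_S, f_T)
lhs = risk(h, f_T, X_t)
rhs = risk(h, f_S, X_t) + risk(f_S, f_T, X_t)
assert lhs <= rhs
print(lhs, rhs)
```

The second term of the right-hand side is exactly the kind of joint-error quantity that, as argued above, cannot be ignored when aligning marginal distributions.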
We assume H_2 is a space in which each hypothesis can classify the samples from the source domain with an accuracy of γ ∈ (0, 1], namely H_sc^γ, such that we can avoid the worst case by choosing a small value for the hyper-parameter γ when a huge domain shift exists. In practice, it is difficult to actually build such a space and sample from it due to a huge computational cost. Instead, we use a weighted source risk to constrain the behavior of f_2 as an approximation to sampling from H_sc^γ, which leads to the final training objective. Alternatively, we first build a space consisting of all classifiers for the approximate target domain based on pseudo-labels, which can be obtained from the predictions of h during the training procedure, namely H_tc. Here, we assume H_2 is the intersection of the two hypothesis spaces, i.e., H_2 = H_sc ∩ H_tc. Given enough reliable pseudo-labels, we can be confident that f_T ∈ H_2. Analogously, a training objective can be derived for this constraint. The reason we make such an assumption for H_2 can be intuitively explained by Fig. 2. If H_2 = H_sc, then f_2 must perfectly classify the source samples, and it is possible that f_2 does not pass through some target samples (shadow area in Fig. 2a), especially when the two domains differ a lot. In such a case, the feature extractor can move those samples to either side of the decision boundary to reduce the training objective (shadow area), which is not a desired behavior. With an appropriate constraint (Fig. 2b), the only way for the extractor to reduce the objective (shadow area) is to move those samples (orange) inside of f_2. Following the above notations, we consider a score function s(x, y) for multi-class classification, where the output indicates the confidence of the prediction on class y. Thus an induced labeling function l_s: X → Y is given by l_s(x) = argmax_{y∈Y} s(x, y). As a well-established theory, the margin between data points and the classification surface plays a significant role in achieving strong generalization performance. In order to quantify it as a differentiable measurement serving as a surrogate of the 0-1 loss, we build on margin theory, in which a typical margin loss penalizes predictions whose margin falls below a threshold. We utilize this concept to further improve the reliability of our proposed method by leveraging this margin loss to define a novel measurement of the discrepancy between two hypotheses f_1, f_2 (e.g., softmax classifiers) over a distribution D, namely the cross margin discrepancy. Before further discussion, we first construct two distributions D_{f_1}, D_{f_2} induced by f_1 and f_2 respectively. Then we consider the case where the two hypotheses f_1 and f_2 disagree, i.e., y_1 = l_{f_1}(x) ≠ l_{f_2}(x) = y_2, and define a primitive loss for this case. The cross margin discrepancy can then be viewed as the sum of the margin loss of f_1 on D_{f_2} and the margin loss of f_2 on D_{f_1}, if we use the logarithm of the softmax as the score function. Thanks to the non-saturating trick introduced for GAN training to mitigate the burden of exploding or vanishing gradients during adversarial learning, we further define a dual form. This dual loss resembles the objective of a generative adversarial network, where the two hypotheses try to increase the probability of their own prediction and simultaneously decrease the probability of their opponent's, whereas the feature extractor is trained to increase the probability of the opponents' predictions, such that the discrepancy can be minimized without unnecessary oscillation; a small sketch of this dual form is given below.
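The exact equations of the cross margin discrepancy are not reproduced in this text, so the following is only one plausible reading of the dual form described above; the handling of the agreement case and the absence of any weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def cross_margin_discrepancy(logits1, logits2, eps=1e-6):
    """Sketch of the dual-form cross margin discrepancy on a batch of target
    samples: each hypothesis favours its own predicted class and penalises the
    probability it assigns to its opponent's prediction. Only the disagreement
    case is handled here; the paper defines a separate term for agreement."""
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    y1, y2 = p1.argmax(dim=1), p2.argmax(dim=1)
    idx = torch.arange(p1.size(0), device=logits1.device)
    disagree = y1 != y2
    if not disagree.any():
        return logits1.new_zeros(())
    i = idx[disagree]
    # Increase the probability of one's own prediction ...
    own = torch.log(p1[i, y1[i]] + eps) + torch.log(p2[i, y2[i]] + eps)
    # ... and decrease the probability assigned to the opponent's prediction.
    opp = torch.log(1 - p1[i, y2[i]] + eps) + torch.log(1 - p2[i, y1[i]] + eps)
    return (own + opp).mean()
```

In an adversarial loop, the two auxiliary hypotheses would be updated to increase this quantity on target data while the feature extractor is updated to decrease it.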
However, a big difference here is that when training the extractor, GANs usually maximize an alternative term log f_1(x, y_2) + log f_2(x, y_1) instead of directly minimizing log(1 − f_1(x, y_2)) + log(1 − f_2(x, y_1)), since the original term is close to zero if the discriminator achieves its optimum. In our case, the hypotheses can hardly beat the extractor, and thus the original form can be optimized more smoothly. During the training procedure, the two hypotheses will eventually agree on some points (l_{f_1}(x) = l_{f_2}(x) = y), so we need to define a new form of the discrepancy measurement for this case; analogously, it has a primitive loss and a dual form. Another reason why we propose such a discrepancy measurement is that it helps alleviate instability in adversarial learning. As is illustrated in Fig. 3b, during the optimization of a minimax game, when the two hypotheses try to maximize the discrepancy (shadow area), if one moves too fast around the decision boundary such that the discrepancy is actually maximized w.r.t. some samples, then these samples can be aligned on either side to decrease the discrepancy by tuning the feature extractor, which is not a desired behavior. From Fig. 3a, we can see that our proposed cross margin discrepancy is flat for points around the origin, i.e., the gradient w.r.t. points near the decision boundary will be relatively small, which helps to prevent such failure. The margin disparity discrepancy (MDD) work proposes a novel margin-aware generalization bound based on scoring functions and a new divergence; its training objective can be alternatively interpreted as a minimax program over a margin disparity term ε(h, f). Recalling Eq. 7, if we set f_1 = f_2 = f and free the constraint on f to any f ∈ H, our proposal degrades exactly to MDD. As discussed above, when matching distributions, only if the ideal case is achieved, where the conditional distributions of the induced feature spaces for source and target perfectly match (which is not always possible), can we assume the two optimal labeling functions f_S, f_T to be identical. Besides, an unconstrained hypothesis space for f is definitely not helpful for constructing a tight bound. Saito et al. (2017b) propose two task-specific classifiers f_1, f_2 that are used to separate the decision boundary on the source domain, such that the extractor is encouraged to produce features nearby the support of the source samples. The objective used in MCD can be alternatively interpreted with the discrepancy ε(f_1, f_2) quantified by the L_1 distance. Again, recalling Eq. 7, if we set γ = 1 and h = f_1, MCD is equivalent to our proposal. As is proved in Section 3.1, the upper bound is optimized when h = f_S. However, this no longer holds once the upper bound is relaxed by taking the supremum to form an optimizable objective, i.e., setting h = f_1 does not necessarily minimize the objective. Besides, as we discuss above, a fixed γ = 1, i.e., H_2 = H_sc, lacks generality, since we have no idea where f_T might be, so it is not likely to be applicable to cases where a huge domain shift exists. In this experiment, our proposal is assessed in four types of adaptation scenarios by adopting commonly used digit datasets (Fig. 6 in the Appendix), i.e., MNIST, Street View House Numbers (SVHN), and USPS, such that the results can be easily compared with other popular methods. All experiments are performed in an unsupervised fashion without any kind of data augmentation. Details are omitted due to the limit of space (see A.1). We report the accuracy of different methods in Tab. 1.
Our proposal outperforms the competitors in almost all settings except a single case compared with GPDA. However, their solution requires sampling that increases the data size and is equivalent to adding Gaussian noise to the last layer of a classifier, which is considered a type of augmentation. Our success partially owes to combining the upper bound with the joint error, especially when the optimal labeling functions differ from each other (e.g., MNIST → SVHN). Moreover, as most scenarios are relatively easy for adaptation, we can be more confident about the hypothesis space constraint owing to reliable pseudo-labels, which leads to a tighter bound during optimization. The results demonstrate that our proposal can improve generalization performance by adopting both of these advantages. Fig. 4a shows that our original proposal is quite sensitive to the hyper-parameter γ. In short, setting γ = 1 yields the best performance in most situations, since f_S and f_T can be quite close after aligning distributions, especially in these easily adapted scenarios. However, in MNIST → SVHN, setting γ = 0.1 gives the optimum, which means that f_S and f_T are so far apart, due to a huge domain shift, that no extractor is capable of inducing an identical conditional distribution in feature space. The improvement is not large, but at least we outperform the directly comparable MCD and show the importance of the hypothesis space constraint. Furthermore, Fig. 4d empirically shows that simply minimizing the discrepancy between the marginal distributions does not necessarily lead to a reliable adaptation, which demonstrates the importance of the joint error. In addition, Fig. 4b and Fig. 4c show the superiority of the cross margin discrepancy, which accelerates the convergence and provides slightly better results. We further evaluate our method on object classification. The VisDA dataset is used here, which is designed for a 12-class adaptation task from synthetic object images to real object images. The source domain contains 152,397 synthetic images (Fig. 7a in the Appendix), which are generated by rendering 3D CAD models. Data of the target domain are collected from MSCOCO, consisting of 55,388 real images (Fig. 7b in the Appendix). Since the 3D models are rendered without background and color diversity, the synthetic domain is quite different from the real domain, which makes it a much more difficult problem than digit adaptation. Again, this experiment is performed in an unsupervised fashion and no data augmentation technique excluding horizontal flipping is allowed. Details are omitted due to the limit of space (see A.2). We report the accuracy of different methods in Tab. 2, and find that our proposal outperforms the competitors in all settings. The image structure of this dataset is more complex than that of digits, yet our method provides reliable performance even under such a challenging condition. Another key observation is that some competing methods (e.g., DANN, MCD), which can be categorized as distribution matching based on adversarial learning, perform worse than MDD, which simply matches statistics, in classes such as plane and horse, while our method performs better across all classes, which clearly demonstrates the importance of taking the joint error into account. As for the original proposal (Fig. 5c), performance drops when relaxing the constraint, which we find counter-intuitive: we expected an improvement here, since it seems implausible that f_S and f_T eventually lie in a similar space, judging from the relatively low prediction accuracy.
As for the alternative proposal (Fig. 5d), we test the adaptation performance for different η, and the prediction accuracy drops drastically when η goes beyond 0.2. One possible cause is that f_2 and h might almost agree on the target domain, such that the predictions of h cannot provide more accurate information for the target domain without introducing noisy pseudo-labels. Fig. 5a and Fig. 5b again demonstrate the superiority of the cross margin discrepancy and the importance of the joint error. In this work, we propose a general upper bound that takes the joint error into account. We then pursue a tighter bound with a reasonable constraint on the hypothesis space. Additionally, we adopt a novel cross margin discrepancy for dissimilarity measurement, which alleviates the instability during adversarial learning. Extensive empirical evidence shows that learning an invariant representation is not enough to guarantee good generalization in the target domain, as the joint error matters, especially when the domain shift is huge. We believe our results take an important step towards understanding unsupervised domain adaptation, and also stimulate future work on the design of stronger adaptation algorithms that manage to align conditional distributions without using pseudo-labels from the target domain. Batch normalization is applied to each layer and a dropout rate of 0.5 is used. Nesterov accelerated gradient is used for optimization with a mini-batch size of 32 and an initial learning rate of 10^-3, which is decayed exponentially. As for the hyper-parameters, we test γ = {0.1, 0.5, 0.9, 1} and η = {0, 0.5, 0.8, 0.9}. For a direct comparison, we report the accuracy after 10 epochs. Office-Home is a complex dataset (Fig. 8) containing 15,500 images from four significantly different domains: Art (paintings, sketches and/or artistic depictions), Clipart (clip art images), Product (images without background), and Real-world (regular images captured with a camera). In this experiment, following the protocol of previous work, we evaluate our method by fine-tuning a ResNet-50 model pretrained on ImageNet. The model except the last layer, combined with a single-layer bottleneck, is used as the feature extractor, and a randomly initialized 2-layer fully connected network of width 1024 is used as the classifier, where batch normalization is applied to each layer and a dropout rate of 0.5 is used. For optimization, we use SGD with the Nesterov momentum term fixed to 0.9, where the batch size is 32 and the learning rate is adjusted following previous work. From Tab. 3, we can see that the adaptation accuracy of the source-only method is rather low, which means a huge domain shift is quite likely to exist. In such a case, simply minimizing the discrepancy between source and target might not work, as the joint error can increase when aligning distributions, and thus the assumption of the basic theory no longer holds. On the other hand, our proposal incorporates the joint error into the upper bound on the target error, which can boost the performance especially when there is a large domain shift. Figure 8: Sample images from the Office-Home dataset. Office-31 (Fig. 9) is a popular dataset for verifying the effectiveness of a domain adaptation algorithm; it contains three diverse domains, Amazon (from the Amazon website), Webcam (taken by a web camera) and DSLR (taken by a digital SLR camera), with 4,652 images in 31 unbalanced classes. In this experiment, following the protocol of previous work, we evaluate our method by fine-tuning a ResNet-50 model pretrained on ImageNet.
The model used here is almost identical to the one in the Office-Home experiment except for a different width of 2048 for the classifiers. For optimization, we use SGD with the Nesterov momentum term fixed to 0.9, where the batch size is 32 and the learning rate is adjusted following previous work. The results on Office-31 are reported in Tab. 4. As for the tasks D→A and W→A, judging from the adaptation accuracy of previous methods that do not consider the joint error, it is quite likely that samples from different classes are mixed together when matching distributions. Our method shows an advantage in such cases, which demonstrates that the proposal manages to penalize the undesired matching between source and target. As for the tasks A→W and A→D, our proposal shows relatively high variance and poor performance, especially on A→W. One possible reason is that our method depends on building reliable classifiers for the source domain to satisfy the constraint. However, the Amazon dataset contains a lot of noise (Fig. 10), such that the decision boundary of the source classifiers varies drastically between iterations during the training procedure, which can harm convergence. | [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
] | rkerLaVtDr | joint error matters for unsupervised domain adaptation especially when the domain shift is huge |
A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model. To address this issue, in this paper, we develop knowledge flow, which moves 'knowledge' from multiple deep nets, referred to as teachers, to a new deep net model, called the student. The structure of the teachers and the student can differ arbitrarily, and they can be trained on entirely different tasks with different output spaces too. Upon training with knowledge flow the student is independent of the teachers. We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other 'knowledge exchange' methods. Research communities have amassed a sizable number of deep net architectures for different tasks, and new ones are added almost daily. Some of those architectures are trained from scratch while others are fine-tuned, i.e., before training, their weights are initialized using a structurally similar deep net which was trained on different data. Beyond fine-tuning, particularly in reinforcement learning, teachers have also been considered in one way or another by BID23; BID6; BID30; BID13; BID0 BID21; BID2; BID26; BID20. For instance, progressive neural nets BID23 keep multiple teachers during both training and inference, and learn to extract useful features from the teachers for a new target task. PathNet BID6 uses genetic algorithms to choose pathways from a giant network for learning new tasks. 'Growing a Brain' BID30 fine-tunes a neural network while growing the network's capacity (wider or deeper layers). Actor-mimic BID20 pre-trains a big model on multiple source tasks; the big model is then used as a weight initialization for a new model which will be trained on a new target task. Knowledge distillation BID9 distills knowledge from a large ensemble of models to a smaller student model. However, all the aforementioned techniques have limitations. For example, progressive neural net models BID23 grow with the number of teachers. This large number of parameters limits the number of teachers a progressive neural net can handle, and largely increases the training and testing time. In PathNet BID6, searching over a big network for pathways is computationally intensive. For fine-tuning based methods such as 'Growing a Brain' BID30 and actor-mimic BID20, only one pretrained model can be used at a time. Hence, their performance heavily relies on the chosen pretrained model. To address these shortcomings, we develop knowledge flow, which moves 'knowledge' of multiple teachers when training a student. Irrespective of how many teachers we use, the student is guaranteed to become independent at the final stage of training, and the size of the resulting student net remains constant. In addition, our framework makes no restrictions on the deep net sizes of the teachers and student, which provides flexibility in choosing teacher models. Importantly, our approach is applicable to a variety of tasks from reinforcement learning to fully-supervised training. We evaluate knowledge flow on a variety of tasks from reinforcement learning to fully-supervised learning. In particular, we follow BID23; BID6 and compare on the same tasks. In the reinforcement learning setting, an agent observes a state x_t at time t, chooses an action a_t, and receives a reward r_t; the discounted return from time t is R_t = Σ_{k=0}^∞ γ^k r_{t+k}, where γ is the discount factor.
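As a small illustration of the discounted return just defined, the following sketch computes R_t for every step of a finite reward sequence (treating the episode as terminating, which is an assumption made only for this example):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute R_t = sum_k gamma^k * r_{t+k} for each step t of a finite episode."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

print(discounted_returns([1.0, 0.0, 0.0, 2.0], gamma=0.9))
```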
The expected future reward when observing state x_t and following policy π_{θ_π} is defined as V^{π_{θ_π}}(x_t) = E_{τ∼π_{θ_π}}[R_t | x_t], where τ = {(x_t, a_t, r_t), (x_{t+1}, a_{t+1}, r_{t+1}), ...} is a trajectory generated by following π_{θ_π} from state x_t. The goal of reinforcement learning is to find a policy that maximizes the expected future reward from each state x_t. Without loss of generality, in this paper, we follow the asynchronous advantage actor-critic (A3C) formulation BID17. In A3C, the policy mapping π_{θ_π}(x) = argmax_{a∈A} π̂_{θ_π}(a|x) is obtained from a probability distribution over actions, where π̂_{θ_π}(a|x) is modeled by a deep net with parameters θ_π. The value function is also approximated by a deep net V_{θ_v}(x), having parameters θ_v. To optimize the policy parameters θ_π given a state x_t, a loss function based on a scaled negative log-likelihood and a negative entropy regularizer is common: ℓ^π_τ(θ_π) = (1/|τ|) Σ_t [−log π̂_{θ_π}(a_t|x_t)(R_t − V_{θ_v}(x_t)) − βH(π̂_{θ_π}(·|x_t))]. Hereby, R_t = Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V_{θ_v}(x_{t+k}) is the empirical k-step return obtained when starting in state x_t, and |τ| is the length of the trajectory τ generated by following π̂_{θ_π}. The scalar β ≥ 0 is a user-specified constant, and H(π̂_{θ_π}(·|x_t)) is the entropy function, which encourages exploration by favoring a uniform probability distribution π̂_{θ_π}(a|x). To optimize the value function V_{θ_v}, it is common to use the squared loss ℓ^v_τ(θ_v) = (1/|τ|) Σ_t (R_t − V_{θ_v}(x_t))^2. By minimizing the empirical expectations of ℓ^π_τ(θ_π) and ℓ^v_τ(θ_v), i.e., by addressing min_{θ_π} E_τ[ℓ^π_τ(θ_π)] and min_{θ_v} E_τ[ℓ^v_τ(θ_v)] alternatingly, we learn a policy and a value function that maximize the expected return. Instead of optimizing these two programs from scratch, the aforementioned warm-start techniques (see Sec. 5 for more) are applicable. To address their mentioned shortcomings, we propose knowledge flow, a framework that moves 'knowledge' from an arbitrary number of deep nets, henceforth referred to as 'teachers,' to a deep net under training, called the 'student.' FIG0 (c): Average normalized weights for the teachers' and the student's layers. At the beginning of training, the student heavily relies on teacher one. As training progresses, teacher one's weight decreases, and the student's weight increases until the student is eventually independent. Knowledge flow is outlined on example deep nets in FIG0. We train the parameters of the student net, which are randomly initialized. To this end we take advantage of teachers, whose parameters are fixed and obtained from models pre-trained on different source tasks by different algorithms. For example, for reinforcement learning, we may consider teachers trained by A3C BID17, A2C or DQN BID16. 'Knowledge' of multiple teachers is transferred to a student by adding transformed and scaled intermediate representations from the teacher deep nets to the student net. To achieve this, we modify the student net, i.e., f_θ in the supervised setting and π̂_{θ_π}(a|x), V_{θ_v}(x) in the reinforcement learning case. We add teacher representations which are transformed by multiplication with a trainable matrix Q and scaled via a weight p_w that is normalized to sum to one for each student layer and parameterized via trainable parameters w. The normalized weights encode which of the teachers' or the student's representation to trust at every layer of the student net; a sketch of this weighted combination is given below. Note that a teacher can help the student at different levels of abstraction, with input from different levels of its net.
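The following is a minimal sketch of how such a combined representation could be formed for one student layer. The module and variable names are illustrative, and the softmax normalization and per-layer linking pattern follow the verbal description above rather than the paper's equations, which are not reproduced in this text; features are assumed to be flattened to shape (batch, dim).

```python
import torch
import torch.nn as nn

class CombinedLayer(nn.Module):
    """Combine a student layer's output with transformed teacher activations.

    teacher_dims: feature sizes of the linked teacher layers. The raw weights
    w are turned into a distribution p_w via a softmax, so the coefficients
    for the student and all linked teachers sum to one.
    """
    def __init__(self, student_dim, teacher_dims):
        super().__init__()
        # One transformation matrix Q per linked teacher layer.
        self.transforms = nn.ModuleList(
            [nn.Linear(d, student_dim, bias=False) for d in teacher_dims])
        # Unnormalized weights: index 0 is the student, the rest are teachers.
        self.w = nn.Parameter(torch.zeros(1 + len(teacher_dims)))

    def forward(self, student_feat, teacher_feats):
        p = torch.softmax(self.w, dim=0)          # normalized weights p_w
        out = p[0] * student_feat
        for i, (q, t) in enumerate(zip(self.transforms, teacher_feats)):
            out = out + p[i + 1] * q(t.detach())  # teacher parameters stay fixed
        return out
```

Under this parameterization, the dependence cost introduced below could be computed, for example, as −log p[0] summed over the student's layers, which is one way to read the "negative log probability" description and drives the student's own weight toward one.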
Importantly, after training, the student model should perform well on the target task without relying on teachers. To achieve this, as training progresses, we increasingly encourage a high normalized weight on the student representation, which forces the student to eventually capture all the 'knowledge.' Due to the trainable scaling, at an early stage of training, we observe the student to rely heavily on the 'knowledge' of the teacher to quickly obtain better performance. However, as training proceeds, the student is encouraged to become more and more independent. During the final stages of training, the student will no longer be able to rely on teachers, which ensures that the student has learned to master the desired task on its own. This is observed in FIG0 (c). To formally encourage this successive transfer we introduce two additional loss functions. The first, referred to as the dependency loss ℓ_dep(w), captures how much a student relies on teachers. It depends on the weight vector w, which encodes the strength of the coupling. The second one ensures that a student's behavior doesn't change rapidly when the teachers' influence decreases. We use a loss ℓ_KL(·, ·) to capture the change. By combining the student net modifications and the additional loss terms, we obtain transformed programs for the supervised task and for reinforcement learning, each augmenting the respective loss with λ_1 ℓ_dep(w) and λ_2 ℓ_KL. The transformed loss ℓ̃(θ, w, Q) originates from the original loss ℓ(θ) defined above by transforming the deep net to include cross-connections, hence its dependence on w and Q. The tilde ('~') denotes this dependence, also for the probability distribution f̃ and the policy distribution π̃. Parameters from the current and a previous iteration are referred to via θ and θ_old respectively. For both supervised and reinforcement learning, λ_1 and λ_2 control the strength which is used to decrease the influence of the teacher. A low λ_1 allows the student to rely on teachers. Close to the end of training, the student should be independent. Therefore, we set λ_1 to a small value at the beginning, and gradually increase its value as training progresses. Note that we don't make any assumptions about the teachers' and the student's objectives. If a teacher's and the student's objectives differ, negative transfer may occur initially. However, the proposed method quickly decreases the weights for teacher layers to reduce this effect. Despite differences, students could potentially still benefit from the low-level representations of the teachers. We do observe this low-level knowledge transfer in our experiments. In the following we first describe how to modify the deep nets, before we detail the loss functions ℓ_dep and ℓ_KL, which are used to successively decrease the influence of the teachers. For each layer j in the student model, we define a candidate set L^j, which contains l^j_0 and all the teachers' layers to be considered. For example, in FIG0, layer one of the student model is combined with layer one of teacher one and layer two of teacher two; the candidate set of layer one of the student model therefore contains the student's own first layer and these two teacher layers. To decide which teachers' or the student's representation to trust at every layer of the student net, we introduce a normalized weight p^j_w(l)
for all j ∈ {1, ..., L_0} and l ∈ L^j, where the weights sum to one for each layer j of the student deep net, i.e., Σ_{l∈L^j} p^j_w(l) = 1. The combined intermediate representation of layer j for the student model is then the p_w-weighted sum of the student's own representation and the Q-transformed teacher representations in L^j. The maximal number of matrices Q introduced in our framework is at most the number of student layers times the total number of teacher layers considered. In practice, we don't link a student's layer to every layer of a teacher network. Intuitively, a teacher's bottom-layer features are very likely irrelevant to a student's top-layer features. Therefore, in practice, we recommend linking one teacher layer to one or two student layers, in which case we introduce on the order of M·L_0 matrices Q. Also note that while additional trainable parameters Q and w are introduced in our framework, Q and w are not part of the resulting student network, since we ensure p^j_w(l^j_0) = 1 at the end of training, as discussed next. Hence, the additional parameters function as auxiliary knobs that help the student learn faster. In the final stage of training, the student will be independent (see FIG0 (c)) and no longer relies on Q, w, or any transformed representations from teachers. Decreasing Teachers' Influence: We successively decrease the influence of the teachers during training by gradually encouraging the normalized weight p^j_w(l^j_0) to increase to a value of 1 ∀j ∈ {1, 2, ..., L_0}. To capture how much the student relies on teachers, we introduce the dependence cost as the negative log probability of the student's own layers, ℓ_dep(w) = −Σ_j log p^j_w(l^j_0). By minimizing ℓ_dep(w), we encourage the weights for the layers of the student to increase. Hence we encourage the student to become more and more independent. During the final stage of training, p^j_w(l^j_0) approaches one for all j ∈ {1, ..., L_0}, making the student independent of the transformed representations obtained from teachers. Empirically, we found that a fast decrease of the influence of the teacher can degrade the performance. This is intuitive, as it requires some time to find good transformations Q. Moreover, decreasing the influence of a teacher too fast may change the output distribution over labels or actions of the student model too much, and thus lead to a performance loss. To prevent changing a student's output distribution too fast, we found a Kullback-Leibler (KL) regularizer to yield good results: in the case of supervised learning we use a KL divergence between the class distributions produced with the current parameters θ and the previous parameters θ_old, and in the reinforcement learning case we use the analogous KL divergence between the corresponding policy distributions. In the following we evaluate knowledge flow on reinforcement and supervised learning tasks. Results are reported using only the student model to avoid even the smallest influence from any teacher nets. We evaluate knowledge flow on reinforcement learning using Atari games that were used by BID23 BID6. Following existing work, the input to our agent consists of raw images from the environment. The agent learns to predict actions only based on the rewards and the input images from the environment. The agent chooses an action every four frames, and the last action is repeated on the skipped four frames. For all teacher models and the student model, we use the fully forward architecture of A3C BID17. TAB0: Comparison with PathNet BID6 and progressive neural network (PNN) BID23. Since PathNet and PNN don't report exact scores we obtain their numbers from their plots and indicate that with a ∼ symbol. The results of the state-of-the-art methods A3C BID17, PPO, and ACKTR BID31 are included for reference.
The model has three hidden layers. The first layer is a convolutional layer with 16 filters of size 8x8 and stride 4. The second layer is a convolutional layer with 32 filters of size 4x4 and stride 2. The third layer is a fully connected layer with 256 hidden units. Following the third hidden layer are two sets of outputs. One is a softmax output that provides a probability distribution over all valid actions. The other one is a scalar output that provides the estimated value function. We use the same hyper-parameter settings as BID17 except for the learning rate. BID17 use RMSProp with shared statistics while we use Adam with shared statistics, which we found to give better results when training the baselines. The learning rate is set to 10^-4 and gradually decreased to zero for all experiments. To select λ_1 and λ_2 in our framework, we follow progressive neural net BID23: we randomly sample λ_1 ∈ {0.05, 0.1, 0.5} and λ_2 ∈ {0.001, 0.01, 0.05}. Note that λ_1 is set to zero at the beginning of training, and linearly increased to the sampled value at the end of training. Following BID23, we repeat each experiment 25 times with different random seeds and randomly sampled λ_1 and λ_2. The results of the top three out of 25 runs are reported. As in A3C, we run 16 agents on 16 CPU cores in parallel. Evaluation Metrics: We follow the evaluation procedure of BID16. The trained student models are evaluated by playing each game for 30 episodes. We also follow the 'no-op' procedure: at the beginning of each testing episode, the agents perform up to 30 'no-op' actions. Results: We first compare our framework with PathNet BID6 and progressive neural net (PNN) BID23, which are state-of-the-art transfer reinforcement learning frameworks, using their experimental settings. The comparison is summarized in TAB0. The state-of-the-art results BID17 BID31 on Atari games are also included in TAB0 for reference. Compared to PathNet, a student model trained using our transfer framework with one teacher achieves higher scores in 11 out of 14 experiments. Compared with PNN, for a two-teacher framework, our trained student model has only 0.7M parameters while PNN has 16M parameters. Nonetheless we observe higher scores in five out of the seven experiments. The results demonstrate that knowledge flow effectively transfers knowledge from teachers to the student. TAB0 also indicates that, in our framework, when the number of teachers increases from one to two, the student's performance improves significantly across all experiments. The training curves for the experiments are shown in FIG1. Each curve is the average of the top three out of 25 runs. We observe our approach to generally perform very well. To further evaluate knowledge flow, we experiment with different combinations of environment/teacher settings. These settings are not used by PathNet and progressive neural networks. The results are summarized in TAB2, where "ours w/ expert" denotes that one teacher is an expert for the target game; "ours w/ non-expert" denotes that both teachers are not experts for the target game; "Fine-tune" denotes fine-tuning from a non-expert on a new target game; "A3C baseline" denotes our implementation of the A3C baseline; and "A3C" denotes the scores reported originally by BID17. Note that our A3C implementation achieves better scores than those reported by BID17 for most of the games.
As shown in TAB2, knowledge flow with an expert teacher performs better than the baseline across all experiments, which we interpret as evidence that knowledge flow successfully transfers 'knowledge' from an expert teacher to the student. In addition, knowledge flow with non-expert teachers also outperforms fine-tuning from a non-expert teacher. The reasons are twofold: First, a student model in knowledge flow can learn from multiple teachers, while the fine-tuning method can only start from one setting. Second, in knowledge flow, the student can avoid the negative impact of insufficiently pretrained teachers, while fine-tuning from an insufficiently pretrained model slows down the training process and may degrade the overall performance. The training curves for the experiments are shown in FIG5. More training curves are in the Appendix (Fig. 6). Note that in knowledge flow, the student can benefit from the intermediate representations of the teacher, even if input space, output space and objectives differ. For example, in FIG5, the two teachers are Chopper Command and Space Invaders, which are quite different from the target game Seaquest. The student model still benefits from learning from the teachers and achieves scores ten times larger than learning without a teacher or fine-tuning from a teacher. For supervised learning, we use a variety of image classification benchmarks, including CIFAR-10, CIFAR-100 BID12, STL-10, and EMNIST BID4. The parameters λ_1 for the dependence cost and λ_2 for the KL cost are determined using the validation set of each dataset. Evaluation Metrics: To evaluate the trained student model we report the top-1 error rate on the test set of each dataset. All plots and reported numbers are the average of three runs obtained using different random seeds. CIFAR-10/CIFAR-100: The CIFAR-10 and CIFAR-100 datasets consist of colored images of size 32 × 32. CIFAR-10 (C10) has 10 classes and CIFAR-100 (C100) has 100 classes. For both datasets, the training and test sets contain 50,000 and 10,000 images respectively. We perform all experiments on CIFAR-10 and CIFAR-100 with standard data augmentation BID10. We use DenseNet BID10 (depth 100, growth rate 24) as a baseline and follow their hyper-parameter settings to train our baseline, teacher and student models. For our approach, we first train teachers on CIFAR-10, CIFAR-100, and SVHN BID18. We then train the student model using different combinations of teachers. We compare our results to fine-tuning and the baseline model. As shown in TAB3 (a), for the CIFAR-10 target task, fine-tuning from the CIFAR-100 expert improves 4% over the baseline. Fine-tuning from the SVHN expert performs worse than the baseline model. Intuitively, for the CIFAR-10 target task, the CIFAR-100 deep net is a good teacher while a deep net trained with SVHN isn't. Presented with both good and inadequate teachers, knowledge flow improves by 13% over the baseline. This demonstrates that knowledge flow can not only leverage a good teacher's 'knowledge,' but can also avoid misleading influence. As detailed in TAB3 (b), the results are similar on the CIFAR-100 dataset. To further demonstrate the properties of knowledge flow, additional results are in the appendix. As mentioned before, 'knowledge' transfer has been considered using a variety of techniques. We briefly discuss related work in contrast to our approach in the following and defer details to Sec. 8.
PathNet BID6 enables multiple agents to train the same deep net while reusing parameters and avoiding catastrophic forgetting. In contrast to this formulation we consider the availability of multiple pre-trained teacher nets. Progressive Net BID23 leverages transfer and avoids catastrophic forgetting by introducing lateral connections to previously learned features. Our discussed method uses similar lateral connections. However, in contrast to BID23, our method ensures independence of the student upon training, addressing a limitation in BID23 where only a fraction of the capacity of the student is eventually utilized. Distral, a neologism combining 'distill & transfer learning' BID26, considers joint training of multiple tasks. Multiple tasks share a 'distilled' policy which encodes common behavior between different tasks. While each worker addresses its own task, a shared policy encourages consistency between the policies. Different from Distral, which is a multi-task learning framework, knowledge flow addresses a single task, while in multi-task learning, multiple tasks are addressed at the same time. Hence, common to multi-task learning and knowledge flow is a transfer of information. However, in multi-task learning, information extracted from different tasks is shared to boost performance, while, in knowledge flow, the information of multiple teachers is leveraged to help a student better learn a single, new, previously unseen task. Other related work includes actor-mimic BID20, learning without forgetting BID13, growing a brain BID30, policy distillation, domain adaptation BID19, knowledge distillation BID9 and lifelong learning BID2. A more detailed discussion of related work is provided in Sec. 8 of the supplementary material. We developed a general knowledge flow approach that permits training a deep net from any number of teachers. We showed results for reinforcement learning and supervised learning, demonstrating improvements compared to training from scratch and to fine-tuning. In the future we plan to learn when to use which teacher and how to actively swap teachers during training of a student. We also compare to knowledge distillation (KD) BID9, which distills knowledge from a larger model (teacher) to a smaller model (student). The student models have 50%–5% of the parameters of the teacher models. Following their setup, we conduct experiments on MNIST, MNIST with digit '3' missing in the training set, CIFAR-100, and ImageNet. For MNIST and MNIST with digit '3' missing, following KD, the teacher model is an MLP with two hidden layers of 1200 hidden units, and the student model is an MLP with two hidden layers of 800 hidden units. For CIFAR-100, we use the model from Chen et al. as the teacher model. The student model follows the structure of the teacher, but the number of output channels of each convolutional layer is halved. For ImageNet, the teacher model is a 50-layer ResNet BID8, and the student model is an 18-layer ResNet. The test errors of the distilled student models are summarized in TAB4. Our framework has consistently better performance than KD, because the student model in our framework benefits not only from the output-layer behavior of the teacher but also from intermediate-layer representations of the teacher. The 'EMNIST Letters' dataset consists of images of size 28 × 28 pixels showing handwritten letters. It has 26 balanced classes. Each class contains lower and upper case letters. The training and test sets contain 124,800 and 20,800 images respectively.
The 'EMNIST Digits' dataset consists of images of size 28 × 28 pixels showing handwritten digits. It has 10 balanced classes. The training and test sets contain 240,000 and 40,000 images respectively. In this case we use the MNIST model from prior work as the baseline, teacher and student model. We trained teachers on EMNIST Digits, EMNIST Letters, and EMNIST Letters with only 13 classes. Our target task is EMNIST Letters. The student model is trained with different teachers and the results are compared to fine-tuning, the baseline model, and the state-of-the-art on EMNIST. The results are summarized in Table 5. Compared to the baseline and fine-tuning, students trained in our framework with an expert teacher (EMNIST Letters), a semi-expert teacher (Half EMNIST Letters), and a non-expert teacher (EMNIST Digits) all achieve better performance. In FIG6 we illustrate the accuracy over epochs for training of the different models. The STL-10 dataset consists of colored images of size 96 × 96 pixels. It has 10 balanced classes. The training set contains 5,000 labeled images and 100,000 unlabeled images. The test set contains 8,000 images. In our experiment, we only use the 5,000 labeled images for training. We use the STL-10 model from prior work as our baseline, teacher and student model. We trained teachers on CIFAR-10 and CIFAR-100. We compare our results to fine-tuning and the baseline in Table 6. Note that STL-10 is very similar to CIFAR-10 and CIFAR-100. Therefore, both CIFAR-10 and CIFAR-100 are very good teachers. As shown in Table 6, compared to the baseline, fine-tuning a model using weights pretrained on CIFAR-10 or CIFAR-100 reduces test errors by more than 10%. Compared with fine-tuning, student model training in our framework further reduces the test error by 3%. Note that we only train on the labeled data, while the other listed approaches address the semi-supervised setting. Hence our results are obtained using fewer data and may not be directly comparable. We still list their results in Table 6 for reference. In Fig. 5 we illustrate the accuracy over the epochs of training.
Table 6: Our approach on the STL-10 dataset (fully supervised), test error in %:
Zhao et al.: 25.20
BID27: 21.34
Baseline: 25.50
Fine-tune from C10: 14.32
Fine-tune from C100: 14.38
Ours (C100): 12.35
Ours (C10, C100): 11.09
We also compare to Distral BID26, which is a state-of-the-art multi-task reinforcement learning framework. We used 'KL + ent 1 col', which has a central model (m_0) and a task model (m_i) for each task. We perform the experiments on Atari games. In the experiments, we have three tasks (task 1, task 2, task 3). The teachers of task 2 (m_2) and task 3 (m_3) are provided to our framework. Distral is trained for 120M steps (40M steps/task), and our model is trained for 40M steps. For a fair comparison, we report results of Distral's task 1 model (m_1), which is better than its central model (m_0). The results are summarized in TAB5. Distral is suboptimal because it aims to learn a multi-task agent. In addition, identical action and state spaces are assumed. When the target task is very different from the source tasks, Distral cannot decrease the teachers' influence. In contrast, our framework can decrease a teacher's influence, and thus reduce negative transfer. Following the reviewer's suggestion, we plot the averaged normalized weights (p_w) for the teachers and the student in the C10 experiment, where C100 and SVHN experts are the teachers. Intuitively, the C100 teacher should have a higher p_w value than the SVHN teacher, because C100 is more relevant to C10. The plot verifies this intuition.
As shown in FIG8, the p_w of the C100 teacher is higher than that of the SVHN teacher over the entire training. Note that both teachers' normalized weights approach zero at the end of training. To verify that the student really benefits from the knowledge of teachers, we conduct an ablation study suggested by a reviewer: we use teacher models that haven't been trained at all. Intuitively, learning with untrained teachers should yield worse performance than learning with knowledgeable teachers. Our experiments verify this intuition. In Fig. 8 (a), where the target task is Hero, learning with untrained teachers ('w/ untrained teachers') achieves an average reward of 15934. Learning with knowledgeable teachers ('Ours with seaquest and riverraid teacher') achieves an average reward of 30928. More results are presented in Figs. 8 (b, c). The results show that knowledge flow achieves higher rewards than training with untrained teachers in different environments and teacher-student settings. The KL term prevents the student's output distribution over actions or labels from changing drastically when the teachers' influence is decreasing. To investigate the importance of the KL term, we conduct an ablation study where the KL coefficient (λ_2) is set to zero. The results are summarized in Fig. 9. Consider Fig. 9 (a), where the target task is MsPacman and the teachers are Riverraid and Seaquest experts. Without the KL term, when a teacher's influence decreases, the rewards drop drastically. In contrast, with the KL term, we don't observe performance drops. At the end of training, learning with the KL term achieves an average reward of 2907 and learning without the KL term achieves an average reward of 1215. More results are presented in Fig. 9 (b, c), which show that training with the KL term achieves higher rewards than training without it. In additional experiments, following the suggestion of a reviewer, we use architectures for the teachers which differ from the student model. More specifically, we use the model of BID16 as the teacher model. The teacher model consists of 3 convolutional layers, which have 32, 64, and 64 filters, followed by a hidden fully connected layer which has 512 ReLUs. We use the model of BID17 as the student model. The student model consists of 2 convolutional layers, which have 16 and 32 filters respectively, followed by a hidden fully connected layer which has 256 ReLUs. Both models' fully connected layers are followed by two output layers for actions and values. In the experiments, we link each teacher's first convolutional layer to the student's first convolutional layer. Moreover, we link each teacher's third convolutional layer to the student's second convolutional layer, and each teacher's fully connected layer to the student's fully connected layer. In the experiment, the target task is KungFu Master, and the teachers are experts for Seaquest and Riverraid. The results are summarized in FIG0. We observed that learning with teachers whose architecture differs from the student's achieves similar performance to learning with teachers which have the same architecture. Consider as an example FIG0, where the target task is KungFu Master, and the teachers are experts for Seaquest and Riverraid. At the end of training, learning with teachers of different architectures achieves an average reward of 37520, and learning with teachers of the same architecture achieves an average reward of 35012. More results are shown in FIG0.
The results show that knowledge flow can enable higher rewards even if the teacher and student architectures differ. 7.6 AVERAGE NETWORK AS θ_old. For the parameters θ_old, an average network can be used. To investigate how using an average network to obtain the parameters θ_old affects performance, we conduct an experiment where θ_old is computed as the exponential running average of the model weights. More specifically, θ_old is updated as follows: θ_old ← α · θ_old + (1 − α) · θ, where α = 0.9. The results are summarized in FIG0. We observe that using an exponential average to compute θ_old results in very similar performance as using a single model. Consider FIG0, where the target task is Boxing and the teacher is a Riverraid expert. At the end of training, using an average network to obtain θ_old achieves an average reward of 96.2, and using a single network to obtain θ_old achieves an average reward of 96.0. More results on using an average network are shown in FIG0 (b, c). As mentioned before, variants of 'knowledge' transfer have been considered using a variety of techniques, for instance, fine-tuning, progressive neural nets BID23, PathNet BID6, 'Growing a Brain' BID30, actor-mimic BID20, and learning without forgetting BID13. Also related are techniques on transfer learning and lifelong learning. We discuss those methods and contrast them to our approach in the following. PathNet BID6 enables multiple agents to train the same giant deep net while reusing parameters and avoiding catastrophic forgetting. To this end, agents embedded in the neural net discover which weights can be reused for new tasks and restrict the application of gradients to those parameters. In contrast to this formulation, we consider the availability of multiple teacher nets, which are already trained. Progressive Net BID23 leverages transfer and avoids catastrophic forgetting by introducing lateral connections to previously learned features. Our discussed method uses similar lateral connections. However, in contrast to BID23, we introduce scaling with normalized weights. This ensures independence of the student upon training, addressing a limitation in BID23 where only a fraction of the capacity of the student is eventually utilized. Distral, a neologism combining 'distill & transfer learning' BID26, considers joint training of multiple tasks. Multiple tasks share a 'distilled' policy which encodes common behavior between different tasks. While each worker addresses its own task, the shared policy encourages consistency between the policies. Different from Distral, which is a multi-task learning framework, knowledge flow addresses a single task, whereas in multi-task learning multiple tasks are addressed at the same time. Hence, common to multi-task learning and knowledge flow is a transfer of information. However, in multi-task learning, information extracted from different tasks is shared to boost performance, while in knowledge flow the information of multiple teachers is leveraged to help a student better learn a single, new, previously unseen task. Knowledge distillation BID9 distills information from a larger deep net into a smaller one. It assumes both nets are trained on the same dataset. In contrast, our technique allows knowledge transfer between different source and target domains. Actor-mimic BID20 enables an agent to learn how to address multiple tasks simultaneously and generalize the extracted knowledge to new domains. A single policy net learns how to act in a set of tasks following the guidance of several expert teachers.
A combination of feature regression and cross-entropy loss is used to encourage the student to produce similar actions and representations. Our proposed technique differs in that we take advantage of a teacher's representation at the beginning of training. Learning without forgetting BID13 makes it possible to add a new task to a deep net without forgetting the original capabilities. Importantly, only data from the new task is used, and the old capabilities are retained by first recording the old network's output on the new data. Similar techniques have been developed by BID7 BID11. In contrast, we transfer 'knowledge' from teacher networks more explicitly. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJeOioA9Y7 | ‘Knowledge Flow’ trains a deep net (student) by injecting information from multiple nets (teachers). The student is independent upon training and performs very well on learned tasks irrespective of the setting (reinforcement or supervised learning). |
Despite the impressive performance of deep neural networks (DNNs) on numerous learning tasks, they still exhibit uncouth behaviours. One puzzling behaviour is the subtle, sensitive reaction of DNNs to various noise attacks. Such a nuisance has strengthened the line of research around developing and training noise-robust networks. In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input. We provide an efficient and simple approach to approximate such a regularizer for arbitrarily deep networks. This is done by leveraging the analytic expression of the output mean of a shallow neural network, avoiding the need for memory- and computation-expensive data augmentation. We conduct extensive experiments on LeNet and AlexNet on various datasets including MNIST, CIFAR10, and CIFAR100 to demonstrate the effectiveness of our proposed regularizer. In particular, we show that networks trained with the proposed regularizer benefit from a boost in robustness against Gaussian noise equivalent to performing 3-21 folds of noisy data augmentation. Moreover, we empirically show on several architectures and datasets that improving robustness against Gaussian noise, by using the new regularizer, can improve the overall robustness against 6 other types of attacks by two orders of magnitude. Deep neural networks (DNNs) have emerged as generic models that can be trained to perform impressively well in a variety of learning tasks ranging from object recognition and semantic segmentation to speech recognition and bioinformatics. Despite their increasing popularity, flexibility, generality, and performance, DNNs have been recently shown to be quite susceptible to small imperceptible input noise. Such analysis gives a clear indication that even state-of-the-art DNNs may lack robustness. Consequently, there has been an ever-growing interest in the machine learning community in studying this uncanny behaviour. In particular, prior work demonstrates that there are systematic approaches to constructing adversarial attacks that result in misclassification errors with high probability. Even more peculiarly, some noise perturbations seem to be doubly agnostic, i.e. there exist deterministic perturbations that can result in misclassification errors with high probability when applied to different networks, irrespective of the input (denoted network and input agnostic). Understanding this degradation in performance under adversarial attacks is of tremendous importance, especially for real-world DNN deployment, e.g. self-driving cars/drones and equipment for the visually impaired. A standard and popular means to alleviate this nuisance is noisy data augmentation in training, i.e. a DNN is exposed to noisy input images during training so as to bolster its robustness during inference. Several works have demonstrated that DNNs can in fact benefit from such augmentation. However, data augmentation in general might not be sufficient, for two reasons. First, particularly with high-dimensional input noise, the amount of data augmentation necessary to sufficiently capture the noise space will be very large, which will increase training time. Second, data augmentation with high-energy noise can negatively impact the performance on noise-free test examples. This can be explained by the fundamental trade-off between accuracy and robustness.
It can also arise from the fact that augmentation forces the DNN to make the same prediction for two vastly different versions of the same input, the noise-free version and a substantially corrupted one. Despite the impressive performance of DNNs on various tasks, they have been shown to be very sensitive to certain types of noise, commonly referred to as adversarial examples, particularly in the recognition task. Adversarial examples can be viewed as small imperceptible noise that, once added to the input of a DNN, severely degrades its performance. This finding has incited interest in studying and measuring the robustness of DNNs. The literature is rich with work that aims to unify and understand the notion of network robustness. For instance, early work suggested a spectral stability analysis for a wide class of DNNs by measuring the Lipschitz constant of the affine transformation describing a fully-connected or a convolutional layer. This was extended to compute an upper bound for a composition of layers, i.e. a DNN. However, this measure sets an upper bound on the robustness over the entire input domain and does not take into account the noise distribution. Later, Fawzi et al. (2017a) defined robustness as the mean support of the minimum adversarial perturbation, which is now the most common definition for robustness. Robustness was studied not only against adversarial perturbations but also against geometric transformations of the input. Subsequent work emphasized the independence of the robustness measure from the ground-truth class labels, arguing that it should only depend on the classifier and the dataset distribution. Subsequently, two different metrics to measure DNN robustness were proposed: one for general adversarial attacks and another for noise sampled from a uniform distribution. Other work showed the trade-off between robustness and test error from a theoretical point of view on a simple classification problem with hyperspheres. On the other hand, and based on various robustness analyses, several works proposed approaches to building networks that are robust against noise sampled from well-known distributions and against generic adversarial attacks. For instance, one approach trains a model to classify adversarial examples with statistical hypothesis testing on the distribution of the dataset. Another approach is to perform statistical analysis on the latent feature space instead, or to train a DNN that rejects adversarial attacks. Moreover, the geometry of the decision boundaries of DNN classifiers was studied by Fawzi et al. (2017b) to infer a simple curvature test for this purpose. Using this method, one can restore the original label and classify the input correctly. Figure 1: Overview of the proposed graph for training Gaussian robust networks. The yellow block corresponds to an arbitrary network Φ(., θ) viewed as the composition of two subnetworks separated by a ReLU. The stream on the bottom computes the output mean µ4 of the network Φ(., θ) assuming that (i) the input noise distribution is independent Gaussian with variances σ²_x, and (ii) Ω(.; θ2) is approximated by a linear function. This evaluation of the output mean is efficient as it only requires an extra forward pass (bottom stream), as opposed to other methods that employ computationally and memory intensive network linearizations or data augmentation.
Restoring the original input using defense mechanisms that can only detect adversarial examples can be done by denoising (ridding the input of its adversarial nature), so long as the noise perturbation is well known and modeled a priori. A fresh approach to robustness showed that using bounded ReLUs (if augmented with Gaussian noise) to limit the output range can improve robustness. A different work proposed to distill the learned knowledge from a deep model to retrain a similar model architecture as a means of improving robustness. This training approach is one of many adversarial training strategies for robustness. Closer to our work, a new training regularizer was proposed for a large family of DNNs; the regularizer softly enforces that the upper bound of the Lipschitz constant of the output of the network be less than or equal to one. Moreover, and very recently, analytic expressions were derived for the output mean and covariance of networks of the form (Affine, ReLU, Affine) under a generic Gaussian input. That work also demonstrates how a (memory and computation expensive) two-stage linearization can be employed to locally approximate a deep network with a two-layer one, thus enabling the application of the derived expressions to the approximated shallower network. Most prior work requires data augmentation, training new architectures that distill knowledge, or detecting adversaries a priori, resulting in expensive training routines that may be ineffective in the presence of several input noise types. To this end, we address these limitations through our new regularizer that aims to fundamentally tackle Gaussian input noise without data augmentation and, as a consequence, improves overall robustness against other types of attacks. Background on Network Moments. Networks with a single hidden layer of the form (Affine, ReLU, Affine) can be written in the functional form g(x) = B max(Ax + c_1, 0_p) + c_2, where max is an element-wise operator, A ∈ R^{p×n}, and B ∈ R^{d×p}. Thus, g: R^n → R^d. Prior work showed (Theorem 1) that, for a Gaussian input x ~ N(µ_x, Σ_x), the output mean is E[g(x)] = B (µ_2 Φ(µ_2/σ_2) + σ_2 ϕ(µ_2/σ_2)) + c_2. Note that µ_2 = Aµ_x + c_1, σ_2² = diag(Σ_2), Σ_2 = AΣ_xAᵀ, and Φ and ϕ are the standard Gaussian cumulative (CDF) and density (PDF) functions, respectively. The vector multiplication and division are element-wise operations. Lastly, diag extracts the diagonal elements of a matrix into a vector. For ease of notation, we let T(µ, σ) := µ Φ(µ/σ) + σ ϕ(µ/σ), so that E[g(x)] = B T(µ_2, σ_2) + c_2. To extend the result of Theorem 1 to deeper models, a two-stage linearization was proposed, where (A, B) and (c_1, c_2) are taken to be the Jacobians and biases of the first-order Taylor approximation to the two network functions around a ReLU layer in a DNN. We refer the reader to that work for more details about this expression and the proposed linearization. Proposed Robust Training Regularizer. To propose an alternative to noisy data augmentation that addresses its drawbacks, one has to realize that this augmentation strategy aims to minimize the expected training loss of a DNN subjected to a noisy input distribution D through sampling. In fact, it minimizes an empirical loss that approximates this expected loss when enough samples are present during training. When sampling is insufficient (a drawback of data augmentation in high dimensions), this approximation is too loose and robustness can suffer. However, if we have access to an analytic expression for the expected loss, expensive data augmentation can be averted. This is the key motivation of the paper.
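To make the expression above concrete, here is a minimal numerical sketch of the output mean of a two-layer (Affine, ReLU, Affine) network under Gaussian input, together with a Monte Carlo sanity check. The random network sizes and parameters are illustrative assumptions; the closed form itself follows the reconstruction of Theorem 1 given above.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def std_normal_cdf(v):
    return 0.5 * (1.0 + erf(v / sqrt(2.0)))

def std_normal_pdf(v):
    return exp(-0.5 * v * v) / sqrt(2.0 * pi)

def T(mu, sigma):
    """Element-wise mean of a ReLU'd Gaussian: T(mu, sigma) = mu*Phi(mu/sigma) + sigma*phi(mu/sigma)."""
    r = mu / sigma
    return mu * np.vectorize(std_normal_cdf)(r) + sigma * np.vectorize(std_normal_pdf)(r)

def output_mean(A, c1, B, c2, mu_x, Sigma_x):
    """E[g(x)] for g(x) = B max(Ax + c1, 0) + c2 and x ~ N(mu_x, Sigma_x)."""
    mu2 = A @ mu_x + c1
    sigma2 = np.sqrt(np.diag(A @ Sigma_x @ A.T))
    return B @ T(mu2, sigma2) + c2

# Monte Carlo sanity check on a tiny random network.
rng = np.random.default_rng(0)
A, c1 = rng.normal(size=(5, 3)), rng.normal(size=5)
B, c2 = rng.normal(size=(2, 5)), rng.normal(size=2)
mu_x, Sigma_x = rng.normal(size=3), np.diag(rng.uniform(0.1, 1.0, size=3))
samples = rng.multivariate_normal(mu_x, Sigma_x, size=100000)
mc = (np.maximum(samples @ A.T + c1, 0.0) @ B.T + c2).mean(axis=0)
print(output_mean(A, c1, B, c2, mu_x, Sigma_x), mc)  # the two estimates should be close
```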
Mathematically, the training loss can be modeled as min_θ Σ_{i=1}^N ℓ(Φ(x_i; θ), y_i) + α E_{n∼D}[ℓ(Φ(x_i + n; θ), y_i)] (Equation 1). Here, Φ: R^n → R^d is any arbitrary network with parameters θ, ℓ is the loss function, {(x_i, y_i)}_{i=1}^N are the noise-free data-label training pairs, and α ≥ 0 is a trade-off parameter. While the first term in Equation 1 is the standard empirical loss commonly used for training, the second term is often replaced with its Monte Carlo estimate through data augmentation. That is, for each training example x_i, the second term is approximated with an empirical average over Ñ noisy copies of x_i. This will increase the size of the dataset by a factor of Ñ, which will in turn increase training complexity. As discussed earlier, network performance on the noise-free examples can also be negatively impacted. Note that obtaining a closed-form expression for the second term in Equation 1 for some of the popularly used losses is more complicated than deriving expressions for the output mean of the network Φ itself, e.g. as in Theorem 1. Therefore, we propose to replace this loss with the following surrogate: min_θ Σ_{i=1}^N ℓ(Φ(x_i; θ), y_i) + α ℓ(E_{n∼D}[Φ(x_i + n; θ)], y_i) (Equation 2). Because of Jensen's inequality, Equation 2 is a lower bound to Equation 1 when ℓ is convex, which is the case for most popular losses including the ℓ2-loss and the cross-entropy loss. The proposed second term in Equation 2 encourages the output mean of the network Φ on every noisy example (x_i + n) to match the correct class label y_i. This regularizer will stimulate a separation among the output means of the classes if the training data is subjected to noise sampled from D. Having access to an analytic expression for these means allows simple and inexpensive training, where the actual size of the training set is unaffected and augmentation is avoided. This form of regularization is proposed to replace data augmentation. While a closed-form expression for the second term of Equation 2 might be infeasible for a general network Φ, an expensive approximation can be attained. In particular, Theorem 1 provides an analytic expression to evaluate the second term in Equation 2 when D is Gaussian and when the network is approximated by the two-stage linearization procedure described above. However, it is not clear how to utilize such a result to regularize networks during training with Equation 2 as a loss. This is primarily due to the computationally expensive and memory-intensive network linearization. Specifically, the linearization parameters (A, B, c_1, c_2) are a function of the network parameters θ, which are updated with every gradient descent step on Equation 2; thus, two-stage linearization would have to be performed in every θ update step, which is infeasible. Equation 2 provides a generic approach to training arbitrary robust networks against noise sampled from an arbitrary distribution D. Since the problem in its general setting is too broad for detailed analysis, we restrict the scope of this work to the class of networks, which are most popularly used, parameterized by θ as Φ(.; θ): R^n → R^d with ReLUs as nonlinear activations. Moreover, since random Gaussian noise was shown to exhibit an adversarial nature, and since it is one of the most well-studied noise models for the useful properties it exhibits, we restrict D to the case of Gaussian noise. In particular, D is independent zero-mean Gaussian noise at the input, i.e. n ∼ N(0, Σ_x = Diag(σ²_x)), where σ²_x ∈ R^n is a vector of variances and Diag reshapes the vector elements into a diagonal matrix. Generally, it is still difficult to compute the second term in Equation 2 under Gaussian noise for arbitrary networks.
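As a rough illustration of the two-term surrogate in Equation 2, the sketch below adds a cross-entropy term on the (precomputed) output mean under input noise to the standard clean-data loss. The helper name mean_noisy_logits and the placement of cross-entropy directly on the mean outputs are assumptions made for illustration; the expected-output term itself would come from the computation described next.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def regularized_loss(clean_logits, mean_noisy_logits, label, alpha):
    """Equation 2 sketch: clean-data term plus a term pushing the *expected*
    network output under Gaussian input noise towards the correct label."""
    return cross_entropy(clean_logits, label) + alpha * cross_entropy(mean_noisy_logits, label)

# Toy numbers standing in for Phi(x_i) and E_n[Phi(x_i + n)].
clean = np.array([2.0, 0.1, -1.0])
noisy_mean = np.array([1.2, 0.3, -0.5])
print(regularized_loss(clean, noisy_mean, label=0, alpha=1.0))
```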
However, if we have access to an inexpensive approximation of the network, avoiding the computationally and memory expensive network linearization in , an approximation to the second term in Equation 2 can be used for efficient robust training directly on θ. Consider the l th ReLU layer in Φ(.; θ). the network can be expressed as Φ(.; θ) = Ω(ReLU l (Υ(., θ 1)); θ 2 ). Note that the parameters of the overall network Φ(.; θ) is the union of the parameters of the two subnetworks Υ(.; θ 1) and Ω(.; θ 2), i.e. θ = θ 1 ∪ θ 2. Throughout this work and to simplify the analysis, we set l = 1. With such a choice of l, the first subnetwork Υ(., θ 1) is affine with θ 1 = {A, c 1}. However, the second subnetwork Ω(., θ 2) is not linear in general, and thus, one can linearize Ω(., θ 2) at E n∼N (0,Σx) [ReLU 1 (Υ (x i + n; θ 1))] = T (µ 2, σ 2) = µ 3. Note that µ 3 is the output mean after the ReLU and µ 2 = Ax i + c 1, since Υ(x i + n; θ 1) = A (x i + n) + c 1. Both T (., .) and σ 2 are defined in Equation 1. Thus, linearizing Ω at µ 3 with linearization parameters (B, c 2) being the Jacobian of Ω and c 2 = Ω(µ 3, θ 2) − Bµ 3, we have that, for any point v i close to µ 3: Ω(v i, θ 2) ≈ Bv i + c 2. While computing (B, c 2) through linearization is generally very expensive, computing the approximation to Equation 2 requires explicit access to neither B nor c 2. Note that this second term for l = 1 is given as: The approximation follows from the assumption that the input to the second subnetwork Ω(.; θ 2), i.e. Or simply, that the input to Ω is close to the mean inputs, i.e. µ 3, to Ω under Gaussian noise. The penultimate equality follows from the linearity of the expectation. As for the last equality, (B, c 2) are the linearization parameters of Ω at µ 3, where c 2 = Ω(µ 3, θ 2) − Bµ 3 by the first order Taylor approximation. Thus, computing the second term of Equation 2 according to Equation 3 can be simply approximated by a forward pass of µ 3 through the second network Ω. As for computing µ 3 = T (µ 2, σ 2), note that µ 2 = Ax i + c 1 in Equation 3, which is equivalent to a forward pass of x i through the first subnetwork because Υ(., θ 1) is linear with θ 1 = {A, c 1}. Moreover, since σ 2 = diag (AΣ x A), we have: The expression for σ 2 can be efficiently computed by simply squaring the linear parameters in the first subnetwork and performing a forward pass of the input noise variance σ 2 x through Υ without the bias c 1 and taking the element-wise square root. Lastly, it is straightforward to compute T (µ 2, σ 2) as it is an elementwise function in Equation 1. The overall computational graph in Figure 1 shows a summary of the computation needed to evaluate the loss in Equation 2 using only forward passes through the two subnetworks Υ and Ω. It is now possible with the proposed efficient approximation of our proposed regularizer in Equation 2 to efficiently train networks on noisy training examples that are corrupted with noise N (0, Σ x) without any form of prohibitive data augmentation. In this section, we conduct experiments on multiple network architectures and datasets to demonstrate the effectiveness of our proposed regularizer in training more robust networks, especially in comparison with data augmentation. To standardize robustness evaluation, we first propose a new unified robustness metric against additive noise from a general distribution D and later specialize it when D is Gaussian. 
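Before moving to the experiments, here is a minimal sketch of the two-stream computation of Figure 1 as just described: a forward pass of x through the first affine layer gives µ2, a forward pass of the input variances through the squared weights (without bias) gives σ2, and the closed-form ReLU mean µ3 = T(µ2, σ2) is then pushed through the remaining subnetwork Ω. The toy Ω and all sizes are assumptions; passing µ3 straight through Ω relies on the first-order approximation argued above.

```python
import numpy as np
from math import erf, pi, sqrt

def T(mu, sigma):
    """Element-wise mean of a ReLU'd Gaussian (the same T as in Theorem 1)."""
    r = mu / sigma
    Phi = 0.5 * (1.0 + np.vectorize(lambda v: erf(v / sqrt(2.0)))(r))
    phi = np.exp(-0.5 * r ** 2) / sqrt(2.0 * pi)
    return mu * Phi + sigma * phi

def noisy_output_mean(x, sigma_x, A, c1, omega):
    """Approximate E[Phi(x + n)] for n ~ N(0, diag(sigma_x^2)) with two cheap passes."""
    mu2 = A @ x + c1                              # ordinary forward pass through the first layer
    sigma2 = np.sqrt((A ** 2) @ (sigma_x ** 2))   # squared weights, no bias, element-wise sqrt
    mu3 = T(mu2, sigma2)                          # mean after the first ReLU
    return omega(mu3)                             # a single extra forward pass through Omega

# Toy second subnetwork Omega (its linearization at mu3 is implicit in the derivation).
rng = np.random.default_rng(1)
A, c1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W, b = rng.normal(size=(3, 8)), rng.normal(size=3)
omega = lambda h: W @ np.maximum(h, 0.0) + b
x, sigma_x = rng.normal(size=4), 0.3 * np.ones(4)
print(noisy_output_mean(x, sigma_x, A, c1, omega))
```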
Lastly, we show that networks trained with our proposed regularizer not only outperform in robustness networks trained with Gaussian augmented data. Moreover, we show that such networks are also much more magnitudes times robust against other types of attacks. On the Robustness Evaluation Metric. While there is a consensus on the definition of robustness in the presence of adversarial attacks, as the smallest perturbation required to fool a network, i.e. to change its prediction, it is not straightforward to extend such a definition to additive noise sampled from a distribution D. In particular, the work of tried to address this difficulty by defining the robustness of a classifier around an example x as the distance between x and the closest Figure 2: General trade-off between accuracy and robustness on LeNet. We see, in all plots, that the accuracy tends to be negatively correlated with robustness over varying noise levels and amount of augmentation. Baseline refers to training with neither data augmentation nor our regularizer. However, it is hard to compare the performance of our method against data augmentation from these plots as we can only compare the robustness of models with similar noise-free testing accuracy. decision boundary. However, this definition is difficult to compute in practice and is not scalable, as it requires solving a generally nonconvex optimization problem for every testing example x that may also suffer from poor local minima. To remedy these drawbacks, we present a new robustness metric for generic additive noise. Robustness Against Additive Noise. Consider a classifier Ψ with ψ(x) = arg max i Ψ i (x) as the predicted class label for the example x regardless of the correct class label y i. We define the robustness on a sample x against a generic additive noise sampled from a distribution D as Here, the proposed robustness metric D (x) measures the probability of the classifier to preserve the original prediction of the noise-free example ψ(x) after adding noise, ψ(x + n), from distribution D. Therefore, the robustness over a testing dataset T can be defined as the expected robustness over the test dataset: , for ease, we relax Equation 4 from the probability of preserving the prediction score to a 0/1 robustness over mrandomly sampled examples from D. That is, D (x) = 1 means that, among m randomly sampled noise from D added to x, none changed the prediction from ψ(x). However, if a single example of these m samples changed the prediction from ψ(x), we set D (x) = 0. Thus, the robustness score is the average of this measure over the testing dataset T. Robustness Against Gaussian Noise. For additive Gaussian noise, i.e. D = N (0, Σ x = Diag σ 2 x), robustness is averaged over a range of testing variances σ 2 x. We restrict σ x to 30 evenly sampled values in [0, 0.5], where this set is denoted as A 1. In practice, this is equivalent to sampling m Gaussian examples for each σ x ∈ A, and if none of the m samples changes the prediction of the classifier ψ from the original noise-free example, the robustness for that sample at that σ x noise level is set to 1 and then averaged over the complete testing set. Then, the robustness is the average over multiple σ x ∈ A. To make the computation even more efficient, instead of sampling a large number of Gaussian noise samples (m), we only sample a single noise sample with the average energy over D. That is, we sample a single n of norm n 2 = σ x √ n. This is due to the fact that Experimental Setup. 
In this section, we demonstrate the effectiveness of the proposed regularizer in improving robustness. Several experiments are performed with our objective Equation 2, where we strike a comparison with data augmentation approaches. Architecture Details. The input images in MNIST (gray-scale) and CIFAR (colored) are squares with sides equal to 28 and 32, respectively. Since AlexNet was originally trained on ImageNet of report for models with a test accuracy that is at least as good as the accuracy of the baseline with a tolerance: 0%, 0.39%, and 0.75% for MNIST, CIFAR10, CIFAR100, respectively. Only the models with the highest robustness are presented. Training with our regularizer can attain similar/better robustness than 21-fold noisy data augmentation on MNIST and CIFAR100, while maintaining a high noise-free test accuracy. sides equal to 224, we will marginally alter the implementation of AlexNet in to accommodate for this difference. First, we change the number of hidden units in the first fully-connected layer (in LeNet to 4096, AlexNet to 256, LeNet on MNIST to 3136). For AlexNet, we changed all pooling kernel sizes from 3 to 2 and the padding size of conv1 from 2 to 5. Second, we swapped each maxpool with the preceding ReLU, which makes training and inference more efficient. Third, we enforce that the first layer in all the models is a convolution followed by ReLU as discussed earlier. Lastly, to simplify analysis, we removed all dropout layers. We leave the details of the optimization hyper-parameters to the appendix. Results. For each model and dataset, we compare baseline models, i.e. models trained with noisefree data and without our regularization, with two others: one using data augmentation and another using our proposed regularizer. Each of the latter has two configurable variables: the level of noise controlled by σ Accuracy vs. Robustness. We start by demonstrating that data augmentation tends to improve the robustness, as captured by (T) over the test set, at the expense of decreasing the testing accuracy on the noise-free examples. Realizing this is essential for a fair comparison, as one would need to compare the robustness of networks that only have similar noise-free testing accuracies. To this end, we ran 60 training experiments with data augmentation on LeNet with three datasets (MNIST, CIFAR10, and CIFAR100), four augmentation levels (Ñ ∈ {2, 6, 11, 21}), and five noise levels (σ x ∈ A = {0.125, 0.25, 0.325, 0.5, 1.0}). In contrast, we ran robust training experiments using Equation 2 with the trade-off coefficient α ∈ {0.5, 1, 1.5, 2, 5, 10, 20} on the same datasets, but we extended the noise levels σ x to include the extreme noise regime of σ x ∈ {2, 5, 10, 20}. These noise levels are too large to be used for data augmentation, especially since x ∈ n; however, as we will see, they are still beneficial for our proposed regularizer. Figure 2 shows both the testing accuracy and robustness as measured by (T) over a varying range of training σ x for the data augmentation approach of LeNet on MNIST, CIFAR-10 and CIFAR-100. It is important to note here that the main goal of these plots is not to compare the robustness score, but rather, to demonstrate a very important trend. In particular, increasing the training σ x for each approach degrades testing accuracy on noise-free data. 
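For reference, a minimal sketch of the Gaussian robustness score underlying these curves: for each noise level σ_x, a single perturbation with the average energy σ_x·sqrt(n) is added to every test example, and we check whether the noise-free prediction is preserved, averaging over the test set and the grid of noise levels. The toy linear classifier and the exact way the noise direction is normalized are assumptions.

```python
import numpy as np

def gaussian_robustness(predict, X, sigmas, rng):
    """0/1 robustness averaged over the test set and a grid of noise levels sigma_x.
    Instead of many Monte Carlo draws, a single noise sample per level is rescaled
    to the average energy ||n||_2 = sigma_x * sqrt(n_dims)."""
    n_dims = X.shape[1]
    clean = predict(X)
    scores = []
    for sigma in sigmas:
        n = rng.normal(size=X.shape)
        n = n / np.linalg.norm(n, axis=1, keepdims=True) * (sigma * np.sqrt(n_dims))
        scores.append(np.mean(predict(X + n) == clean))
    return float(np.mean(scores))

# Toy linear classifier on random 10-dimensional inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))
predict = lambda Z: np.argmax(Z @ W.T, axis=1)
X = rng.normal(size=(100, 10))
print(gaussian_robustness(predict, X, sigmas=np.linspace(0.0, 0.5, 30), rng=rng))
```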
However, the degradation in our approach is much more graceful since the trained LeNet model was never directly exposed to individually corrupted examples during training as opposed to the data augmentation approach. Note that our regularizer enforces the separation between the expected output prediction analytically. Moreover, the robustness of both methods consistently improves as the training σ x increases. This trend holds even on the easiest dataset (MNIST). Interestingly, models trained with our regularizer enjoy an improvement in testing accuracy over the baseline model. Such behaviour only emerges with a large factor of augmentation, N = 21, and a small enough training σ x on MNIST. This indicates that models can benefit from better accuracy with a good approximation of Equation 1 through our proposed objective or through extensive Monte Carlo estimation. However, as σ x increases, Monte Carlo estimates of the second term in Equation 1 via data augmentation (withÑ = 21) is no longer enough to capture the noise. Robustness Comparison. For fair comparison, it is essential to only compare the robustness of networks that achieve similar testing accuracy, since perfect robustness is attainable with a deterministic classifier that assigns the same class label regardless of the input. In fact, we proposed a unified robustness metric for the reason that most commonly used metrics are disassociated from Table 1: Gaussian robustness improves overall robustness. We report the robustness metrics corresponding to various attacks (PGD, LBFGS, FGSM, and DF2), our proposed GNR metric, and the test accuracy ACC for LeNet and AlexNet networks trained on MNIST and CIFAR100 using our proposed regularizer with noise variance σ in training. Note that σ = 0 corresponds to baseline models trained without our regularizer. We observe that training networks with our proposed regularizer (designed for additive Gaussian attacks) not only improves the robustness against Gaussian attacks but also against 6 other types of attacks which 4 of them listed here and the others are left for appendix. the ground-truth labels and only consider model predictions. Therefore, we filtered out the from Figure 2 by removing all the experiments that achieved lower test accuracy than the baseline model. Figure 3 summarizes these for LeNet. Now, we can clearly see the difference between training with data augmentation and our approach. For MNIST (Figure 3a), we achieved the same robustness as 21-fold data augmentation without feeding the network with any noisy examples during training and while preserving the same baseline accuracy. Interestingly, for CIFAR10 (Figure 3b), our method is twice as robust as the best robustness achieved via data augmentation. Moreover, for CIFAR100 (Figure 3c), we are able to outperform data augmentation by around 5%. Finally, for extra validation, we also conducted the same experiments with AlexNet on CIFAR10 and CIFAR100 which can be found in the appendix. We can see that our proposed regularizer can improve robustness by 15% on CIFAR10 and around 25% on CIFAR100. It is interesting to note that for CIFAR10, data augmentation could not improve the robustness of the trained models without drastically degrading the testing accuracy on the noise-free examples. Moreover, it is interesting to observe that the best robustness achieved through data augmentation is even worse than the baseline. This could be due to the trade-off coefficient α in Equation 1. 
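The filtering step used for this comparison can be summarized by a small helper: keep only the models whose noise-free test accuracy is at least as good as the baseline's (up to a tolerance), and report the best robustness among them. The list-of-pairs data structure and the toy numbers below are assumptions.

```python
def best_robustness(models, baseline_acc, tol=0.0):
    """models: list of (noise_free_accuracy, robustness) pairs for trained models.
    Discard models that fall below the baseline accuracy minus `tol`, then
    return the highest robustness among the remaining models (or None)."""
    eligible = [rob for acc, rob in models if acc >= baseline_acc - tol]
    return max(eligible) if eligible else None

models = [(0.992, 0.41), (0.990, 0.55), (0.975, 0.70)]  # toy (accuracy, robustness) pairs
print(best_robustness(models, baseline_acc=0.991, tol=0.001))  # -> 0.55
```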
Towards General Robustness via Gaussian Robustness. Here, we investigate whether improving robustness to Gaussian input noise can improve robustness against other types of attacks. Specifically, we compare the robustness of models trained using our proposed regularizer (robust again Gaussian attacks) with baseline models subject to different types of attacks: Projected Gradient Descent (PGD) and LBFGS attacks , Fast Sign Gradient Method (FGSM) , and DeepFool L2Attack (DF2) as provide by. For all these attacks, we report the minimum energy perturbation that can change the network prediction. We also report our Gaussian Network Robustness (GNR) metric, which is the Gaussian version of Equation 4 along with the testing accuracy (ACC). We perform experiments on LeNet on MNIST, CIFAR10 and CIFAR100 datasets and on AlexNet on both CI-FAR10 and CIFAR100. Due to space constraints, we show the robustness for only LeNet on MNIST and AlexNet of CIFAR100 and leave the rest along with two other types of attacks for the appendix. Table 1 shows that improving GNR through our data augmentation free regularizer can significantly improve all robustness metrics. For instance, comparing LeNet trained with our proposed regularizer against LeNet trained without any regularization, i.e. σ = 0, we see that robustness against all types of attacks improves by almost two orders of magnitude, while maintaining a similar testing accuracy. A similar improvement in performance is consistently present for AlexNet on CIFAR100. Addressing the sensitivity problem of deep neural networks to adversarial perturbation is of great importance to the machine learning community. However, building robust classifiers against this noises is computationally expensive, as it is generally done through the means of data augmentation. We propose a generic lightweight analytic regularizer, which can be applied to any deep neural network with a ReLU activation after the first affine layer. It is designed to increase the robustness of the trained models under additive Gaussian noise. We demonstrate this with multiple architectures and datasets and show that it outperforms data augmentation without observing any noisy examples. A EXPERIMENTAL SETUP AND DETAILS. All experiments, are conducted using PyTorch version 0.4.1. All hyperparameters are fixed and Table 2 we report the setup for the two optimizers. In particular, we use the Adam optimizaer with β 1 = 0.9, β 2 = 0.999, = 10 −8 with amsgrad set to False. The second optimizer is with momentum=0.9, dampening=0, with Nesterov acceleration. In each experiment, we randomly split the training dataset into 10% validation and 90% training and monitor the validation loss after each epoch. If validation loss did not improve for lr patience epochs, we reduce the learning rate by multiplying it by lr factor. We start with an initial learning rate of lr initial. The training is terminated only if the validation loss did not improve for loss patience number of epochs or if the training reached 100 epochs. We report the of the model with the best validation loss. In particular, one can observe that with σ large than 0.7 the among of noise is severe even for the human level. Training on such extreme noise levels will deem data augmentation to be difficult. We measure the robustness against Gaussian noise by averaging over a range of input noise levels, where at each level for each image, we consider it misclassified if the probability of it being misclassified is greater than a certain threshold. 
The final robustness is the average over multiple testing σ x. This is special case of the more general case in Equation. We then report the area under the curve of the robustness with varying testing σ x as shown in Figure 6. The area under this curve thus represents the overall robustness of a given model under several varying input noise standard deviation σ x. We report the robustness of several architectures over several datasets with and without our trained regularizer. We show that our proposed efficient regularizer not only improves the robustness against Gaussin noise attacks but againts several other types of attacks. Table 3 summarizes the types of attacks used for robustness evaluation. reported models trained with our regularizer on CIFAR10 and CIFAR100 on all training σx are within 1.68% and 4.83% of the baseline accuracy, respectively. The models trained with the proposed regularizer achieve better robustness than 11-fold and 6-fold noisy data augmentation on CIFAR10 and CIFAR100, respectively. 34.65 35.50 Table 4: Gaussian robustness improves overall robustness. We report the robustness metrics corresponding to various attacks (PGD, LBFGS, FGSM, AGA, AUA and DF2), our proposed GNR metric, and the test accuracy ACC for LeNet and AlexNet networks trained on MNIST, CIFAR10 and CIFAR100 using our proposed regularizer with noise variance σ in training. Note that σ = 0 corresponds to baseline models trained without our regularizer. We observe that training networks with our proposed regularizer (designed for additive Gaussian attacks) not only improves the robustness against Gaussian attacks but also against 6 other types of attacks. | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | B1xDq2EFDH | An efficient estimate to the Gaussian first moment of DNNs as a regularizer to training robust networks. |
Reading comprehension is a challenging task, especially when executed across longer or across multiple evidence documents, where the answer is likely to reoccur. Existing neural architectures typically do not scale to the entire evidence, and hence, resort to selecting a single passage in the document (either via truncation or other means), and carefully searching for the answer within that passage. However, in some cases, this strategy can be suboptimal, since by focusing on a specific passage, it becomes difficult to leverage multiple mentions of the same answer throughout the document. In this work, we take a different approach by constructing lightweight models that are combined in a cascade to find the answer. Each submodel consists only of feed-forward networks equipped with an attention mechanism, making it trivially parallelizable. We show that our approach can scale to approximately an order of magnitude larger evidence documents and can aggregate information from multiple mentions of each answer candidate across the document. Empirically, our approach achieves state-of-the-art performance on both the Wikipedia and web domains of the TriviaQA dataset, outperforming more complex, recurrent architectures. Reading comprehension, the task of answering questions based on a set of one more documents, is a key challenge in natural language understanding. While data-driven approaches for the task date back to BID11, much of the recent progress can be attributed to new largescale datasets such as the CNN/Daily Mail Corpus BID8, the Children's Book Test Corpus BID9 and the Stanford Question Answering Dataset (SQuAD) BID21. These datasets have driven a large body of neural approaches BID24 BID16 BID22 BID27, inter alia) that build complex deep models typically driven by long short-term memory networks BID12. These models have given impressive on SQuAD where the document consists of a single paragraph and the correct answer span is typically only present once. However, they are computationally intensive and cannot scale to large evidence texts. Such is the case in the recently released TriviaQA dataset BID14, which provides as evidence, entire webpages or Wikipedia articles, for answering independently collected trivia-style questions. So far, progress on the TriviaQA dataset has leveraged existing approaches on the SQuAD dataset by truncating documents and focusing on the first 800 words BID14 BID18. This has the obvious limitation that the truncated document may not contain the evidence required to answer the question 1. Furthermore, in TriviaQA there is often useful evidence spread throughout the supporting documents. This cannot be harnessed by approaches such as that greedily search for the best 1-2 sentences in a document. For example, in Fig.1 the answer does not appear in the first 800 words. The first occurrence of the answer string is not sufficient to answer the question. The passage starting at token 4089 does contain all of the information required to infer the answer, but this inference requires us to resolve the two complex co-referential phrases in'In the summer of that year they got married in a church'. Access to other mentions of Krasner and Pollock and the year 1945 is important to answer this question. Figure 1: Example from TriviaQA in which multiple mentions contain information that is useful in inferring the answer. 
Only the italicized phrase completely answers the question (Krasner could have married multiple times) but contains complex coreference that is beyond the scope of current natural language processing. The last phrase is more easy to interpret but it misses the clue provided by the year 1945.In this paper we present a novel cascaded approach to extractive question answering (§3) that can accumulate evidence from an order of magnitude more text than previous approaches, and which achieves state-of-the-art performance on all tasks and metrics in the TriviaQA evaluation. The model is split into three levels that consist of feed-forward networks applied to an embedding of the input. The first level submodels use simple bag-of-embeddings representations of the question, a candidate answer span in the document, and the words surrounding the span (the context). The second level submodel uses the representation built by the first level, along with an attention mechanism BID2 that aligns question words with words in the sentence that contains the candidate span. Finally, for answer candidates that are mentioned multiple times in the evidence document, the third level submodel aggregates the mention level representations from the second level to build a single answer representation. At inference time, predictions are made using the output of the third level classifier only. However, in training, as opposed to using a single loss, all the classifiers are trained using the multi-loss framework of BID1, with gradients flowing down from higher to lower submodels. This separation into submodels and the multi-loss objective prevents adaptation between features BID10. This is particularly important in our case where the higher level, more complex submodels could subsume the weaker, lower level models c.f. BID1.To summarize, our novel contributions are• a non-recurrent architecture enabling processing of longer evidence texts consisting of simple submodels • the aggregation of evidence from multiple mentions of answer candidates at the representation level • the use of a multi-loss objective. Our experimental (§4) show that all the above are essential in helping our model achieve state-of-the-art performance. Since we use only feed-forward networks along with fixed length window representations of the question, answer candidate, and answer context, the vast majority of computation required by our model is trivially parallelizable, and is about 45× faster in comparison to recurrent models. Most existing approaches to reading comprehension BID24 BID16 BID22 BID27 BID25 BID15, inter alia) involve using recurrent neural nets (LSTMs BID12 or memory nets BID26) along with various flavors of the attention mechanism BID2 to align the question with the passage. In preliminary experiments in the original TriviaQA paper, BID14 explored one such approach, the BiDAF architecture BID22, for their dataset. However, BiDAF is designed for SQuAD, where the evidence passage is much shorter (122 tokens on an average), and hence does not scale to the entire document in TriviaQA (2895 tokens on an average); to work around this, the document is truncated to the first 800 tokens. Pointer networks with multi-hop reasoning, and syntactic and NER features, have been used recently in three architectures -Smarnet BID4, Reinforced Mnemonic Reader BID13 and MEMEN BID18 for both SQuAD and TriviaQA. 
Most of the above also use document truncation.Approaches such as first select the top sentences using a very coarse model and then run a recurrent architecture on these sentences to find the correct span. BID3 propose scoring spans in each paragraph with a recurrent network architecture separately and then take taking the span with the highest score. Our approach is different from existing question-answering architectures in the following aspects. First, instead of using one monolithic architecture, we employ a cascade of simpler models that enables us to analyze larger parts of the document. Secondly, instead of recurrent neural networks, we use only feed-forward networks to improve scalability. Third, our approach aggregates information from different mentions of the same candidate answer at the representation level rather than the score level, as in other approaches BID15 BID14. Finally, our learning problem also leverages the presence of several correct answer spans in the document, instead of considering only the first mention of the correct answer candidate. For the reading comprehension task (§3.1), we propose a cascaded model architecture arranged in three different levels (§3.2). Submodels in the lower levels (§3.3) use simple features to score candidate answer spans. Higher level submodels select the best answer candidate using more expensive attention-based features (§3.4) and by aggregating information from different occurrences of each answer candidate in the document (§3.5). The submodels score all the potential answer span candidates in the evidence document 2, each represented using simple bags-of-embeddings. Each submodel is associated with its own objective and the final objective is given by a linear combination of all the objectives (§3.6). We conclude this section with a runtime analysis of the model (§3.7). We take as input a sequence of question word embeddings q = {q 1 . . . q m}, and document word embeddings d = {d 1 . . . d n}, obtained from a dictionary of pre-trained word embeddings. Each candidate answer span, s = {d s1 . . . d so} is a collection of o ≤ l consecutive word embeddings from the document, where l is the maximum length of a span. The set of all candidate answer spans is S:= {s i} nl i=1. Limiting spans to length l minimally affects oracle accuracy (see Section §4) and allows the approach to scale linearly with document size. Since the same spans of text can occur multiple times in each document, we also maintain the set of unique answer candidate spans, u ∈ S u, and a mapping between each span and the unique answer candidate that it corresponds to, s u. In TriviaQA, each answer can have multiple alternative forms, as shown in Fig.1. The set of correct answer strings is S * and our task is to predict a single answer candidateû ∈ S. We first describe our meta-architecture, which is a collection of simple submodels M k (·) organized in a cascade. The idea of modeling separate classifiers that use complementary features comes from BID1 who found this gave considerable gains over combining all the features into a single model for a conversational system. As shown in FIG0, our architecture consists of two submodels M 1, M 2 at the first level, one submodel M 3 at the second, and one submodel M 4 at the third level. Each submodel M k returns a score, φ DISPLAYFORM0 s that is fed as input to the submodel at the next level. 
φ DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Using their respective scores, φs, the models M 1...M 3 define a distribution over all spans, while M 4 uses φ u to define a distribution over all unique candidates, as follows: DISPLAYFORM4 In training, our total loss is given by an interpolation of losses for each of M 1,.., M 4. However, during inference we make a single prediction, simply computed asû = arg max u∈Su φu. The first level is the simplest, taking only bags of embeddings as input. This level contains two submodels, one that looks at the span and question together (§3.3.1), and another that looks at the span along with the local context in which the span appears (§3.3.2). We first describe the span representations used by both. Span Embeddings: We denote a span of length o as a vector s, containing• averaged document token embeddings, and • a binary feature γ qs indicating whether the spans contains question tokens DISPLAYFORM0 The binary feature γ qs is motivated by the question-in-span feature from BID3, we use the question-in-span feature, motivated by the observation that questions rarely contain the answers that they are seeking and it helps the model avoid answers that are over-complete -containing information already known by the questioner. The question + span component of the level 1 submodel predicts the correct answer using a feedforward neural network on top of fixed length question and span representations. The span representation is taken from above and we represent the question with a weighted average of the question embeddings. Question Embeddings: Motivated by BID16 we learn a weight δ qi for each of the words in the question using the parameterized function defined below. This allows the model to learn to focus on the important words in the question. δ qi is generated with a two-layer feed-forward net with rectified linear unit (ReLU) activations BID17 BID7, DISPLAYFORM0 where U, V, w, z, a and b are parameters of the feed-forward network. Since all three submodels rely on identically structured feed-forward networks and linear prediction layers, from now on we will use the abbreviations ffnn and linear as shorthand for the functions defined above. The scores, δ qi are normalized and used to generate an aggregated question vectorq as follows. DISPLAYFORM1 Now that we have representations of both the question and the span, the question + span model computes a hidden layer representation of the question-span pair as well as a scalar score for the span candidate as follows: DISPLAYFORM2 s ) where [x; y] represents the concatenation of vectors x and y. The span + short context component builds a representation of each span in its local linguistic context. We hypothesize that words in the left and right context of a candidate span are important for the modeling of the span's semantic category (e.g. person, place, date). Unlike level 1 which builds a question representation independently of the document, the level 2 submodel considers the question in the context of each sentence in the document. This level aligns the question embeddings with the embeddings in each sentence using neural attention BID2 BID16 BID27 BID22, specifically the attend and compare steps from the decomposable attention model of. The aligned representations are used to predict if the sentence could contain the answers. We apply this attention to sentences, not individual span contexts, to prevent our computational requirements from exploding with the number of spans. 
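A simplified sketch of this sentence-level alignment (the 'attend' step of decomposable attention): question and sentence embeddings are projected by a small feed-forward net, pairwise similarity scores η are computed, and softmax-normalized scores produce attended summaries in both directions. Sharing a single projection for both inputs and the specific dimensions are simplifying assumptions relative to the description that follows.

```python
import numpy as np

def softmax(X, axis):
    e = np.exp(X - X.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relu_mlp(X, W1, W2):
    """Shared projection applied before computing attention similarities."""
    return np.maximum(np.maximum(X @ W1, 0.0) @ W2, 0.0)

def attend(Q, S, W1, W2):
    """Soft-align every question word with the sentence and vice versa."""
    eta = relu_mlp(Q, W1, W2) @ relu_mlp(S, W1, W2).T   # (m, G) similarity scores
    q_att = softmax(eta, axis=1) @ S                     # sentence summary per question word
    s_att = softmax(eta, axis=0).T @ Q                   # question summary per sentence word
    return q_att, s_att

rng = np.random.default_rng(0)
Q, S = rng.normal(size=(5, 300)), rng.normal(size=(12, 300))   # question / sentence embeddings
W1, W2 = 0.05 * rng.normal(size=(300, 200)), 0.05 * rng.normal(size=(200, 200))
q_att, s_att = attend(Q, S, W1, W2)
print(q_att.shape, s_att.shape)  # (5, 300) and (12, 300)
```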
Subsequently, it is only because level 2 includes the level 1 representations h s and h s that it can assign different scores to different spans in the same sentence. Sentence Embeddings: We define g s = {d gs,1 . . . d g s,G} to be the embeddings of the G words of the sentence that contains s. First, we measure the similarity between every pair of question and sentence embeddings by passing them through a feed-forward net, ffnn att1 and using the ing hidden layers to compute a similarity score, η. Taking into account this similarity, attended vectors q i andd gs,j are computed for each question and sentence token, respectively. DISPLAYFORM0 The original vector and its corresponding attended vector are concatenated and passed through another feed-forward net, ffnn att2 the final layers from which are summed to obtain a question-aware sentence vectorḡ s, and a sentence context-aware question vector,q. DISPLAYFORM1 Using these attended vectors as features, along with the hidden layers from level 1 and the questionspan feature, new scores and hidden layers are computed for level 2: DISPLAYFORM2 s;q;ḡ s; γ qs ]), φ DISPLAYFORM3 s ) In this level, we aggregate information from all the candidate answer spans which occur multiple times throughout the document. The hidden layers of every span from level 2 (along with the question-in-span feature) are passed through a feed-forward net, and then summed if they correspond to the same unique span, using the s u map. The sum, h u is then used to compute the score and the hidden layer 3 for each unique span, u in the document. DISPLAYFORM0 DISPLAYFORM1 The hidden layer in level 3 is used only for computing the score φu, mentioned here to preserve consistency of notation. To handle distant supervision, previous approaches use the first mention of the correct answer span (or any of its aliases) in the document as gold BID14. Instead, we leverage the existence of multiple correct answer occurrences by maximizing the probability of all such occurrences. Using Equation 1, the overall objective, (U *, V *, w *, z *, a *, b *) is given by the total negative log likelihood of the correct answer spans under all submodels: DISPLAYFORM0 where λ 1,..,.λ 4 are hyperparameters, such that 4 i=1 λ i = 1, to weigh the contribution of each loss term. We briefly discuss the asymptotic complexity of our approach. For simplicity assume all hidden dimensions and the embedding dimension are ρ and that the complexity of matrix(ρ×ρ)-vector(ρ×1) multiplication is O(ρ 2). Thus, each application of a feed-forward network has O(ρ 2) complexity. Recall that m is the length of the question, n is the length of the document, and l is the maximum length of each span. We then have the following complexity for each submodel:Level 1 (Question + Span): Building the weighted representation of each question takes O(mρ 2) and running the feed forward net to score all the spans requires O(nlρ 2), for a total of O(mρ 2 + nlρ 2). Level 2: Computing the alignment between the question and each sentence in the document takes O(nρ 2 + mρ 2 + nmρ) and then scoring each span requires O(nlρ 2). This gives a total complexity of O(nlρ 2 + nmρ), since we can reasonably assume that m < n. Thus, the total complexity of our approach is O(nlρ 2 + mnρ). While the nl and nm terms can seem expensive at first glance, a key advantage of our approach is that each sub-model is trivially parallelizable over the length of the document (n) and thus very amenable to GPUs. 
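A minimal sketch of the level-3 aggregation described above: each mention's level-2 hidden state is passed through a feed-forward layer and the results are summed over all mentions mapping to the same unique answer string (the s → u map), after which every unique candidate is scored. The single-layer scorer and the omission of the question-in-span feature are simplifications.

```python
import numpy as np

def aggregate_mentions(H2, span_texts, W_agg, w_score):
    """Sum transformed level-2 representations of identical spans, then score them."""
    unique = sorted(set(span_texts))
    index = {u: i for i, u in enumerate(unique)}
    H_u = np.zeros((len(unique), W_agg.shape[0]))
    for h, text in zip(H2, span_texts):
        H_u[index[text]] += np.maximum(W_agg @ h, 0.0)   # sum over mentions of the same string
    scores = H_u @ w_score
    return unique, scores

rng = np.random.default_rng(0)
H2 = rng.normal(size=(4, 300))                           # level-2 hidden states of 4 mentions
span_texts = ["1945", "lee krasner", "1945", "pollock"]  # "1945" occurs twice in the document
W_agg, w_score = 0.05 * rng.normal(size=(300, 300)), rng.normal(size=300)
unique, scores = aggregate_mentions(H2, span_texts, W_agg, w_score)
print(unique[int(np.argmax(scores))])                    # prediction: highest-scoring unique span
```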
Moreover note that l is set to 5 in our experiments since we are only concerned about short answers. The TriviaQA dataset BID14 contains a collection of 95k trivia question-answer pairs from several online trivia sources. To provide evidence for answering these questions, documents are collected independently, from the web and from Wikipedia. Performance is reported independently in either domain. In addition to the answers from the trivia sources, aliases for the answers are collected from DBPedia; on an average, there are 16 such aliases per answer entity. Answers and their aliases can occur multiple times in the document; the average occurrence frequency is almost 15 times per document in either domain. The dataset also provides a subset on the development and test splits which only contain examples determined by annotators to have enough evidence in the document to support the answer. In contrast, in the full development and test split of the data, the answer string is guaranteed to occur, but not necessarily with the evidence needed to answer the question. Data preprocessing: All documents are tokenized using the NLTK 4 tokenizer. Each document is truncated to contain at most 6000 words and at most 1000 sentences (average the number of sentences per document in Wikipedia is about 240). Sentences are truncated to a maximum length of 50 (avg sentence length in Wikipedia is about 22). Spans only up to length l = 5 are considered and cross-sentence spans discarded -this in an oracle exact match accuracy of 95% on the Wikipedia development data. To be consistent with the evaluation setup of BID14, for the Wikipedia domain we create a training instance for every question (with all its associated documents), while on the web domain we create a training instance for every question-document pair. Hyperparameters: We use GloVe embeddings BID20 of dimension 300 (trained on a corpus of 840 billion words) that are not updated during training. Each embedding vector is normalized to have 2 norm of 1. Out-of-vocabulary words are hashed to one of 1000 random embeddings, each initialized with a mean of 0 and a variance of 1. Dropout regularization BID23 ) is applied to all ReLU layers (but not for the linear layers). We additionally tuned the following hyperparameters using grid search and indicate the optimal values in parantheses: network size (2-layers, each with 300 neurons), dropout ratio (0.1), learning rate (0.05), context size, and loss weights (λ 1 = λ 2 = 0.35, λ 3 = 0.2, λ 4 = 0.1). We use Adagrad BID6 for optimization (default initial accumulator value set to 0.1, batch size set to 1). Each hyperparameter setting took 2-3 days to train on a single NVIDIA P100 GPU. The model was implemented in Tensorflow BID0. Table 1 presents our on both the full test set as well as the verified subsets, using the exact match (EM) and F 1 metrics. Our approach achieves state-of-the-art performance on both the Wikipedia and web domains outperforming considerably more complex models 5. In the web domain, except for the verified F 1 scores, we see a similar trend. Surprisingly, we outperform approaches which use multi-layer recurrent pointer networks with specialized memories BID4 BID13 6. DISPLAYFORM0 Wikipedia Dev (EM) Table 2: Model ablations on the full Wikipedia development set. For row labeled **, explanation provided in Section §4.3. Table 2 shows some ablations that give more insight into the different contributions of our model components. 
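The embedding preprocessing described above can be sketched as follows: every pre-trained GloVe vector is normalized to unit ℓ2 norm and out-of-vocabulary words are hashed to one of 1000 fixed random embeddings (mean 0, variance 1). The specific hashing function is an assumption.

```python
import numpy as np

def build_lookup(pretrained, dim=300, num_oov=1000, seed=0):
    """Unit-normalize pre-trained word vectors and prepare OOV hash buckets."""
    rng = np.random.default_rng(seed)
    vocab = {w: v / np.linalg.norm(v) for w, v in pretrained.items()}
    oov = rng.normal(0.0, 1.0, size=(num_oov, dim))       # mean 0, variance 1 random embeddings
    def embed(word):
        if word in vocab:
            return vocab[word]
        return oov[hash(word) % num_oov]                  # hash OOV words to a fixed random vector
    return embed

pretrained = {"question": np.ones(300), "answer": np.full(300, 2.0)}  # toy "GloVe" vectors
embed = build_lookup(pretrained)
print(round(float(np.linalg.norm(embed("question"))), 3), embed("zyzzyva").shape)
```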
Our final approach (3-Level Cascade, Multiloss) achieves the best performance. Training with only a single loss in level 3 (3-Level Cascade, Single Loss) leads to a considerable decrease in performance, signifying the effect of using a multi-loss architecture. It is unclear if combining the two submodels in level 1 into a single feed-forward network that is a function of the question, span and short context (3-Level Cascade, Combined Level 1) is significantly better than our final model. Although we found it could obtain high , it was less consistent across different runs and gave lower scores on average (49.30) compared to our approach averaged over 4 runs (51.03). Finally, the last three rows show the of using only smaller components of our model instead of the entire architecture. In particular, our model without the aggregation submodel (Level 1 + Level 2 Only) performs considerably worse, showing the value of aggregating multiple mentions of the same span across the document. As expected, the level 1 only models are the weakest, showing that attending to the document is a powerful method for reading comprehension. FIG2 shows the behavior of the k-best predictions of these smaller components. While the difference between the level 1 models becomes more enhanced when considering the top-k candidates, the difference between the model without the aggregation submodel (Level 1 + Level 2 Only) and our full model is no longer significant, indicating that the former might not be able to distinguish between the best answer candidates as well as the latter. The effect of truncation on Wikipedia in FIG2 (right) indicates that more scalable approaches that can take advantage of longer documents have the potential to perform better on the TriviaQA task. Multiple Mentions: TriviaQA answers and their aliases typically reoccur in the document (15 times per document on an average). To verify whether our model is able to predict answers which occur frequently in the document, we look at the frequencies of the predicted answer spans in FIG3 (left). This distribution follows the distribution of the gold answers very closely, showing that our model learns the frequency of occurrence of answer spans in the document. Speed: To demonstrate the computational advantages of our approach we implemented a simple 50-state bidirectional LSTM baseline (without any attention) that runs through the document and predicts the start/end positions separately. FIG3 shows the speedup ratio of our approach compared to this LSTM baseline as the length of the document is increased (both approaches use a P100 GPU). For a length of 200 tokens, our approach is about 8× faster, but for a maximum length of 10,000 tokens our approach is about 45× faster, showing that our method can more easily take advantage of GPU parallelization. We observe the following phenomena in the (see Table 3) which provide insight into the benefits of our architecture, and some directions of future work. Lower levels get the prediction right, but not the upper levels. Model predicts entities from the question. Table 3: Example predictions from different levels of our model. Evidence context and aggregation are helpful for model performance. The model confuses between entities of the same type, particularly in the lower levels. Aggregation helps As motivated in Fig 1, we observe that aggregating mention representations across the evidence text helps (row 3). 
Lower levels may contain, among the top candidates, multiple mentions of the correct answer (row 4). However, since they cannot aggregate these mentions, they tend to perform worse. Moreover, level 3 does not just select the most frequent candidate, it selects the correct one (row 2). Context helps Models which take into account the context surrounding the span do better (rows 1-4) than the level 1 (question + span) submodel, which considers answer spans completely out of context. Entity-type confusion Our model still struggles to resolve confusion between different entities of the same type (row 4). Context helps mitigate this confusion in many cases (rows 1-2). However, sometimes the lower levels get the answer right while the upper levels do not (row 5), which illustrates the value of using a multi-loss architecture with a combination of models. Our model still struggles with deciphering the entities present in the question (row 5), despite the question-in-span feature. We presented a 3-level cascaded model for TriviaQA reading comprehension. Our approach, through the use of feed-forward networks and bag-of-embeddings representations, can handle longer evidence documents and aggregate information from multiple occurrences of answer spans throughout the document. We achieved state-of-the-art performance on both the Wikipedia and web domains, outperforming several complex recurrent architectures. | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | HyRnez-RW | We propose neural cascades, a simple and trivially parallelizable approach to reading comprehension, consisting only of feed-forward nets and attention that achieves state-of-the-art performance on the TriviaQA dataset. |
Compressed forms of deep neural networks are essential in deploying large-scale computational models on resource-constrained devices. Contrary to analogous domains where large-scale systems are built as a hierarchical repetition of small-scale units, the current practice in Machine Learning largely relies on models with non-repetitive components. In the spirit of molecular composition with repeating atoms, we advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons. In other words, the same neurons with the same weights are stochastically re-positioned in subsequent layers of the network. Empirical evidence suggests that ACNs achieve compression rates of up to three orders of magnitude compared to fine-tuned fully-connected neural networks (88× to 1116× reduction) with only a fractional deterioration of classification accuracy (0.15% to 5.33%). Moreover our method can yield sub-linear model complexities and permits learning deep ACNs with fewer parameters than a logistic regression, with no decline in classification accuracy. The universe is composed of matter, a physical substance formed by the structural constellation of a plethora of unitary elements denoted as atoms. The type of an atom eventually defines the respective chemical element, while structural bonding between atoms yields molecules (the building blocks of matter and our universe). In Machine Learning a neuron is the infinitesimal nucleus of intelligence (i.e. {atom, matter} ↔ {neuron, AI}), whose structural arrangement in layers produces complex intelligence models. Surprisingly, in contrast to physical matter where molecules often reuse quasi-identical atoms (i.e. repeating carbon, hydrogen, etc.), neural networks do not share the same neurons across layers. Instead, the neurons are parameterized through weights which are optimized independently for every neuron in every layer. Inspired by nature, we propose a new paradigm for constructing deep neural networks as a recursive repetition of a fixed set of neurons. Staying faithful to the analogy we name such models Atomic Compression Networks (ACNs). Extensive experimental results show that by repeating the same set of neurons, ACNs achieve unprecedented compression in terms of the total neural network parameters, with a minimal compromise on the prediction quality. Deep neural networks (DNN) achieve state-of-the-art prediction performances on several domains like computer vision and natural language processing. Therefore, considerable research efforts are invested in adopting DNNs for mobile, embedded, or Internet of Things (IoT) devices. Yet, multiple technical issues related to restricted resources, w.r.t. computation and memory, prevent their straightforward application in this particular domain. Even though prior works investigate neural compression techniques like pruning or low-rank parameter factorization, they face fragility concerns regarding the tuning of hyperparameters and network architecture, besides struggling to balance the trade-off between compression and accuracy.
• a novel compression paradigm for neural networks composed of repeating neurons as the atomic network components and further motivated by function composition; • compression rates of up to three orders of magnitudes compared to a cross-validated fullyconnected network on nine real-world vector datasets; • first work to achieve sub-linear model complexities measured in the number of trained parameters compared to connected architectures on several computer vision tasks. 2 RELATED WORK Our approach of training a set of neurons and (re-)using them in building the network architecture is partially related to the existing scheme of modular neural networks (MNN). End-to-End Module Networks (a;) are deep neural network models that are constructed from manual module blueprints defined for different sub-tasks in question answering. The Compositional Recursive Learner proposed by employs a curriculum learning approach to learn modular transformations while Routing Networks (RN) consist of a set of pre-defined modules (which each can be a NN) and a meta controller (called router) that decides in which order these modules are consecutively applied to a given input. Modular Networks are an extension to RNs employing conditional computation with Expectation-Maximization. Finally the recent work of focuses on the recursive formulation of the Routing Network, consecutively applying one of a set of modules to an input. The crucial difference to our approach is that our neurons are much smaller than the general modules of MNN and that we reuse the same components on the same network level (e.g. within the same layer) while the modules of MNN are only applied sequentially. A different extension of the RN is the model of which uses the router in a gating mechanism to control the input to a set of shared RNN modules similar to a Mixture of Experts model . Although Mixture of Experts models are related, they normally do not stack and recursively reuse components within the same network but have a comparably shallow architecture. Another related field of research is that of neural architecture search (NAS) . It is concerned with the automatic discovery of high performing neural network architectures for diverse tasks. There exists a multitude of search-approaches including (Neuro-)Evolution and Reinforcement Learning . A sub-field of NAS is the dedicated search for neural modules called cells or blocks ), which can be reused in the network architecture. Although this is related to our approach, the discovered cells are usually much more complex than single neurons and only share their architecture while each cell has a different set of weights, therefore not reducing the total number of required parameters. Although parameter sharing approaches exist, they either share weights between different evolved macro-architectures or only use them to warm start cells with a parameter copy on initialization, which then is refined in subsequent fine tuning steps , missing the advantage of the recursive sharing scheme employed by ACNs. Although recent works like achieve huge speed-ups, the large amount of time required to find suitable architectures remains a major disadvantage, while our approach can be trained from scratch in one go. One popular way for network compression is the pruning of unimportant weights from the network, an idea first appearing several decades ago to prevent over-fitting by reducing the network complexity . The seminal work by Han et al. 
(2015b) proposes an iterative procedure consisting of training and pruning weights with small magnitudes. focus on pruning filters of trained CNNs based on the L 1 -norm while Louizos et al. (2017b) and consider L 0 -norm regularization. In contrast employ a pruning schedule on the channel with the smallest effect on the activations of the subsequent layer. Molchanov et al. (2016; use Taylor expansion to calculate importance scores for different channels to prune CNNs. Similarly neuron importance scores based on the reconstruction error of responses in the final layers are used by while propose an adaptive pruning scheme for sub-groups of weights. Furthermore there also exist recent Bayesian approaches to weight pruning (; a;). The input masks are shown by colored arrows to respective neurons, where f 1 (red) has 2, f 2 (orange) has 3 and f m (green) has 4 inputs. The output layer f out (blue) is an independent module which is not reused. Another path of research utilizes factorization approaches to produce low-rank approximations to the weight matrices of DNNs . Alternatively vector quantization methods can be applied to the parameter matrices, e.g. dividing the weights into sub-groups which are represented by their centroid . SqueezeNet introduced by adapts so-called squeeze modules which reduce the number of channels while TensorNet uses the Tensor-Train format to tensorize the dense weight matrices of FC layers. use the Kronecker product to achieve layer-wise compression. Recent work combines several of the aforementioned methods in hybrid models. Han et al. (2015a) use pruning together with layer-wise quantization and additional Huffman coding. Tucker decomposition supported by a variational Bayesian matrix factorization is employed while jointly apply pruning and quantization techniques in a single framework. A special variant of compression methods for DNNs is the teacher-student approach also known as network distillation (; ;), which utilizes the relation of two models, a fully trained deep teacher model and a significantly smaller and more shallow student model to distill the knowledge into a much more compressed form. There exist different approaches to the design and training of the student model, e.g. utilizing special regularization or quantization techniques . A crucial advantage of ACNs is that we do not waste time and computational resources on training a large, over-parameterized architecture which is prone to over-fitting and usually needs a lot of training data only to compress it afterwards (e.g. by pruning or distillation). Instead we directly train smaller models which bootstrap their capacity by recursively-stacked neurons and therefore are more compact and can be trained efficiently with less data. Furthermore, the aforementioned approaches are often quite limited in achieving large compression without degrading the prediction performance. HashedNets use random weight sharing within the parameter matrix of each layer and introduce an additional hashing formulation of the forward pass and the gradient updates. Our model is different from these approaches since we share neurons which can have an arbitrary number of weights and don't restrict the sharing to the same layer, but enable sharing across layers, too. As an additional complementary method to the aforementioned approaches one can train networks with reduced numerical precision of their parameters . 
This achieves further compression, and experiments show that it leads only to a marginal degradation of model accuracy. Taking this idea to the extreme, some works constrain the values of all weights and activations to be either +1 or -1. A neuron can be understood as an arbitrary parametric function f(x; θ) mapping its inputs to a scalar activation. The idea is to stochastically apply the same neuron with the same weights to the outputs of neurons from the previous layer. The different functions or components f_i then form a global set M, f_1, f_2, ..., f_m ∈ M, which defines the set of parameters (indexed by the neurons) available to the model. Additionally we add the output layer f_out, which is a unique FC layer that is not reused anywhere else in the network (see figure 1). The most naive approach to recursion is repeating whole layers with shared weights. Each module is represented by a FC layer mapping from a layer input dimension D_in to the output dimension D_out. The architecture is depicted in figure 1a. More generally this approach can also be written in terms of a function composition (see equation 1). Sharing layers forces them to have equal input and output dimensions, resulting in rectangular-shaped neural architectures. For comparison with our neuron-level ACN in the ablation study in the experiment section, we refer to this approach with random sampling of the sequential network composition as LayerNet. In contrast to repeating entire layers, ACN reuses the most atomic components of a neural network, which are represented by its single neurons. Consequently a full network layer in the model consists of several neurons which do not need to be all different, but the same neuron can be reused more than once in the same layer (see figure 1b). Each neuron p ∈ M has an input dimension d_p which we choose randomly from a preset range of dimensions D. When inserting the neuron into a layer, we randomly sample d_p connections to the neurons of the previous layer, forming a mask δ_p on its activation z. While the trainable parameters (weights and bias) of the neurons are shared, this connection mask is different for every neuron in every layer. The procedure for sampling the ACN architecture is shown in algorithm 1. Equation 2 shows the forward pass through an ACN neuron layer, where Π represents the projection (selection) of the mask indices δ from the activations of the previous layer; a sketch of this forward pass is given in the code example below. To reduce the additional memory complexity of these input masks we can store the seed of the pseudo-random generator used for sampling the masks and reproduce them on the fly in the forward pass at inference time. Since the input masks of the neurons are completely random, situations may arise in which not all elements of z are forwarded to the next layer. However, in our experiments we find that this case happens only very rarely, provided that the number of neurons per layer and the greater part of the input dimensions D are sufficiently large. To gain more insight into the potential of ACN we perform a small study on a synthetic curve fitting problem. We compare a simple FC network with a varying number of layers and neurons to an ACN with approximately the same number of parameters. The curve fitting problem is a regression task on the function f(x) = 3.5x^3 − 3 sin(x) + 0.5 sin(8x^2 + 0.5π)^2. To achieve the best possible fit for the models we perform a hyper-parameter grid search including the dimension of hidden layers, the learning rate and the number of different modules for the recursive net. Details can be found in appendix A.2.
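The following NumPy sketch illustrates the core ACN idea from this section: a small global pool of neurons (shared weights) that are stochastically re-used across layers, each occurrence receiving its own random input mask over the previous layer's activations. Pool sizes, the tanh activation, and the mask-sampling details are simplified assumptions for illustration, not the exact procedure of algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Global pool M of shared neurons: neuron p has its own input dimension d_p,
# a weight vector w_p of that length, and a bias b_p. These parameters are the
# ONLY trainable weights and are re-used wherever the neuron appears.
input_dims = [2, 3, 4, 4, 3]                         # d_p drawn from a preset range D
pool = [{"w": rng.normal(0, 0.5, d), "b": 0.0} for d in input_dims]

def sample_layer(num_neurons, prev_width, seed):
    """Randomly place neurons from the pool into a layer; each occurrence gets
    its own random connection mask delta over the previous layer's outputs."""
    r = np.random.default_rng(seed)                  # storing the seed reproduces the masks
    layer = []
    for _ in range(num_neurons):
        p = r.integers(len(pool))
        delta = r.choice(prev_width, size=len(pool[p]["w"]), replace=False)
        layer.append((p, delta))
    return layer

def forward_layer(z_prev, layer):
    """z_j = tanh(w_p . z_prev[delta] + b_p); weights are shared, masks are not."""
    return np.array([np.tanh(pool[p]["w"] @ z_prev[delta] + pool[p]["b"])
                     for p, delta in layer])

# A tiny ACN: 5 input features -> 3 hidden layers of 6 re-used neurons -> linear output.
x = rng.normal(size=5)
widths, z = [5, 6, 6, 6], x
layers = [sample_layer(6, w_prev, seed=i) for i, w_prev in enumerate(widths[:-1])]
for layer in layers:
    z = forward_layer(z, layer)
w_out = rng.normal(0, 0.5, z.shape[0])               # unique, non-shared output layer f_out
print(float(w_out @ z))
```

Note that re-applying the same handful of neurons across depth composes them into increasingly complex functions, which is the compression mechanism the next section motivates via function composition.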
Considering our knowledge that the true function includes powers of x, we boost the performance of both models by squaring the neuron activations. Figure 2: Plots of the model fit to the curve data. The first row shows the fit of the FC baseline, the second that of ACN. In the columns the respective number of model parameters as well as the achieved MSE on the test set are given above the plots. Since an arbitrary number of parameters cannot always be achieved with each model, the next nearest number with a competitive model was selected, e.g. in the first column with 17 and 18 parameters respectively. As can be seen in figure 2, ACN consistently achieves a better fit in terms of MSE than the FC baseline. Already an ACN with only 32 parameters is sufficient to approximately fit even the highly non-linear parts of the function, while the FC network only achieves a comparable fit with more than twice the number of parameters. This can be explained by some intuition about function composition. Consider a function f(x) = (αx + β)^2, f: R → R, with sole parameters α and β. Then we can create much more complex functions with the same set of parameters by composition: f(f(x)) = (α(αx + β)^2 + β)^2 = (α^3x^2 + 2α^2βx + αβ^2 + β)^2, a quartic polynomial obtained from just two parameters. Extending this to other functions g, h, ... (each of which can be a neuron) enables us to create complex functions with a small set of parameters. Furthermore, compared to a standard FCN, our ACN achieves much deeper architectures with the same number of parameters, which further improves the fitting capability. We evaluate the performance of the proposed algorithms using nine publicly available real-world classification datasets and three image datasets. The selected datasets and their characteristics are detailed in appendix A.1. The datasets were chosen from the OpenML-CC18 benchmark, which is a curated high-quality version of the OpenML100 benchmark. Details on the employed hyper-parameters and training setup can be found in appendix A.2. We compare our model to six different baseline models: • FC: A standard FC network of comparable size. It has been shown that this is a simple but strong baseline which often outperforms more sophisticated methods. • RER: Random Edge Removal, first introduced by Cireşan et al. For this baseline a number of connections is randomly dropped from a FC network on initialization. In contrast to dropout, the selected connections are dropped completely, and accordingly the network is trained and also evaluated as a sparse version. • TensorNet: This model builds on a generalized low-rank quantization method that treats the weight matrices of FC layers as a two-dimensional tensor which can be decomposed into the Tensor-Train (TT) format. The authors also compare their model to HashedNet and claim to outperform it by 1.2%. To train TensorNet on our datasets we introduce two FC layers mapping from the input to the consecutive TT layers and from the last TT layer back to the output dimension. • BC: The Bayesian compression method of Louizos et al. (2017a) uses scale mixtures of normals as sparsity-inducing priors on groups of weights, extending the variational dropout approach to completely prune unimportant neurons while training a model from scratch. We achieve varying levels of compression and accuracy by specifying different thresholds for the variational dropout rate of the model. • TP: This Taylor-expansion pruning baseline uses first- and second-order Taylor expansions to compute importance scores for filters and weights, which are then used to prune respective parts of the network.
In our experiments we use the method which the authors report as best, introducing pruning gates directly after batch normalization layers. To realize different levels of compression we prune increasingly bigger parts of the network based on the calculated importance scores in an iterative fine-tuning procedure. • LogisticRegression: A linear model. The results of the experimental study on the real-world datasets are shown in figure 3, while table 1 shows the results for the image datasets. All results are obtained by running each model-hyperparameter combination with three different random seeds and averaging the resulting performance metric and number of parameters. The standard deviation between runs can be found in table 5 in appendix A.3. Table 1: Classification test accuracy on image datasets - in each row the best model (selected based on validation accuracy) up to the specified maximum number of parameters is shown. Therefore the model and its test accuracy values do not change for greater numbers of parameters if a smaller model has a better validation accuracy. For some models very small versions could not be trivially achieved (indicated by "-"). All trained models are fully connected architectures. We did not train ACN for more than 500,000 parameters. As can be seen in figure 3, ACN consistently outperforms all other models on all of the nine real-world datasets for very small models, and for datasets with a huge feature space even for models with up to 10,000 parameters. This shows that the ACN architecture is very efficient w.r.t. its parameters and the required amount of training data, and that on these datasets the recursive nature and parameter sharing throughout the architecture have a positive impact on the predictive efficiency relative to the number of parameters. For larger models the advantage of ACN decreases and the other models catch up or even outperform it by a small margin, e.g. in case of the InternetAds dataset; however, ACN remains competitive. Figure 3: Plots of the model classification error on the test set for different numbers of parameters. For each tick on the x-axis the model with the best validation accuracy under the threshold for the number of parameters is selected. In the titles next to the name of the dataset we indicate the dimension of the respective feature space. For some methods very small models could not be achieved, so they have no points plotted on the left part of the plots (e.g. a fully connected network will not have fewer parameters than a linear model with number of inputs × number of classes many parameters). TensorNet usually is at least a factor of two larger than the other models and therefore is only shown in the last bin. The same applies for TP and BC on the optdigits, theorem and nomao/spambase datasets respectively. To evaluate the parameter efficiency of our technique we compare ACN and FC (as the best performing baseline) to a large optimal FC network trained on the same dataset (doing a hyper-parameter search with the same parameters mentioned in section 4.2 and five increasingly large architectures). We show the results regarding the relative compression rate and the respective loss in model performance in table 2. Say n is the number of model parameters of the large baseline and n' that of the evaluated model; then the compression rate is given by ρ = n/n'. Our model achieves compression rates of 88 up to more than 1100 times while the loss in test accuracy is kept at around 0.15% up to 5.33%.
In comparison FC achieves compression of 67 up to 528 times with accuracy loss between 0.28% and 7.13%. For all datasets except spambase and theorem ACNs realize higher compression rates than FC as well as a smaller loss in accuracy in 5/9 cases. However since the comparison to a large model baseline can be biased by model selection and the thoroughness of the respective hyper-parameter search, we also investigate how the performance and capacity of the models compare w.r.t. a linear baseline. The in table 2 confirm the trend that is observed in the comparison to the large model. On the image datasets ACN consistently and significantly outperforms all other models for up to 100,000 parameters. On MNIST our smallest model has 4091 parameters and outperforms a linear model with 7850 by 1.8% although the baseline has nearly twice as many parameters. The same trend can be observed for FashionMNIST (7057 / 7850 with a difference of 2.04%) and CIFAR10 where an ACN with 7057 parameters employs less than a quarter of the 30730 parameters needed by the simplest linear model and outperfoms it by a overwhelming 8.63%. Surprisingly the LayerNet outperfoms all other models for the category < 500, 000. For more than 500,000 parameters the FC network performs best. Although the on CIFAR10 first might seem rather poor compared to the approximately 96% accuracy which is achieved by CNN , the best achieved by dedicated FC networks lie around 56% and are only topped by a specifically designed network with linear bottleneck layers and unsupervised pre-training . While our model sacrifices some processing time for higher compression, the forward pass at inference time still works in a matter of milliseconds on a standard CPU. To investigate the effect of the chosen level of sharing and recursion we perform a small ablation study. We compare ACN with parameter-sharing and recursion on a neuron-level with the simpler approach of sharing and reusing whole layers described as the LayerNet architecture in section 3. Therefore we also report the performance for the LayerNet in an extra column in the tables 5 and 1. The imply that in general the recursion on neuron-level is much more effective to achieve competitive models with high compression, outperforming the LayerNet in all but one case for less than 10,000 parameters. However the LayerNet seems to have a beneficial regularization effect on large models, what is notable especially on the image datasets. In this paper we presented Atomic Compression Networks (ACN), a new network architecture which recursively reuses neurons throughout the model. We evaluate our model on nine vector and three image datasets where we achieve promising regarding the compression rate and the loss in model accuracy. In general ACNs achieve much tinier models with only a small to moderate decrease of accuracy compared to six other baselines. For future work we plan to include skip connections in the architecture and to extend the idea to CNNs and the sharing of kernel parameters as well as for the FC layers. Another interesting path of research is the combination of the ACN scheme with NAS methods to further optimize the efficiency and performance of the created architectures. A.1 DATASET DETAILS Table 3 shows detailed attributes for the datasets selected from the OpenML-CC18 benchmark. The selection criteria were datasets with more than 3000 instances and more than 50 original features. The number of features reported in the table is the number of features after one-hot-encoding. 
For the image datasets we employ the original train/test splits and use a random 20% of the training set for validation, leading to a train/val/test split of 0.64/0.16/0.2. The remaining (vector) datasets are split randomly using the same fractions. We do not use any kind of additional transformation or data augmentation beyond those already applied to the respective datasets.
dataset       instances  features  classes  minority class (%)
bioresponse   3751       1776      2        45.77
HAR           10299      561       6        13.65
InternetAds   3279       3113      2        14.00
isolet        7797       617       26       3.82
nomao         34465      174       2        28.56
optdigits     5620       64        10       9.86
spambase      4601       57        2        39.40
Given the large computational demands of deep learning architectures, a proper full hyper-parameter search is infeasible for the models employed on the real-world datasets. Under these circumstances we follow the established practice of using specific deep architectures designed based on expert knowledge, together with a small selected parameter grid containing the learning rate (0.01, 0.001), the use of batch normalization and the dropout probability (0.0, 0.3). To enable a fair comparison, the hyper-parameter search for every model type which is trained from scratch is done for the same number of combinations. Accordingly, for each dataset we evaluate eight different architectures per model type where we vary the number of hidden layers and the width of these layers; for ACN, M and D differ between some of the eight architectures. The size of the set of modules M is varied between 3 and 5 for the LayerNet (ablation study) and between 16 and 512 for the ACN. The range of input dimensions D is set to a subset of a preset candidate range, depending on the maximum width of the used hidden layers and the respective input dimension. In case of the TensorNet we vary the dimension of the tensor-train layers and the length of the tensor-train decomposition between different architectures per dataset, together with the aforementioned parameter grid. All other parameters remain at their default values. For the two pruning approaches BC and TP we train two different large FC architectures with the described hyper-parameter grid and then prune the networks in small steps, selecting the best architecture within every parameter bin. The models are trained for 25 epochs on batches of size 64 for the real-world datasets and 128 for the image datasets. In order to minimize the loss we apply stochastic gradient descent update steps using Adam and halve the initial learning rate every 10 epochs. The only exception is TP, where we use SGD with a momentum term of 0.99 as proposed by the authors but employ the same learning rate schedule. Finally, in order to avoid exploding gradients the gradient norm is truncated to a maximum of 10. Apart from the TensorNet, for which we adapt the publicly available TensorFlow code, all models are implemented using the PyTorch 1.1 framework. For BC and TP we adapt the public PyTorch code bases provided by the authors. In this section we show the results presented in figure 3 in tabular form in table 5. Furthermore, for completeness we present additional results regarding two simple sparsification baselines in table 6, employing L1-regularization (L1) and iterative hard thresholding (L1+HT) without an explicit cardinality constraint. The additional baseline models perform mostly on par with the small FC baseline, sometimes slightly better or slightly worse. The biggest change can be seen on the splice dataset where the additional baselines perform better than FC and ACN in the three bins with the highest number of parameters.
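Returning to the optimization setup described earlier in this section (Adam with the initial learning rate halved every 10 epochs and the global gradient norm truncated to 10), the two helpers below sketch those details in isolation. The helper names are ours and the actual experiments rely on PyTorch's built-in optimizers and clipping utilities; this is only a minimal illustration of the schedule.

```python
import numpy as np

def lr_at_epoch(initial_lr: float, epoch: int, halve_every: int = 10) -> float:
    """Halve the initial learning rate every `halve_every` epochs, as described above."""
    return initial_lr * 0.5 ** (epoch // halve_every)

def clip_grad_norm(grads, max_norm: float = 10.0):
    """Rescale a list of gradient arrays so their global L2 norm is at most max_norm."""
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads]

# Learning rates over the 25-epoch schedule used in the text, starting from 0.01.
print([lr_at_epoch(0.01, e) for e in range(0, 25, 5)])
# A large gradient gets rescaled to global norm 10.
clipped = clip_grad_norm([np.full((3, 3), 50.0)])
print(np.linalg.norm(np.concatenate([g.ravel() for g in clipped])))
```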
The additional baselines were trained and evaluated following the same experimentation and hyperparameter protocol as the other models. | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1xO4xHFvB | We advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons. |
Generative models that can model and predict sequences of future events can, in principle, learn to capture complex real-world phenomena, such as physical interactions. However, a central challenge in video prediction is that the future is highly uncertain: a sequence of past observations of events can imply many possible futures. Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally, as in the case of pixel-level autoregressive models, or do not directly optimize the likelihood of the data. To our knowledge, our work is the first to propose multi-frame video prediction with normalizing flows, which allows for direct optimization of the data likelihood, and produces high-quality stochastic predictions. We describe an approach for modeling the latent space dynamics, and demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video. Exponential progress in the capabilities of computational hardware, paired with a relentless effort towards greater insights and better methods, has pushed the field of machine learning from relative obscurity into the mainstream. Progress in the field has translated to improvements in various capabilities, such as classification of images, machine translation and super-human game-playing agents, among others. However, the application of machine learning technology has been largely constrained to situations where large amounts of supervision are available, such as in image classification or machine translation, or where highly accurate simulations of the environment are available to the learning agent, such as in game-playing agents. An appealing alternative to supervised learning is to utilize large unlabeled datasets, combined with predictive generative models. In order for a complex generative model to be able to effectively predict future events, it must build up an internal representation of the world. For example, a predictive generative model that can predict future frames in a video would need to model complex real-world phenomena, such as physical interactions. This provides an appealing mechanism for building models that have a rich understanding of the physical world, without any labeled examples. Videos of real-world interactions are plentiful and readily available, and a large generative model can be trained on large unlabeled datasets containing many video sequences, thereby learning about a wide range of real-world phenomena. Such a model could be useful for learning representations for further downstream tasks, or could even be used directly in applications where predicting the future enables effective decision making and control, such as robotics. A central challenge in video prediction is that the future is highly uncertain: a short sequence of observations of the present can imply many possible futures. Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally (as in the case of pixel-level autoregressive models), or do not directly optimize the likelihood of the data. In this paper, we study the problem of stochastic prediction, focusing specifically on the case of conditional video prediction: synthesizing raw RGB video frames conditioned on a short context of past observations.
Specifically, we propose a new class of video prediction models that can provide exact likelihoods, generate diverse stochastic futures, and accurately synthesize realistic and high-quality video frames. The main idea behind our approach is to extend flow-based generative models into the setting of conditional video prediction. To our knowledge, flow-based models have been applied only to generation of non-temporal data, such as images, and to audio sequences. Conditional generation of videos presents its own unique challenges: the high dimensionality of video sequences makes them difficult to model as individual datapoints. Instead, we learn a latent dynamical system model that predicts future values of the flow model's latent state. This induces Markovian dynamics on the latent state of the system, replacing the standard unconditional prior distribution. We further describe a practically applicable architecture for flow-based video prediction models, inspired by the Glow model for image generation, which we call VideoFlow. Our empirical results show that VideoFlow achieves results that are competitive with the state-of-the-art in stochastic video prediction on the action-free BAIR dataset, with quantitative results that rival the best VAE-based models. VideoFlow also produces excellent qualitative results, and avoids many of the common artifacts of models that use pixel-level mean-squared error for training (e.g., blurry predictions), without the challenges associated with training adversarial models. Compared to models based on pixel-level autoregressive prediction, VideoFlow achieves substantially faster test-time image synthesis, making it much more practical for applications that require real-time prediction, such as robotic control. Finally, since VideoFlow directly optimizes the likelihood of training videos, without relying on a variational lower bound, we can evaluate its performance directly in terms of likelihood values. Early work on prediction of future video frames focused on deterministic predictive models. Much of this research on deterministic models focused on architectural changes, such as predicting high-level structure (Villegas et al., 2017b), incorporating pixel transformations and predictive coding architectures, as well as different generation objectives and disentangled representations. With models that can successfully handle many deterministic environments, the next key challenge is to address stochastic environments by building models that can effectively reason over uncertain futures. Real-world videos are always somewhat stochastic, either due to events that are inherently random, or events that are caused by unobserved or partially observable factors, such as off-screen events, humans and animals with unknown intentions, and objects with unknown physical properties. In such cases, since deterministic models can only generate one future, these models either disregard potential futures or produce blurry predictions that are the superposition or averages of possible futures. A variety of methods have sought to overcome this challenge by incorporating stochasticity, via three types of approaches: models based on variational auto-encoders (VAEs), generative adversarial networks, and autoregressive models. Among these models, techniques based on variational autoencoders which optimize an evidence lower bound on the log-likelihood have been explored most widely.
To our knowledge, the only prior class of video prediction models that directly maximize the log-likelihood of the data are auto-regressive models that generate the video one pixel at a time. However, synthesis with such models is inherently sequential, making it substantially inefficient on modern parallel hardware. Prior work has aimed to speed up training and synthesis with such auto-regressive models. However, it has been shown that the predictions from these models are sharp but noisy and that VAE-based models produce substantially better predictions, especially for longer horizons. In contrast to autoregressive models, we find that our proposed method exhibits faster sampling, while still directly optimizing the log-likelihood and producing high-quality long-term predictions. Figure 1: Left: Multi-scale prior. The flow model uses a multi-scale architecture with several levels of stochastic variables. Right: Autoregressive latent-dynamic prior. The input at each timestep x_t is encoded into multiple levels of stochastic variables (z_t), and those levels are modeled through a sequential process. Flow-based generative models have a unique set of advantages: exact latent-variable inference, exact log-likelihood evaluation, and parallel sampling. In flow-based generative models, we infer the latent variable z corresponding to a datapoint x by transforming x through a composition of invertible transformations z = f_K ∘ ... ∘ f_1(x). We assume a tractable prior p_θ(z) over the latent variable z, e.g. a Logistic or a Gaussian distribution. By constraining the transformations to be invertible, we can compute the log-likelihood of x exactly using the change-of-variables rule. Formally, log p_θ(x) = log p_θ(z) + Σ_{i=1}^{K} log |det(dh_i / dh_{i−1})|, where h_0 = x, h_K = z, and h_{i−1} is transformed to h_i by f_i. We learn the parameters of f_1, ..., f_K by maximizing this log-likelihood over a training set. Given g = f^{−1}, we can then generate a sample x̂ from the data distribution by sampling z ∼ p_θ(z) and computing x̂ = g(z). We propose a generative flow for video, using the standard multi-scale flow architecture as a building block. In our model, we break up the latent space z into separate latent variables per timestep: z = {z_t}_{t=1}^{T}. The latent variable z_t at timestep t is an invertible transformation of the corresponding frame of video: x_t = g_θ(z_t). Furthermore, as in Glow, we use a multi-scale architecture for g_θ(z_t) (Fig. 1): the latent variable z_t is composed of a stack of multiple levels, z_t = {z_t^(1), ..., z_t^(L)}, where each level l encodes information about frame x_t at a particular scale. We first briefly describe the invertible transformations used in the multi-scale architecture to infer {z_t^(l)}_{l=1}^{L} = f_θ(x_t), and refer to the Glow architecture for more details. For convenience, we omit the subscript t in this subsection. We choose invertible transformations whose Jacobian determinant in the change-of-variables formula above is simple to compute, that is, a triangular matrix, a diagonal matrix or a permutation matrix, as explored in prior work. For permutation matrices the Jacobian determinant is one, and for triangular and diagonal Jacobian matrices the determinant is simply the product of the diagonal terms. • Actnorm: We apply a learnable per-channel scale and shift with data-dependent initialization. • Coupling: We split the input y equally across channels to obtain y_1 and y_2. We compute z_2 = f(y_1) * y_2 + g(y_1), where f and g are deep networks. We concatenate y_1 and z_2 across channels. • SoftPermute: We apply a 1x1 convolution that preserves the number of channels.
• Squeeze: We reshape the input from H × W × C to H/2 × W/2 × 4C, which allows the flow to operate on a larger receptive field. We infer the latent variable z^(l) at level l by squeezing the incoming representation h^(>l−1), applying N steps of flow (each step composed of the Actnorm, Coupling and SoftPermute transformations above), and splitting the output; here N is the number of steps of flow per level. Via Split, we split the output of the flow at level l equally across channels into h^(>l), the input to the flow at level l+1, and z^(l), the latent variable at level l. We thus enable the flows at higher levels to operate on a lower number of dimensions and larger scales. When l = 1, h^(>l−1) is just the input frame x, and for l = L we omit the Split operation. Finally, our multi-scale architecture f_θ(x_t) is a composition of the flows at multiple levels from l = 1, ..., L, from which we obtain our latent variables, i.e. {z^(l)}_{l=1}^{L}. We use the multi-scale architecture described above to infer the set of corresponding latent variables for each individual frame of the video; see Figure 1 for an illustration. As in the change-of-variables formula, we need to choose a form of the latent prior p_θ(z). We use the following autoregressive factorization for the latent prior: p_θ(z) = ∏_{t=1}^{T} p_θ(z_t | z_<t), where z_<t denotes the latent variables of frames prior to the t-th timestep: {z_1, ..., z_{t−1}}. We specify the conditional prior p_θ(z_t | z_<t) as having the following factorization: p_θ(z_t | z_<t) = ∏_{l=1}^{L} p_θ(z_t^(l) | z_<t^(l), z_t^(>l)), where z_<t^(l) is the set of latent variables at previous timesteps and at the same level l, while z_t^(>l) is the set of latent variables at the same timestep and at higher levels. See Figure 1 for a graphical illustration of the dependencies. We let each conditional p_θ(z_t^(l) | z_<t^(l), z_t^(>l)) be a conditionally factorized Gaussian density whose mean and log-scale are predicted as (μ, log σ) = NN_θ(z_<t^(l), z_t^(>l)), where NN_θ is a deep 3-D residual network augmented with dilations and gated activation units and modified to predict the mean and log-scale. We describe the architecture and our ablations of the architecture in Sections B and C of the appendix. In summary, the log-likelihood objective has two parts: the invertible multi-scale architecture contributes the sum of the log Jacobian determinants of the invertible transformations mapping the video {x_t}_{t=1}^{T} to {z_t}_{t=1}^{T}, and the latent dynamics model contributes log p_θ(z). We jointly learn the parameters of the multi-scale architecture and the latent dynamics model by maximizing this objective. Note that in our architecture we have chosen to let the prior p_θ(z), as described above, model temporal dependencies in the data, while constraining the flow g_θ to act on separate frames of video. Table 1: We compare the realism of the generated trajectories using a real-vs-fake 2AFC Amazon Mechanical Turk test with SAVP-VAE and SV2P. Fooling rates: SAVP-VAE 16.4%, VideoFlow 31.8%, SV2P 17.5%. Figure 2: We condition the VideoFlow model with the frame at t = 1 and display generated trajectories at t = 2 and t = 3 for three different shapes. We have experimented with using 3-D convolutional flows, but found this to be computationally overly expensive compared to an autoregressive prior, in terms of both the number of operations and the number of parameters. Further, due to memory limits, we found it only feasible to perform SGD with a small number of sequential frames per gradient step. In the case of 3-D convolutions, this would make the temporal dimension considerably smaller during training than during synthesis; this would change the model's input distribution between training and synthesis, which often leads to various temporal artifacts. Using 2-D convolutions in our flow f_θ with autoregressive priors allows us to synthesize arbitrarily long sequences without introducing such artifacts.
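As a concrete illustration of how the coupling transformation above contributes to the change-of-variables objective, here is a minimal NumPy sketch of a single affine coupling step on a flattened input: half of the dimensions are transformed by a scale-and-shift predicted from the other half, and the log-determinant is the sum of the log-scales. The tiny linear maps standing in for the deep networks f and g, and all sizes, are illustrative assumptions; the real model uses deep convolutional networks and stacks many such steps per level.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                     # toy dimensionality (a real frame is H*W*C)

# Hypothetical small maps predicting log-scale and shift from the first half.
W_s = rng.normal(0, 0.1, (D // 2, D // 2)); W_t = rng.normal(0, 0.1, (D // 2, D // 2))

def coupling_forward(y):
    """y -> z for one affine coupling step, returning z and log|det(dz/dy)|."""
    y1, y2 = y[: D // 2], y[D // 2:]
    log_s = np.tanh(y1 @ W_s)             # log-scale predicted from y1 (kept bounded)
    t = y1 @ W_t                          # shift predicted from y1
    z2 = np.exp(log_s) * y2 + t           # only y2 is transformed; y1 passes through
    logdet = np.sum(log_s)                # Jacobian is triangular with exp(log_s) on the diagonal
    return np.concatenate([y1, z2]), logdet

def coupling_inverse(z):
    z1, z2 = z[: D // 2], z[D // 2:]
    log_s = np.tanh(z1 @ W_s); t = z1 @ W_t
    return np.concatenate([z1, (z2 - t) * np.exp(-log_s)])

x = rng.normal(size=D)
z, logdet = coupling_forward(x)
# log p(x) = log p(z) + log|det|, with a standard-normal prior on z in this toy example.
log_pz = -0.5 * np.sum(z ** 2) - 0.5 * D * np.log(2 * np.pi)
print("log p(x) =", log_pz + logdet)
print("inversion error:", np.max(np.abs(coupling_inverse(z) - x)))
```

In the full model the standard-normal prior in this toy is replaced by the autoregressive latent-dynamics prior described above, and the log-determinants of all flow steps across all levels are accumulated.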
All our generated videos and qualitative results can be viewed at the accompanying website. In the generated videos, a border of blue represents the conditioning frame, while a border of red represents the generated frames. We use VideoFlow to model the Stochastic Movement Dataset used in prior work. The first frame of every video consists of a shape placed near the center of a 64x64x3 gray background, with its type, size and color randomly sampled. The shape then randomly moves in one of eight directions with constant speed. Prior work shows that, conditioned on the first frame, a deterministic model averages out all eight possible directions in pixel space. Since the shape moves with a uniform speed, we should be able to model the position of the shape at the (t + 1)-th step using only the position of the shape at the t-th step. Using this insight, we extract random temporal patches of 2 frames from each video of 3 frames. We then use VideoFlow to maximize the log-likelihood of the second frame given the first, i.e. the model looks back at just one frame. We observe that the bits-per-pixel on the holdout set reduces to a very low 0.04 for this model. On generating videos conditioned on the first frame, we observe that the model consistently predicts the future trajectory of the shape to be one of the eight random directions. We compare our model with two state-of-the-art stochastic video generation models, SV2P and SAVP-VAE, using their Tensor2Tensor implementation. We assess the quality of the generated videos using a real vs fake Amazon Mechanical Turk test. In the test, we inform the rater that a "real" trajectory is one in which the shape is consistent in color and congruent throughout the video. We show that VideoFlow outperforms the baselines in terms of fooling rate in Table 1, consistently generating plausible "real" trajectories at a greater rate. We use the action-free version of the BAIR robot pushing dataset that contains videos of a Sawyer robotic arm at a resolution of 64x64. In the absence of actions, the task of video generation is completely unsupervised, with multiple plausible trajectories due to the partial observability of the environment and the stochasticity of the robot actions. We train the baseline models, SAVP-VAE, SV2P and SVG-LP, to generate 10 target frames, conditioned on 3 input frames. We extract random temporal patches of 4 frames, and train VideoFlow to maximize the log-likelihood of
We choose the video closest to the ground-truth on the basis of PSNR, SSIM and VGG perceptual metrics and report the best possible value for each of these metrics. All the models were trained using ten target frames but are tested to generate 27 frames. For all the reported metrics, higher is better. Accuracy of the best sample: The BAIR robot-pushing dataset is highly stochastic and the number of plausible futures are high. Each generated video can be super realistic, can represent a plausible future in theory but can be far from the single ground truth video perceptually. To partially overcome this, we follow the metrics proposed in prior work; ) to evaluate our model. For a given set of conditioning frames in the BAIR action-free test-set, we generate 100 videos from each of the stochastic models. We then compute the closest of these generated videos to the ground truth according to three different metrics, PSNR (Peak Signal to Noise Ratio), SSIM (Structural Similarity) and cosine similarity using features obtained from a pretrained VGG network and report our findings in Figure 4. This metric helps us understand if the true future lies in the set of all plausible futures according to the video model. In prior work, effectively tune the pixel-level variance as a hyperparameter and sample from a deterministic decoder. They obtain training stabiltiy and improve sample quality by removing pixel-level noise using this procedure. We can remove pixel-level noise in our VideoFlow model ing in higher quality videos at the cost of diversity by sampling videos at a lower temperature, analogous to the procedure in . For a network trained with additive coupling layers, we can sample the t th frame x t from P (x t |x <t) with a temperature T simply by scaling the standard deviation of the latent gaussian distribution P (z t |z <t) by a factor of T. We report with both a temperature of 1.0 and the optimal temperature tuned on the validation set using VGG similarity metrics in Figure 4. Additionally, we also applied low-temperature sampling to the latent gaussian priors of SV2P and SAVP-VAE and empirically found it to hurt performance. We report these in Figure 10 For SAVP-VAE, we notice that the hyperparameters that perform the best on these metrics are the ones that have disappearing arms. For completeness, we report these numbers as well as the numbers for the best performing SAVP models that do not have disappearing arms. Our model with optimal temperature performs better or as well as the SAVP-VAE and SVG-LP models on the VGG-based similarity metrics, which correlate well with human perception and SSIM. Our model with temperature T = 1.0 is also competent with state-of-the-art video generation models on these metrics. PSNR is explicitly a pixel-level metric, which the VAE models incorporate as part of its optimization objective. VideoFlow on the other-hand models the conditional probability of the joint distribution of frames, hence as expected it underperforms on PSNR. Diversity and quality in generated samples: For each set of conditioning frames in the test set, we generate 10 videos and compute the mean distance in VGG perceptual space across these 45 different pairs. We average this across the test-set for T = 1.0 and T = 0.6 and report these numbers in Figure 3. We also assess the quality of the generated videos at T = 1.0 and T = 0.6, using a real vs fake Amazon Mechanical Turk test and report fooling rates. 
We observe that VideoFlow outperforms diversity values reported in prior work while being competitive in the realism axis. We also find that VideoFlow at T = 0.6 has the highest fooling rate while being competent with state-of-the-art VAE models in diversity. On inspection of the generated videos, we find that at lower temperatures, the arm exhibits less random behaviour with the objects remaining static and clear achieving higher realism scores. At higher temperatures, the motion of arm is much more stochastic, achieving high diversity scores with the objects becoming much noisier leading to a drop in realism. Figure 6: Left: We display interpolations between a) a small blue rectangle and a large yellow rectangle b) a small blue circle and a large yellow circle. Right: We display interpolations between the first input frame and the last target frame of two test videos in the BAIR robot pushing dataset. BAIR robot pushing dataset: We encode the first input frame and the last target frame into the latent space using our trained VideoFlow encoder and perform interpolations. We find that the motion of the arm is interpolated in a temporally cohesive fashion between the initial and final position. Further, we use the multi-level latent representation to interpolate representations at a particular level while keeping the representations at other levels fixed. We find that the bottom level interpolates the motion of objects which are at a smaller scale while the top level interpolates the arm motion. We encode two different shapes with their type fixed but a different size and color into the latent space. We observe that the size of the shape gets smoothly interpolated. During training, we sample the colors of the shapes from a uniform discrete distribution which is reflected in our experiments. We observe that all the colors in the interpolated space lie in the set of colors in the training set. Figure 7: Left: We generate 100 frames into the future with a temperature of 0.5. The top and bottom row correspond to generated videos in the absence and presence of occlusions respectively. Right: We use VideoFlow to detect the plausibility of a temporally inconsistent frame to occur in the immediate future. We generate 100 frames into the future using our model trained on 13 frames with a temperature of 0.5 and display our in Figure 7. On the top, even 100 frames into the future, the generated frames remain in the image manifold maintaining temporal consistency. In the presence of occlusions, the arm remains super-sharp but the objects become noisier and blurrier. Our VideoFlow model has a bijection between the z t and x t meaning that the latent state z t cannot store information other than that present in the frame x t. This, in combination with the Markovian assumption in our latent dynamics means that the model can forget objects if they have been occluded for a few frames. In future work, we would address this by incorporating longer memory in our VideoFlow model; for example by parameterizing N N θ as a recurrent neural network in our autoregressive prior (eq. 8) or using more memory-efficient backpropagation algorithms for invertible neural networks. We use our trained VideoFlow model, conditioned on 3 frames as explained in Section 5.2, to detect the plausibility of a temporally inconsistent frame to occur in the immediate future. We condition the model on the first three frames of a test-set video X <4 to obtain a distribution P (X 4 |X <4) over its 4th frame X 4. 
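The diversity metric referenced above (mean pairwise distance in VGG feature space over 10 samples per conditioning set, i.e. 45 pairs) amounts to the following computation. Here the VGG features are represented by precomputed vectors, and cosine distance is assumed as the pairwise measure; both are illustrative stand-ins rather than the exact evaluation code.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_cosine_distance(features):
    """features: (num_samples, feat_dim) array of per-video VGG feature vectors."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    dists = [1.0 - float(normed[i] @ normed[j])
             for i, j in combinations(range(len(features)), 2)]
    return float(np.mean(dists)), len(dists)

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 128))        # stand-in for VGG features of 10 sampled videos
diversity, num_pairs = mean_pairwise_cosine_distance(feats)
print(num_pairs)                          # 45 pairs, as referenced in the text
print(diversity)
```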
We then compute the likelihood of the t-th frame X_t of the same video occurring as the 4th time-step under this distribution, i.e., P(X_4 = X_t | X_<4) for t = 4, ..., 13. We average the corresponding bits-per-pixel values across the test set and report our findings in Figure 7. We find that our model assigns a monotonically decreasing log-likelihood to frames that are farther out in the future and hence less likely to occur at the 4th time-step. We describe a practically applicable architecture for flow-based video prediction models, inspired by the Glow model for image generation, which we call VideoFlow. We introduce a latent dynamical system model that predicts future values of the flow model's latent state, replacing the standard unconditional prior distribution. Our empirical results show that VideoFlow achieves results that are competitive with the state-of-the-art VAE models in stochastic video prediction. Finally, our model optimizes log-likelihood directly, making it easy to evaluate, while achieving faster synthesis compared to pixel-level autoregressive video models, making our model suitable for practical purposes. In future work, we plan to incorporate memory in VideoFlow to model arbitrary long-range dependencies and apply the model to challenging downstream tasks. Our dataset consists of i.i.d. observations of a random variable x with an unknown true distribution p*(x). Our data consist of 8-bit videos, with each dimension rescaled to the domain . We add a small amount of uniform noise to the data, u ∼ U(0, 1/256), matching its discretization level. Let q(x) be the resulting empirical distribution corresponding to this scaling and addition of noise. Note that additive noise is required to prevent q(x) from having infinite densities at the datapoints, which can result in ill-behaved optimization of the log-likelihood; it also allows us to recast maximization of the log-likelihood as minimization of a KL divergence. We repeat our evaluations described in Figure 4, applying low temperature to the latent Gaussian priors of SV2P and SAVP-VAE. We empirically find that decreasing the temperature from 1.0 to 0.0 monotonically decreases the performance of the VAE models. Our insight is that the VideoFlow model gains from low-temperature sampling for the following reason. At lower T, we obtain a tradeoff between a performance gain due to noise removal and a performance hit due to reduced stochasticity of the robot arm. On the other hand, the VAE models produce a clear but slightly blurry result throughout, from T = 1.0 to T = 0.0. Reducing T in this case solely reduces the stochasticity of the arm motion, thus hurting performance. We show the correlation between training progression (measured in bits per pixel) and the quality of the generated videos in Figure 11. We display the videos generated by conditioning on frames from the test set for three different values of bits-per-pixel on the test set. As we approach lower bits-per-pixel, our VideoFlow model learns to model the structure of the arm as well as its motion with high quality, resulting in high-quality video. To report bits-per-pixel we use the following set of hyperparameters. We use a learning rate schedule of linear warmup for the first 10000 steps and apply a linear-decay schedule for the last 150000 steps. We train all our baseline models for 300K steps using the Adam optimizer. Our models were tuned using the maximum VGG cosine similarity metric with the ground truth across 100 decodes. We use three values of the latent loss multiplier: 1e-3, 1e-4 and 1e-5.
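As an illustration of the dequantization step and the bits-per-pixel metric described above, the following sketch adds uniform noise at the 8-bit discretization level and converts a per-example negative log-likelihood (in nats) into bits per dimension; the rescaling factor and the likelihood itself are assumptions, not the VideoFlow code.

```python
import numpy as np

def dequantize(frames_uint8, rng=None):
    """Map 8-bit frames to continuous values by adding U(0, 1/256) noise."""
    rng = rng or np.random.default_rng()
    x = frames_uint8.astype(np.float64) / 256.0          # assumed rescaling
    return x + rng.uniform(0.0, 1.0 / 256.0, size=x.shape)

def bits_per_pixel(nll_nats, num_dims):
    """Convert a negative log-likelihood in nats to bits per dimension.

    Any log-det term from the rescaling is assumed to be already included in nll_nats.
    """
    return nll_nats / (num_dims * np.log(2.0))
```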
For the SAVP-VAE model, we additionally apply linear decay on the learning rate for the last 100K steps. SAVP-GAN: We tune the GAN loss multiplier and the learning rate on a log scale from 1e-2 to 1e-4 and from 1e-3 to 1e-5, respectively. Figure 12: We compare P(X_4 = X_t | X_<4) and the VGG cosine similarity between X_4 and X_t for t = 4, ..., 13. We plot the correlation between cosine similarity using a pretrained VGG network and bits-per-pixel using our trained VideoFlow model. We compare P(X_4 = X_t | X_<4), as done in Section 5.5, and the VGG cosine similarity between X_4 and X_t for t = 4, ..., 13. We report our results for every video in the test set in Figure 13. We notice a weak correlation between the VGG perceptual metric and bits-per-pixel, with a correlation factor of −0.51. I VIDEOFLOW: LOW PARAMETER REGIME We repeated our evaluations described in Figure 4 with a smaller version of our VideoFlow model with a 4x parameter reduction. Our model remains competitive with SVG-LP on the VGG perceptual metrics. Figure 13: We repeat our evaluations described in Figure 4 with a smaller version of our VideoFlow model. | [
label sequence: 223 entries, all 0 except a single 1 at position 5 ] | rJgUfTEYvH | We demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video.
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has raised lots of concerns recently. Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise. As a successful learning algorithm, stochastic gradient descent (SGD) was originally adopted for dealing with the computational bottleneck of training neural networks with large-scale datasets BID0. Its empirical efficiency and effectiveness have attracted lots of attention. And thus, SGD and its variants have become standard workhorse for learning deep models. Besides the aspect of empirical efficiency, recently, researchers started to analyze the optimization behaviors of SGD and its impacts on generalization. The optimization properties of SGD have been studied from various perspectives. The convergence behaviors of SGD for simple one hidden layer neural networks were investigated in BID13 BID1. In non-convex settings, the characterization of how SGD escapes from stationary points, including saddle points and local minima, was analyzed in BID3 BID10 BID8.On the other hand, in the context of deep learning, researchers realized that the noise introduced by SGD impacts the generalization, thanks to the research on the phenomenon that training with a large batch could cause a significant drop of test accuracy BID11. Particularly, several works attempted to investigate how the magnitude of the noise influences the generalization during the process of SGD optimization, including the batch size and learning rate BID7 BID5 BID2 BID9. Another line of research interpreted SGD from a Bayesian perspective. In BID14 BID2, SGD was interpreted as performing variational inference, where certain entropic regularization involves to prevent overfitting. And the work BID21 tried to provide an understanding based on model evidence. These explanations are compatible with the flat/sharp minima argument BID6 BID11, since Bayesian inference tends to targeting the region with large probability mass, corresponding to the flat minima. However, when analyzing the optimization behavior and regularization effects of SGD, most of existing works only assume the noise covariance of SGD is constant or upper bounded by some constant, and what role the noise structure of stochastic gradient plays in optimization and generalization was rarely discussed in literature. In this work, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. 
By investigating this general dynamics, we analyze how the noise structure of SGD influences the escaping behavior from minima and its regularization effects. Several novel theoretical and empirical justifications are made. 1. We derive a key indicator to characterize the efficiency of escaping from minima through measuring the alignment of the noise covariance and the curvature of the loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency; 2. We further justify that SGD in the context of deep neural networks satisfies these two conditions, and thus provide a plausible explanation of why SGD can escape from sharp minima more efficiently, converging to flat minima with a higher probability. Moreover, these flat minima typically generalize well according to various works BID6 BID11 BID16 BID22. We also show that Langevin dynamics with well-tuned isotropic noise cannot beat SGD, which further confirms the importance of the noise structure of SGD; 3. A large number of experiments are designed systematically to justify our understanding of the behavior of the anisotropic diffusion of SGD. We compare SGD with full gradient descent with different types of diffusion noise, including isotropic and position-dependent/independent noise. All these comparisons demonstrate the effectiveness of anisotropic diffusion for good generalization in training deep networks. The remainder of the paper is organized as follows. In Section 2, we introduce SGD and a general form of optimization dynamics of interest. We then theoretically study the behaviors of escaping from minima in the Ornstein-Uhlenbeck process in Section 3, and establish two conditions for characterizing the noise structure that affects the escaping efficiency. In Section 4, we show that the noise of SGD in the context of deep learning meets the two conditions, and thus explain its superior efficiency of escaping from sharp minima over other dynamics with isotropic noise. Various experiments are conducted for verifying our understanding in Section 5, and we conclude the paper in Section 6. In general, supervised learning usually involves an optimization process of minimizing an empirical loss over training data, DISPLAYFORM0 denotes the training set with N i.i.d. samples, the prediction function f is often parameterized by θ ∈ R D, such as deep neural networks. And (·, ·) is the loss function, such as mean squared error and cross entropy, typically corresponding to a certain negative log likelihood. Due to the over-parameterization and non-convexity of the loss function in deep networks, there exist multiple global minima, exhibiting diverse generalization performance. We call those solutions that generalize well good solutions or minima, and vice versa. Gradient descent and its stochastic variants A typical approach to minimize the loss function is gradient descent (GD), the dynamics of which in each iteration t is θ t+1 = θ t − η t g 0 (θ t), where g 0 (θ t) = ∇ θ L(θ t) denotes the full gradient and η t denotes the learning rate. In non-convex optimization, a more useful class of gradient-based optimizers acts like GD with unbiased noise, including gradient Langevin dynamics (GLD), θ t+1 = θ t − η t g 0 (θ t) + σ t ε t, ε t ∼ N (0, I), and stochastic gradient descent (SGD), during each iteration t of which a minibatch of training samples with size m is randomly selected, with index set B t ⊂ {1, 2, . . 
., N}, and a stochastic gradient is evaluated based on the chosen minibatch,g(θ t) = i∈Bt ∇ θ (f (x i ; θ t), y i )/m, which is an unbiased estimator of the full gradient g 0 (θ t). Then, the parameters are updated with some learning rate η t as θ t+1 = θ t − η tg (θ t). Denote g(θ) = ∇ θ ((f (x; θ), y), the gradient for loss with a single data point (x, y), and assume that the size of minibatch is large enough for the central limit theorem to hold, and thusg(θ t) follows a Gaussian distribution BID14, DISPLAYFORM1 Note that the covariance matrix Σ depends on the model architecture, dataset and the current parameter θ t. Now we can rewrite the update of SGD as, DISPLAYFORM2 Inspired by GLD and SGD, we may consider a general kind of optimization dynamics, namely, gradient descent with unbiased noise, DISPLAYFORM3 For small enough constant learning rate η t = η, the above iteration in Eq. can be treated as the numerical discretization of the following stochastic differential equation BID9 BID2, DISPLAYFORM4 Considering ησ 2 t Σ t as the coefficient of noise term, existing works BID7 BID9 studied the influence of noise magnitude of SGD on generalization, i.e. ησ 2 t = η/m. In this work, we focus on studying the benefits of anisotropic structure of Σ t in SGD helping escape from minima by bridging the covariance matrix with the Hessian of the loss surface, and its implicit regularization effects on generalization, especially in deep learning context. For the purpose of eliminating the influence of the noise magnitude, we constrain it to be a constant when studying different structures of noise covariance. The noise magnitude could be evaluated as the expectation of the squared norm of the noise vector, DISPLAYFORM5 Thus, we introduce the following constraint, DISPLAYFORM6 From the statistical physics point of view, Tr(ησ 2 t Σ t) characterizes the kinetic energy (Gardiner), thus it is natural to force the energy to be unchanging, otherwise it is trivial that the higher the energy is, the less stable the system is. For simplicity, we absorb ησ 2 t into Σ t, denoting ησ 2 t Σ t as Σ t. If not pointed out, the subscript t of matrix Σ t is omitted to emphasize that we are fixing t and discussing the varying structure of Σ. For a general loss function L(θ) = E X X (θ) (the expectation could be either population or empirical), where X denotes data example and θ denoted parameters to be optimized, under suitable smoothness assumptions, the SDE associated with the gradient variant optimizer as shown in Eq. can be written as follows BID9 BID2 BID8, with little abuse of notation, DISPLAYFORM0 Let L 0 = L(θ 0) be one of the minimal values of L(θ), then for a fixed t small enough (such that DISPLAYFORM1 It is natural to measure the escaping efficiency using E[L t − L 0] since it characterizes the increase of the potential, i.e., the increase of the loss L. And also note that L t − L 0 ≥ 0, for any δ > 0, the escaping probability DISPLAYFORM2 where H t denotes the Hessian of L(θ t) at θ t.We provide the proof in Appendix, and the same for the other propositions. The escaping efficiency for general processes is hard to analyze due to the intractableness of the integral in Eq.. However, we may consider the second-order approximation locally near the minima θ 0, where DISPLAYFORM3. Without losing generality, we suppose θ 0 = 0. Further, suppose that H is a positive definite matrix and the diffusion covariance Σ t = Σ is constant for t. 
Then the SDE FORMULA7 becomes an Ornstein-Uhlenbeck process, DISPLAYFORM4 Proposition 2 (Escaping efficiency of Ornstein-Uhlenbeck process). For Ornstein-Uhlenbeck process, with t small enough, the escaping efficiency from minimum θ 0 = 0 is, DISPLAYFORM5 Inspired by Proposition 1 and Proposition 2, we propose Tr (HΣ) as an empirical indicator measuring the efficiency for a stochastic process escaping from minima. Now we turn to analysis which kind of noise covariance structure Σ will benefit escaping sharp minima, under the constraint Eq..Firstly, for the isotropic loss surface, i.e., DISPLAYFORM6 Tr Σ, which is invariant under the constraint that Tr Σ is constant (Eq. FORMULA6). Thus it is only nontrivial to study the impact of noise structure when the Hessian of loss surface is anisotropic. Secondly, H and Σ being semi-positive definite, to achieve the maximum of Tr(HΣ) under constraint, Σ should be DISPLAYFORM7, where λ 1, u 1 are the maximal eigenvalue and corresponding unit eigenvector of H. Note that the rank-1 matrix Σ * is highly anisotropic. More generally, the following Proposition 3 characterizes one kind of anisotropic noise significantly outperforming isotropic noise in order of number of parameters D, given H is ill-conditioned. Proposition 3 (The benefits of anisotropic noise). With semi-positive definite H and Σ, assume DISPLAYFORM8 DISPLAYFORM9 Σ is "aligned" with H. Let u i be the corresponding unit eigenvector of eigenvalue λ i, for some projection coefficient a > 0, DISPLAYFORM10 then we have the benefit of the anisotropic noise over the isotropic one in term of escaping efficiency, which can be characterized by the follow ratio, DISPLAYFORM11 whereΣ = Tr Σ D I denotes the covariance of isotropic noise, to meet the constraint Eq..To give some geometric intuitions on the left hand side of Eq. FORMULA2, let the maximal eigenvalue and its corresponding unit eigenvector of Σ be γ 1, v 1, then the right hand side has a lower bound as u DISPLAYFORM12 Thus if the maximal eigenvalues of H and Σ are aligned in proportion, γ 1 / Tr Σ ≥ a 1 λ 1 / Tr H, and the angle of their corresponding unit eigenvectors is close to zero, u 1, v 1 ≥ a 2, the second condition Eq. in Proposition 3 holds for a = a 1 a 2.Typically, in the scenario of modern deep neural networks, due to the over-parameterization, Hessian and the gradient covariance are usually ill-conditioned and anistropic near minima, as shown by BID19 and BID2. Thus the first condition in Eq. usually holds for deep neural networks, and we further justify it by experiments in Section 5.3. Therefore, in the following section, we turn to focus on how the gradient covariance, i.e. the covariance of SGD noise meets the second condition of Proposition 3 in the context of deep neural networks. In this section, we mainly investigate the anisotropic structure of gradient covariance in SGD, and explore its connection with the Hessian of loss surface. Around the true parameter According to the classic statistical theory (, Chap. 8), for population loss L(θ) = E X (θ), with being the negative log likelihood, when evaluating at the true parameter θ *, there is the exact equivalence between the Hessian H of the population loss and Fisher information matrix F, DISPLAYFORM0 In practice, with the assumptions that the sample size N is large enough (i.e. indicating asymptotic behavior) and suitable smoothness conditions, when the current parameter θ t is not far from the ground truth, Fisher is close to Hessian. 
Thus we can obtain the following approximate equality between gradient covariance and Hessian, DISPLAYFORM1 The first approximation is due to the dominance of noise over the mean of gradient in the later stage of SGD optimization, which has been shown in BID20. A similar experiment as BID20 has been conducted to demonstrate this observation, which is left in Appendix due to the limit of space. In the following, we theoretically characterize the closeness between Σ and H in the context of one hidden layer neural networks; and show that the gradient covariance introduced by SGD indeed has more benefits than isotropic one in term of escaping from minima, provided some assumptions. One hidden layer neural network with fixed output layer parameters For binary classification neural network with one hidden layer in classic setups (with softmax and cross-entropy loss), we have following to globally bound Fisher and Hessian with each other. Proposition 4 (The relationship between Fisher and Hessian in one hidden layer neural network). Consider the binary classification problem with data {(x i, y i)} i∈I, y ∈ {0, 1}, and typical (either population or empirical) loss as DISPLAYFORM2, where f denotes the output of neural network, and φ denotes the cross-entropy loss with softmax, DISPLAYFORM3 If: the neural network f is with one hidden layer and piece-wise linear activation. And the parameters of output layer are fixed during training; the optimization happens on a set U such that, f (x; θ) ∈ (−C, C), ∀θ ∈ U, ∀x, i.e., the output of the classifier is bounded during optimization. Then, we have the following relationship between (either population or empirical) Fisher F and Hessian H almost everywhere: DISPLAYFORM4 A B means that (B − A) is semi-positive definite. There are a few remarks on Proposition 4. Firstly, as shown in BID1, the considered neural networks in Proposition 4 are non-convex and have multiple minima, and thus it is still nontrivial to consider the escaping from minima. Secondly, the Proposition 4 holds in both population and empirical sense, since the proof does not distinguish the two circumstances. Thirdly, the bound between F and H holds "globally" in the set U where the output f is bounded, rather than merely around the true global minima as discussed previously. By Proposition 4, the following relationship between gradient covariance and Hessian could be derived. Proposition 5 (The relationship between gradient covariance and Hessian in one hidden layer neural network). Assume the conditions in Proposition 4 hold, then for some small δ > 0 and for θ close enough to minima θ * (local or global), DISPLAYFORM5 holds for any positive eigenvalue λ and its corresponding unit eigenvector u of Hessian H.As a direct corollary of Proposition 5, for such neural networks, the second condition Eq. in Proposition 3 holds in a very loose sense. Therefore, based on the discussion on population loss around the true parameters and one hidden layer neural network with fixed output layer parameters, given the ill-conditioning of H due to the over-parameterization of modern deep networks, according to Proposition 3, we can conclude the noise structure of SGD helps to escape from sharp minima much faster than the dynamics with isotropic noise, and converge to flatter solutions with a high probability. These flat minima typically generalize well BID6 BID11 BID16 BID22. 
Thus, we attribute the better generalization performance of SGD, compared to GD, GLD and other dynamics with isotropic noise BID7 BID5 BID11, to such properties. In the following, we conduct a series of experiments systematically to verify our understanding of the behavior of escaping from minima and its regularization effects for different optimization dynamics. To better understand the behavior of anisotropic noise as opposed to isotropic noise, we introduce dynamics with different kinds of noise structure for the empirical study, as shown in TAB0. DISPLAYFORM0 σ t is adjusted to make σ t ε t share the same expected norm as that of SGD. DISPLAYFORM1 The covariance diag(Σ t) is the diagonal of the covariance of the SGD noise. DISPLAYFORM2 γ i, v i are the first k leading eigenvalues and corresponding eigenvectors of the covariance of the SGD noise, respectively (a low-rank approximation of Σ sgd t). GLD Hessian: ε t ∼ N (0, H̃ t), where H̃ t is a low-rank approximation of the Hessian matrix of the loss L(θ) by its first k leading eigenvalues and corresponding eigenvectors. GLD 1st eigvec(H): ε t ∼ N (0, λ 1 u 1 u 1 T), where λ 1, u 1 are the maximal eigenvalue and its corresponding unit eigenvector of the Hessian matrix of the loss L(θ t). We design a 2-D toy example L(w 1, w 2) with two basins, a small one and a large one, corresponding to a sharp and a flat minimum, and (−1, −1), respectively, both of which are global minima. Please refer to the Appendix for the detailed constructions. We initialize the dynamics of interest with the sharp minimum (w 1, w 2) =, and run them to study their behaviors escaping from this sharp minimum. To explicitly control the noise magnitude, we only conduct experiments on GD, GLD const, GLD diag, GLD leading (with k = 2 = D in TAB0, i.e., the exact covariance of the SGD noise), GLD Hessian (k = 2) and GLD 1st eigvec(H). And we adjust σ t for each of the dynamics to force their noise to share the same expected squared norm as defined in Eq.. Figure 1(a) shows the trajectories of the dynamics escaping from the sharp minimum towards the flat one (−1, −1), while Figure 1(b) presents the success rate of escaping for each of the dynamics over 100 repeated experiments. As shown in Figure 1, GLD 1st eigvec(H) achieves the highest success rate, indicating the fastest escaping speed from the sharp minimum. The dynamics with anisotropic noise well aligned with the Hessian, including GLD 1st eigvec(H), GLD Hessian and GLD leading, greatly outperform GD, GLD const with isotropic noise, and GLD diag with noise poorly aligned with the Hessian. These experiments are consistent with our theoretical analysis on the Ornstein-Uhlenbeck process shown in Propositions 2 and 3, demonstrating the benefits of anisotropic noise for escaping from sharp minima. We empirically show that in one-hidden-layer neural networks with fixed output layer parameters, the anisotropic noise induced by SGD indeed helps escape from sharp minima more efficiently than isotropic noise. Three networks are trained for binary classification of 1,000 linearly separable two-dimensional points. The number of hidden nodes for each network varies in {20, 200, 2000}. We plot the empirical indicator Tr (HΣ) in FIG3. We can easily observe that as the number of hidden nodes increases, the ratio is enlarged significantly, which is consistent with the Eq. described in Proposition 3. In this part, we conduct a series of experiments in real deep learning scenarios to demonstrate the behavior of SGD noise and its implicit regularization effects.
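Before turning to the deep-learning experiments below, a minimal sketch of the 2-D escaping comparison described above is given here: gradient descent plus either isotropic noise or noise aligned with the leading Hessian eigenvector, with matched expected squared noise norm. The two-basin loss, its gradient and Hessian, and the escape criterion are placeholders, not the exact construction used in the paper.

```python
import numpy as np

def escape_rate(loss_grad, hessian, w0, aligned=True, eta=0.005, steps=500,
                noise_norm=0.01, trials=100, escape_radius=0.5, rng=None):
    """Fraction of runs that leave a ball around the sharp minimum w0.

    loss_grad(w) and hessian(w) are placeholders for the toy loss; with aligned=True
    the noise lies along the leading Hessian eigenvector, otherwise it is isotropic
    with the same expected squared norm.
    """
    rng = rng or np.random.default_rng(0)
    escapes = 0
    for _ in range(trials):
        w = np.array(w0, dtype=float)
        for _ in range(steps):
            if aligned:
                _, U = np.linalg.eigh(hessian(w))
                noise = noise_norm * rng.standard_normal() * U[:, -1]   # leading eigvec
            else:
                noise = noise_norm / np.sqrt(len(w)) * rng.standard_normal(len(w))
            w = w - eta * loss_grad(w) + noise
        escapes += np.linalg.norm(w - np.asarray(w0)) > escape_radius
    return escapes / trials
```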
We construct a noisy training set based on the FashionMNIST dataset. Concretely, the training set consists of 1000 images with correct labels, and another 200 images with random labels. All the test data have clean labels. A small LeNet-like network is utilized such that the spectral decomposition of the gradient covariance matrix and the Hessian matrix is computationally feasible. The network consists of two convolutional layers and two fully-connected layers, with 11,330 parameters in total. We first run standard gradient descent for 3000 iterations to arrive at the parameters θ * GD near the global minima with near-zero training loss and 100% training accuracy, which are typically sharp minima that generalize poorly BID16. Then all other compared methods are initialized with θ * GD and run with the same learning rate η t = 0.07 and the same batch size m = 20 (if needed) for fair comparison. Verification of SGD noise satisfying the conditions in Proposition 3: To see whether the noise of SGD in a real deep learning setting satisfies the two conditions in Proposition 3, we run the SGD optimizer initialized from θ * GD, i.e., the sharp minima found by GD. FIG4 shows the first 400 eigenvalues of the Hessian at θ * GD, from which we see that the 140th eigenvalue has already decayed to about 1% of the first eigenvalue. Note that the Hessian H ∈ R D×D, D = 11330, thus H around θ * GD approximately meets the ill-conditioning requirement in Proposition 3. FIG4 shows the projection coefficient estimated by â = (u 1 T Σ u 1 · Tr H) / (λ 1 · Tr Σ) along the trajectory of SGD. The plot indicates that the projection coefficient is of a decent scale compared to D 2d−1, thus satisfying the second condition in Proposition 3. Therefore, Proposition 3 ensures that SGD would escape from the minima θ * GD faster than GLD in order of O(D 2d−1), as shown in FIG4 (c). An interesting observation is that in the later stage of SGD optimization, Tr(HΣ) becomes significantly (10^7 times) smaller than in the beginning stage, implying that SGD has already converged to minima that are almost impossible to escape from. This phenomenon demonstrates that it is reasonable to employ Tr(HΣ) as an empirical indicator for escaping efficiency. Behaviors of different dynamics escaping from minima and their generalization effects: To compare the different dynamics in terms of escaping behaviors and generalization performance, we run dynamics initialized from the sharp minima θ * GD found by GD. The settings for each compared method are as follows. The hyperparameter σ 2 for GLD const has already been tuned to be optimal (σ = 0.001) by grid search. For GLD leading, we set k = 20 to trade off the computational cost and the approximation accuracy. As for GLD Hessian, to reduce the expensive evaluation of such a huge Hessian in each iteration, we set k = 20 and update the Hessian every 10 iterations. We adjust σ t in GLD dynamic, GLD Hessian and GLD 1st eigvec(H) to guarantee that they share the same expected squared noise norm defined in Eq. as that of SGD. And we measure the expected sharpness of different minima as E ν∼N (0,δ 2 I) L(θ + ν) − L(θ), as defined in BID16, Eq. FORMULA7 ). The results are shown in FIG5. As shown in FIG5, SGD, GLD 1st eigvec(H), GLD leading and GLD Hessian successfully escape from the sharp minima found by GD, while GLD, GLD dynamic and GLD diag are trapped in the minima. This demonstrates that the methods with anisotropic noise "aligned" with loss curvature can help to find flatter minima that generalize well.
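The expected-sharpness measure used above, E ν∼N (0,δ 2 I) L(θ + ν) − L(θ), can be estimated by simple Monte-Carlo perturbation as sketched below; loss_fn and the flat-parameter-vector interface are placeholders rather than the paper's TensorFlow implementation.

```python
import numpy as np

def expected_sharpness(loss_fn, theta, delta=0.01, num_samples=1000, rng=None):
    """Monte-Carlo estimate of E_{nu ~ N(0, delta^2 I)}[L(theta + nu)] - L(theta).

    loss_fn maps a flat parameter vector to a scalar training loss (placeholder).
    Larger values indicate a sharper minimum.
    """
    rng = rng or np.random.default_rng()
    base = loss_fn(theta)
    perturbed = [loss_fn(theta + delta * rng.standard_normal(theta.shape))
                 for _ in range(num_samples)]
    return float(np.mean(perturbed) - base)
```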
We also provide experiments on standard CIFAR-10 with VGG11 in Appendix. DISPLAYFORM0, and δ = 0.01, the expectation is computed by average on 1000 times sampling. We theoretically investigate a general optimization dynamics with unbiased noise, which unifies various existing optimization methods, including SGD. We provide some novel on the behaviors of escaping from minima and its regularization effects. A novel indicator is derived for characterizing the escaping efficiency. Based on this indicator, two conditions are constructed for showing what type of noise structure is superior to isotropic noise in term of escaping. We then analyze the noise structure of SGD in deep learning and find that it indeed satisfies the two conditions, thus explaining the widely know observation that SGD can escape from sharp minima efficiently toward flat minina that generalize well. Various experimental evidence supports our arguments on the behavior of SGD and its effects on generalization. Our study also shows that isotropic noise helps little for escaping from sharp minima, due to the highly anisotropic nature of landscape. This indicates that it is not sufficient to analyze SGD by treating it as an isotropic diffusion over landscape (; BID15 . A better understanding of this out-of-equilibrium behavior BID2) is on demand. Taking expectation with respect to the distribution of θ t, DISPLAYFORM0 for the expectation of Brownian motion is zero. Thus the solution of EY t is, DISPLAYFORM1 Proof. Without losing generality, we assume that L 0 = 0.For multivariate Ornstein-Uhlenbeck process, when θ 0 = 0 is an constant, θ t follows a multivariate Gaussian distribution (Øksendal, 2003).Consider change of variables θ → φ(θ, t) = e Ht θ t. Here, for symmetric matrix A, DISPLAYFORM0 where λ 1,..., λ n and U are the eigenvalues and eigenvector matrix of A. Note that with this notation, DISPLAYFORM1 Applying Ito's lemma, we have DISPLAYFORM2 which we can integrate form 0 to t to get DISPLAYFORM3 The expectation of θ t is zero. And by Ito's isometry (Øksendal, 2003), the covariance of θ t is, DISPLAYFORM4 DISPLAYFORM5 The proof is finished. Proof. Firstly compute the gradients and Hessian of φ, DISPLAYFORM0 And note the Gauss-Newton decomposition for functions with the form of L = φ • f, DISPLAYFORM1 Since the output layer parameters for f is fixed and the activation functions are piece-wise linear, f (x; θ) is a piece-wise linear function on its parameters θ. Therefore ∂ 2 f ∂θ 2 = 0, a.e., and H = E (x,y) DISPLAYFORM2 It is easy to check that e DISPLAYFORM3. Thus, DISPLAYFORM4 A.5 PROOF OF PROPOSITION 5Proof. For simplicity, we define g:= ∇, g 0:= ∇L = E∇.The gradient covariance and Fisher has the following relationship, DISPLAYFORM5 Hence, DISPLAYFORM6 Therefore, with the condition DISPLAYFORM7, we have DISPLAYFORM8 Tr F e −2δ, for δ small enough. On the other hand, Proposition 4 indicates that e −C F H e C F, which means, DISPLAYFORM9. Therefore, for λ, u being a positive eigenvalue and the corresponding unit eigenvector of H, we have DISPLAYFORM10 B ADDITIONAL EXPERIMENTS B.1 DOMINANCE OF NOISE OVER GRADIENT FIG6 shows the comparison of gradient mean and the expected norm of noise during training using SGD. The dataset and model are same as the experiments of FashionMNIST in main paper, or as in Section C.2. From FIG6, we see that in the later stage of SGD optimization, noise indeed dominates gradient. These experiments are implemented by TensorFlow 1.5.0. 
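A minimal sketch of the comparison reported in FIG6, i.e., the norm of the full-gradient mean versus the expected norm of the minibatch noise; per_example_grads is a placeholder for a routine returning the individual per-example gradients at the current parameters.

```python
import numpy as np

def noise_vs_gradient(per_example_grads, batch_size):
    """Compare ||mean gradient|| with the expected minibatch-noise norm.

    per_example_grads: array of shape (N, D) with one gradient per training example
    (placeholder). The SGD noise covariance is approximately Sigma / batch_size,
    where Sigma is the per-example gradient covariance.
    """
    g = np.asarray(per_example_grads)
    mean_grad = g.mean(axis=0)
    centered = g - mean_grad
    # E||noise||^2 = Tr(Sigma) / m, with Tr(Sigma) = mean squared norm of centered grads
    noise_sq_norm = (centered ** 2).sum(axis=1).mean() / batch_size
    return np.linalg.norm(mean_grad), np.sqrt(noise_sq_norm)
```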
FIG7 shows the first 50 iterations of the FashionMNIST experiments in the main paper. We observe that SGD, GLD 1st eigvec(H), GLD Hessian and GLD leading successfully escape from the sharp minima found by GD, while GLD diag, GLD dynamic, GLD const and GD do not. These experiments are implemented in TensorFlow 1.5.0. Dataset: Standard CIFAR-10 dataset without data augmentation. Model: Standard VGG11 network without any regularizations, including dropout, batch normalization, weight decay, etc. The total number of parameters of this network is 9,750,922. Training details: The learning rate η t = 0.05 is fixed for all optimizers and is tuned for the best generalization performance of GD. The batch size of SGD is m = 100. The noise std of GLD constant is σ = 10 −3, which is tuned to be the best. Due to computational limitations, we only conduct experiments on GD, GLD const, GLD dynamic, GLD diag and SGD. Estimation of Sharpness: The sharpness is estimated by DISPLAYFORM0 with M = 100 and δ = 0.01. Experiments: Similar experiments are conducted as in the main paper for CIFAR-10 and VGG11, as shown in FIG8. The observations are consistent with those in the main paper. These experiments are implemented in PyTorch 0.3.0. Note that Σ is the inverse of the Hessian of the quadratic form generating the sharp minima. And the 3-dimensional plot of the loss surface is shown in FIG9. Hyperparameters: All learning rates are equal to 0.005. All dynamics concerned are tuned to share the same expected squared norm, 0.01. The number of iterations in one run is 500. These experiments are implemented in PyTorch 0.3.0. Dataset: Our training set consists of 1200 examples randomly sampled from the original FashionMNIST training set, and we assign random wrong labels to 200 of them. The test set is the same as the original FashionMNIST test set. Model: Network architecture: input ⇒ conv1 ⇒ max_pool ⇒ ReLU ⇒ conv2 ⇒ max_pool ⇒ ReLU ⇒ fc1 ⇒ ReLU ⇒ fc2 ⇒ output. Both convolutional layers use 5 × 5 kernels with 10 channels and no padding. The number of hidden units between fully connected layers is 50. The total number of parameters of this network is 11,330. • GD: Learning rate η = 0.1. We tuned the learning rate (in the diffusion stage) in a wide range of {0.5, 0.2, 0.15, 0.1, 0.09, 0.08, . . ., 0.01} and observed no improvement in generalization. • GLD constant: Learning rate η = 0.07, noise std σ = 10 −3. We tuned the noise std in the range of {10 −1, 10 −2, 10 −3, 10 −4, 10 −5} and observed no improvement in generalization. • GLD dynamic: Learning rate η = 0.07. • GLD diagonal: Learning rate η = 0.07. • GLD leading: Learning rate η = 0.07, number of leading eigenvalues k = 20, batch size m = 20. We first randomly divide the training set into 60 minibatches containing 20 examples each, and then use those minibatches to estimate the covariance matrix. • GLD Hessian: Learning rate η = 0.07, number of leading eigenvalues k = 20, update frequency f = 10. Due to the limit of computational resources, we only update the Hessian matrix every 10 iterations, but add Hessian-generated noise every iteration. For the same reason, we simply set the coefficient of the Hessian noise to Tr H/m Tr Σ, to avoid extensive tuning of this hyperparameter. • GLD 1st eigvec(H): Learning rate η = 0.07, as for GLD Hessian, and we set the coefficient of noise to λ 1 /m Tr Σ, where λ 1 is the first eigenvalue of H. These experiments are implemented in TensorFlow 1.5.0. | [
label sequence: 186 entries, all 0 except a single 1 at position 132 ] | H1M7soActX | We provide theoretical and empirical analysis on the role of anisotropic noise introduced by stochastic gradient on escaping from minima.
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning. In this paper, we show how to make more effective use of the model by exploiting its differentiability. We construct a policy optimization algorithm that uses the pathwise derivative of the learned model and policy across future timesteps. Instabilities of learning across many timesteps are prevented by using a terminal value function, learning the policy in an actor-critic fashion. Furthermore, we present a derivation on the monotonic improvement of our objective in terms of the gradient error in the model and value function. We show that our approach (i) is consistently more sample efficient than existing state-of-the-art model-based algorithms, (ii) matches the asymptotic performance of model-free algorithms, and (iii) scales to long horizons, a regime where typically past model-based approaches have struggled. Model-based reinforcement learning (RL) offers the potential to be a general-purpose tool for learning complex policies while being sample efficient. When learning in real-world physical systems, data collection can be an arduous process. Contrary to model-free methods, model-based approaches are appealing due to their comparatively fast learning. By first learning the dynamics of the system in a supervised learning way, it can exploit off-policy data. Then, model-based methods use the model to derive controllers from it either parametric controllers (; ;) or non-parametric controllers . Current model-based methods learn with an order of magnitude less data than their model-free counterparts while achieving the same asymptotic convergence. Tools like ensembles, probabilistic models, planning over shorter horizons, and meta-learning have been used to achieved such performance (; ; . However, the model usage in all of these methods is the same: simple data augmentation. They use the learned model as a black-box simulator generating samples from it. In high-dimensional environments or environments that require longer planning, substantial sampling is needed to provide meaningful signal for the policy. Can we further exploit our learned models? In this work, we propose to estimate the policy gradient by backpropagating its gradient through the model using the pathwise derivative estimator. Since the learned model is differentiable, one can link together the model, reward function, and policy to obtain an analytic expression for the gradient of the returns with respect to the policy. By computing the gradient in this manner, we obtain an expressive signal that allows rapid policy learning. We avoid the instabilities that often from back-propagating through long horizons by using a terminal Q-function. This scheme fully exploits the learned model without harming the learning stability seen in previous approaches (; . The horizon at which we apply the terminal Q-function acts as a hyperparameter between model-free (when fully relying on the Q-function) and model-based (when using a longer horizon) of our algorithm. The main contribution of this work is a model-based method that significantly reduces the sample complexity compared to state-of-the-art model-based algorithms . For instance, we achieve a 10k return in the half-cheetah environment in just 50 trajectories. 
We theoretically justify our optimization objective and derive the monotonic improvement of our learned policy in terms of the Q-function and the model error. Furtermore, we experimentally analyze the theoretical derivations. Finally, we pinpoint the importance of our objective by ablating all the components of our algorithm. The are reported in four model-based benchmarking environments ). The low sample complexity and high performance of our method carry high promise towards learning directly on real robots. Model-Based Reinforcement Learning. Learned dynamics models offer the possibility to reduce sample complexity while maintaining the asymptotic performance. For instance, the models can act as a learned simulator on which a model-free policy is trained on (; ;). The model can also be used to improve the target value estimates or to provide additional context to a policy . Contrary to these methods, our approach uses the model in a different way: we exploit the fact that the learned simulator is differentiable and optimize the policy with the analytical gradient. Long term predictions suffer from a compounding error effect in the model, ing in unrealistic predictions. In such cases, the policy tends to overfit to the deficiencies of the model, which translates to poor performance in the real environment; this problem is known as model-bias . The model-bias problem has motivated work that uses meta-learning, interpolation between different horizon predictions , and interpolating between model and real data . To prevent model-bias, we exploit the model for a short horizon and use a terminal value function to model the rest of the trajectory. Finally, since our approach returns a stochastic policy, dynamics model, and value function could use model-predictive control (MPC) for better performance at test time, similar to . MPC methods have shown to be very effective when the uncertainty of the dynamics is modelled (; . Differentable Planning. Previous work has used backpropagate through learned models to obtain the optimal sequences of actions. For instance, learn linear local models and obtain the optimal sequences of actions, which is then distilled into a neural network policy. The planning can be incorporated into the neural network architecture (; ; ;) or formulated as a differentiable function . Planning sequences of actions, even when doing model-predictive control (MPC), does not scale well to high-dimensional, complex domains. Our method, instead learns a neural network policy in an actor-critic fashion aided with a learned model. In our study, we evaluate the benefit of carrying out MPC on top of our learned policy at test time, Section 5.4. The suggest that the policy captures the optimal sequence of action, and re-planning does not in significant benefits. Policy Gradient Estimation. The reinforcement learning objective involves computing the gradient of an expectation (a). By using Gaussian processes , it is possible to compute the expectation analytically. However, when learning expressive parametric non-linear dynamical models and policies, such closed form solutions do not exist. The gradient is then estimated using Monte-Carlo methods . In the context of model-based RL, previous approaches mostly made use of the score-function, or REINFORCE estimator . However, this estimator has high variance and extensive sampling is needed, which hampers its applicability in high-dimensional environments. In this work, we make use of the pathwise derivative estimator . 
Similar to our approach, prior work uses this estimator in the context of model-based RL. However, they only make use of real-world trajectories, which introduces the need for a likelihood ratio term for the model predictions, which in turn increases the variance of the gradient estimate. Instead, we entirely rely on the predictions of the model, removing the need for likelihood ratio terms. Actor-Critic Methods. Actor-critic methods alternate between policy evaluation, computing the value function for the policy, and policy improvement using such a value function . Actor-critic methods can be classified into on-policy and off-policy. On-policy methods tend to be more stable, but at the cost of sample efficiency . On the other hand, off-policy methods offer better sample complexity. Recent work has significantly stabilized and improved the performance of off-policy methods using maximum-entropy objectives (a) and multiple value functions . Our method combines the benefits of both. By using the learned model we can obtain learning dynamics that resemble an on-policy method while still being off-policy. In this section, we present the reinforcement learning problem, two different lines of algorithms that tackle it, and a summary of Monte-Carlo gradient estimators. A discrete-time finite Markov decision process (MDP) M is defined by the tuple (S, A, f, r, γ, p 0, T). Here, S is the set of states, A the action space, s t+1 ∼ f (s t, a t) the transition distribution, r: S × A → R is a reward function, p 0: S → R + represents the initial state distribution, γ the discount factor, and T is the horizon of the process. We define the return as the sum of rewards r(s t, a t) along a trajectory τ:= (s 0, a 0, ..., s T −1, a T −1, s T). The goal of reinforcement learning is to find a policy π θ: S × A → R + that maximizes the expected return, i.e., max_θ E_τ[Σ_t γ^t r(s_t, a_t)]. Actor-Critic. In actor-critic methods, we learn a function Q̂ (critic) that approximates the expected return conditioned on a state s and action a. Then, the learned Q-function is used to optimize a policy π (actor). Usually, the Q-function is learned by iteratively minimizing the Bellman residual. The above method is referred to as one-step Q-learning, and while a naive implementation often results in unstable behaviour, recent methods have succeeded in stabilizing the Q-function training . The actor can then be trained to maximize the learned Q-function. The benefit of this form of actor-critic method is that it can be applied in an off-policy fashion, sampling random mini-batches of transitions from an experience replay buffer . Model-Based RL. Model-based methods, contrary to model-free RL, learn the transition distribution from experience. Typically, this is carried out by learning a parametric function approximator f φ, known as a dynamics model. We define the state predicted by the dynamics model as ŝ t+1, i.e., ŝ t+1 ∼ f φ (s t, a t). The models are trained via maximum likelihood. In order to optimize the reinforcement learning objective, one needs to take the gradient of an expectation. In general, it is not possible to compute the exact expectation, so Monte-Carlo gradient estimators are used instead. These are mainly categorized into three classes: the pathwise, score function, and measure-valued gradient estimator . In this work, we use the pathwise gradient estimator, which is also known as the re-parameterization trick .
This estimator is derived from the law of the unconscious statistician (LOTUS). Here, we have stated that we can compute the expectation of a random variable x without knowing its distribution, if we know its corresponding sampling path and base distribution. A common case, and the one used in this manuscript, is when θ parameterizes a Gaussian distribution, x ∼ N (µ θ, σ θ 2), which is equivalent to x = µ θ + σ θ ε for ε ∼ N (0, 1). Exploiting the full capability of learned models has the potential to enable complex and high-dimensional real robotics tasks while maintaining low sample complexity. Our approach, model-augmented actor-critic (MAAC), exploits the learned model by computing the analytic gradient of the returns with respect to the policy. In contrast to sample-based methods, which one can think of as providing directional derivatives in trajectory space, MAAC computes the full gradient, providing a strong learning signal for policy learning, which further decreases the sample complexity. In the following, we present our policy optimization scheme and describe the full algorithm. Among model-free methods, actor-critic methods have shown superior performance in terms of sample efficiency and asymptotic performance (a). However, their sample efficiency remains worse than model-based approaches, and fully off-policy methods still show instabilities compared to on-policy algorithms . Here, we propose a modification of the Q-function parametrization by using the model predictions on the first time-steps after the action is taken. Specifically, we do policy optimization by maximizing the following objective: J π (θ) = E[ Σ_{t=0}^{H−1} γ^t r(s t, a t) + γ^H Q̂(s H, a H) ], where s t+1 ∼ f̂ (s t, a t) and a t ∼ π θ (s t). Note that under the true dynamics and Q-function, this objective is the same as the RL objective. Contrary to previous reinforcement learning methods, we optimize this objective by back-propagation through time. Since the learned dynamics model and policy are parameterized as Gaussian distributions, we can make use of the pathwise derivative estimator to compute the gradient, resulting in an objective that captures uncertainty while presenting low variance. The computational graph of the proposed objective is shown in Figure 1. While the proposed objective resembles n-step bootstrap , our model usage fundamentally differs from previous approaches. First, we do not compromise between being off-policy and stability. Typically, n-step bootstrap is either on-policy, which harms the sample complexity, or its gradient estimation uses likelihood ratios, which presents large variance and results in unstable learning. Second, we obtain a strong learning signal by backpropagating the gradient of the policy across multiple steps using the pathwise derivative estimator, instead of the REINFORCE estimator . And finally, we prevent the exploding and vanishing gradients effect inherent to back-propagation through time by means of the terminal Q-function . The horizon H in our proposed objective allows us to trade off between the accuracy of our learned model and the accuracy of our learned Q-function. Hence, it controls the degree to which our algorithm is model-based or model-free. If we were not to trust our model at all (H = 0), we would end up with a model-free update; for H = ∞, the objective results in a shooting objective. Note that we will perform policy optimization by taking derivatives of the objective, hence we require accuracy on the derivatives of the objective and not on its value.
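A minimal PyTorch sketch of the objective above: the policy is unrolled through a differentiable learned model for H steps and a terminal Q-function closes the trajectory, so the policy gradient flows through the model via reparameterized samples. The policy, model, reward and Q networks are placeholders with assumed interfaces, not the actual MAAC implementation.

```python
import torch

def maac_policy_objective(s0, policy, model, reward_fn, q_fn, H=5, gamma=0.99):
    """Differentiable H-step model rollout with a terminal Q-function.

    policy(s) and model(s, a) are assumed to return (mean, std) of Gaussians so that
    reparameterized sampling keeps the pathwise derivative; reward_fn(s, a) is the
    reward and q_fn(s, a) the learned critic (all placeholders).
    """
    s, total, discount = s0, 0.0, 1.0
    for _ in range(H):
        a_mean, a_std = policy(s)
        a = a_mean + a_std * torch.randn_like(a_std)      # reparameterized action
        total = total + discount * reward_fn(s, a)
        s_mean, s_std = model(s, a)
        s = s_mean + s_std * torch.randn_like(s_std)      # reparameterized next state
        discount *= gamma
    a_mean, a_std = policy(s)
    a_H = a_mean + a_std * torch.randn_like(a_std)
    total = total + discount * q_fn(s, a_H)               # terminal value, weight gamma^H
    return total.mean()   # maximize (e.g., step on its negative with an optimizer)
```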
The following lemma provides a bound on the gradient error in terms of the error on the derivatives of the model, the Q-function, and the horizon H. Lemma 4.1 (Gradient Error). Letf andQ be the learned approximation of the dynamics f and Q-function Q, respectively. Assume that Q andQ have L q /2-Lipschitz continuous gradient and f and f have L f /2-Lipschitz continuous gradient. Let f = max t ∇f (ŝ t,â t) − ∇f (s t, a t) 2 be the error on the model derivatives and Q = ∇Q(ŝ H,â H) − ∇Q(s H, a H) 2 the error on the Q-function derivative. Then the error on the gradient between the learned objective and the true objective can be bounded by: The in Lemma 4.1 stipulates the error of the policy gradient in terms of the maximum error in the model derivatives and the error in the Q derivatives. The functions c 1 and c 2 are functions of the horizon and depend on the Lipschitz constants of the model and the Q-function. Note that we are just interested in the relation between both sources of error, since the gradient magnitude will be scaled by the learning rate, or by the optimizer, when applying it to the weights. In the previous section, we presented our objective and the error it incurs in the policy gradient with respect to approximation error in the model and the Q function. However, the error on the gradient is not indicative of the effect of the desired metric: the average return. Here, we quantify the effect of the modeling error on the return. First, we will bound the KL-divergence between the policies ing from taking the gradient with the true objective and the approximated one. Then we will bound the performance in terms of the KL. Lemma 4.2 (Total Variation Bound). Under the assumptions of the Lemma 4.1, let θ = θ o + α∇ θ J π be the parameters ing from taking a gradient step on the exact objective, andθ = θ o + α∇ θĴπ the parameters ing from taking a gradient step on approximated objective, where α ∈ R +. Then the following bound on the total variation distance holds Proof. See Appendix. The previous lemma in a bound on the distance between the policies originated from taking a gradient step using the true dynamics and Q-function, and using its learned counterparts. Now, we can derive a similar from to bound the difference in average returns. Theorem 4.1 (Monotonic Improvement). Under the assumptions of the Lemma 4.1, be θ andθ as defined in Lemma 4.2, and assuming that the reward is bounded by r max. Then the average return of the πθ satisfies Proof. See Appendix. Hence, we can provide explicit lower bounds of improvement in terms of model error and function error. Theorem 4.1 extends previous work of monotonic improvement for model-free policies (b;), to the model-based and actor critic set up by taking the error on the learned functions into account. From this bound one could, in principle, derive the optimal horizon H that minimizes the gradient error. However, in practice, approximation errors are hard to determine and we treat H as an extra hyper-parameter. In section 5.2, we experimentally analyze the error on the gradient for different estimators and values of H. Based on the previous sections, we develop a new algorithm that explicitly optimizes the modelaugmented actor-critic (MAAC) objective. The overall algorithm is divided into three main steps: model learning, policy optimization, and Q-function learning. Model learning. In order to prevent overfitting and overcome model-bias , we use a bootstrap ensemble of dynamics models {f φ1, ...,f φ M}. 
Each of the dynamics models parameterizes the mean and the diagonal covariance of a Gaussian distribution. The bootstrap ensemble captures the epistemic uncertainty, i.e., uncertainty due to limited capacity or data, while the probabilistic models are able to capture the aleatoric uncertainty, i.e., the inherent uncertainty of the environment. We denote by f φ the transition dynamics resulting from φ U, where U ∼ U[M] is a uniform random variable on {1, ..., M}. The dynamics models are trained via maximum likelihood with early stopping on a validation set. Algorithm 1 (excerpt): sample trajectories from the real environment with policy π θ and add them to D env; run the inner loops for i = 1 ... G 2; repeat until the policy performs well in the real environment; return the optimal parameters θ *. Policy Optimization. We extend the MAAC objective with an entropy bonus (b), and perform policy learning by maximizing this augmented objective. We learn the policy by using the pathwise derivative of the model through H steps and the Q-function, sampling multiple trajectories from the same ŝ 0; hence, we learn a maximum entropy policy. We compute the expectation by sampling multiple actions and states from the policy and the learned dynamics, respectively. Q-function Learning. In practice, we train two Q-functions since it has been experimentally proven to yield better results. We train both Q-functions by minimizing the Bellman error (Section 3.1). Similar to prior work, we minimize the Bellman residual on states previously visited and on imagined states obtained from unrolling the learned model. Finally, the value targets are obtained in the same fashion as the Stochastic Ensemble Value Expansion , using H as a horizon for the expansion. In doing so, we maximally make use of the model by not only using it for the policy gradient step, but also for training the Q-function. Our method, MAAC, iterates between collecting samples from the environment, model training, policy optimization, and Q-function learning. A practical implementation of our method is described in Algorithm 1. First, we obtain trajectories from the real environment using the latest policy available. Those samples are appended to a replay buffer D env, on which the dynamics models are trained until convergence. The third step is to collect imaginary data from the models: we collect k-step transitions by unrolling the latest policy from a state randomly sampled from the replay buffer. The imaginary data constitutes D model, which, together with the replay buffer, is used to learn the Q-function and train the policy. Our algorithm consolidates the insights built through the course of this paper, while at the same time making maximal use of recently developed actor-critic and model-based methods. All in all, it consistently outperforms previous model-based and actor-critic methods. Our experimental evaluation aims to examine the following questions: 1) How does MAAC compare against state-of-the-art model-based and model-free methods? 2) Does the gradient error correlate with the derived bound? 3) What are the key components of its performance? and 4) Does it benefit from planning at test time? In order to answer the posed questions, we evaluate our approach on model-based continuous control benchmark tasks in the MuJoCo simulator. We compare our method on sample complexity and asymptotic performance against state-of-the-art model-free (MF) and model-based (MB) baselines.
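Before the comparisons that follow, a structural sketch of the training loop summarized in Algorithm 1 above may be useful. Every component (env, model ensemble, policy, critics, buffers) is a placeholder object with an assumed interface, so this is a skeleton of the procedure rather than the authors' implementation.

```python
def maac_training_loop(env, model_ensemble, policy, critics, n_iters=200,
                       env_steps=1000, k=5, policy_updates=40, critic_updates=40):
    """Skeleton of the MAAC loop: collect real data, fit the model, imagine data,
    then update critics and policy (placeholder interfaces throughout)."""
    d_env, d_model = [], []                                    # real / imagined buffers
    for _ in range(n_iters):
        d_env += env.collect(policy, steps=env_steps)          # 1) real rollouts
        model_ensemble.fit(d_env)                              # 2) max likelihood + early stopping
        d_model = model_ensemble.imagine(policy, d_env, horizon=k)   # 3) k-step imagined data
        for _ in range(critic_updates):                        # 4) Bellman error on mixed data
            critics.update(batch_real=d_env, batch_imagined=d_model)
        for _ in range(policy_updates):                        # 5) pathwise H-step objective
            policy.update(model_ensemble, critics, batch=d_env)
    return policy
```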
Specifically, we compare against the model-free soft actor-critic (SAC) (a), which is an off-policy algorithm that has been shown to be sample-efficient and performant, as well as two state-of-the-art model-based baselines: model-based policy optimization (MBPO) and stochastic ensemble value expansion (STEVE). The original STEVE algorithm builds on top of the model-free algorithm DDPG; however, this algorithm is outperformed by SAC. In order to remove confounding effects of the underlying model-free algorithm, we have implemented the STEVE algorithm on top of SAC. We also add SVG to the comparison, which, similar to our method, uses the derivative of dynamics models to learn the policy. The results, shown in Fig. 2, highlight the strength of MAAC in terms of performance and sample complexity. MAAC scales to higher dimensional tasks while maintaining its sample efficiency and asymptotic performance. In all four environments, our method learns faster than previous MB and MF methods. We are able to learn near-optimal policies in the half-cheetah environment in just over 50 rollouts, while previous model-based methods need at least twice the amount of data. Furthermore, in complex environments, such as ant, MAAC achieves near-optimal performance within 150 rollouts while others take orders of magnitude more data. Here, we investigate how the bounds obtained relate to the empirical performance. In particular, we study the effect of the horizon of the model predictions on the gradient error. In order to do so, we construct a double integrator environment; since the transitions are linear and the cost is quadratic for a linear policy, we can obtain the analytic gradient of the expected return. Figure 3: L1 error on the policy gradient when using the proposed objective for different values of the horizon H as well as the error obtained when using the true dynamics. The results correlate with the assumption that the error in the learned dynamics is lower than the error in the Q-function, and they correlate with the derived bounds. Figure 3 depicts the L1 error of the MAAC objective for different values of the horizon H as well as what the error would be using the true dynamics. As expected, using the true dynamics yields a lower gradient error since the only source comes from the learned Q-function, which is weighted down by γ^H. The results using learned dynamics correlate with our assumptions and the derived bounds: the error from the learned dynamics is lower than the one in the Q-function, but it scales poorly with the horizon. For short horizons the error decreases as we increase the horizon. However, large horizons are detrimental since they magnify the error of the models. In order to investigate the importance of each of the components of our overall algorithm, we carry out an ablation test. Specifically, we test three different components: 1) not using the model to train the policy, i.e., setting H = 0, 2) not using the STEVE targets for training the critic, and 3) using a single-sample estimate of the pathwise derivative. The ablation test is shown in Figure 4. The test underpins the importance of backpropagating through the model: setting H to 0 inflicts a severe drop in the algorithm's performance. On the other hand, using the STEVE targets results in slightly more stable training, but it does not have a significant effect. Finally, while single-sample estimates can be used in simple environments, they are not accurate enough in higher dimensional environments such as ant. Figure 4: Ablation test of our method.
We test the importance of several components of our method: not using the model to train the policy (H = 0), not using the STEVE targets for training the Q-function (-STEVE), and using a single-sample estimate of the pathwise derivative. Using the model is the component that affects performance the most, highlighting the importance of our derived estimator. One of the key benefits of methods that combine model-based reinforcement learning and actor-critic methods is that the optimization procedure results in a stochastic policy, a dynamics model and a Q-function. Hence, we have all the components needed to refine the action selection at test time by means of model predictive control (MPC). Here, we investigate the improvement in performance of planning at test time. Specifically, we use the cross-entropy method with our stochastic policy as the initial distribution. The results, shown in Table 1, show benefits of online planning in complex domains; however, the improvement gains are more modest in easier domains, suggesting that the learned policy has already internalized the optimal behaviour.
HalfCheetahEnv | HopperEnv | Walker2dEnv | (one environment column header is missing in this extract)
MAAC+MPC: 3.97e3 ± 1.48e3 | 1.09e4 ± 94.5 | 2.8e3 ± 11 | 1.76e3 ± 78
MAAC: 3.06e3 ± 1.45e3 | 1.07e4 ± 253 | 2.77e3 ± 3.31 | 1.61e3 ± 404
Table 1: Performance at test time with (MAAC+MPC) and without (MAAC) planning of the converged policy using the MAAC objective. In this work, we present model-augmented actor-critic, MAAC, a reinforcement learning algorithm that makes use of a learned model by using the pathwise derivative across future timesteps. We prevent instabilities arising from backpropagation through time by means of a terminal value function. The objective is theoretically analyzed in terms of the model and value error, and we derive a policy improvement expression with respect to those terms. Our algorithm, which builds on top of the MAAC objective, achieves superior performance and sample efficiency compared to state-of-the-art model-based and model-free reinforcement learning algorithms. For future work, it would be enticing to deploy the presented algorithm on a real robotic agent. Then, the error in the gradient in the previous term is bounded as follows. In order to bound the model term, we first need to bound the rewards. Similar to the previous bounds, we can now bound each reward term. With this we can bound the total error in the models, and the gradient error then has the stated form. A.2 PROOF OF LEMMA 4.2 The total variation distance can be bounded by the KL-divergence using Pinsker's inequality. Then, if we assume third-order smoothness of our policy, the Fisher information metric theorem applies. Given that ‖θ − θ̂‖_2 = α‖∇_θ J_π − ∇_θ Ĵ_π‖_2, for a small enough step the following inequality holds. Given the bound on the total variation distance, we can now make use of the monotonic improvement theorem to establish an improvement bound in terms of the gradient error. Let J_π(θ) and J_π(θ̂) be the expected returns of the policies π_θ and π_θ̂ under the true dynamics. Let ρ and ρ̂ be the discounted state marginals for the policies π_θ and π_θ̂, respectively. Then, combining the result from Lemma 4.2, we obtain the desired bound. In order to show the significance of each component of MAAC, we conducted more ablation studies. The results are shown in Figure 5. Here, we analyze the effect of training the Q-function with data coming from just the real environment, not learning a maximum entropy policy, and increasing the batch size instead of increasing the number of samples used to estimate the value function.
Figure 5: We further test the significance of some components of our method: not using the dynamics models to generate data and only using real data sampled from the environment to train the policy and Q-functions (real_data), removing the entropy term from the optimization objectives (no_entropy), and using a single-sample estimate of the pathwise derivative while increasing the batch size accordingly (5x batch size). Considering entropy and using the dynamics models to augment the data set are both very important. A.5 EXECUTION TIME COMPARISON
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Skln2A4YDB | Policy gradient through backpropagation through time using learned models and Q-functions. SOTA results in reinforcement learning benchmark environments. |
Meta-learning algorithms learn to acquire new tasks more quickly from past experience. In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks. The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach. Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can in principle be used to train optimal meta-learners. Our experimental indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and significantly exceeds the performance of learning from scratch. Reusing past experience for faster learning of new tasks is a key challenge for machine learning. Meta-learning methods achieve this by using past experience to explicitly optimize for rapid adaptation (; ; ; a; ;). In the context of reinforcement learning (RL), meta-reinforcement learning (meta-RL) algorithms can learn to solve new RL tasks more quickly through experience on past tasks (b; a). Typical meta-RL algorithms assume the ability to sample from a pre-specified task distribution, and these algorithms learn to solve new tasks drawn from this distribution very quickly. However, specifying a task distribution is tedious and requires a significant amount of supervision (b; b) that may be difficult to provide for large, real-world problem settings. The performance of meta-learning algorithms critically depends on the meta-training task distribution, and meta-learning algorithms generalize best to new tasks which are drawn from the same distribution as the meta-training tasks. In effect, meta-RL offloads much of the design burden from algorithm design to designing a sufficiently broad and relevant distribution of meta-training tasks. While this offloading helps in acquiring representations for fast adaptation to the specified task distribution, specifying this is often tedious and challenging. A natural question is whether we can do away with manual task design and develop meta-RL algorithms that learn only from unsupervised environment interaction. In this paper, we take an initial step toward the formalization and design of such methods. Our goal is to automate the meta-training process by removing the need for hand-designed metatraining tasks. To that end, we introduce unsupervised meta-RL: meta-learning from a task distribution that is acquired automatically, rather than requiring manual design of the meta-training tasks. 
Unsupervised meta-RL methods must solve two difficult problems together: meta-RL with broad task distributions, and unsupervised exploration for proposing a wide variety of tasks for meta-learning. Since the assumptions of our method differ fundamentally from prior meta-RL methods (we do not assume access to hand-specified meta-training tasks that use human-specified reward functions), the best points of comparison for our approach are learning meta-test tasks entirely from scratch with conventional RL algorithms. Our method can also be thought of as automatically acquiring an environment-specific learning procedure for deep neural network policies, somewhat related to data-driven initialization procedures explored in supervised learning (Krähenbühl et al., 2015;). The primary contributions of our work are to propose a framework for unsupervised meta-RL; to sketch out a family of unsupervised meta-RL algorithms; to provide a theoretical derivation that allows us to reason about the optimality of unsupervised meta-RL methods in terms of mutual information objectives; and to describe an instantiation of an algorithm from this family that builds on a recently proposed procedure for unsupervised exploration and modelagnostic meta-learning (MAML) (a). In addition to our theoretical derivations, we provide an empirical evaluation that studies the performance of two variants of our approach on simulated control tasks. Our experimental evaluation shows that, for a variety of tasks, unsupervised meta-RL can effectively acquire RL procedures that perform significantly better than standard RL methods that learn from scratch, without requiring additional task knowledge. Our work lies at the intersection of meta-RL, goal generation, and unsupervised exploration. Metalearning algorithms use data from multiple tasks to learn how to learn, acquiring rapid adaptation procedures from experience (; ; ; ; ; ; ; ; ; a; ;). These approaches have been extended into the setting of RL (b; ; a; ; ; ; ; a; . In practice, the performance of meta-learning algorithms depends on the user-specified meta-training task distribution. We aim to lift this limitation and provide a general recipe for avoiding manual task engineering for meta-RL. To that end, we make use of unsupervised task proposals. These proposals can be obtained in a variety of ways, including adversarial goal generation , information-theoretic methods (; ;), and even random functions. We argue that, theoretically, methods based on mutual information have the potential to provide optimal task proposals for unsupervised meta-RL. Exploration methods that seek out novel states are also closely related to goal generation methods (; ; ; ;), but do not by themselves aim to generate new tasks or learn to adapt more quickly to new tasks, only to achieve wide coverage of the state space. These methods are complementary to our approach, but address a distinct problem. While model-based RL methods (; ; ; b;) might likewise use unsupervised experience to learn a dynamics model, they are not equipped with a strategy to employ that model to learn effectively at test time. Related to our work are prior methods that study the training of goal-conditioned policies (; ;). Indeed, our theoretical derivation first studies the goal reaching case, before generalizing it to the more general case of arbitrary tasks. 
This generalization allows unsupervised meta-learning methods to solve arbitrary tasks at meta-test time without being restricted to a task parameterization or goal specification. Additionally, in Section 3.4 we discuss why the framework of meta-learning gives us theoretical benefits over the goal reaching paradigm. Figure 1: Unsupervised meta-reinforcement learning: Given an environment, unsupervised meta-RL produces an environment-specific learning algorithm that quickly acquire new policies that maximize any task reward function. In this paper, we consider the problem of automatically tailoring (i.e., learning) a reinforcement learning algorithm to solve tasks on a specific environment. This learning algorithm should be meta-learned without requiring any human supervision or specification of task distribution. The implicit assumption in this formulation will be that all test-time tasks share the same dynamics but use different reward functions. To talk about environments on which many tasks might be performed, we consider a controlled Markov process (CMP) -a Markov decision process without a reward function, C = (S, A, P, γ, ρ), with state space S, action space A, transition dynamics P, discount factor γ and initial state distribution ρ. The CMP along with this reward function r z produces a Markov decision processes M i = (S, A, P, γ, ρ, r z). The goal of the learning algorithm f is to learn an optimal policy π * i (a | s) for any reward function r z that is provided with the CMP. With this notation in place, we formally define the unsupervised meta-RL as follows: Definition 1. Unsupervised Meta-Reinforcement Learning: Given a CMP C, output a learning algorithm f that can learn new tasks efficiently in MDPs defined by CMP C together with an (unknown) reward function r z. The data available to unsupervised meta-RL are unsupervised interactions with the CMP, and does not include observing any of the reward functions r z. We illustrate this problem setting in Figure 1. In contrast, in standard meta-reinforcement learning, a task distribution P (T) is known and we simply try to optimize for a fast learning algorithm on that known task distribution. A detailed definition can be found in Section 3.2. In the rest of this section, we will first sketch out a general recipe for an unsupervised meta-RL algorithm, present a theoretical derivation for an optimal unsupervised meta-learning method, and then instantiate a practical approximation to this theoretically-motivated approach using components from recently proposed exploration and meta-learning algorithms. Our framework, unsupervised meta-RL, consists of a task proposal mechanism and a meta-learning method. Formally, we will define the task distribution as a mapping from a latent variable z ∼ p(z) to a reward function r z (s, a): S × A → R 1. That is, for each value of the random variable z, we have a different reward function r z (s, a). Under this formulation, learning a task distribution amounts to optimizing a parametric form for the reward function r z (s, a) that maps each z ∼ p(z) to a different reward function. The choice of this parametric form represents an important design decision for an unsupervised meta-learning method, and the ing set of tasks is often referred to as a task or goal proposal procedure. 
The second component is the meta-learning 2 algorithm, which takes the family of reward functions induced by p(z) and r z (s, a) along with the associated CMP, and meta-learns a RL algorithm f that can quickly adapt to any task from the task distribution defined by 1 In most cases p(z) is chosen to be a uniform categorical so it is not challenging to specify 2 A meta-reinforcement learning algorithm learns how to learn: it uses a set of meta-training tasks to learn a learning function f, which can then learn a new task. We refer to this learned learning function f as an "acquired reinforcement learning procedure," following prior work, such as MAML (a) and RL2 (b) p(z) and r z (s, a) in the given CMP. The meta-learned algorithm f can then learn new tasks quickly at meta-test time, when a user-specified reward function is actually provided. This generic design for an unsupervised meta-RL algorithm is summarized in Figure 1. The "no free lunch theorem" might lead us to expect that a truly generic approach to proposing a task distribution would not yield a learning procedure f that is effective on any real tasks, or even on the meta-training tasks. However it is important to note that even without a reward function, an unsupervised meta-learning algorithm can collect and organize meaningful information about the environment. For example, this information could include policies for reaching certain states, estimates for the distances between states, a map of the environment, or a model of the dynamics. Importantly, the meta-learner is not restricted to learning information that is human-interpretable, especially since human-interpretable representations may not be optimal for fast adaptation. In the following sections, we will discuss how to formulate an optimal unsupervised meta-learner that minimizes regret on new meta-test tasks in the absence of any prior knowledge. We first discuss the notion of an optimal meta-learner and then show how we can train one without requiring task distributions to be hand-specified. A more general version of this algorithm might also use f to inform the acquisition of tasks, allowing for an alternating optimization procedure the iterates between learning r z (s, a) and updating f, for example by designing tasks that are difficult for the current algorithm f to handle. However, in this paper we will consider the stagewise approach, which acquires a task distribution once and meta-trains on it, leaving the iterative variant for future work. We first define an abstract notion of an optimal meta-learner, when given a task distribution. This definition will be useful to showing that for certain test-time task distributions, meta-training on self-proposed tasks (i.e., unsupervised meta-training) will yield the optimal meta-learner at test-time. We assume that an optimal meta-learner takes in a distribution over tasks for a given CMP and outputs a learning procedure f that minimizes expected regret when learning tasks drawn from the same distribution as seen in meta-training. As before, the task distribution is defined by a variable z ∼ p(z) and a reward function r z (s, a): S × A → R. An optimal meta-learner optimizes where p(r z) is a distribution over reward functions, parameterized by z, R(f, r z) is the total return obtained by using meta learner f on reward function r z, and f is an optimal meta-learner. This is equivalent to the expected reward objective used by most meta-RL methods (a; b). 
Note that the behavior of the optimal meta-learner depends on the specific task distribution. For example, the optimal behavior for manipulation tasks involves moving a robot's arms, while the optimal behavior for locomotion tasks involves moving a robot's legs. Optimizing Equation 1 requires inferring new tasks and acquiring the policy for each task. While learning an internal model of the environment might assist in these steps, an optimal meta-learner would also need to know how to leverage this model. In the remainder, we will abstract away the internal representation of the meta-learner and instead focus on the effect of the training task distribution on the behavior induced in an optimal meta-learner, and how this suggests a method to construct unsupervised meta-learning algorithms. We will now derive an optimal unsupervised meta-learner for the special case of goal reaching tasks, and then generalize this approach to solve arbitrary tasks in Section 3.4. In both, we will restrict our attention to CMPs with deterministic dynamics. In this goal-reaching setting, we consider episodes with finite horizon T and a discount factor of γ = 1. Tasks correspond to reaching an unknown goal state s_g. We will only consider the agent's state at the last time step in each episode, so the (unknown) reward function is always an indicator of reaching the goal state s_g at the final time step. We will first assume that goal states are drawn from some known distribution p(s_g), and later will show how we can remove this assumption. We define ρ_π^T(s) as the probability that policy π visits state s at time step t = T. If s_g is the true goal, then the event that the policy π reaches s_g at the final step of an episode is a Bernoulli random variable with parameter p = ρ_π^T(s_g). Thus, the expected hitting time of this goal state is HittingTime_π(s_g) = 1 / ρ_π^T(s_g). We now define a meta-learner f_π in terms of an exploration policy π. Before the goal is found, f_π uses policy π to explore. Once the goal s_g is found, the meta-learner takes actions to always return to state s_g. Meta-learner f_π incurs one unit of regret for each step before it has found the goal, and zero regret afterwards. The expected cumulative regret is therefore the expectation of the hitting time, taken with respect to p(s_g) = p(r_g): In this special case, an optimal meta-learner as defined in Section 3.2 will explore for a number of episodes until it finds the goal state. After the meta-learner finds the goal state, it would always choose the trajectory that reaches that goal state under deterministic dynamics. Thus, the cumulative regret of the meta-learner is the number of episodes required to find the goal state. By our assumption that the meta-learner only receives information about the task if it has reached the goal state at the end of the episode, the meta-learner cannot use information about multiple goals within a single episode. We can minimize the regret in Equation 2 w.r.t. the marginal distribution ρ_π^T. Using the calculus of variations (for more details, refer to Appendix C in ), the (exploration) policy for the optimal meta-learner, π*, satisfies: The analysis so far tells us how to obtain the optimal meta-learner if we were given the goal sampling distribution, p(s_g). If we do not know this distribution, then we cannot compute the optimal policy using Equation 3. In this case, we resort to bounding the worst-case regret of our policy: Lemma 1. Let π be a policy for which ρ_π^T(s) is uniform. Then f_π has the lowest worst-case regret. The proof is in Appendix A.
Given this result, we know that the exploration policy of the optimal meta-learner should have a uniform state marginal distribution. While Lemma 1 is only directly applicable in settings where there exists a policy that achieves a uniform state marginal distribution, we might expect the intuition to carry over to other settings. The minimax optimal meta-learner corresponds to exploring over a uniform distribution over goal states, so we can acquire this meta-learner by training on a goal-reaching task distribution where the goals are uniformly distributed. Constructing this distribution is hard, especially in high-dimensional state spaces. How might we propose uniformly-distributed goals as tasks during unsupervised meta-training? The key idea is that this uniform goal proposal distribution can be obtained by maximizing the mutual information between z and the final state s_T, where µ denotes a probability distribution over terminal states s_T and latent variables z, and P_µ is a subset of such distributions. Observe that this objective contains two competing terms: the marginal entropy of the final state and the conditional entropy of the final state given z. At the maximum, the conditional distribution µ(s_T | z) is a Dirac distribution, centered at a state s_z specified by the latent z, while the marginal µ(s_T) is uniform; that is, any distribution µ that maximizes the mutual information I_µ(s_T; z) corresponds to a uniform distribution over goal states (Lemma 2). The key idea is that a joint distribution µ(s_T, z) can be defined implicitly via a latent-conditioned policy. This policy is not a meta-learned model, but rather will become part of the task proposal mechanism. For a given prior µ(z) and latent-conditioned policy µ(a | s, z), the joint likelihood is µ(τ, z) = µ(z) p(s_1) ∏_t p(s_{t+1} | s_t, a_t) µ(a_t | s_t, z), and the marginal likelihood is simply given by marginalizing over the intermediate states and actions, µ(s_T, z) = ∫ µ(τ, z) ds_1 da_1 · · · da_{T−1}. We define P_µ as all distributions µ(s_T, z) that can be constructed in such a way using a Markovian policy. Note that the assumption in Lemma 2 does not always hold, as stochasticity in the environment may mean that µ(s_T | z) is never a Dirac distribution, or the existence of unreachable states may mean that µ(s_T) can never be uniform. Finally, we construct a distribution over tasks from the joint distribution µ. We then define our task proposal distribution by sampling z ∼ p(z) and using the corresponding reward function r_z(s_T, a_T) = log p(s_T | z). This reward function does not depend on the action a_T. In the case when both the prior µ(z) and the marginal µ(s_T) are uniform, this reward function is equivalent to the DIAYN reward function log µ(z | s_T), up to additive constants. Since each latent z corresponds to a single state s_T when the mutual information is maximized, our task distribution corresponds exactly to sampling goals uniformly when z is sampled uniformly. As argued above, successfully meta-learning on the uniform goal distribution produces the minimax-optimal meta-learner.
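To make the regret argument above concrete, here is a small numerical check (not from the paper) using the hitting-time expression HittingTime_π(s_g) = 1/ρ_π^T(s_g): a worst-case goal is placed at the least-visited final state, so a uniform final-state distribution minimizes the worst-case expected number of episodes.

```python
import numpy as np

def worst_case_regret(rho):
    """Worst-case expected hitting time when the adversary picks the least-visited state."""
    rho = np.asarray(rho, dtype=float)
    return 1.0 / rho.min()

uniform = np.ones(4) / 4.0
skewed = np.array([0.7, 0.1, 0.1, 0.1])
print(worst_case_regret(uniform))  # 4.0 episodes in expectation
print(worst_case_regret(skewed))   # 10.0 episodes in expectation
```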
As before, we will restrict our attention to CMPs with deterministic dynamics. These non-Markovian tasks essentially amount to a problem where an RL algorithm must "guess" the optimal policy, and only receives a reward if its behavior is perfectly consistent with that optimal policy. We will show that mutual information between z and trajectories yields the minimum regret solution in this case, and then show that unsupervised meta-learning for the trajectory-matching task is at least as hard as unsupervised meta-learning for general tasks (though, in practice, general tasks may be easier). Formally, we define a distribution of trajectory-matching tasks by a distribution over goal trajectories, p(τ *). For each goal trajectory τ *, the corresponding trajectory-level reward function is Analogous to before, we define the hitting time as the expected number of episodes to match the target trajectory: We then define regret as the expected hitting time: Using the same derivation as before, the exploration policy for the optimal meta-learner is However, obtaining such a policy requires knowing the trajectory distribution p(τ). In the setting where p(τ) is unknown, the minimax policy is simply uniform: Lemma 3. Let π be a policy for which π(τ) is uniform. Then f π has lowest worst-case regret. How can we acquire a policy with a uniform trajectory distribution? Repeating the steps above, we learn a collection of skills using a trajectory-level mutual information objective: Using the same reasoning as Section 3.3, the optimal policy for this objective has a uniform distribution over trajectories that, conditioned on a particular latent z, deterministically produces a single trajectory in a deterministic CMP. Analogous to Section 3.3, we define a distribution over reward functions as r z (τ) log p(τ | z). At optimality, each z corresponds to exactly one trajectory τ z, so the reward function r z (τ) simply indicates whether τ is equal to τ z. Since the distribution over trajectories p(τ | z)p(z)dz is uniform at the optimum, the distribution of reward functions r z corresponds to a uniform distribution over trajectories. Thus, meta-learning on the rewards from trajectory-level mutual information in the minimax-optimal meta-learner. Now that we have derived the optimal meta-learner for trajectory-matching tasks, observe that trajectory-matching is a super-set of the problem of optimizing any possible Markovian reward function at test-time. For a given initial state distribution, each reward function is optimized by a particular trajectory. However, trajectories produced by a non-Markovian policy (i.e., a policy with memory) are not necessarily the unique optimum for any Markovian reward function. Let R τ denote the set of trajectory-level reward functions, and R s,a denote the set of all state-action level reward functions. Bounding the worst-case regret on R τ minimizes an upper bound on the worst-case regret on R s,a: This inequality holds for all policies π, including the policy that maximizes the LHS. While we aim to maximize the RHS, we only know how to maximize the LHS, which gives us a lower bound on the RHS. This inequality holds for all policies π, so it also holds for the policy that maximizes the LHS. In general, this bound is loose because the set of all Markovian reward functions is smaller than the set of all trajectory-level reward functions (i.e., trajectory-matching tasks). 
However, this bound becomes tight when considering meta-learning on the set of all possible (non-Markovian) reward functions. In the discussion of meta-learning thus far, we have considered tasks where the reward is provided at the last time step T of each episode. In this particular case, the best that an optimal meta-learner can do is to go directly to a goal (for goal reaching) or execute a particular trajectory (for trajectory matching) in every episode, sampled according to the optimal exploration policy. All intermediate states in the trajectory are uninformative, thus making meta-learning algorithms that explore via schemes like posterior sampling optimal for this class of problems. In the more general case with arbitrary reward functions, intermediate rewards along a trajectory may be informative, and the optimal exploration strategy may be different from posterior sampling (; b;). Nonetheless, the analysis presented in this section provides insight into the behavior of optimal meta-learning algorithms and allows us to understand the qualities desirable for unsupervised task proposals. The general proposed scheme for unsupervised meta-learning has a number of benefits over standard universal value function and goal-reaching-style algorithms: it can be applied to arbitrary reward functions going beyond simple goal reaching, and it can acquire a significantly more effective exploration strategy than posterior sampling by using intermediate states in a trajectory for intelligent information gathering. Through our analysis, we introduced the notion of optimal meta-learners and analyzed their exploration behavior and regret on a class of goal reaching problems. We showed that on these problems, when the test-time task distribution is unknown, the optimal meta-training task distribution for minimizing worst-case test-time regret is uniform over the space of goals. We also showed that this optimal task distribution can be acquired by a simple mutual information maximization scheme. We subsequently extended the analysis to the more general case of matching arbitrary trajectories, as a proxy for the more general class of arbitrary reward functions. In the following section, we will discuss how we can derive a practical algorithm for unsupervised meta-learning from this analysis. Following the derivation in the previous section, we can instantiate a practical unsupervised meta-RL algorithm by constructing a task proposal mechanism based on a mutual information objective. A variety of different mutual information objectives can be formulated, including mutual information between single states and z, pairs of start and end states and z, and entire trajectories and z. We will use DIAYN and leave a full examination of possible mutual information objectives for future work. Algorithm (listing fragment) — Data: M \ R, an MDP without a reward function. Result: a learning algorithm f. For each sampled z: extract the corresponding task reward function r_z(s) using D_φ(z|s) and update f using MAML with reward r_z(s). DIAYN optimizes mutual information by training a discriminator network D_φ(z|·) that predicts which z was used to generate the states in a given rollout according to a latent-conditioned policy π(a|s, z). Our task proposal distribution is thus defined by r_z(s, a) = log(D_φ(z|s)). The complete unsupervised meta-learning algorithm follows the recipe in Figure 1: first, we acquire r_z(s, a) by running DIAYN, which learns D_φ(z|s) and a latent-conditioned policy π(a|s, z) (which is discarded).
Then, we use z ∼ p(z) to propose tasks r z (s, a) to a standard meta-RL algorithm. This meta-RL algorithm uses the proposed tasks to learn how to learn, acquiring a fast learn algorithm f which can then learn new tasks quickly. While, in principle, any meta-RL algorithm could be used, the discussion at the end of Section 3.4 suggests that meta-RL methods based on posterior sampling might be suboptimal for some tasks. We use MAML (a) as our meta-learning algorithm. Note that the learning algorithm f returned by MAML is defined simply as running gradient descent using the parameters found by MAML as initialization. While it is not immediately apparent how a parameter initialization can define an entire learning algorithm, insightful prior work provides an in-depth discussion of this property. The complete method is summarized in Algorithm 1. In addition to mutual information maximizing task proposals, we will also consider random task proposals, where we also use a discriminator as the reward, according to r(s, z) = log D φrand (z|s), but where the parameters φ rand are chosen randomly (i.e., a random weight initialization for a neural network). While such random reward functions are not optimal, we find that they can be used to acquire useful task distributions for simple tasks, though they are not as effective as the tasks become more complicated. This empirically reinforces the claim that unsupervised meta-RL does not in fact violate any "no free lunch" principle -even simple task proposals that cause the meta-learner to explore the CMP can already accelerate learning of new tasks. In our experiments, we aim to understand whether unsupervised meta-learning as described in Section 3.1 can provide us with an accelerated RL procedure on new tasks. Whereas standard meta-learning requires a hand-specified task distribution at meta-training time, unsupervised meta-learning learns the task distribution through unsupervised interaction with the environment. A fair baseline that likewise uses requires no reward supervision at training time, and only uses rewards at test time, is learning via RL from scratch without any meta-learning. As an upper bound, we include the unfair comparison to a standard meta-learning approach, where the meta-training distribution is manually designed. This method has access to a hand-specified task distribution that is not available to our method. We evaluate two variants of our approach: (a) task acquisition based on DIAYN followed by meta-learning using 2D navigation Half-Cheetah Ant Figure 3: Unsupervised meta-learning accelerates learning: After unsupervised meta-learning, our approach (UML-DIAYN and UML-RANDOM) quickly learns a new task significantly faster than learning from scratch, especially on complex tasks. Learning the task distribution with DIAYN helps more for complex tasks. Results are averaged across 20 evaluation tasks, and 3 random seeds for testing. UML-DIAYN and random also significantly outperform learning with DIAYN initialization or an initialization with a policy pretrained with VIME. MAML, and (b) task acquisition using a randomly initialized discriminator followed by meta-learning using MAML. Our experiments study three simulated environments of varying difficulty: 2D point navigation, 2D locomotion using the "HalfCheetah," and 3D locomotion using the "Ant," with the latter two environments are modifications of popular RL benchmarks (a). 
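As a concrete illustration of the two-stage recipe described above (DIAYN-style task proposals feeding a MAML-style meta-learner), the following is a schematic sketch only: the environment interaction, rollout collection and the policy-gradient surrogate are stubbed out with random tensors, the dimensions are placeholders, and none of this is the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

S_DIM, N_SKILLS = 4, 8
discriminator = nn.Linear(S_DIM, N_SKILLS)      # D_phi(z | s), assumed already trained by DIAYN

def task_reward(states, z):
    logits = discriminator(states)
    return F.log_softmax(logits, dim=-1)[:, z]   # r_z(s) = log D_phi(z | s)

policy = nn.Linear(S_DIM, 2)                     # shared initialization theta
meta_opt = torch.optim.SGD(policy.parameters(), lr=1e-2)
inner_lr = 0.1

def surrogate_loss(params, z):
    # Stand-in for a policy-gradient surrogate computed from rollouts; here we
    # just score random "visited states" under the proposed task reward.
    states = torch.randn(16, S_DIM)
    logits = states @ params[0].t() + params[1]
    logp = F.log_softmax(logits, dim=-1).max(-1).values
    return -(logp * task_reward(states, z).detach()).mean()

for step in range(100):                          # outer (meta) loop
    meta_loss = 0.0
    for z in torch.randint(N_SKILLS, (4,)):      # sample proposed tasks z ~ p(z)
        params = [p for p in policy.parameters()]
        inner = surrogate_loss(params, z)
        grads = torch.autograd.grad(inner, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        meta_loss = meta_loss + surrogate_loss(adapted, z)   # post-adaptation loss
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The essential point is that the discriminator-derived reward plays the role of the hand-specified task reward in ordinary meta-RL, while the inner/outer loop is a standard MAML update.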
While the 2D navigation environment allows for direct control of position, HalfCheetah and Ant can only control their center of mass via feedback control with high dimensional actions (6D for HalfCheetah, 8D for Ant) and observations (17D for HalfCheetah, 111D for Ant). The evaluation tasks, shown in Figure 6, are similar to prior work (a;): 2D navigation and ant require navigating to goal positions, while the half cheetah must run at different goal velocities. These tasks are not accessible to our algorithm during meta-training. We used the default hyperparameters for MAML across all tasks, varying the meta-batch size according to the number of skills: 50 for pointmass, 20 for cheetah, and 20 ant. We found that the default architecture, a 2 layer MLP with 300 units each and ReLU non-linearities, worked quite well for meta-training. We also used the default hyperparameters for DIAYN to acquire skills. We swept over learning rates for learning from scratch via vanilla policy gradient, and found that using ADAM with adaptive step size is the most stable and quick at learning. The comparison between the two variants of unsupervised meta-learning and learning from scratch is shown in Figure 3. We also add a comparison to VIME , a standard noveltybased exploration method, where we pretrain a policy with the VIME reward and then finetune it on the meta-test tasks. In all cases, the UML-DIAYN variant of unsupervised meta-learning produces an RL procedure that outperforms RL from scratch and VIME-init, suggesting that unsupervised interaction with the environment and meta-learning is effective in producing environment-specific but task-agnostic priors that accelerate learning on new, previously unseen tasks. The comparison with VIME shows that the speed of learning is not just about exploration but is indeed about fast adaptation. In our experiments thus far, UML-DIAYN always performs better than learning from scratch, although the benefit varies across tasks depending on the actual performance of DIAYN. We added additional comparisons to using TRPO as a stronger optimizer from scratch with the same trends being observed. We also compared with a baseline of simply initializing from a DIAYN trained contextual policy, and then finetuning the best skill with the actual task reward. From the comparisons in Fig 3, it is apparent that this works poorly as compared with UMRL variants. Interestingly, in many cases (in Figure 4) the performance of unsupervised meta-learning with DIAYN matches that of the hand-designed task distribution. We see that on the 2D navigation task, while handcrafted meta-learning is able to learn very quickly initially, it performs similarly after 100 steps. For the cheetah environment as well, handcrafted meta-learning is able to learn very quickly to start off, but is quickly matched by unsupervised meta-RL with DIAYN. On the ant task, we see that hand-crafted meta-learning does do better than UML-DIAYN, likely because the task distribution is more challenging, and a better unsupervised task proposal algorithm would improve the performance of a meta-learner. The comparison between the two unsupervised meta-learning variants is also illuminating: while the DIAYN-based variant of our method generally achieves the best performance, even the random discriminator is often able to provide a sufficient diversity of tasks to produce meaningful acceleration over learning from scratch in the case of 2D navigation and ant. This has two interesting implications. 
First, it suggests that unsupervised meta-learning is an effective tool for learning an environment prior. Although the performance of unsupervised meta-learning can be improved with better coverage using DIAYN (as seen in Figure 3), even the random discriminator version provides competitive advantages over learning from scratch. Second, the comparison provides a clue for identifying the source of the structure learned through unsupervised meta-learning: though the particular task distribution has an effect on performance, simply interacting with the environment (without structured objectives, using a random discriminator) already allows meta-RL to learn effective adaptation strategies in a given environment. That is, the performance cannot be explained only by the unsupervised procedure (DIAYN) capturing the right task distribution. We also provide an analysis of the task distributions acquired by the DIAYN procedure in Appendix B.1. We presented an unsupervised approach to meta-RL, where meta-learning is used to acquire an efficient RL procedure without requiring hand-specified task distributions for meta-training. This approach accelerates RL without relying on the manual supervision required for conventional metalearning algorithms. We provide a theoretical derivation that argues that task proposals based on mutual information maximization can provide for a minimum worst-case regret meta-learner, under certain assumptions. We then instantiate an approximation to the theoretically-motivated method by building on recently developed unsupervised task proposal and meta-learning algorithms. Our experiments indicate that unsupervised meta-RL can accelerate learning on a range of tasks, outperforming learning from scratch and often matching the performance of meta-learning from hand-specified task distributions. As our work is the first foray into unsupervised meta-RL, our approach opens a number of questions about unsupervised meta-learning algorithms. One limitation of our analysis is that it only considers deterministic dynamics, and only considers task distributions where posterior sampling is optimal. Extending our analysis to stochastic dynamics and more realistic task distributions may allow unsupervised meta-RL to acquire learning algorithms that can explore and adapt more intelligently, and more effectively solve real-world tasks. Lemma 1 Let π be a policy for which ρ T π (s) is uniform. Then π has lowest worst-case regret. Proof of Lemma 1. To begin, we note that all goal distributions p(s g) have equal regret for policies where ρ T π (s) = 1/|S| is uniform: Now, consider a policy π for which ρ T π (s) is not uniform. For simplicity, we will assume that the argmin is unique, though the proof holds for non-unique argmins as well. The worst-case goal distribution will choose the state s − where that the policy is least likely to visit: Thus, the worst-case regret for policy π is strictly greater than the regret for a uniform π: Thus, a policy π for which ρ T π is non-uniform cannot be minimax, so the optimal policy has a uniform marginal ρ T π. Lemma 2: Mutual information I(s T ; z) is maximized by a task distribution p(s g) which is uniform over goal states. Proof of Lemma 2. We define a latent variable model, where we sample a latent variable z from a uniform prior p(z) and sample goals from a conditional distribution p(s T | z). 
To begin, note that the mutual information can be written as a difference of entropies: The conditional entropy H p [s T | z] attains the smallest possible value (zero) when each latent variable z corresponds to exactly one final state, s z. In contrast, the marginal entropy H p [s T] attains the largest possible value (log |S|) when the marginal distribution p(s T) = p(s T | z)p(z)dz is uniform. Thus, a task uniform distribution p(s g) maximizes I(s T ; z). Note that for any non-uniform task distribution q(s T), we have is zero, no distribution can achieve a smaller conditional entropy. This, for all non-uniform task distributions q, we have I q (s T ; z) < I p (s T ; z). Thus, the optimal task distribution must be uniform. To understand the method performance more clearly, we also add an ablation study where we compare the meta-test performance of policies at different iterations along meta-training. This shows the effect that additional meta-training has on the fast learning performance for new tasks. This comparison is shown in Figure 5. As can be seen here, at iteration 0 of meta-training the policy is not a very good initialization for learning new tasks. As we move further along the meta-training process, we see that the meta-learned initialization becomes more and more effective at learning new tasks. This shows a clear correlation between additional meta-training and improved meta test-time performance. Ant Half-Cheetah Figure 6: Learned meta-training task distribution and evaluation tasks: We plot the center of mass for various skills discovered by point mass and ant using DIAYN, and a blue histogram of goal velocities for cheetah. Evaluation tasks, which are not provided to the algorithm during meta-training, are plotted as red'x' for ant and pointmass, and as a green histogram for cheetah. While the meta-training distribution is broad, it does not fully cover the evaluation tasks. Nonetheless, meta-learning on this learned task distribution enables efficient learning on a test task distribution. We can analyze the tasks discovered through unsupervised exploration and compare them to tasks we evaluate on at meta-test time. Figure 6 illustrates these distributions using scatter plots for 2D navigation and the Ant, and a histogram for the HalfCheetah. Note that we visualize dimensions of the state that are relevant for the evaluation tasks -positions and velocities -but these dimensions are not specified in any way during unsupervised task acquisition, which operates on the entire state space. Although the tasks proposed via unsupervised exploration provide fairly broad coverage, they are clearly quite distinct from the meta-test tasks, suggesting the approach can tolerate considerable distributional shift. Qualitatively, many of the tasks proposed via unsupervised exploration such as jumping and falling that are not relevant for the evaluation tasks. Our choice of the evaluation tasks was largely based on prior work, and therefore not tailored to this exploration procedure. The for unsupervised meta-RL therefore suggest quite strongly that unsupervised task acquisition can provide an effective meta-training set, at least for MAML, even when evaluating on tasks that do not closely match the discovered task distribution. For all our experiments, we used DIAYN to acquire the task proposals using 20 skills for half-cheetah and for ant and 50 skills for the 2D navigation. 
We ran the domains using the standard DIAYN hyperparameters described in https://github.com/ben-eysenbach/sac to acquire task proposals. These proposals were then fed into the MAML algorithm https://github.com/ cbfinn/maml_rl, with inner learning rate 0.1, meta learning rate 0.01, inner batch size 40, outer batch size 20, path length 100, using 2 layer networks with 300 units each with ReLu nonlinearities. The test time learning is done with the same parameters for the UMRL variants, and done using REINFORCE with the Adam optimizer for the comparison with learning from scratch. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | S1et1lrtwr | Meta-learning on self-proposed task distributions to speed up reinforcement learning without human specified task distributions |
We introduce a novel method that enables parameter-efficient transfer and multi-task learning with deep neural networks. The basic approach is to learn a model patch - a small set of parameters - that will specialize to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g. converting a Single Shot MultiBox Detection (SSD) model into a 1000-class image classification model while reusing 98% of the parameters of the SSD feature extractor). Similarly, we show that re-learning existing low-parameter layers (such as depth-wise convolutions) while keeping the rest of the network frozen also improves transfer-learning accuracy significantly. Our approach allows both simultaneous (multi-task) as well as sequential transfer learning. In several multi-task learning problems, despite using far fewer parameters than traditional logits-only fine-tuning, we match single-task performance. Deep neural networks have revolutionized many areas of machine intelligence and are now used for many vision tasks that even a few years ago were considered nearly impenetrable BID15 BID26. Advances in neural networks and hardware are resulting in much of the computation being shifted to consumer devices, delivering faster responses and better security and privacy guarantees BID11 BID8. As the space of deep learning applications expands and starts to personalize, there is a growing need for the ability to quickly build and customize models. While model sizes have dropped dramatically from >50M parameters of the pioneering work of AlexNet BID15 and VGG BID26 to <5M of the recent Mobilenet BID25 BID8 and ShuffleNet BID30 BID19, the accuracy of models has been improving. However, delivering, maintaining and updating hundreds of models on embedded devices is still a significant expense in terms of bandwidth, energy and storage costs. While there still might be space for improvement in designing smaller models, in this paper we explore a different angle: we would like to be able to build models that require only a few parameters to be trained in order to be re-purposed for a different task, with minimal loss in accuracy compared to a model trained from scratch. While there is ample existing work on compressing models and learning as few weights as possible BID24 BID25 BID8 to solve a single task, to the best of our awareness, there is no prior work that tries to minimize the number of model parameters when solving many tasks together. Our contribution is a novel learning paradigm in which each task carries its own model patch - a small set of parameters - that, along with a shared set of parameters, constitutes the model for that task (for a visual description of the idea, see FIG0, left side). We put this idea to use in two scenarios: a) in transfer learning, by fine-tuning only the model patch for new tasks, and b) in multi-task learning, where each task performs gradient updates to both its own model patch and the shared parameters. In our experiments (Section 5), the largest patch that we used is smaller than 10% of the size of the entire model. We now describe our contribution in detail.
Transfer learning We demonstrate that, by fine-tuning less than 35K parameters in MobilenetV2 BID25 and InceptionV3, our method leads to significant accuracy improvements over fine-tuning only the last layer (102K-1.2M parameters, depending on the number of classes) on multiple transfer learning tasks. When combined with fine-tuning the last layer, we train less than 10% of the model's parameters in total. We also show the effectiveness of our method over last-layer-based fine-tuning on transfer learning between completely different problems, namely from a COCO-trained SSD model to classification over ImageNet BID4. Multi-task learning We explore a multi-task learning paradigm wherein multiple models that share most of the parameters are trained simultaneously (see FIG0, right side). Each model has a task-specific model patch. Training is done in a distributed manner; each task is assigned a subset of available workers that send independent gradient updates to both shared and task-specific parameters using standard optimization algorithms. Our results show that simultaneously training two such MobilenetV2 BID25 models on ImageNet BID4 and Places-365 reaches accuracies comparable to, and sometimes higher than, individually trained models. We apply our multi-task learning paradigm to domain adaptation. For ImageNet BID4, we show that we can simultaneously train MobilenetV2 BID25 models operating at 5 different resolution scales, 224, 192, 160, 128 and 96, while sharing more than 98% of the parameters and resulting in the same or higher accuracy than individually trained models. This has direct practical benefit in power-constrained operation, where an application can switch to a lower resolution to save on latency/power, without needing to ship separate models and having to make that trade-off decision at the application design time. The cascade algorithm from BID27 can further be used to reduce the average running time by about 15% without loss in accuracy. The rest of the paper is organized as follows: we describe our method in Section 2 and discuss related work in Section 3. In Section 4, we present simple mathematical intuition that contrasts the expressiveness of logit-only fine-tuning and that of our method. Finally, in Section 5, we present detailed experimental results. The central concept in our method is that of a model patch. It is essentially a small set of per-channel transformations that are dispersed throughout the network, resulting in only a tiny increase in the number of model parameters. Suppose a deep network M is a sequence of layers represented by their parameters (weights, biases), W_1, ..., W_n. We ignore non-trainable layers (e.g., some kinds of activations) in this formulation. A model patch P is a set of parameters W_{i1}, ..., W_{ik}, 1 ≤ i_1, ..., i_k ≤ n that, when applied to M, adds layers at positions i_1, ..., i_k. Thus, a patched model consists of the original layers with the patch layers inserted at those positions. In this paper, we introduce two kinds of patches. We will see below that they can be folded with the other layers in the network, eliminating the need to perform any explicit addition of layers. In Section 5, we shed some light on why the particular choice of these patches is important. Scale-and-bias patch This patch applies a per-channel scale and bias to every layer in the network. In practice, these transformations can often be absorbed into a normalization layer such as Batch Normalization BID9. Let X be an activation tensor.
Then, the batch-normalized version of X is BN(X) = γ · (X − µ(X)) / σ(X) + β, where µ(X), σ(X) are the mean and standard deviation computed per minibatch, and γ, β are learned via backpropagation. These statistics are computed as mini-batch averages during training, while during inference they are computed using global averages. The scale-and-bias patch corresponds to all the γ, β, µ, σ in the network. Using BN as the model patch also satisfies the criterion that the patch size should be small. For instance, the BN parameters in both the MobilenetV2 BID25 and InceptionV3 networks performing classification on ImageNet amount to less than 40K parameters, or about 1% for MobilenetV2, which has 3.5 million parameters, and less than 0.2% for InceptionV3, which has 25 million parameters. While we utilize batch normalization in this paper, we note that this is merely an implementation detail and we can use explicit biases and scales with similar results. Depthwise-convolution patch The purpose of this patch is to re-learn spatial convolution filters in a network. Depth-wise separable convolutions were introduced in deep neural networks as a way to reduce the number of parameters without losing much accuracy BID8 BID2. They were further developed in BID25 by adding linear bottlenecks and expansions. In depthwise separable convolutions, a standard convolution is decomposed into two layers: a depthwise convolution layer that applies one convolutional filter per input channel, and a pointwise layer that computes the final convolutional features by linearly combining the depthwise convolutional layers' output across channels. We find that the set of depthwise convolution layers can be repurposed as a model patch. They are also lightweight - for instance, they account for less than 3% of MobilenetV2's parameters when training on ImageNet. Next, we describe how model patches can be used in transfer and multi-task learning. Transfer learning In transfer learning, the task is to adapt a pretrained model to a new task. Since the output space of the new task is different, it necessitates re-learning the last layer. Following our approach, we apply a model patch and train the patched parameters, optionally also the last layer. The rest of the parameters are left unchanged. In Section 5, we discuss the inclusion/exclusion of the last layer. When the last layer is not trained, it is fixed to its random initial value. Multi-task learning We aim to simultaneously, but independently, train multiple neural networks that share most weights. Unlike in transfer learning, where a large fraction of the weights are kept frozen, here we learn all the weights. However, each task carries its own model patch, and trains a patched model. By training all the parameters, this setting offers more adaptability to tasks while not compromising on the total number of parameters. To implement multi-task learning, we use the distributed TensorFlow paradigm: a central parameter server receives gradient updates from each of the workers and updates the weights. Each worker reads the input, computes the loss and sends gradients to the parameter server. We allow subsets of workers to train different tasks; workers thus may have different computational graphs, and task-specific input pipelines and loss functions. A visual depiction of this setting is shown in FIG0. One family of approaches BID29 BID5 widely used by practitioners for domain adaptation and transfer learning is based on fine-tuning only the last layer (or sometimes several of the last layers) of a neural network to solve a new task. 
Fine-tuning the last layer is equivalent to training a linear classifier on top of existing features. This is typically done by running SGD while keeping the rest of the network fixed; however, other methods such as SVMs have been explored as well BID10. It has been repeatedly shown that this approach often works best for similar tasks (for example, see BID5). Another frequently used approach is full fine-tuning BID3, where a pretrained model is simply used as a warm start for the training process. While this often leads to significantly improved accuracy over last-layer fine-tuning, downsides are that 1) it requires one to create and store a full model for each new task, and 2) it may lead to overfitting when there is limited data. In this work, we are primarily interested in approaches that allow one to produce highly accurate models while reusing a large fraction of the weights of the original model, which also addresses the overfitting issue. While the core idea of our method is based on learning small model patches, we see a significant boost in performance when we fine-tune the patch along with the last layer (Section 5). This is somewhat in contrast with BID7, where the authors show that the linear classifier (last layer) does not matter when training full networks. Mapping out the conditions of when a linear classifier can be replaced with a random embedding is an important open question. BID16 show that re-computing batch normalization statistics for different domains helps to improve accuracy. In BID24 it was suggested that learning batch normalization layers in an otherwise randomly initialized network is sufficient to build non-trivial models. Recomputing batch normalization statistics is also frequently used for model quantization, where it prevents the model activation space from drifting BID13. In the present work, we significantly broaden and unify the scope of the idea and scale up the approach by performing transfer and multi-task learning across completely different tasks, providing a powerful tool for many practical applications. Our work has interesting connections to meta-learning BID22 BID6. For instance, when training data is not small, one can allow each task to carry a small model patch in the Reptile algorithm of BID22 in order to increase expressivity at low cost. Experiments (Section 5) show that model-patch-based fine-tuning, especially with the scale-and-bias patch, is comparable to, and sometimes better than, last-layer-based fine-tuning, despite utilizing a significantly smaller set of parameters. At a high level, our intuition is based on the observation that individual channels of hidden layers of a neural network form an embedding space, rather than correspond to high-level features. Therefore, even simple transformations to the space could result in significant changes in the target classification of the network. In this section (and in Appendix A), we attempt to gain some insight into this phenomenon by taking a closer look at the properties of the last layer and studying low-dimensional models. A deep neural network performing classification can be understood as two parts: 1. a network base corresponding to a function F: R d → R n mapping the d-dimensional input space X into an n-dimensional embedding space G, and 2. a linear transformation s: R n → R k mapping embeddings to logits, with each output component corresponding to an individual class. The full model is thus the composition x → s(F(x)). We compare fine-tuning model patches with fine-tuning only the final layer s. 
Fine-tuning only the last layer has a severe limitation caused by the fact that linear transformations preserve convexity. It is easy to see that, regardless of the details of s, the mapping from embeddings to logits is such that if both ξ a, ξ b ∈ G are assigned label c, the same label is assigned to every convex combination λξ a + (1 − λ)ξ b, λ ∈ [0, 1]. Thus, if the model assigns inputs {x i | i = 1, ..., n c} some class c, then the same class will also be assigned to any point in the preimage of the convex hull of {F(x i) | i = 1, ..., n c}. This property of the linear transformation s limits one's capability to tune the model given a new input space manifold. For instance, if the input space is "folded" by F and the neighborhoods of very different areas of the input space X are mapped to roughly the same neighborhood of the embedding space, the final layer cannot disentangle them while operating on the embedding space alone (should some new task require differentiating between such "folded" regions). We illustrate the difference in expressivity between model-patch-based fine-tuning and last-layer-based fine-tuning in the cases of 1D (below) and 2D (Appendix A) inputs and outputs. Despite the simplicity, our analysis provides useful insights into how, by simply adjusting biases and scales of a neural network, one can change which regions of the input space are folded and ultimately the learned classification function. In what follows, we will work with a construct introduced by BID21 that demonstrates how neural networks can "fold" the input space X a number of times that grows exponentially with the neural network depth. We consider a simple neural network with one-dimensional inputs and outputs and demonstrate that a single bias can be sufficient to alter the number of "folds", i.e., the topology of the X → G mapping. More specifically, we illustrate how the number of connected components in the preimage of a one-dimensional segment [ξ a, ξ b] can vary depending on the value of a single bias variable. As in BID21, consider the following function: DISPLAYFORM2 Here p is an even number, and b = (b 0, ..., b p) is a (p + 1)-dimensional vector of tunable parameters characterizing q. The function q(x; b) can be represented as a two-layer neural network with ReLU activations. Set p = 2. Then, this network has 2 hidden units and a single output value, and is capable of "folding" the input space twice. Defining F to be a composition of k such functions, DISPLAYFORM3, we construct a neural network with 2k layers that can fold the input domain R up to 2^k times. By varying b 0, the number of "folds" can vary from 2^k to 0. We evaluate the performance of our method in both transfer and multi-task learning using the image recognition networks MobilenetV2 BID25 and InceptionV3, on the datasets Stanford Cars (Krause et al., 2013), Aircraft BID20, Flowers-102 BID23 and Places-365. An overview of these datasets can be found in TAB0. We also show preliminary results on transfer learning across completely different types of tasks using MobilenetV2 and Single-Shot Multibox Detector (SSD) networks. We use both scale-and-bias (S/B) and depthwise-convolution (DW) patches in our experiments. Both MobilenetV2 and InceptionV3 have batch normalization - we use those parameters as the S/B patch. MobilenetV2 has depthwise convolutions, from which we construct the DW patch. In our experiments, we also explore the effect of fine-tuning the patches along with the last layer of the network. We compare against two scenarios: 1) only fine-tuning the last layer, and 2) fine-tuning the entire network. 
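To make the fine-tuning scenarios above concrete, here is a minimal sketch of how the scale-and-bias and depthwise-convolution patches can be selected. It is written in PyTorch purely for illustration (the paper's experiments use TensorFlow), it assumes a torchvision-style MobilenetV2 whose classification head is `model.classifier`, and the target class count and learning rate are placeholder values.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

def select_patch(model, scale_and_bias=True, depthwise=False, last_layer=True):
    """Freeze every weight, then unfreeze only the chosen model patch."""
    for p in model.parameters():
        p.requires_grad = False
    patch = []
    for m in model.modules():
        if scale_and_bias and isinstance(m, nn.BatchNorm2d):
            patch += list(m.parameters())              # gamma and beta (mu, sigma update in train mode)
        if depthwise and isinstance(m, nn.Conv2d) and m.groups == m.in_channels:
            patch += list(m.parameters())              # depthwise spatial filters
    if last_layer:
        patch += list(model.classifier.parameters())   # task-specific head
    for p in patch:
        p.requires_grad = True
    return patch

model = mobilenet_v2()                                     # in practice, load ImageNet-pretrained weights
model.classifier[1] = nn.Linear(model.last_channel, 102)   # new head, e.g. for Flowers-102
patch_params = select_patch(model)
optimizer = torch.optim.RMSprop(patch_params, lr=0.045)    # decayed by 0.98 every 2.5 epochs in the paper
print(sum(p.numel() for p in patch_params), "trainable parameters in the patch")
```

The two comparison scenarios correspond to passing `scale_and_bias=False, depthwise=False` (last layer only) or simply leaving every parameter trainable (full fine-tuning).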
We use TensorFlow BID0, and NVIDIA P100 and V100 GPUs for our experiments. Following the standard setup of Mobilenet and Inception, we use 224 × 224 images for MobilenetV2 and 299 × 299 for InceptionV3. As a special case, for the Places-365 dataset we use 256 × 256 images. We use the RMSProp optimizer with a learning rate of 0.045 and a decay factor of 0.98 per 2.5 epochs. To demonstrate the expressivity of the biases and scales, we perform an experiment on MobilenetV2, where we learn only the scale-and-bias patch while keeping the rest of the parameters frozen at their initial random state. The results are shown in TAB2 (right side). It is quite striking that simply adjusting biases and scales of random embeddings provides features powerful enough that even a linear classifier can achieve a non-trivial accuracy. Furthermore, the synergy exhibited by the combination of the last layer and the scale-and-bias patch is remarkable. We take MobileNetV2 and InceptionV3 models pretrained on ImageNet (Top-1 accuracies 71.8% and 76.6%, respectively), and fine-tune various model patches for other datasets. Results on InceptionV3 are shown in TAB1. We see that fine-tuning only the scale-and-bias patch (using a fixed, random last layer) results in accuracies comparable to fine-tuning only the last layer, while using fewer parameters. Compared to full fine-tuning BID3, we use orders of magnitude fewer parameters while achieving non-trivial performance. Our results using MobilenetV2 are similar (more on this later). In the next experiment, we do transfer learning between completely different tasks. We take an 18-category object detection (SSD) model pretrained on COCO images (Lin et al.) and repurpose it for ImageNet classification; the results are shown in TAB2. Again, we see the effectiveness of training the model patch along with the last layer - a 2% increase in the parameters translates to a 19.4% increase in accuracy. Next, we discuss the effect of learning rate. It is common practice to use a small learning rate when fine-tuning the entire network. The intuition is that, when all parameters are trained, a large learning rate results in the network essentially forgetting its initial starting point. Therefore, the choice of learning rate is a crucial factor in the performance of transfer learning. In our experiments (Appendix B.2, FIG11) we observed the opposite behavior when fine-tuning only small model patches: the accuracy grows as the learning rate increases. In practice, fine-tuning a patch that includes the last layer is more stable w.r.t. the learning rate than full fine-tuning or fine-tuning only the scale-and-bias patch. Finally, an overview of results on MobilenetV2 with different learning rates and model patches is shown in FIG4. The effectiveness of small model patches over fine-tuning only the last layer is again clear. Combining model patches and last-layer fine-tuning results in a synergistic effect. In Appendix B, we show additional experiments comparing the importance of learning custom biases/scales with simply updating batch-norm statistics (as suggested by BID16). In this section we show that using model-specific patches during multi-task training leads to performance comparable to that of independently trained models, while essentially using a single model. We simultaneously train MobilenetV2 BID25 on two large datasets: ImageNet and Places-365. Although the network architecture is the same for both datasets, each model has its own private patch that, along with the rest of the model weights, constitutes the model for that dataset. 
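A toy sketch of this weight-sharing arrangement is given below: convolution weights are shared across tasks while each task owns a private scale-and-bias patch and output head. The module, layer sizes and task names are hypothetical stand-ins chosen for brevity, not the authors' distributed TensorFlow setup.

```python
import torch
import torch.nn as nn

class PatchedMultiTaskNet(nn.Module):
    """Shared convolutions; per-task private scale-and-bias patch and head."""
    def __init__(self, tasks, channels=32):
        super().__init__()
        self.shared = nn.ModuleList([nn.Conv2d(3, channels, 3, padding=1),
                                     nn.Conv2d(channels, channels, 3, padding=1)])
        self.scales = nn.ParameterDict({t: nn.Parameter(torch.ones(len(self.shared), channels))
                                        for t in tasks})
        self.biases = nn.ParameterDict({t: nn.Parameter(torch.zeros(len(self.shared), channels))
                                        for t in tasks})
        self.heads = nn.ModuleDict({t: nn.Linear(channels, n_classes)
                                    for t, n_classes in tasks.items()})

    def forward(self, x, task):
        for i, conv in enumerate(self.shared):
            x = conv(x)                                   # shared weights: updated by every task
            s = self.scales[task][i].view(1, -1, 1, 1)    # private patch: updated only by `task`
            b = self.biases[task][i].view(1, -1, 1, 1)
            x = torch.relu(s * x + b)
        return self.heads[task](x.mean(dim=(2, 3)))       # global average pooling + private head

tasks = {"imagenet": 1000, "places365": 365}
net = PatchedMultiTaskNet(tasks)
print(net(torch.randn(4, 3, 32, 32), task="places365").shape)   # torch.Size([4, 365])
```

Each worker would compute the loss for its own task and backpropagate, so gradients reach the shared convolutions from every task but reach a private patch only from its own task.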
We choose a combination of the scale-and-bias patch and the last layer as the private model patch in this experiment. The rest of the weights are shared and receive gradient updates from all tasks. In order to inhibit one task from dominating the learning of the weights, we ensure that the learning rates for different tasks are comparable at any given point in time. This is achieved by setting hyperparameters such that the ratio of dataset size and the number of epochs per learning rate decay step is the same for all tasks. We assign the same number of workers to each task in the distributed learning environment. The results are shown in TAB3. Multi-task validation accuracy using a separate S/B patch for each model is comparable to single-task accuracy and considerably better than the setup that only uses a separate logit layer for each task, while using only 1% more parameters (and 50% less than the independently trained setup). In this experiment, each task corresponds to performing classification of ImageNet images at a different resolution. This problem is of great practical importance because it allows one to build a very compact set of models that can operate at different speeds, chosen at inference time depending on power and latency requirements. Unlike in Section 5.3, we only have the scale-and-bias patch private to each task; the last layer weights are shared. We use bilinear interpolation to scale images before feeding them to the model. The learning rate schedule is the same as in Section 5.3. The results are shown in TAB4. We compare our approach with the S/B patch only against two baseline setups. "All shared" is the setup where all parameters are shared across all models, and "individually trained" is a much more expensive setup where each resolution has its own model. As can be seen from the table, the scale-and-bias patch allows us to close the accuracy gap between these two setups and even leads to a slight increase of accuracy for a couple of the models, at the cost of 1% of extra parameters per resolution. We introduced a new way of performing transfer and multi-task learning where we patch only a very small fraction of model parameters, which leads to high accuracy on very different tasks compared to traditional methods. This enables practitioners to build a large number of models with small incremental cost per model. We have demonstrated that using biases and scales alone allows pretrained neural networks to solve very different problems. While we see that model patches can adapt to a fixed, random last layer (also noted in Hoffer et al.), we see a significant accuracy boost when we allow the last layer also to be trained. It is important to close this gap in our understanding of when the linear classifier is important for the final performance. From an analytical perspective, while we demonstrated that biases alone maintain high expressiveness, more rigorous analysis that would allow us to predict which parameters are important is still a subject of future work. From a practical perspective, cross-domain multi-task learning (such as segmentation and classification) is a promising direction to pursue. Finally, our approach provides for an interesting extension to the federated learning approach proposed in BID11, where individual devices ship their gradient updates to the central server. In this extension, we envision user devices keeping their local private patch to maintain a personalized model while sending common updates to the server. 
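Appendix A below visualizes folding for 2D inputs. As a complementary numerical illustration of the 1D argument from Section 4, the following sketch uses a simple two-ReLU "fold" (an illustrative stand-in for the elided construction of BID21, not its exact formula) and counts the connected components of the preimage of a segment as a single bias value changes.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fold(x, b):
    """Two-ReLU map of [0, 1]: for b = 0.5 it covers [0, 1] twice (one 'fold'),
    for b >= 1 it is monotone on [0, 1] and folds nothing."""
    return 2.0 * relu(x) - 4.0 * relu(x - b)

def preimage_components(b, lo=0.4, hi=0.6, n=100001):
    """Number of connected components of the preimage of the segment [lo, hi]."""
    x = np.linspace(0.0, 1.0, n)
    inside = (fold(x, b) >= lo) & (fold(x, b) <= hi)
    return int(inside[0]) + int(np.sum(~inside[:-1] & inside[1:]))  # count rising edges

for b in (0.5, 1.0):
    print(f"bias b = {b}: preimage of [0.4, 0.6] has {preimage_components(b)} component(s)")
```

Changing the single bias from 0.5 to 1.0 removes the fold, so the preimage of the same embedding segment goes from two connected components to one - exactly the kind of change that a frozen last layer operating only on embeddings cannot undo.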
Here we show an example of a simple network that "folds" the input space in the process of training and associates identical embeddings to different points of the input space. As a result, fine-tuning the final linear layer is shown to be insufficient to perform transfer learning to a new dataset. We also show that the same network can learn an alternative embedding that avoids input space folding and permits transfer learning. Consider a deep neural network mapping a 2D input into 2D logits via a set of 5 ReLU hidden layers: 2D input → 8D state → 16D state → 16D state → 8D state → m-D embedding (no ReLU) → 2D logits (no ReLU). Since the embedding dimension is typically smaller than the input space dimension, but larger than the number of categories, we first choose the embedding dimension m to be 2. This network is trained (applying a sigmoid to the logits and using a cross-entropy loss function) to map (x, y) pairs to two classes according to the ground-truth dependence depicted in FIG6. The learned function is shown in FIG6 (c). The model is then fine-tuned to approximate the categories shown in FIG6. Fine-tuning all variables, the model can perfectly fit this new data, as shown in FIG6. Once the set of trainable parameters is restricted, model fine-tuning becomes less efficient. Interestingly, the poor performance of logit fine-tuning seen in figure 4(E) extends to higher embedding dimensions as well. Plots similar to those in figure 4, but generated for the model with an embedding dimension m of 4, are shown in FIG7. In this case, we can see that final-layer fine-tuning is again insufficient to achieve successful transfer learning. As the embedding dimension goes higher, last-layer fine-tuning eventually reaches acceptable results (see FIG8, showing results for m = 8). The explanation behind poor logit fine-tuning can be seen by plotting the embedding space of the original model with m = 2 (see FIG9). Both circular regions are assigned the same embedding and the final layer is incapable of disentangling them. But it turns out that the same network could have learned a different embedding that would make last-layer fine-tuning much more efficient. We show this by training the network on the classes shown in figure 7(b). This class assignment breaks the symmetry, and the newly learned embedding shown in figure 7(c) can now be used to adjust to the new class assignments shown in figure 7(d), (e) and (f) by fine-tuning the final layer alone. The results of BID16 suggested that adjusting Batch Normalization statistics helps with domain adaptation. Interestingly, we found that it significantly worsens results for transfer learning, unless biases and scales are allowed to learn. We find that fine-tuning only the last layer with batch-norm statistics readjusted to keep the activation space at mean 0/variance 1 makes the network significantly under-perform compared to fine-tuning with frozen statistics, even though adding learned biases/scales significantly outperforms logit-only fine-tuning. We summarize our experiments in TAB5. An application of domain adaptation using model patches is cost-efficient model cascades. We employ the algorithm from BID27, which takes several models (of varying costs) performing the same task and determines a cascaded model with the same accuracy as the best model but lower average cost. Applying it to the MobilenetV2 models at multiple resolutions that we trained via multi-task learning, we are able to lower the average cost of MobilenetV2 inference by 15.2%. 
Note that, in order to achieve this, we only need to store 5% more model parameters than for a single model. Generally, we did not see a large variation in training speed. All fine-tuning approaches needed 50-200K steps depending on the learning rate and the training method. While different approaches definitely differ in the number of steps necessary for convergence, we find these changes to be comparable to changes in other hyperparameters such as learning rate. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | BJxvEh0cFQ | A novel and practically effective method to adapt pretrained neural networks to new tasks by retraining a minimal (e.g., less than 2%) number of parameters |
Adversarial examples have somewhat disrupted the enormous success of machine learning (ML) and are causing concern with regard to its trustworthiness: a small perturbation of an input results in an arbitrary failure of an otherwise seemingly well-trained ML system. While studies are being conducted to discover the intrinsic properties of adversarial examples, such as their transferability and universality, there is insufficient theoretical analysis to help understand the phenomenon in a way that can influence the design process of ML experiments. In this paper, we deduce an information-theoretic model which explains adversarial attacks universally as the abuse of feature redundancies in ML algorithms. We prove that feature redundancy is a necessary condition for the existence of adversarial examples. Our model helps to explain the major questions raised in many anecdotal studies on adversarial examples. Our theory is backed up by empirical measurements of the information content of benign and adversarial examples on both image and text datasets. Our measurements show that typical adversarial examples introduce just enough redundancy to overflow the decision making of a machine learner trained on corresponding benign examples. We conclude with actionable recommendations to improve the robustness of machine learners against adversarial examples. Deep neural networks (DNNs) have been widely applied to various applications and achieved great successes BID5 BID36 BID16. This is mostly due to their versatility: DNNs are able to be trained to fit a target function. It therefore raises great concern that DNNs have been discovered to be vulnerable to adversarial examples. These are carefully crafted inputs, which are often seemingly normal within the variance of the training data but can fool a well-trained model with a high attack success rate BID14. Adversarial examples can be generated for various types of data, including images, text, audio, and software BID4 BID6, and for different ML models, such as classifiers, segmentation models, object detectors, and reinforcement learning systems BID20 BID17. Moreover, adversarial examples are transferable BID38 BID23 - if we generate an adversarial perturbation against one model for a given input, the same perturbation will have a high probability of being able to attack other models trained on similar data, regardless of how different the models are. Last but not least, adversarial examples can not only be synthesized in the digital world but also in the physical world BID7 BID21, which has caused great real-world security concerns. Given such subtle, yet universally powerful attacks against ML models, several defensive methods have been proposed. For example, BID9 pre-process inputs to eliminate certain perturbations. Other work BID1 suggests pushing the adversarial instance in random directions so that it hopefully escapes a local minimum and falls back to the correct class. The authors are aware of ongoing work to establish metrics that distinguish adversarial examples from benign ones so that one can filter out adversarial examples before they are used by ML models. However, so far, all defense and detection methods have been shown to be adaptively attackable. Therefore, intelligent attacks against intelligent defenses become an arms race. Defending against adversarial examples remains an open problem. In this paper, we propose and validate a theoretical model that can be used to create an actionable understanding of adversarial perturbations. 
Based upon the model, we give recommendations to modify the design process of ML experiments such that the effect of adversarial attacks is mitigated. We illustrate adversarial examples using an example of a simple perceptron network that learns the Boolean equal operator and then generalize the example into a universal model of classification based on Shannon's theory of communication. We further explain how adversarial examples fit the thermodynamics of computation. We prove a necessary condition for the existence of adversarial examples. In summary, the contributions of the paper are listed below: • a model for adversarial examples consistent with related work, physics and information theory; • a proof that using redundant features is a necessary condition for the vulnerability of ML models to adversarial examples; • extensive experiments that showcase the relationship between data redundancy and adversarial examples; • actionable recommendations for the ML process to mitigate adversarial attacks. Given a benign sample x, an adversarial example x adv is generated by adding a small perturbation ε to x (i.e., x adv = x + ε), so that x adv is misclassified by the targeted classifier g. Related work has mostly focused on describing the properties of adversarial examples as well as on defense and detection algorithms. Goodfellow et al. have hypothesized that the existence of adversarial examples is due to the linearity of DNNs BID14. Later, boundary-based analysis has been derived to show that adversarial examples try to cross the decision boundaries BID15. More studies regarding the data manifold have also been leveraged to better understand these perturbations BID25 BID13. While these works provide hints to obtain a more fundamental understanding, to the best of our knowledge, no study was able to create a model that results in actionable recommendations to improve the robustness of machine learners against adversarial attacks. Prior work does not provide a measurement process, nor does it theoretically show the necessary or sufficient conditions for the existence of adversarial examples. Several approaches have been proposed to generate adversarial examples. For instance, the fast gradient sign method has been proposed to add perturbations along the gradient directions BID14. Other examples are optimization algorithms that search for the minimal perturbation BID3 BID23. Based on the adversarial goal, attacks can be classified into two categories: targeted and untargeted attacks. In a targeted attack, the adversary's objective is to modify an input x such that the target model g classifies the perturbed input x adv as a chosen target class, which differs from its ground truth. In an untargeted attack, the adversary's objective is to cause the perturbed input x adv to be misclassified as any class other than its ground truth. Based on the adversarial capabilities, these attacks can be categorized as white-box and black-box attacks, where an adversary has full knowledge of the classifier and training data in the white-box setting BID37 BID14 BID2 BID28 BID32 BID0 BID19 BID21, but zero knowledge about them in the black-box setting BID31 BID23 BID29 BID30. 
Acoustic masking happens, for example, when a clear sinusoid tone cannot be perceived anymore because a small amount of white noise has been added to the signal BID33. This effect is exploited in MP3 audio compression and privacy applications. Similar examples exist, such as optical illusions in the visual domain BID34 and defense mechanisms against sensor-guided attacks in the military domain. Intuitively speaking, we want to explain the phenomenon shown in FIG1, which depicts a plane filled with points that were originally perfectly separable with two 2D linear separations. As the result of perturbing several points by a mere 10% of their original position, the separation of the two classes requires many more than two linear separators. That is, a small amount of noise can overflow the separation capability of a network dramatically. In the following section, we introduce an example model along which we will derive our mathematical understanding, consistent with our experiments in Section 4 and the related work mentioned in Section 2. Consider a perceptron network which implements the Boolean equal function ("NXOR") between the two variables x 1 and x 2. The input x 3 is redundant in the sense that the result of x 1 == x 2 is not influenced by the value of x 3. The first obvious observation is that adding x 3 doubles the input space. Instead of 2^2 = 4 possible input pairs, we now have 2^3 = 8 possible input triples that the network needs to map to sustain the result of x 1 == x 2 for all possible combinations of x 1, x 2, x 3. The network architecture shown in FIG0, for example, theoretically has the capacity to be trained to learn all 8 input triples BID11. Translating this example into a practical ML scenario, however, this would mean that we have to exhaustively train the entire input space for all possible settings of the noise. This is obviously unfeasible. We will therefore continue our analysis in a more practical setting. We assume a network like in FIG0 is correctly trained to model x 1 == x 2 in the absence of a third input. One example configuration is shown. Now, we train weights w 1 and w 2 to try to suppress the redundant input x 3 by going through all possible combinations for w i ∈ {−1, 0, 1}. This weight choice is without loss of generality, as the inputs x i are in {0, 1} (see BID35). An adversarial example is defined as a triple (x 1, x 2, x 3) such that the output of the network is not the result of x 1 == x 2. Simulating through all configurations exhaustively results in TAB1. The only case that allows for 100% accuracy, i.e., no adversarial examples, is the setting w 1 = w 2 = 0, in which case x 3 is suppressed completely. In the other cases, we can roughly say that the more the network pays attention to x 3, the worse the result (allowing for edge cases). That is, the result is better if one of the w i is set to 0 than if none is. Furthermore, the higher the potential, defined as the difference between the maximum and the minimum possible activation value as scaled by the w i, the worse the result is. The intuition behind this is that a higher potential change leads to higher potential impacts on the overall network. Using this simple model, one can see the importance of suppressing noise. Thresholds of neurons taking redundant inputs should be high, or equivalently, weights should be close to 0 (and equal to 0 in the optimal scenario). 
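The exhaustive simulation behind TAB1 takes only a few lines of Python. The thresholds below are one plausible realization of the two-hidden-unit NXOR network (an assumption, since FIG0 is not reproduced here): an AND unit, a NOR unit, and an OR output, with the redundant input x3 attached to the hidden units through w1 and w2.

```python
from itertools import product

def step(z):
    return 1 if z > 0 else 0

def nxor_net(x1, x2, x3, w1, w2):
    """Hypothetical NXOR perceptron: OR of an AND unit and a NOR unit,
    with the redundant input x3 feeding the hidden units via w1 and w2."""
    h_and = step(x1 + x2 + w1 * x3 - 1.5)
    h_nor = step(-x1 - x2 + w2 * x3 + 0.5)
    return step(h_and + h_nor - 0.5)

for w1, w2 in product((-1, 0, 1), repeat=2):
    errors = sum(nxor_net(x1, x2, x3, w1, w2) != int(x1 == x2)
                 for x1, x2, x3 in product((0, 1), repeat=3))
    print(f"w1={w1:+d} w2={w2:+d}: accuracy over the 8 triples = {1 - errors / 8:.3f}")
```

Under these assumed thresholds, only w1 = w2 = 0 classifies all 8 triples correctly, consistent with the observation that the redundant input must be suppressed completely.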
Now generalizing the example to a large network trained on images with 'real-valued' weights, it becomes clear that redundant bits of an image should be suppressed by low enough weights; otherwise, it is easy to generate an exponential explosion of patterns that need to be recognized. The generalization of the example from the previous section is shown in FIG0. The model shows a machine learner performing the task of matching an unknown noisy pattern to a known pattern (label). For example, a perceptron network implements a function of noisy input data. It quantizes the input to match a known pattern and then outputs the result of the learned function from a known pattern to a known output. Formally, the random variable Y encodes unknown patterns that are sent over a noisy channel. The observation at the output of the channel is denoted by the random variable X. For example, X could represent image pixels. The machine learner then erases all the noise bits in X to match against trained patterns, which are then mapped to known outputs Ŷ, for example, the labels. It is well known from the thermodynamics of computing BID10 that setting memory bits and copying them is theoretically energy agnostic. However, resetting bits to zero is not. In other words, we need to spend energy (computation) to reset the noisy bits added by the channel and captured in the observation to get to a distribution of patterns Ŷ that is isomorphic to the original (unknown) distribution of patterns Y. Connecting back to the NXOR example from the previous section, Y would be the distribution over the input variables x 1 and x 2. The noise added is modeled by x 3, and Ŷ is the desired output isomorphic to x 1 and x 2 being equal. Now assuming a fully trained model, this model allows us to explain several phenomena explored in the introduction and Section 2. First, as illustrated in the previous section, we can view the machine learner as a trained bit eraser. That is, the machine learner has been trained to erase exactly those bits that are irrelevant to the pattern to be matched. This elimination of irrelevance constitutes the generalization capability. For a black-box adversarial attack, we therefore just need to add enough irrelevant input to overflow this bit erasure function. As a result, an insufficient number of redundant bits can be absorbed, and the remaining bits now create an exponential explosion for the pattern matching functionality. In a white-box attack, an attacker can guess and check against the bit-erasing patterns of the trained machine learner and create a sequence of input bits that specifically overflows the decision making. In both cases, our model predicts that adversarial patterns should be harder to learn as they consist of more bits to erase. This is confirmed in our experiments in Section 4. It is also clear that the theoretical minimum overflow is one bit, which means that small perturbations can have big effects. This will be made rigorous in Section 3.3. It is also well known that, for example, in the image domain one bit of difference is not perceivable by the human eye. Training with noisy examples will most likely make the machine learner more robust, as it will learn to reduce redundancies better. However, a specific white-box attack (with lower entropy than random noise), which constitutes a specific perceptron threshold overflow, will always be possible because training against the entire input space is unfeasible. 
Second, with training data available, it is highly likely that a surrogate machine learner will learn to erase the same bits. This means that similar bit overflows will work on both the surrogate and the original ML model, thus explaining transferability-based attacks. In the following, we will present a proof, based on the model presented in Section 3.2 and the currently accepted definition of adversarial examples, that shows that feature redundancy is indeed a necessary condition for adversarial examples. Throughout, we assume that a learning model can be expressed as f(·) = g(T(·)), where T(·) represents the feature extraction function and g(·) is a simple decision making function, e.g., logistic regression, using the extracted features as the input. Definition 1 (Adversarial example). Given an ML model f(·) and a small perturbation bound δ, we call x adv an adversarial example if there exists x, an example drawn from the benign data distribution, such that f(x adv) ≠ f(x) and ‖x adv − x‖ ≤ δ. We first observe that ∀x, x′ ∃δ such that ‖x − x′‖ ≤ δ ⟹ f(x) = f(x′) is the generalization assumption of a machine learner. The existence of an adversarial x adv is therefore equivalent to a contradiction of the generalization assumption. That is, x adv could be called a counterexample. Practically speaking, a single counterexample to the generalization assumption does not make the machine learner useless, though. In the following, and as explained in previous sections, we connect the existence of adversarial examples to the information content of features used for making predictions. DISPLAYFORM0 Theorem 1. Suppose that the feature extractor T(X) is a sufficient statistic for Y and that there exist adversarial examples for the ML model f(·) = g(T(·)), where g(·) is an arbitrary decision making function. Then, T(X) is not a minimal sufficient statistic. We leave the proof to the appendix. The idea of the proof is to explicitly construct a feature extractor with lower entropy than T(X) using the properties of adversarial examples. Theorem 1 shows that the existence of adversarial examples implies that the feature representation contains redundancy. We would expect that more robust models will generate more succinct features for decision making. We will corroborate this intuition in Section 4.2. In this section, we provide empirical results to justify our theoretical model for adversarial examples. Our experiments aim to answer the following questions. First, are adversarial examples indeed more complex (e.g., do they contain more redundant bits with respect to the target that need to be erased by the machine learner)? If so, adversarial examples should require more parameters to memorize in a neural network. Second, is feature redundancy a large enough cause of the vulnerability of DNNs that we can observe it in a real-world experiment? Third, can we exploit the higher complexity of adversarial examples to possibly detect adversarial attacks? Fourth, does quantization of the input indeed not harm classification accuracy? Our model implies that adversarial examples generally have higher complexity than benign examples. In order to evaluate this claim practically, we need to show that this complexity increase is in fact an increase of irrelevant bits with regard to the encoding performed in neural networks towards a target function. This can be established by showing that adversarial examples are more difficult to memorize than benign examples. In other words, a larger model capacity is required for training adversarial examples. 
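The capacity-measurement idea used in the next paragraph - randomly freeze a subset of weights at zero and binary-search for the smallest trainable subset that still memorizes the data - can be sketched as follows. The tiny architecture, optimizer settings and random toy data below are illustrative assumptions, not the paper's exact MNIST setup.

```python
import torch
import torch.nn as nn

def can_memorize(n_active, X, y, eps=0.01, epochs=300):
    """Train a 1-hidden-layer MLP in which only `n_active` randomly chosen weights of
    the hidden layer are trainable (the rest are frozen at zero), and report whether
    it fits the data to accuracy >= 1 - eps."""
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, 10))
    w = model[0].weight
    mask = torch.zeros(w.numel())
    mask[torch.randperm(w.numel())[:n_active]] = 1.0
    mask = mask.view_as(w)
    with torch.no_grad():
        w *= mask                              # frozen weights start (and stay) at zero
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        w.grad *= mask                         # no updates reach the frozen weights
        opt.step()
        with torch.no_grad():
            w *= mask
    return (model(X).argmax(1) == y).float().mean().item() >= 1 - eps

def minimal_capacity(X, y, eps=0.01):
    lo, hi = 1, X.shape[1] * 64                # binary search, as in the paper's procedure
    while lo < hi:
        mid = (lo + hi) // 2
        if can_memorize(mid, X, y, eps):
            hi = mid
        else:
            lo = mid + 1
    return lo

# toy stand-ins; in the paper these would be the benign vs. adversarial MNIST sets
X, y = torch.randn(256, 784), torch.randint(0, 10, (256,))
print("minimal number of active hidden-layer weights:", minimal_capacity(X, y))
```

Comparing the returned capacity on a benign set and on its adversarial counterpart is the comparison reported in the experiments below.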
To quantitatively measure how much extra capacity is needed, we measure the capacity of multi-layer perceptron (MLP) models with or without a non-linear activation function (ReLU) on MNIST. Here we define the model capacity as the minimal number of parameters needed to memorize all the training data. To explore the capacity, we first build an MLP model with one hidden layer (units: 64). This model is efficient enough to achieve high performance and memorize all training data (with ReLU). After that, the weights are reduced by randomly setting some of their values to zero and marking them untrainable. An error tolerance ε is set to evaluate the training success (training accuracy is larger than 1 − ε). We explore the minimal number of parameters and utilize binary search to reduce computation complexity. Finally, we change to different values of ε and repeat the above steps. As illustrated in Figure 3, the benign examples always require a smaller number of weights to memorize on different datasets with various attack methods. It is shown that adversarial examples indeed require larger capacity. From the training/testing process given in Figure 4, we can draw the same conclusion. The benign examples are always fitted and predicted more efficiently than adversarial examples given the same model. That is to say, adversarial examples have more complexity and therefore require higher model capacity. We now investigate whether there are ways to exploit the higher complexity of adversarial examples to detect adversarial attacks. That is to say, we need a machine-learning-independent measure of entropy to evaluate how much benign and adversarial examples differ. For images, we utilized Maximum Likelihood (MLE), Minimax (JVHW) BID18 and compression-based estimators BID2 on both the MNIST and CIFAR-10 datasets. These four metrics for entropy measurement all indicate higher unpredictability as their values increase. For compression estimation, prior work BID12 has found that an optimal quantization ratio exists for DNNs and appropriate perceptual compression is not harmful. Therefore, we consider such information as redundancy and set the quality scale to 20, following their strategy. We also reproduce the experiments in our settings and obtain the same results, shown in FIG4. 
To validate this, we design models with different levels of robustness and measure the entropy of extracted features (i.e., the input to the final activation layer for decisions). In experiments, we choose All-CNNs network for it has no fully-connected layer and the last convolutional layer is directly followed by the global average pooling and softmax activation layer, which is convenient for the estimation of entropy. In other words, we can estimate the entropy of the feature maps extracted by the last convolutional layer using perceptual compression and MLE/JVHW estimators. Specifically, we train different models on benign examples and control the ratios of adversarial examples in adversarial re-training period to obtain models with different robustness. In general, the larger ratio of adversarial examples we re-train, the more robust models we will obtain. The robustness in experiments is measured by the test accuracy on adversarial examples. Then we obtain the feature maps on adversarial examples generated by these models and compress them to q = 20, following BID12. Finally, we measure the compressed entropy of using MLE and JVHW estimator like Section 4.2. As illustrated in FIG5, the estimated entropy (blue dots) decreases as the classification accuracy (red dots) increases for all the three adversarial attacks (FGSM, DeepFool, CW) and the two datasets (MNIST, CIFAR), which means that the redundancy of last-layer feature maps is lower when the models become more robust. Surprisingly, adding adversarial examples into the training set serves as an implicit regularizer for feature redundancy. Our theoretical and empirical presented in this paper consistently show that adversarial examples are enabled by irrelevant input that the networks was not trained to suppress. In fact, a single bit of redundancy can be exploited to cause the ML models to make arbitrary mistakes. Moreover, redundancy exploited against one model can also affect the decision of another model trained on the same data as that other model learned to only cope with the same amount of redundancy (transferability-based attack). Unfortunately, unlike the academic example in Section 3.1, we almost never know how many variables we actually need. For image classification, for example, the current assumption is that each pixel serves as input and it is well known that this is feeding the network redundant information e.g., nobody would assume that the upper-most left-most pixel contributes to an object recognition when the object is usually centered in the image. Nevertheless, the highest priority actionable recommendation has to be to reduce redundancies. Before deep learning, manually-crafted features reduced redundancies assumed by humans before the data entered the ML system. This practice has been abandoned with the introduction of deep learning, explaining the temporal correlation with the discovery of adversarial examples. Short of going back to manual feature extraction, automatic techniques can be used to reduce redundancy. Obviously, adaptive techniques, like auto encoders, will be susceptible to their own adversarial attacks. However, consistent with our experiments in Section 4.2, and dependent on the input domain, we recommend to use lossy compression. Similar using quantization have been reported for MP3 and audio compression BID12 as well as molecular dynamics BID22. In general, we recommend a training procedure where input data is increasingly quantized while training accuracy is measured. 
The point where the highest quantization is achieved at limited loss in accuracy, is the point where most of the noise and least of the content is lost. This should be the point with least redundancies and therefore the operation point least susceptible to adversarial attacks. In terms of detecting adversarial examples, we showed in Section 4 that estimating the complexity of the input using surrogate methods, such as different compression techniques, can serve as a prefilter to detect adversarial attacks. We will dedicate future work to this topic. Ultimately, however, the only way to practically guarantee adversarial attacks cannot happen is to present every possible input to the machine learner and train to 100% accuracy, which contradicts the idea of generalization in ML itself. There is no free lunch. A PROOF OF THEOREM 1Proof. Let X be the set of admissible data points and X denote the set of adversarial examples,We prove this theorem by constructing a sufficient statistic T (X) that has lower entropy than T (X). Consider DISPLAYFORM0 where x is an arbitrary benign example in the data space. Then, for all x ∈ X, g(T (x)) = g(T (x)). It follows that T (x) = T (x), ∀x ∈ X. On the other hand, T (x) = T (x) by construction. Let the probability density of T (X) be denoted by p(t), where t ∈ T (X), and the probability density of T (X) be denoted by q(t) where t ∈ T (X \ X). Then, q(t) = p(t) + w(t) for t ∈ T (X \ X), where w(t) corresponds to the part of benign example probability that is formed by enforcing an originally adversarial example' feature to be equal to the feature of an arbitrary benign example according to. Furthermore, t∈T (X \X) w(t) = t∈T (X) p(t). We now compare the entropy of T (X) and T (X): DISPLAYFORM1 It is evident that U 1 ≥ 0. Note that for any p(t), there always exists a configuration of w(t) such that U 2 ≥ 0. For instance, let t * = arg max t∈T (X \X) p(t). Then, we can let w(t *) = t∈T (X) p(t) and w(t) = 0 for t = t *. With this configuration of w(t), U 2 = (p(t *) + w(t *)) log((p(t *) + w(t *)) − p(t *) log p(t *) Due to the fact that x log x is a monotonically increasing function, U 2 ≥ 0.To sum up, both U 1 and U 2 are non-negative; as a , H(T (X)) > H(T (X)) Thus, we constructed a sufficient statistic T (·) that achieves lower entropy than T (·), which, in turn, indicates that T (X) is not a minimal sufficient statistic. Apart from the adversarial examples, we also observed the same phenomenon for random noise that redundancy will lead to the failure of DNNs. We tested datasets with different signal-to-noise ratios (SNR), generated by adding Gaussian noise to the real pixels. The SNR is obtained by controlling the variance of the Gaussian distribution. Finally, we derived the testing accuracy on hand-crafted noisy testing data. As shown in FIG6 and 7b, a small amount of random Gaussian noise will add complexity to examples and cause the DNNs to fail. For instance, noisy input sets with one tenth the signal strength of the benign examples in only 34.3% test accuracy for DenseNet on CIFAR-10. This indeed indicates, and is consistent with related work, that small amounts of noise can practically fool ML models in general. | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | B1enCo0cK7 | A new theoretical explanation for the existence of adversarial examples |